Search Results

Search found 16455 results on 659 pages for 'hosts allow'.


  • MinGW MSYS ssh error: Could not create directory '/home/<username>/.ssh'

    - by SoldOut
    I have just installed a fresh MinGW installation on Windows 7 64-bit using the Graphical User Interface Installer (the recommended approach), following the instructions given here and keeping the default options (i.e. installation in C:\MinGW) - hopefully without missing any steps or messing anything up. However, when running the ssh command, I get the following error:
      C:\Users\Diablo> ssh username@host
      Could not create directory '/home/username/.ssh'.
      The authenticity of host 'username@host (host ip here)' can't be established.
      RSA key fingerprint is (fingerprint here).
      Are you sure you want to continue connecting (yes/no)? yes
      Failed to add the host to the list of known hosts (/home/username/.ssh/known_hosts).
    So I basically have to confirm the connection every time. Why does this happen and how do I fix it?
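    A hedged sketch of the usual workaround, assuming the default C:\MinGW install (paths are illustrative): MSYS maps $HOME to /home/<username>, which often does not exist under the MSYS root, so either create it or point HOME at your Windows profile before running ssh.

      # create the missing directory under the MSYS tree
      mkdir -p /home/username/.ssh
      # or reuse the Windows profile as HOME instead
      export HOME=/c/Users/Diablo
      mkdir -p "$HOME/.ssh"

    Either way, ssh should then be able to write known_hosts and stop asking on every connection.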

    Read the article

  • Reporting Services 2008 R2 export to PDF embedded fonts not shown

    - by Gabriel Guimarães
    Hi, I have installed a font on the server that hosts Reporting Services 2008 R2, restarted the SSRS service, and successfully deployed a report that uses the custom font. The report renders with the font on the web and when I export it to Excel; however, in the PDF export the font is not used. If I open File - Properties - Fonts in the PDF viewer, I see a list of fonts in the PDF:
      Helvetica-BoldOblique - Type: Type 1, Encoding: Ansi, Actual Font: Arial-BoldItalicMT, Actual Font Type: TrueType
      My custom font, marked with a double-T icon and an (Embedded Subset) suffix - TrueType, ANSI encoded
    The text is not using the embedded font, though. If I select the text and copy it into a Word document (I have the font installed) it appears in the correct font, but not in the PDF. What's wrong here?

    Read the article

  • Setting kernel memory for installing postgresql

    - by Matthieu Taymans
    My question is about setting the kernel shared memory for installing PostgreSQL on Mac OS X 10.6.8. The PostgreSQL README says:
      Shared Memory
      PostgreSQL uses shared memory extensively for caching and inter-process communication. Unfortunately, the default configuration of Mac OS X does not allow suitable amounts of shared memory to be created to run the database server. Before running the installation, please ensure that your system is configured to allow the use of larger amounts of shared memory. Note that this does not 'reserve' any memory, so it is safe to configure much higher values than you might initially need. You can do this by editing the file /etc/sysctl.conf - e.g.
        % sudo vi /etc/sysctl.conf
      On a MacBook Pro with 2GB of RAM, the author's sysctl.conf contains:
        kern.sysv.shmmax=1610612736
        kern.sysv.shmall=393216
        kern.sysv.shmmin=1
        kern.sysv.shmmni=32
        kern.sysv.shmseg=8
        kern.maxprocperuid=512
        kern.maxproc=2048
      Note that (kern.sysv.shmall * 4096) should be greater than or equal to kern.sysv.shmmax. kern.sysv.shmmax must also be a multiple of 4096. Once you have edited (or created) the file, reboot before continuing with the installation. If you wish to check the settings currently being used by the kernel, you can use the sysctl utility:
        % sysctl -a
      The database server can now be installed.
    I'm a real beginner with all this but need to install PostgreSQL for academic purposes. Do you know how I can set this kernel shared memory? Won't it be harmful for my system? Thank you in advance. Matthieu
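    A minimal sketch of the steps the README describes, assuming the 2 GB values quoted above (adjust them to your machine, keeping shmall * 4096 >= shmmax and shmmax a multiple of 4096). These settings do not reserve memory, so they are generally safe to apply:

      # append the settings to /etc/sysctl.conf (the file is created if missing)
      sudo sh -c 'printf "%s\n" \
        "kern.sysv.shmmax=1610612736" \
        "kern.sysv.shmall=393216" \
        "kern.sysv.shmmin=1" \
        "kern.sysv.shmmni=32" \
        "kern.sysv.shmseg=8" >> /etc/sysctl.conf'
      # reboot so the kernel picks the values up, then verify
      sysctl -a | grep kern.sysv.shm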

    Read the article

  • Debian 5 server is randomly shutting down.

    - by revofreak
    My Debian 5 VPS is suffering from random shutdowns. I have reinstalled it several times, the hosts moved me to a different physical box, and they checked the install image and said everyone else uses it without problems. Here's the output from syslog:
      Mar 27 00:19:19 noobintraining-1 -- MARK --
      Mar 27 00:32:01 noobintraining-1 shutdown[18142]: shutting down for system halt
      Mar 27 00:32:06 noobintraining-1 init: Switching to runlevel: 0
      Mar 27 00:32:06 noobintraining-1 xinetd[15907]: Exiting...
      Mar 27 00:32:07 noobintraining-1 named[15865]: received control channel command 'stop -p'
      Mar 27 00:32:07 noobintraining-1 named[15865]: shutting down: flushing changes
      Mar 27 00:32:07 noobintraining-1 named[15865]: stopping command channel on 127.0.0.1#953
      Mar 27 00:32:07 noobintraining-1 named[15865]: stopping command channel on ::1#953
      Mar 27 00:32:07 noobintraining-1 named[15865]: no longer listening on ::#53
      Mar 27 00:32:07 noobintraining-1 named[15865]: no longer listening on 127.0.0.1#53
      Mar 27 00:32:07 noobintraining-1 named[15865]: no longer listening on 89.238.172.132#53
      Mar 27 00:32:07 noobintraining-1 named[15865]: exiting
      Mar 27 00:32:07 noobintraining-1 exiting on signal 15
    Any help is most appreciated!
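    A diagnostic sketch rather than a fix - the log shows an ordinary shutdown(8) being invoked rather than a crash, so the first step is usually to find out what launched it (the commands below assume a standard Debian 5 layout):

      last -x shutdown                              # shutdown/halt history from wtmp
      grep -i 'shutdown\|halt' /var/log/auth.log    # was it run via ssh/sudo?
      crontab -l; ls /etc/cron.d /etc/cron.daily    # any scheduled job calling shutdown?

    On a container-based VPS (OpenVZ/Virtuozzo and similar) the provider's host node can also halt the guest, so it may be worth asking them to check for actions on their side around 00:32.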

    Read the article

  • Apache not using the right SSL certificate [on hold]

    - by user2420318
    In my Apache 2 setup I have one VirtualHost for my main site and another for a static-content site (downloads, CSS, etc.). I have SSL certificates for both, and the static-content site is on a subdomain of the main site. There are four VirtualHosts in total, since both sites need SSL as well. When I had only one SSL site everything was fine, but now with the second one, the first site is served with the second site's certificate, even though its VirtualHost section specifically points to its own certificate. I honestly have no idea why Apache would do this. Any ideas? I have a feeling there may be some default or global setting involved. I am using different IPs for the virtual hosts.
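    A sketch of what IP-based SSL VirtualHosts normally look like, with illustrative IPs, names and paths - when each 443 vhost binds its own address and certificate, getting the "wrong" certificate usually means the request is falling through to the default SSL vhost because no <VirtualHost> address matches the IP the client actually hits:

      <VirtualHost 192.0.2.1:443>
          ServerName www.example.com
          SSLEngine on
          SSLCertificateFile    /etc/ssl/certs/www.example.com.crt
          SSLCertificateKeyFile /etc/ssl/private/www.example.com.key
      </VirtualHost>

      <VirtualHost 192.0.2.2:443>
          ServerName static.example.com
          SSLEngine on
          SSLCertificateFile    /etc/ssl/certs/static.example.com.crt
          SSLCertificateKeyFile /etc/ssl/private/static.example.com.key
      </VirtualHost>

    Running apache2ctl -S shows which vhost Apache treats as the default for each address:port pair and often reveals the mismatch.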

    Read the article

  • Mercurial mirror: abort: No such file or directory: http://[...]/00manifest.i

    - by Sridhar Ratnakumar
    I am trying to set up a daily mirror of a Mercurial repository - code.python.org in particular - within our local network, and serve it via Apache httpd. On the remote host that runs Apache, I did this:
      $ cd /var/www
      $ hg clone http://code.python.org/hg/trunk/
    On my MacBook, I ran:
      $ hg -v clone http://remote/trunk/
      (falling back to static-http)
      abort: No such file or directory: http://remote/trunk/.hg/store/00manifest.i
    Google does not show any relevant result for this particular error. I remember being able to set up Bazaar mirrors with a simple clone back in the day. Doesn't Mercurial work like that? How do I set up a mirror that can itself be used as a clone URL?
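    A hedged sketch of one common setup (paths and URLs are illustrative): refresh the mirror with a scheduled pull, and publish it through hgweb rather than relying on the static-http fallback, which only works when Apache exposes the raw .hg layout exactly as the client expects.

      # cron entry on the Apache host: pull from upstream once a day
      0 3 * * *  cd /var/www/trunk && /usr/bin/hg pull http://code.python.org/hg/trunk/

      # minimal hgweb.config pointing the hgweb CGI/WSGI script at the clone:
      # [paths]
      # trunk = /var/www/trunk

    Served through hgweb, the mirror URL then behaves as a normal clone/pull URL for the MacBook.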

    Read the article

  • Sharing storage on Linux and Solaris

    - by devlearn
    I'm looking for a solution to share a SAN-mounted volume between several hosts running Linux (RHEL) and/or Solaris (SPARC). Note that I basically need to share a set of directories containing large binary files that are accessed in random R/W mode. I have the following requirements:
      - keep the data on the SAN
      - suitable I/O performance, as the software is pretty demanding on IOPS
      - stick to a shared file system, as I can't afford a cluster FS (lack of MDS/OSS infrastructure)
      - compression could be really useful
    For now I've found only the following candidates:
      - GFS2: supports Linux only, no compression
      - VxFS: supports Linux and Solaris, compression supported
    So if you have suggestions to add to this list, I'll really welcome them. Thanks in advance,

    Read the article

  • Apache only transferring partial content from a Samba share

    - by thaBadDawg
    I have an Apache server running on CentOS 5.3. It currently hosts 12 sites with no known issues. (I say this to point out that up to this point my Apache installation has performed flawlessly) I'm adding a new site where the DocumentRoot of the new VirtualHost is a Samba share. When at the command line of the server I can cp video.m4v ~ and the whole file is copied properly to my home directory. But when I try to access the file from IE/Firefox/Safari/Chrome it only passes back a partial result of 33k. The same thing is happening with my image and audio files. If I make the files local to the server by copying them from the share and then serving them up then the files transfer. Any ideas?
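    A likely fix worth trying (the directory path is illustrative): Apache's sendfile and mmap optimizations are known to serve truncated or stale content from network filesystems such as CIFS/NFS, so disable them for the share-backed DocumentRoot and reload Apache.

      <Directory "/mnt/samba/newsite">
          EnableSendfile Off
          EnableMMAP Off
      </Directory>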

    Read the article

  • Burn .srt subtitles to AVC encoded video (transcoding with hardsubs) [closed]

    - by Saxtus
    Possible Duplicate: How do I hard code a movie with subtitles? I am looking for software (or a combination of tools) that will let me hard-burn subtitles from an .srt file containing italic and bold typefaces into an H.264/AVC encoded video, so it can be played on a desktop player that can't display external subtitles correctly. Ideally it could use DirectShow as input, since DirectVobSub does a nice job of rendering the subtitles as they should look (allowing me to globally adjust font and size). CUDA support to speed up encoding would be great but isn't necessary. The video source is also H.264/AVC encoded. The audio is AC-3 5.1 and should be retained too, but I have no problem re-muxing it later as long as it stays in sync. So far I've unsuccessfully tested:
      - Avisynth 2.58: unable to make DirectVobSub launch through it; the TextSub() command renders subtitles with a fixed font/size and doesn't decode tags; malformed audio
      - TMPGEnc 4.0 XPress 4.7.4.299: audio downmixed to 2.0; subtitle import doesn't decode tags
      - Badaboom 1.2.1.7: no subtitle import at all
      - SUPER © 2010.build.37: "DirectShow decode" has a similar effect to Avisynth above; the other modes don't appear to allow any subtitles in
    Thank you.
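    Not one of the tools tested above, but a hedged alternative sketch: an ffmpeg build with libass can burn the .srt (including <i>/<b> tags) into a re-encode while copying the AC-3 track untouched. The filenames and CRF value below are illustrative.

      ffmpeg -i input.mkv -vf "subtitles=movie.srt" \
             -c:v libx264 -crf 18 -preset slow \
             -c:a copy output.mkv

    Font and size can be overridden through the subtitles filter's force_style option if the defaults don't suit.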

    Read the article

  • Simulating a UNC path with a leading dot

    - by Uwe Keim
    I'm a C# .NET Windows Forms developer, and some of our customers run our applications on an Apple OS X Mac inside a Parallels virtual machine. Parallels presents host folders to the guest Windows as UNC paths with a leading dot, like
      \\.psf\Home\Some\More\Folders
    One of our applications cannot handle the leading dot correctly when accessing files from these kinds of shares ("Invalid URI, cannot analyze host name" exception). I want to debug and fix this issue, but unfortunately I have no Mac and Parallels here to test with. My question is: is there a way to "simulate" this kind of share on a normal Windows server or client so that I can debug my application with Visual Studio? What I tried so far: I edited my HOSTS file to contain an entry like
      # ...
      127.0.0.1 .psf
      # ...
    but Windows just doesn't seem to recognize the share at all.

    Read the article

  • Apache2 shared server: default webpage

    - by Eamorr
    Greetings, I have an Apache 2 server with 4 domain names pointing to my server's single IP address. When I type in www.site1.com it serves pages from /home/eamorr/site1/index.php, and the same goes for www.site2.com, www.site3.com and www.site4.com. However, when I type a domain into the address bar without the www, it always redirects to site1.com:
      site1.com -> site1.com
      site2.com -> site1.com
      site3.com -> site1.com
      site4.com -> site1.com
    How do I configure Apache to do the following instead?
      site1.com -> site1.com
      site2.com -> site2.com
      site3.com -> site3.com
      site4.com -> site4.com
    Here is my default config:
      ServerAdmin [email protected]
      ServerName www.site1.com
      DocumentRoot /home/eamorr/sites/site1.com/www
      DirectoryIndex index.php index.html
      <Directory /home/eamorr/sites/site1.com/www>
          Options Indexes FollowSymLinks MultiViews
          Options -Indexes
          AllowOverride all
          Order allow,deny
          allow from all
          php_value session.cookie_domain ".site1.com"
          #Added by EOH for redirection
          RewriteEngine on
          RewriteRule ^([^/.]+)/?$ driver.php?uname=$1 [L]
      </Directory>
      ErrorLog /var/log/apache2/error.log
      # Possible values include: debug, info, notice, warn, error, crit,
      # alert, emerg.
      LogLevel warn
      CustomLog /var/log/apache2/access.log combined
    I'd like Apache to look at the domain name and then redirect to www.sitex.com. Is there an Apache rule to do this? I hope someone can help; my sysadmin/Apache config skills aren't the best. Many thanks in advance,
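    A hedged sketch of the usual approach: the bare domains land on site1 because no other VirtualHost claims them, so give each vhost a ServerAlias for its bare domain and redirect anything without "www." to the www form (site2.com below stands in for each real domain).

      <VirtualHost *:80>
          ServerName www.site2.com
          ServerAlias site2.com
          RewriteEngine On
          RewriteCond %{HTTP_HOST} !^www\. [NC]
          RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
          # ... rest of the site2 configuration ...
      </VirtualHost>

    Repeated for each site, this keeps every bare domain on its own vhost instead of falling through to the first one Apache finds.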

    Read the article

  • Drbd Primary/Primary + iSCSI: accessing to different files avoids split brain?

    - by Eddie C.
    I have a question / curiosity about split-brain in a DRBD Primary/Primary configuration. Suppose two nodes (hosts), host1 and host2, are configured with DRBD Primary/Primary and two different shares (NFS, CIFS or iSCSI) of a replicated area (say /drbd):
      /drbd/file1.data
      /drbd/file2.data
    If one pool of clients accessed only file1.data, reading and writing through the host1 share, and another pool accessed only file2.data through the host2 share, would this scenario avoid a split-brain situation in case one node fails, or is that just a conjecture? The final purpose is load balancing between the two nodes in normal conditions, collapsing onto a single node only in case of failure. Thank you! Eddie

    Read the article

  • Apache showing 500 error during Active Directory LDAP authentication

    - by Tyllyn
    I have Apache (on Windows Server) set up to authenticate one directory against Active Directory. The config settings are as follows:
      <LocationMatch "/trac/[^/]+/login">
          Order deny,allow
          Allow from all
          AuthBasicProvider ldap
          AuthzLDAPAuthoritative Off
          AuthLDAPURL ldap://<ip-redacted>:3268/cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local?sAMAccountName?sub?(objectClass=*)
          AuthLDAPBindDN trac@<dc-redacted>.local
          AuthLDAPBindPassword "<password-redacted>"
          AuthType Basic
          AuthName "Protected"
          require valid-user
      </LocationMatch>
    Watching in Wireshark, I see the following sent when I visit the page. To the AD server:
      bindRequest(1) "trac@<dc-redacted>.local" simple
    And from the AD server:
      bindResponse(1) success
    I'm assuming this means the bind was successful... but Apache doesn't think so: it returns a 500 error to me. The Apache logs show the following:
      [Thu Nov 18 16:21:12 2010] [debug] mod_authnz_ldap.c(379): [client 192.168.x.x] [7352] auth_ldap authenticate: using URL ldap://<ip-redacted>:3268/cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local?sAMAccountName?sub?(objectClass=*), referer: http://192.168.x.x/trac/Trac/login
      [Thu Nov 18 16:21:12 2010] [info] [client 192.168.x.x] [7352] auth_ldap authenticate: user authentication failed; URI /trac/Trac/login [ldap_search_ext_s() for user failed][Filter Error], referer: http://192.168.x.x/trac/Trac/login
    Now, that log shows a failed auth for a blank user. I am confused. Any idea what I am doing wrong... and how I can get the Apache authentication working? :) Thanks!
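    A hedged debugging step rather than a definitive fix: the "[ldap_search_ext_s() for user failed][Filter Error]" line means the search that follows the successful bind is being rejected, so it can help to reproduce exactly that search outside Apache with ldapsearch, using the same bind DN, base and filter (the sample account name below is hypothetical; the redacted placeholders stay as in the question):

      ldapsearch -x -H ldap://<ip-redacted>:3268 \
        -D 'trac@<dc-redacted>.local' -W \
        -b 'cn=Users,OU=MyBusiness,DC=<dc-redacted>,DC=local' \
        '(&(objectClass=*)(sAMAccountName=someuser))' sAMAccountName

    If that query also fails, the problem is in the base DN or filter rather than in Apache; if it succeeds, the AuthLDAPURL is worth rebuilding piece by piece (for example starting without the explicit ?(objectClass=*) part, which is the module's default anyway).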

    Read the article

  • Weird unexpected image compression on a web server running Apache on Ubuntu?

    - by Billy Bob Thornton
    I have a weird problem on my production web server running Apache on Ubuntu: it compresses my images, thereby dramatically lowering their quality! I actually have two virtual hosts running, each located in a different folder. Whether I display the .gif images by navigating the two sites or access them directly by their URL, their size and quality are invariably degraded. I tried three different browsers: same problem. Using the same browsers on other sites on the Web: no problem. Of course I disabled mod_deflate on the server (which should not compress images anyway), but the phenomenon remains. On my local development server, running the same configuration, everything is OK. Now I'm completely lost! For the record, my configuration: Ubuntu 10.04, Apache 2, PHP 5.

    Read the article

  • 1and1 ssh - connection refused

    - by kitensei
    I'm having trouble connecting to my 1&1 account through SSH. When I try to connect with
      ssh userXXX@host -p22 -vv
    I get the following output:
      OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to mySite.com [ip_here] port 22.
      debug1: connect to address ip_here port 22: Connection refused
    Moreover, once an SSH connection attempt fails, even HTTP access dies: I cannot reach the website through a browser anymore :/ Please help - I'm running Ubuntu 11.10. EDIT: don't know if it helps, but here's the .htaccess on the 1and1 server:
      Options +Indexes
      Satisfy any
      Order Deny,Allow
      Allow from 212.227.X.X
      Deny from all
      RemoveType .html .gif
      AuthType Basic
      AuthName "Access to /logs"
      AuthUserFile /kunden/homepages/43/d376072470/htpasswd
      Require user "user_here"
    and sftp.log:
      Mar 26 09:21:24 193.251.X USER_HERE Connection from 193.251.X port 51809
      Mar 26 09:21:30 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
      Mar 26 09:23:39 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
      Mar 26 09:23:41 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
      Mar 26 09:23:45 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
      Mar 26 09:23:57 193.251.X USER_HERE Failed password for USER_HERE from 193.251.X port 51809 ssh2
      Mar 26 10:53:36 212.227.X tmp64459736-3228 Connection from 212.227.X port 23275
      Mar 26 10:53:36 212.227.X tmp64459736-3228 Accepted password for tmp64459736-3228 from 212.227.X port 23275 ssh2
      Mar 26 11:53:37 212.227.X tmp64459736-3228 Connection closed by 212.227.X
      Mar 26 18:58:17 212.227.X tmp64459736-5363 Connection from 212.227.X port 23353
      Mar 26 18:58:17 212.227.X tmp64459736-5363 Accepted password for tmp64459736-5363 from 212.227.X port 23353 ssh2
      Mar 26 19:53:36 212.227.X tmp64459736-8525 Connection from 212.227.X port 5166
      Mar 26 19:53:36 212.227.X tmp64459736-8525 Accepted password for tmp64459736-8525 from 212.227.X port 5166 ssh2
      Mar 26 19:58:17 212.227.X tmp64459736-5363 Connection closed by 212.227.X

    Read the article

  • Network vulnerability and port scanning services

    - by DigitalRoss
    I'm setting up a periodic port scan and vulnerability scan for a medium-sized network implementing a customer-facing web application. The hosts run CentOS 5.4. I've used tools like Nmap and OpenVAS, but our firewall rules have special cases for connections originating from our own facilities and servers, so really the scan should be done from the outside. Rather than set up a VPS or EC2 server and configuring it with various tools, it seems like this could just be contracted out to a port and vulnerability scanning service. If they do it professionally they may be more up to date than something I set up and let run for a year... Any recommendations or experience doing this?

    Read the article

  • Industry Standard DNS & Authentication?

    - by James Murphy
    I'm just curious as to what is considered industry standard when it comes to doing DNS and authentication in an environment with mainly Linux machines. Do people use Windows DNS and Windows AD to do it all if they have at least one Windows server (well - a lot might, but should they)? Does anyone use hosts files or local-only user accounts on each server? What would companies like Facebook or Google use for DNS and authentication on their servers? We have an environment with about 10-15 Linux servers and 1-2 Windows servers. We are currently using Windows AD and Windows DNS, but it doesn't seem like the most secure/stable/scalable way to do it for a mainly Linux environment. We use RHEL as our Linux distribution.

    Read the article

  • Send all traffic over VPN connection not working Windows VPN host

    - by Adam Schiavone
    I am trying to get a Mac (10.8) to connect through VPN to a server running Windows Server 2008 R2 and pass all requests from the Mac over that connection. The VPN is set up and I can connect and access the server through a web browser, but for all other sites the DNS lookup fails. I have tried adding a DNS server on the VPN host. Example: say the VPN server also hosts a website, example.com. I connect to the VPN with my Mac, point a browser at example.com, and everything works fine; but when I point the browser at google.com it just sits there and eventually comes back with a DNS lookup failed message. HOWEVER: I tried running dig @myServersIpHere www.google.com. on the Mac and it comes back with correct IP addresses. I really don't know what to do from here. How can I route all requests from my Mac through my Windows server via VPN?
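    A hedged workaround sketch on the Mac side: since dig against the server works, the lookups seem to fail only because the VPN service has no DNS server assigned, so assigning one manually to the VPN network service often gets browsing working (the service name "VPN (PPTP)" is illustrative - list the real name first):

      networksetup -listallnetworkservices
      networksetup -setdnsservers "VPN (PPTP)" <myServersIpHere>

    The longer-term fix is usually to have the Windows RRAS/DHCP side hand out a DNS server to VPN clients so nothing needs to be set manually.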

    Read the article

  • How can you connect to a SQL Server not on your domain?

    - by scotty2012
    I have a test machine that's not allowed on our domain because we are testing corporately unsupported applications (SQL 2008 and Server 2008). I want to use Management Studio to connect to the SQL 2008 server but can't get it working. I have authentication set to mixed mode and I've checked 'allow remote connections to this server', but when I try to access it I get the error:
      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)
    Since it says the provider is Named Pipes, I enabled Named Pipes on the server, but still no dice. I've tried connecting to the system name, the IP, the system name\instance and IP\instance, all to no avail. Is what I'm trying to do not possible? Edit: Well, through some basic troubleshooting I've found that I can't ping the server from my client computer, but I can ping the client computer from the server. They are both plugged into the same switch and are sitting next to each other. The Windows firewall on the server is turned on; are there some specific settings I need to enable? DAH! So it was the firewall blocking me. How can I enable the firewall and still connect?
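    A hedged sketch of the inbound rules usually needed on the server (run in an elevated prompt); TCP 1433 assumes a default instance, named instances also need UDP 1434 for the SQL Server Browser service, and the ICMP rule just restores ping for troubleshooting:

      netsh advfirewall firewall add rule name="SQL Server" dir=in action=allow protocol=TCP localport=1433
      netsh advfirewall firewall add rule name="SQL Browser" dir=in action=allow protocol=UDP localport=1434
      netsh advfirewall firewall add rule name="ICMPv4 echo" dir=in action=allow protocol=icmpv4:8,any

    This keeps the firewall on while opening only what the remote Management Studio connection needs.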

    Read the article

  • FTP error 425 failed to establish connection

    - by cKK
    Getting "ftp error 425 failed to establish connection" when trying to connect to ftp server. Tried 2 ftp clients on 3 machines on same network and none work. However FTP works from home / mobile broadband. No ip blocks on ftp sever. Other ftp servers(differrent ip/hosts) work okay. firewall setup correct, no ports blocked. Is it possible to use a proxy for ftp a i think it's something with the ISP but taking too long to fix?

    Read the article

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Win 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem I have is once the redirect is set it also affects /specialdir - even if I right click on that directory and select "content should come from ... local directory" that change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting - IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?

    Read the article

  • syslog-ng fails to log on lxc host

    - by christian
    We are running CentOS 6 servers with multiple lxc containers. For system logging we are using syslog-ng. After a while the syslog-ng daemon stops logging messages, but the daemon keeps running. This happens both on the host and inside the containers (where another syslog-ng is running). We could not find any pattern for the failure yet, but we assume it has something to do with lxc, because we don't have these problems on other hosts. We suspect the problems occur when more than one lxc container is running, and that only "new" processes cannot log. We are running the following software versions:
      CentOS-Linux 6.4/6.5
      lxc-0.7.5
      syslog-ng-3.2.5
    Do you have any ideas? Best regards, trademesh

    Read the article

  • GNS3 Cannot ping/resolve DNS record

    - by Eldad Cohen
    I set up an internet lab with GNS3 that has 3 routers, with a computer directly connected to each node. One of the hosts is a DNS server running Windows Server 2003; another is a Windows XP machine. Ping works between the routers and machines, but I cannot resolve the domain.com record hosted on the Windows 2003 DNS server. I set a static NAT on the router to forward all traffic arriving at the gateway to the DNS server's internal IP address, but there is still no answer to the DNS request. Any ideas or thoughts will be most welcome.

    Read the article

  • Work around for yahoo mail slowness (using 100% cpu)

    - by Tony Lee
    My Yahoo Mail is very slow sometimes. When it is, I notice that IE8 is using 100% CPU. Using Sysinternals Process Explorer I discovered that the thread using all the CPU in IE8 has Flash in its stack walk. I upgraded Flash from 9 to 10, but the problem persists. I'm about to edit the hosts file to block the Flash content by redirecting the Yahoo ad-click DNS entries. Is there some easier way to get Flash to behave? The fix in the long run will be switching to Gmail.
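    A sketch of the hosts-file approach the poster describes, with purely illustrative hostnames (the real ad/Flash domains would have to be read from the browser's traffic, for example with Process Explorer or Fiddler); on Windows the file lives at C:\Windows\System32\drivers\etc\hosts and needs an elevated editor to save:

      127.0.0.1   ads.example.com
      127.0.0.1   flash-content.example.com

    Each blocked hostname then resolves to the local machine, so the offending Flash ad content never loads.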

    Read the article

  • associate dhcp requests with subdomains in dnsmasq

    - by Dezra
    I have dnsmasq running as a DNS server with a number of Linux boxes using static IPs, each running several virtual hosts on subdomains. I currently have the following address lines in my dnsmasq.conf to map each box's wildcard subdomain to its static IP:
      address=/.devbox1.mydomain.com/192.168.1.3
      address=/.devbox2.mydomain.com/192.168.1.4
    For example:
      site1.devbox1.mydomain.com -> maps to devbox1's static IP, site1 virtual host
      site2.devbox1.mydomain.com -> maps to devbox1's static IP, site2 virtual host
      site3.devbox2.mydomain.com -> maps to devbox2's static IP, site3 virtual host
    I was wondering if I can change the machines over to DHCP addresses (instead of static) and have dnsmasq use the DHCP-assigned IP instead of the static one. Can I modify the address line to refer to the DHCP address (obviously, I can't hardcode the address)? I know I could add MAC-address-to-IP allocation, but I want to avoid this if possible.
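    A hedged sketch of the usual compromise, since address= needs a literal IP: let dnsmasq itself serve DHCP and pin each box to a fixed lease with dhcp-host, then keep the wildcard mappings pointing at those reserved addresses (the MACs and range below are illustrative). It is still MAC-based allocation, but the boxes become plain DHCP clients and all configuration stays in one dnsmasq.conf.

      dhcp-range=192.168.1.50,192.168.1.150,12h
      dhcp-host=00:11:22:33:44:55,devbox1,192.168.1.3
      dhcp-host=00:11:22:33:44:66,devbox2,192.168.1.4
      address=/.devbox1.mydomain.com/192.168.1.3
      address=/.devbox2.mydomain.com/192.168.1.4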

    Read the article
