Search Results

Search found 19788 results on 792 pages for 'remote host'.


  • Windows 7 caches FTP credentials?

    - by Martin Booka Weser
    On my remote machine I have IIS 7.5 (Windows Server 2008) and set up an FTP site with IIS Manager authentication. I then enabled Active Directory user isolation and isolated my users to physical folders matching their names. So far, so good: I can connect with FTP clients from everywhere using the different test accounts I previously set up in IIS Manager authentication, and every user lands in their own folder.

    When I then tested with Windows 7 as a client, I did the following: Explorer - Computer - right click - Add network address - the IP of my remote machine - user1 - password1. Perfect - it works. I then wanted to connect as user2, so I deleted this network address and set up a new connection, but with user2 (or even anonymous) instead.

    Now the strange thing: Windows doesn't even ask me for a password again. It just connects me to user1's folder. I have already disabled FTP caching in IIS and disabled the user1 account in IIS Manager authentication. Still, whenever I set up a network connection from this Windows 7 machine, it connects to user1's folder, no matter which username I use (anonymous, administrator, user2, ...). Connections from other FTP clients or other computers all work perfectly. So I assume this one Windows box somehow caches the credentials. But then, why does IIS still accept those credentials even though I disabled the user1 account? Thanks.
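
    A first step worth trying (a hedged sketch - the exact store Explorer uses for FTP credentials varies, and the target name below is illustrative) is to list and clear any saved credentials on the Windows 7 client, then restart Explorer so no session-cached logon survives:

        rem list stored credentials; look for an entry matching the server's IP
        cmdkey /list

        rem delete a matching entry (target name is illustrative)
        cmdkey /delete:ftp://192.0.2.10

        rem clear any mapped connections and restart Explorer
        net use * /delete
        taskkill /im explorer.exe /f & start explorer.exe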

    Read the article

  • Access NFS share from cygwin?

    - by Jason Voegele
    We have a Windows 2003 Server on which we have installed Microsoft's Services for UNIX, and we have mounted a few NFS shares that contain shared resources that we need to access from this box. When I log in to this server with Remote Desktop, I am able to browse the contents of the NFS shares and everything works fine.

    However, one use case we have is that we need to access this server using SSH and still be able to access the NFS shares. We are running the Cygwin SSH daemon to provide SSH access to the server, but for some reason when we log in to the Windows 2003 server using SSH we can no longer access the NFS shares. To demonstrate, here is the output of the 'mount' command, first from a Cygwin shell when logged in with Remote Desktop:

        $ mount
        C:/cygwin/bin on /usr/bin type ntfs (binary,auto)
        C:/cygwin/lib on /usr/lib type ntfs (binary,auto)
        C:/cygwin on / type ntfs (binary,auto)
        C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
        O: on /cygdrive/o type nfs (binary,posix=0,user,noumount,auto)
        P: on /cygdrive/p type nfs (binary,posix=0,user,noumount,auto)
        Z: on /cygdrive/z type nfs (binary,posix=0,user,noumount,auto)

    And now, the same 'mount' command when logged in with SSH:

        $ mount
        C:/cygwin/bin on /usr/bin type ntfs (binary,auto)
        C:/cygwin/lib on /usr/lib type ntfs (binary,auto)
        C:/cygwin on / type ntfs (binary,auto)
        C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)

    Notice the missing O:, P: and Z: NFS shares in the latter. Can anyone tell me why I am unable to see these NFS shares when logged in with SSH? Thanks!
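
    Drive letters mapped in one Windows logon session are generally not visible in another, and an SSH login through the Cygwin daemon runs in its own session - that alone could explain the missing drives. A hedged workaround (the server, export names and drive letters here are assumptions) is to re-map the shares inside the SSH session using the Services for UNIX mount command:

        # run inside the SSH session; server/export names are illustrative
        mount -o anon \\nfsserver\export1 O:
        mount -o anon \\nfsserver\export2 P: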

    Read the article

  • Is there a better way to do bonded vlan tagged interfaces with XEN?

    - by AJ01
    We have a number of Xen servers, all running CentOS or RHEL. The VMs that they run are all required to be on their own VLAN, for no other reason than that the customer expects them to be. Long story short, I can't change this right now. We are also required to have bonding enabled on the interfaces.

    To accommodate this we enslave eth1 and eth2 to bond0. We then create a separate interface called bond0.VLANID, where VLANID corresponds to the correct VLAN, e.g.:

        # ifcfg-bond0.204
        DEVICE=bond0.204
        BOOTPROTO=static
        ONBOOT=yes
        VLAN=yes
        BRIDGE=xenvlan204

    Bridge to Xen: as the BRIDGE line shows, we eventually have to bridge this out to Xen, and we do this by adding another interface called xenvlan204 (in this instance) which contains:

        # ifcfg-xenvlan204
        DEVICE=xenvlan204
        BOOTPROTO=none
        ONBOOT=yes
        TYPE=bridge

    Xen VM config: finally, in our Xen config for each VM we add:

        vif = [ "bridge=xenvlan204" ]

    This then allows the VM to access that particular VLAN.

    The problem: we've noticed a few issues with this setup. First, we currently create the interfaces manually, which means that if we add more VLAN-enabled interfaces and bridges we usually have to restart xend, which is something I'm not so hot about; also, lower-level staff have their heads melted by the number of interfaces, and the risk of a mistake occurring is high. Second, it can take some time for a host to come up if it has a number of VLAN-tagged interfaces. Third, it just doesn't scale well on the management side.

    The question: is there a better, more flexible way to do this (in particular with the Xen that ships with CentOS 5.3, 5.4 and 5.5, as we have to support all three) that leverages scripting or other solutions to allow an arbitrary number of interfaces to be created when a VM is instanced? A sketch of one scripted approach follows. Your advice and expertise is more than welcome.
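
    One possible direction (a sketch only - interface names and the VLAN id are placeholders, and on CentOS 5 the 8021q module must be loaded) is a small script that builds the tagged interface and bridge on demand, so nothing has to be predefined in ifcfg files:

        #!/bin/bash
        # usage: mkvlanbr <vlan-id>  -- creates bond0.<id> and bridge xenvlan<id>
        VLAN=$1
        modprobe 8021q
        vconfig add bond0 "$VLAN"                # create tagged interface bond0.$VLAN
        ip link set "bond0.$VLAN" up
        brctl addbr "xenvlan$VLAN"               # create the bridge Xen will use
        brctl addif "xenvlan$VLAN" "bond0.$VLAN" # attach the tagged interface to it
        ip link set "xenvlan$VLAN" up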

    Read the article

  • Unable to connect via NetBIOS Name

    - by grom
    I can't connect to machines/shares by NetBIOS names. Below is console output showing the problem.

        C:\>nbtstat -n

        Local Area Connection:
        Node IpAddress: [192.168.1.100] Scope Id: []

                    NetBIOS Local Name Table

            Name               Type         Status
        ---------------------------------------------
        BEAST          <00>  UNIQUE      Registered
        WORKGROUP      <00>  GROUP       Registered
        BEAST          <20>  UNIQUE      Registered
        WORKGROUP      <1E>  GROUP       Registered
        WORKGROUP      <1D>  UNIQUE      Registered
        ..__MSBROWSE__.<01>  GROUP       Registered

        C:\>nbtstat -A 192.168.1.3

        Local Area Connection:
        Node IpAddress: [192.168.1.100] Scope Id: []

                 NetBIOS Remote Machine Name Table

            Name               Type         Status
        ---------------------------------------------
        BRCLAPTOP      <00>  UNIQUE      Registered
        WORKGROUP      <00>  GROUP       Registered
        BRCLAPTOP      <20>  UNIQUE      Registered
        WORKGROUP      <1E>  GROUP       Registered

        MAC Address = 00-1C-BF-14-B8-6E

        C:\>ping beast

        Pinging beast [fe80::59b8:179f:b90b:a63f%11] with 32 bytes of data:
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms
        Reply from fe80::59b8:179f:b90b:a63f%11: time<1ms

        Ping statistics for fe80::59b8:179f:b90b:a63f%11:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

        C:\>ping brclaptop
        Ping request could not find host brclaptop. Please check the name and try again.

        C:\>nbtstat -a brclaptop

        Local Area Connection:
        Node IpAddress: [192.168.1.100] Scope Id: []

        Host not found.
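
    One thing worth checking (a suggestion, not a confirmed diagnosis): the successful ping of beast resolved to an IPv6 link-local address, which points at LLMNR rather than NetBIOS doing the work, so NetBIOS broadcast resolution may be failing for both names. Confirm that NetBIOS over TCP/IP is enabled and the node type is not Peer-to-Peer, then flush the name cache and retry:

        rem look at "Node Type" and "NetBIOS over Tcpip" in the output
        ipconfig /all

        rem purge and reload the NetBIOS name cache, then retry
        nbtstat -R
        ping brclaptop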

    Read the article

  • Nginx/puma rhel unix socket permission error?

    - by Kevin Brown
    When I try to start my Puma server, I get the error:

        /.rvm/gems/ruby-2.1.1/gems/puma-2.9.0/lib/puma/binder.rb:275:in `initialize': Permission denied - connect(2) for "/var/run/nvhbase.sock" (Errno::EACCES)

    My sites-available/nvhbase.conf file:

        upstream nvhbase {
          server unix:/var/run/nvhbase.sock;
        }

        server {
          listen 80 default_server;
          server_name 207.131.132.219;
          root /home/vf032500/dev/nvh/public;

          location / {
            proxy_pass http://unix:/var/run/nvhbase.sock;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
          }
        }

    I don't know a lot about unix sockets, and everything works fine using TCP and the Puma defaults. My Rails app is in my user directory - is that the problem? The socket is being created in /var/run. I can create it in /tmp, but I've heard that's bad practice; and if I start the server in /tmp, I then can't access it via the server's IP - then what? I'm happy to provide any needed info; I just don't know a whole lot about server/nginx/puma setup.
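
    The EACCES is most likely the unprivileged app user being unable to write into /var/run, which is root-owned on RHEL. A hedged workaround (the user name and paths are taken from the post; adjust as needed) is a dedicated runtime directory the app user owns, with the bind and upstream paths updated to match:

        sudo mkdir -p /var/run/nvhbase
        sudo chown vf032500 /var/run/nvhbase
        # then bind puma to /var/run/nvhbase/nvhbase.sock and point
        # the nginx upstream/proxy_pass at the same path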

    Read the article

  • Why is IIS 7.5 seeing some requests as HTTP/1.0?

    - by Zhaph - Ben Duguid
    While trying to work out why Static File Compression wasn't working on one of our IIS servers, the error came back as "NO_COMPRESSION_10", which translates to:

        Server not configured to compress 1.0 requests

    Looking at the requests in Fiddler, I can see that I'm requesting HTTP 1.1, but everything is being sent back as HTTP 1.0.

    Request (from Chrome, captured via Fiddler):

        GET /css/reset.css HTTP/1.1
        Host: [-----].com
        Connection: keep-alive
        Cache-Control: max-age=0
        If-Modified-Since: Tue, 16 Oct 2012 15:04:34 GMT
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.95 Safari/537.11
        Accept: text/css,*/*;q=0.1
        Referer: http://[-----].com/
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-GB,en;q=0.8,en-US;q=0.6
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Response from IIS:

        HTTP/1.0 200 OK
        Cache-Control: no-cache, no-store
        Pragma: no-cache
        Content-Type: text/html; charset=utf-8
        Expires: -1
        Server: Microsoft-IIS/7.5
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Tue, 11 Dec 2012 11:57:03 GMT
        Connection: close
        Content-Length: 108837

    Other servers with the same host that I'm running this site on all respond with HTTP/1.1. How can I persuade IIS to respond with HTTP/1.1 rather than HTTP/1.0?

    Edit to add: digging deeper, I can see that some responses from the server are indeed being returned compressed, so really I'm trying to work out why talking to this particular server from our office seems to result in it seeing 1.0 requests, while other servers at the same co-loc don't.
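
    If the 1.0 downgrade itself can't be fixed (a transparent proxy between the office and that server would explain it), IIS can be told to compress HTTP/1.0 and proxied requests anyway. A hedged sketch using appcmd - verify the attribute names against your IIS 7.5 configuration schema before relying on this:

        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /noCompressionForHttp10:"False" /noCompressionForProxies:"False" /commit:apphost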

    Read the article

  • tomcat 'document base does not exist' error (but it does)

    - by SpliFF
    Gentoo / Tomcat 6

        INFO: Starting Servlet Engine: Apache Tomcat/6.0.20
        Sep 8, 2009 10:34:51 AM org.apache.catalina.core.StandardContext resourcesStart
        SEVERE: Error starting static Resources
        java.lang.IllegalArgumentException: Document base /www/rivervalley/site does not exist or is not a readable directory
            at org.apache.naming.resources.FileDirContext.setDocBase(Unknown Source)
            at org.apache.catalina.core.StandardContext.resourcesStart(Unknown Source)
            at org.apache.catalina.core.StandardContext.start(Unknown Source)
            at org.apache.catalina.core.ContainerBase.start(Unknown Source)
            at org.apache.catalina.core.StandardHost.start(Unknown Source)
            at org.apache.catalina.core.ContainerBase.start(Unknown Source)

    Oh really? Then how come:

        ls -la /www/rivervalley/site/
        drwxr-xr-x 12 tomcat tomcat 4096 Sep  8 09:56 .
        drwxr-xr-x 16 tomcat tomcat 4096 Jun 29 16:22 ..
        -rwxr--r--  1 tomcat tomcat  520 Jul  3 02:15 Application.cfm
        drwxr-xr-x  2 tomcat tomcat 4096 Sep  8 09:56 WEB-INF

    and ...

        tomcat 18916 1.0 5.5 1159188 167892 ? Ssl 10:37 0:11 /opt/sun-jdk-1.5.0.18/bin/java -Djava.util.loggin

    Hell, ANY account can read that directory, so the claim is utter nonsense. What else can cause this? Here's my relevant server.xml section:

        <Host name="rivervalley" appBase="webapps"
              unpackWARs="false" autoDeploy="false"
              xmlValidation="false" xmlNamespaceAware="false">
          <Context path="" docBase="/www/rivervalley/site" />
        </Host>
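
    Readable files in the directory are not enough on their own; the tomcat user also needs execute (search) permission on every parent directory of the path, which the listing above doesn't show. A quick check worth doing (namei is part of util-linux and may not be installed everywhere):

        # show owner and mode of each component of the path
        namei -l /www/rivervalley/site

        # or the equivalent with ls
        ls -ld / /www /www/rivervalley /www/rivervalley/site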

    Read the article

  • My Reverse DNS PTR record seems to be right, but I'm still getting bouncing email

    - by johnbr
    Hello,

    I have a service (statusme.com) where I let people know (for example) that their kid's soccer games are cancelled because of bad weather. We send out emails to the people who have registered. I have a second server as a backup (vps.statusme.com), and I've set up the application to send some of the email through the second server.

    But I'm getting complaints from various recipient SMTP servers that the email is considered spam. So I did some investigating, and it appears that they think my reverse DNS record isn't correct. But when I look at it via various rDNS websites and instructions I found elsewhere on Server Fault, everything looks correct:

        jb$ host -t a vps.statusme.com 8.8.8.8
        Using domain server:
        Name: 8.8.8.8
        Address: 8.8.8.8#53
        Aliases:
        vps.statusme.com has address 66.84.8.246

        jb$ host -t ptr 246.8.84.66.in-addr.arpa 8.8.8.8
        Using domain server:
        Name: 8.8.8.8
        Address: 8.8.8.8#53
        Aliases:
        246.8.84.66.in-addr.arpa domain name pointer vps.statusme.com.

    I'm confused about what I'm doing wrong. Thanks for any suggestions.
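
    Since the forward and reverse lookups already agree, it may not be rDNS at all. Two other things worth checking (suggestions only, not a diagnosis): that the HELO/EHLO name the backup server announces matches vps.statusme.com, and that the domain's SPF record, if any, authorizes the second server's IP:

        # does the SPF record (TXT) include or cover 66.84.8.246?
        dig +short txt statusme.com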

    Read the article

  • What's going on with traceroute?

    - by Kevin
    The following is what happens when I run traceroute from a certain location:

        # traceroute google.com
        traceroute to google.com (74.125.227.39), 30 hops max, 60 byte packets
         1  gateway.local.enactpc.com (10.0.0.1)  0.138 ms  0.101 ms  0.084 ms
         2  * * *
         3  * * *
         4  * * *
         5  * * *
         6  * * *
         7  * * *
         8  * * *
         9  * * *
        10  * * *
        11  * * *
        12  * * *
        13  * * *
        14  * * *
        15  * * *
        16  * * *
        17  * * *
        18  * * *
        19  * * *
        20  * * *
        21  * * *
        22  * * *
        23  * * *
        24  * * *
        25  * * *
        26  * * *
        27  * * *
        28  * * *
        29  * * *
        30  * * *

    Absolutely nothing of interest... Now, originally I thought this was just a fact of the location's network setup. (I assume they block pings or something...) However, watch what happens when I use nmap to run a traceroute:

        # nmap -sP --traceroute google.com

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-09-25 22:18 CDT
        Nmap scan report for google.com (74.125.227.40)
        Host is up (0.034s latency).
        Hostname google.com resolves to 11 IPs. Only scanned 74.125.227.40
        rDNS record for 74.125.227.40: dfw06s06-in-f8.1e100.net

        TRACEROUTE (using proto 1/icmp)
        HOP RTT      ADDRESS
        1   0.19 ms  gateway.local.enactpc.com (10.0.0.1)
        2   1.93 ms  99-20-92-1.lightspeed.austtx.sbcglobal.net (99.20.92.1)
        3   25.61 ms 99-20-92-2.lightspeed.austtx.sbcglobal.net (99.20.92.2)
        4 ... 6
        7   23.68 ms 12.83.68.137
        8   31.30 ms gar23.dlstx.ip.att.net (12.122.85.73)
        9 ...
        10  31.82 ms 72.14.233.65
        11  32.27 ms 209.85.250.77
        12  32.98 ms dfw06s06-in-f8.1e100.net (74.125.227.40)

        Nmap done: 1 IP address (1 host up) scanned in 3.29 seconds

    When using nmap I get A LOT more results than with traceroute - why? Note, I checked, and the difference in target IP addresses is not related.
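
    A likely explanation (an educated guess, not confirmed for this particular network): Linux traceroute sends UDP probes to high ports by default, while the nmap run above used ICMP echo ("proto 1/icmp"), and many networks drop the UDP probes or the ICMP errors they should trigger. Forcing traceroute onto ICMP should then behave like nmap:

        # -I switches traceroute from UDP probes to ICMP echo (usually needs root)
        traceroute -I google.com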

    Read the article

  • Oracle Error ORA-12560 TNS:Protocol Adapter error?

    - by David Basarab
    I am using Oracle Database 10g. Both servers are Windows 2003. I have an Oracle database set up on one server. Here is the tnsnames.ora from the server with the database:

        # tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\network\admin\tnsnames.ora
        # Generated by Oracle configuration tools.

        ORCL.VIRTUALHOLD.COM =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = databaseServer)(PORT = 1521))
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

    The environment variables on the server are:

        ORACLE_HOME = C:\oracle\product\10.2.0\db_1
        ORACLE_SID = orcl

    I am trying to connect to it from another box that has the Oracle client installed. Here is the tnsnames.ora on the client server:

        # tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\client_1\network\admin\tnsnames.ora
        # Generated by Oracle configuration tools.

        ORCL =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = databaseServer)(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVICE_NAME = orcl)
            )
          )

        ORACLE_HOME = C:\oracle\product\10.2.0\client_1
        ORACLE_SID = orcl

    Locally on the database server I can connect through sqlplus with no issues. On the client machine I keep getting the error:

        ORA-12560: TNS:protocol adapter error

    What am I missing? Does the client tnsnames.ora need to be different?
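
    ORA-12560 on a client usually means sqlplus attempted a local (bequeath) connection instead of going through TNS - typically because no connect identifier was given while ORACLE_SID is set. A hedged sequence to try on the client (names taken from the post):

        rem clear the SID so sqlplus doesn't look for a local instance
        set ORACLE_SID=

        rem verify the alias resolves and the listener answers
        tnsping ORCL

        rem connect explicitly through the TNS alias
        sqlplus system@ORCL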

    Read the article

  • Encrypted folders and files stay encrypted when copied

    - by user66126
    Hi all, I just tried out the Windows 7 feature to encrypt a folder. I found that when I access the encrypted folder from another computer (the parent folder of the encrypted folder is shared), I can see the files there but I cannot open them (which is good). But when I copy a file to another folder outside the encrypted folder (regardless of whether that is on the same remote computer or on the computer from which I am accessing the files), I can then open the file without any problem. This might be how it works, but it's not what I need.

    My question is: can I encrypt a folder (and all files inside), access those files (create, edit) seamlessly while I am logged in normally to the computer, but have the files stay encrypted when they are copied to another directory outside the encrypted folder - regardless of whether they are copied to the same computer, to another computer, or uploaded to a remote server? If this is not a feature that Windows natively supports, is there third-party software that does this? Thank you.
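
    For context (this describes EFS behavior in general, not this specific setup): EFS decrypts transparently for any authorized user, so a copy made by that user is written in whatever state the destination supports - plaintext on a non-EFS target. You can confirm what is and isn't encrypted after a copy with the built-in cipher tool:

        rem show the encryption state (E = encrypted, U = not) of files here
        cipher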

    Read the article

  • How to avoid sshfs freezing?

    - by Andreas Hagen
    So the issue is this: I've installed sshfs on Ubuntu 12.04 and I'm trying to connect to a couple of remote servers. Initially the mount seems successful; sometimes GNOME even picks it up and displays the "new device found" box at the bottom of the screen. But from here on not much works, or at least not any more. The first couple of times I connected it seemed to work fine, and I was able to transfer some files. Then I disconnected using:

        fusermount -u <folder>

    and after reconnecting a little later the trouble started. Now, after executing:

        sshfs -o ServerAliveInterval=15 -o reconnect -C -o workaround=all -o idmap=user root@<host>:/ <folder>

    when I change directory into the mount point, the shell just freezes. Strangely, ls -al <folder> works when listing just the root of the remote system, but nothing more. Every file explorer I've tried also freezes, just like cd <folder>. To me it seemed like there was some kind of zombie thread or something hanging around my system, given that it did work the first time, so I tried rebooting - but no luck.

    sshfs -V gives this:

        SSHFS version 2.3
        FUSE library version: 2.8.6
        fusermount version: 2.8.6
        using FUSE kernel interface version 7.12

    So yeah, any ideas?
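
    Two hedged things to try when an sshfs mount wedges like this (suggestions, not a confirmed fix): lazily detach the stale mount before remounting, and drop workaround=all, which switches on several server-side workarounds at once and has been known to cause odd behavior:

        # force a lazy unmount of the stuck mount point
        fusermount -uz <folder>

        # remount with a more conservative option set
        sshfs -o ServerAliveInterval=15 -o reconnect -o idmap=user root@<host>:/ <folder>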

    Read the article

  • Should an HA failover occur in this scenario?

    - by joeqwerty
    I'm running vSphere 5 in an HA cluster across two hosts (vsphereA and vsphereB). I have the HA cluster configured for host monitoring and datastore heartbeat monitoring with admission control disabled (hopefully I rightly understand that datastore heartbeat monitoring prevents inadvertent and unwanted HA failovers due to management network isolation). Each host has a single connection to a dedicated iSCSI network and iSCSI target (no MPIO). All vmdks for all VMs exist on the iSCSI datastore.

    As a test of HA I disconnected the iSCSI connection on vsphereB and was surprised to see that the running VMs on vsphereB continued to run on vsphereB. The powered-off VMs showed as inaccessible (which I expected, given that they weren't running and the connection from vsphereB to the iSCSI target was severed), but the running VMs continued to run and continued to be "owned" by vsphereB.

    I expected to see an HA failover occur for those VMs, and expected them to be "owned" by vsphereA after the failover - which didn't happen. I'm at a loss to understand why an HA failover didn't occur. Am I misunderstanding the cases in which an HA failover should occur?

    Read the article

  • Can a website company that builds 4-5 websites a year afford dedicated hosting?

    - by Petras
    We manage about 30 websites that use shared ASP.NET / SQL Server web hosting. These are typical small/medium business websites and they perform fine in this environment. Recently I was looking at VPS hosting in this thread: http://serverfault.com/questions/128329/how-do-you-host-multiple-public-facing-websites-on-a-vps

    After contacting a provider from one of the replies, I was told that VPS hosting is not recommended for 30 sites, even if they are small - the resource requirements might be too great even for a VPS - and that I should turn to dedicated hosting. The lowest-cost dedicated hosting is $219 per month (see http://www.serverintellect.com/dedicated/pentiumdservers.aspx), but that is only for a single processor, which seems too light for a machine running both IIS and SQL. In our office all the developers work on quad cores, so I assume I'd really need the quad processor option. However, this starts at $599 monthly.

    Now, I won't be able to transfer all of our 30 sites to this machine; I'd only be able to transfer say 5 or 6. However, moving forward, I'd be able to host all future sites on this machine, which amounts to 4-5 per year. Let's look at the economics. Shared hosting costs are typically $16.95 monthly (see http://www.crystaltech.com/dotnet.aspx). So here's the dilemma:

        First month costs:    $599
        First month revenue:  6 x $16.95 = $101.70
        Loss in first month:  $497.30

        First year costs:     $599 x 12 = $7,188
        First year revenue:   6 x $16.95 x 12 + 5 x $16.95 x 6 (averaged) = $1,728.90
        Loss in first year:   $5,459.10

    Clearly it is going to take years for this server to pay for itself. It just doesn't seem economical! Am I missing something here, or is dedicated hosting not the way to go with the number of sites we build?
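
    One way to frame it, as simple arithmetic on the figures above (a rough sketch that ignores any savings from migrating the existing shared-hosting sites):

        break-even sites per month = $599 / $16.95 ≈ 35.3
        starting from 6 sites and adding 4-5 per year,
        reaching ~35 sites takes roughly 6-7 years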

    Read the article

  • Contacts in Outlook 2003/2007, some questions

    - by Ernst
    If I create a distribution list and then select members, can I see different fields than the default ones? In 2007 there are radio buttons for 'Name only' and 'More columns', but the latter seems to return no results at all, regardless of which address book I choose. In 2003 there is no such thing.

    Is there a plug-in that will break up the recipients (whether they be To, Cc, or Bcc) into groups of X, and send them as a number of mails as required? Our host allows only 50 recipients per mail and only 300 total recipients per 5 minutes. I know the email client Blat has exactly this functionality, but it does not seem to be able to connect to the Exchange server to get the contacts needed. Could I maybe set Outlook to send to Blat, which then does the breaking up as necessary?

    Can I (or is there a plug-in for this) export only part of the contacts instead of all of them?

    Note that we send mail outside our organisation via our web host, where we've got a few mailboxes, and we use our Exchange (2000) server only internally. The few people who can send email to the outside world have an external mailbox as well as their Exchange account defined. I might be able to convince our general boss to simply give (some) people the ability to send outside via Exchange, but I might just as well not succeed. Alternatively, is there another program that can connect to Exchange to get the contacts (selected based on categories) and then send via SMTP in groups, with delays between the mails?

    Read the article

  • Nginx order of servers

    - by scrat
    I have 3 sites on my server. All run on gunicorn and use unix sockets to communicate with nginx, which routes the requests. I have three records in nginx.conf like:

        server {
            listen 80;
            server_name site1.com;

            location / {
                proxy_pass http://unix:/tmp/site1.sock;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    for site1, site2 and site3. If they are ordered so that the config for site1 goes first, followed by the configs for site2 and site3, everything works fine. But when I change the order, for example to site2, site1, site3, then site1 becomes routed to site2. What am I doing wrong?

    Full server nginx.conf before the server configs:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
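
    When a request's Host header doesn't match any server_name, nginx falls back to the first server block on that port, which would produce exactly this order-dependent behavior (a likely cause, not a certainty - e.g. requests for www.site1.com when only site1.com is listed). A hedged sketch: list every hostname actually used, and add an explicit catch-all so no real site becomes the accidental default:

        server {
            listen 80 default_server;
            server_name _;          # catch-all for unmatched Host headers
            return 444;             # close the connection without a response
        }

        # and in each site block, list every hostname actually in use:
        #   server_name site1.com www.site1.com;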

    Read the article

  • Nginx, as reverse proxy, could not proxy_pass to a domain pointing to the local JBOSS

    - by larryzhao
    My environment is Ubuntu 12.04, nginx 1.2.0, and Torquebox 2.0.3, which is actually JBoss AS 7. I have two apps deployed on Torquebox; it listens on 8080 and the apps have different hostnames, app1.mydomain.com and app2.mydomain.com. I added:

        127.0.0.1 app1.mydomain.com
        127.0.0.1 app2.mydomain.com

    to /etc/hosts, and then curl app1.mydomain.com:8080 and curl app2.mydomain.com:8080 both return correctly.

    Then I go to my nginx. I would like nginx to pass visits to www.app1.com through to app1.mydomain.com:8080, so I have the following configuration:

        # primary server - proxypass to torquebox
        server {
            listen 80;
            server_name www.app1.com;
            access_log off;
            error_log off;

            # proxy to Torquebox
            location / {
                proxy_pass http://app1.mydomain:8080/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

    But it doesn't work: curl www.app1.com returns nothing, and if I visit www.app1.com in Safari the HTTP return code is 404. I don't know why; I need help.
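
    Two hedged observations rather than a definitive fix: the proxy_pass target reads app1.mydomain:8080 while the /etc/hosts entry is app1.mydomain.com, so the upstream name may simply not resolve; and since Torquebox selects the app by hostname, the Host header nginx forwards ($host, i.e. www.app1.com) may not map to either deployed app - which could explain the 404. These two tests would narrow it down:

        # does the exact upstream name used in proxy_pass resolve and respond?
        curl -v http://app1.mydomain:8080/

        # does the backend answer when given the hostname it expects?
        curl -v -H 'Host: app1.mydomain.com' http://127.0.0.1:8080/

    If the second test works, setting proxy_set_header Host to app1.mydomain.com instead of $host may be what's needed.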

    Read the article

  • Last (I think and hope) problems configuring SSL certificate with Apache and VirtualHosts

    - by user65567
    Finally I set up apache2 to use a single certificate for all subdomains:

        [...]

        # Go ahead and accept connections for these vhosts
        # from non-SNI clients
        SSLStrictSNIVHostCheck off

        # Apache setup which will listen for and accept SSL connections on port 443.
        Listen 443

        # Listen for virtual host requests on all IP addresses
        NameVirtualHost *:443

        # Because this virtual host is defined first, it will
        # be used as the default if the hostname is not received
        # in the SSL handshake, e.g. if the browser doesn't support
        # SNI.
        <VirtualHost *:443>
          ServerName domain.localhost
          DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
          <Directory "/Users/<my_user_name>/Sites/domain/public">
            Order allow,deny
            Allow from all
          </Directory>
          # SSL Configuration
          SSLEngine on
          ...
        </VirtualHost>

        <VirtualHost *:443>
          ServerName subdomain1.domain.localhost
          DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
          <Directory "/Users/<my_user_name>/Sites/subdomain1/public">
            Order allow,deny
            Allow from all
          </Directory>
          # SSL Configuration
          SSLEngine on
          ...
        </VirtualHost>

        <VirtualHost *:443>
          ServerName subdomain2.domain.localhost
          DocumentRoot "/Users/<my_user_name>/Sites/subdomain2/public"
          <Directory "/Users/<my_user_name>/Sites/subdomain2/public">
            Order allow,deny
            Allow from all
          </Directory>
          # SSL Configuration
          SSLEngine on
          ...
        </VirtualHost>

    So, for example, I can correctly access:

        https://subdomain1.domain.localhost
        https://subdomain2.domain.localhost
        ...

    Now, however, I have problems accessing:

        http://subdomain1.domain.localhost
        http://subdomain2.domain.localhost
        ...

    Since I use Mac OS, on accessing the "http:" versions I get a default "Your website." page instead of an error. Why does this happen?
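
    A plausible explanation (assuming no port-80 vhosts are defined beyond the OS default): the configuration above only declares *:443 virtual hosts, so plain-HTTP requests fall through to Apache's default document root - on Mac OS, that stock "Your website." page. A hedged sketch of a matching port-80 vhost, serving the same content or redirecting to HTTPS:

        NameVirtualHost *:80

        <VirtualHost *:80>
          ServerName subdomain1.domain.localhost
          # either serve the same content ...
          DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
          # ... or redirect everything to the SSL site instead:
          # Redirect permanent / https://subdomain1.domain.localhost/
        </VirtualHost>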

    Read the article

  • ssh initial prompt hangs for 10 minutes but console login and initial prompt is very responsive - why?

    - by rfreytag
    I have been running an ESXi 4.0 server for months with a couple of Windows Server 2003 VMs and several Ubuntu Server 10.04 VMs. The performance has been impressive on 6GB i7 Asus P6T hardware.

    Suddenly, a week ago, SSH logins to the Ubuntu VMs started taking 10 minutes when connecting over the LAN (over a WAN the connection is broken long before that). When logging in to these VMs the password prompt arrives immediately, and failed passwords are rejected immediately. But the moment I log in, the machine hangs for many minutes before the shell prompt appears. Sometimes the connection hangs before the shell prompt appears, and sometimes I can type in a command but the moment I hit return the machine hangs. Ten full minutes later control returns and the VM is responsive.

    NOTE: there are several Ubuntu VMs on the same host machine that are identical in all ways that I can tell. However, only one of the VMs displays this behavior. That is why I mention the ESXi host only in passing - I don't think it has anything to do with the problem. This behavior is never seen when I connect to the troubled VM's console (through vSphere Client); from the console the Ubuntu VMs all respond beautifully.

    I have seen http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=1003496&sliceId=1&docTypeID=DT_KB_1_1&dialogID=229586372&stateId=1%200%20229588522 but since that relates to delays in seeing the password prompt, it does not appear to be the solution here.

    Any other suggestions are very welcome - thank you.
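
    Since the hang happens after authentication succeeds, shell start-up is a reasonable suspect (a guess, not a diagnosis): something in the login path - a profile script, a quota check, or a name-resolution timeout - may be blocking. Two hedged tests that narrow it down:

        # run a command without starting the login shell's profile scripts;
        # fast here but slow on normal login points at the profile scripts
        ssh user@vm /bin/true

        # log in with a bare shell, skipping rc files
        ssh -t user@vm bash --noprofile --norc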

    Read the article

  • Datacentre Rack naming convention with flexibility for reassignment of server roles

    - by g18c
    We are just shifting across to a new rack, and until now we have used the names of cartoon characters. This is not going to work anymore, and we need a better naming convention.

    Physically, I would like to name the servers by location, and then have an alias for their actual function/customer, i.e.:

    Physical name: LON1S1R1SVR1, meaning London, suite 1, rack 1, server 1.

    Customer alias: since the servers can be reassigned from time to time, for the above physical server name I would have an alias as a column in a spreadsheet, set to the customer's hostname, e.g. www.customerserver1.com.

    Patching: for patching, I am looking at labelling the physical connections, i.e.:

        LON1S1R1SVR1-PWR1
        LON1S1R1SVR1-PWR2
        LON1S1R1SVR1-ETH0
        LON1S1R1SVR1-KVM

    Ultimately, if I am labelling cables, I really want to avoid putting LON1S1R1SQLSVR on any patch cord, in case the server gets formatted and changed from a SQL server to a WWW server, which would mean relabelling all the patch cords as well. In addition, once virtual machines are thrown in, I got confused very quickly.

    I appreciate that it may be confusing to have both a physical hostname and a customer alias. Please let me know what you run with, and any other standards or best practices that I can follow.

    Read the article

  • Isolating Apache virtualhosts from the rest of the system

    - by JesperB
    I am setting up a web server that will host a number of different web sites as Apache VirtualHosts; each of these will have the possibility to run scripts (primarily PHP, possibly others).

    My question is how to isolate each of these VirtualHosts from each other and from the rest of the system. I don't want e.g. website X to be able to read the configuration of website Y or any of the server's "private" files.

    At the moment I have set up the VirtualHosts with FastCGI, PHP and suEXEC as described here (http://x10hosting.com/forums/vps-tutorials/148894-debian-apache-2-2-fastcgi-php-5-suexec-easy-way.html), but suEXEC only prevents users from editing/executing files other than their own - users can still read sensitive information such as config files.

    I have thought about removing the UNIX global read permission for all files on the server, as this would fix the above problem, but I'm not sure whether I can safely do this without disrupting server function. I also looked into using chroot, but it seems that this can only be done on a per-server basis, not per virtual host.

    I'm looking for any suggestions that will isolate my VirtualHosts from the rest of the system. P.S. I'm running Ubuntu 12.04 server.
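
    A narrower alternative to removing world-read everywhere (a sketch of one common pattern; the user and path names are illustrative): give each site its own UNIX user and group, make its docroot accessible only to that user, and run the site's PHP processes as that user via suEXEC/FastCGI or per-pool php-fpm, so one site's processes simply cannot traverse into another's tree:

        # per site: dedicated owner, no access for "other"
        chown -R siteX:siteX /var/www/siteX
        chmod 750 /var/www/siteX
        find /var/www/siteX -type f -exec chmod 640 {} \;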

    Read the article

  • VMware Workstation "Power off" and "Suspend" buttons are disabled. What is going on?

    - by Ed Norris
    I've been using an XP image for a few months now with no problems. Recently the Power Off and Suspend buttons became disabled, and I'm not sure what happened to cause that. In addition, the menu items for those functions are greyed out (like VM | Power | Power Off).

    I'm using VMware Workstation 6.5.3 on a Windows 7 64-bit host. The image is Windows XP 32-bit. There is plenty of free space and memory on the host, and the CPU is not pegged. I am able to power off the image through its Start menu, but that's a workaround, not a fix. Any suggestions? TIA

    EDIT: Well, after manually shutting down the image and then closing and reopening the image file (which I've done before) in preparation for reinstalling the tools as suggested below, it looks fine. The Power Off and Suspend buttons are enabled and work. So what do I do with this question now? "Close and restart a few times and it may work" doesn't seem like a useful answer...

    Read the article

  • Windows Server 2008 R2 Virtual Network Setup

    - by jpearl01
    Some background: I'm very much new to networking in general, and virtualization in particular. I'm trying to set up a series of VMs as we are transitioning to a thin client setup. I have been supplied a limited number of static IP addresses. The server is located in an offsite building which houses the network we use to connect to the internet, share folders, etc.

    The setup I've been trying for is this: the host OS (Windows Server 2008 R2) is bound to one NIC using one of the static IPs (say Nic1 and IP 10.255.6.61). I've set up another external virtual network attached to another physical NIC, and a virtual private network attached to no NIC.

    There is one VM running the same OS as the host. This VM is connected both to the external virtual network (using another static IP, say Nic2 and IP 10.255.6.62) and to the virtual private network (I gave it a static IP of 192.168.88.1, subnet mask 255.255.255.0). The virtual private network is connected to all the other VMs.

    I'd like to share the internet connection with all the other VMs on the private virtual network, so I installed the RRAS role on the server connected to Nic2 and selected the option to share the internet over the VPN. I've run through the RRAS wizard a few times, trying different configurations, but none of them seem to let the other VMs connect to the net. The VMs connect to the virtual private network fine - they are assigned an IP address and everything - but there is no internet, and no rest of the network either.

    The other problem is that in general I connect to the VMs with RDP. Will that be possible with a setup like this? I.e., will the VMs show up as computers on the network? If not, what are my other options? Thanks! ~josh

    Read the article

  • SNMP closed state in CentOS

    - by anksoWX
    I'm having a problem here. I've added these rules to my iptables:

        -A INPUT -p tcp -m state --state NEW -m tcp --dport 161 -j ACCEPT
        -A INPUT -p udp -m state --state NEW -m udp --dport 161 -j ACCEPT

    but when I scan with nmap or any other tool it says this:

        Not shown: 998 filtered ports
        PORT    STATE  SERVICE
        22/tcp  open   ssh
        161/tcp closed snmp

    Also, when I am doing netstat -apn | grep snmpd:

        tcp   0   0 127.0.0.1:199   0.0.0.0:*   LISTEN   3669/snmpd
        udp   0   0 0.0.0.0:161     0.0.0.0:*            3669/snmpd
        unix  2   [ ]   DGRAM   226186   3669/snmpd

    And service iptables status:

        Table: filter
        Chain INPUT (policy ACCEPT)
        num  target  prot opt source     destination
        1    ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0   state RELATED,ESTABLISHED
        2    ACCEPT  icmp --  0.0.0.0/0  0.0.0.0/0
        3    ACCEPT  all  --  0.0.0.0/0  0.0.0.0/0
        4    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0   state NEW tcp dpt:161
        5    ACCEPT  udp  --  0.0.0.0/0  0.0.0.0/0   state NEW udp dpt:161
        6    ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0   state NEW tcp dpt:22
        7    REJECT  all  --  0.0.0.0/0  0.0.0.0/0   reject-with icmp-host-prohibited

        Chain FORWARD (policy ACCEPT)
        num  target  prot opt source     destination
        1    REJECT  all  --  0.0.0.0/0  0.0.0.0/0   reject-with icmp-host-prohibited

        Chain OUTPUT (policy ACCEPT)
        num  target  prot opt source     destination

    Any idea what's going on? There is no UDP port shown in a closed/open state. What do I have to do?
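
    This looks consistent with normal behavior rather than a firewall problem (an interpretation, not a certainty): the netstat output shows snmpd listening on UDP 161, but on TCP only at 127.0.0.1:199 (the SMUX port), so a TCP probe of port 161 is rightly reported closed. A default nmap scan only probes TCP ports; scanning UDP explicitly should show the SNMP service:

        # UDP scan of the SNMP port (needs root);
        # -sV helps distinguish open from open|filtered
        nmap -sU -sV -p 161 <host>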

    Read the article

  • Setting up ssh config file with id_rsa through tunnel

    - by Rubens
    I've been struggling to set up a valid configuration to open a connection to a second machine, passing through another one, and using an id_rsa key (which asks me for a passphrase) to connect to the third machine. I've asked this question in another forum, but I've received no answer that could be considered very helpful.

    The problem, better described, goes as follows. Local machine: user1@localhost; intermediary machine: user1@inter; remote target: user2@final. I'm able to make the entire connection using a pseudo-tty:

        ssh -t inter ssh user2@final

    (this asks me for the passphrase of the id_rsa file I have on machine "inter"). However, to speed things up, I'd like to set up my .ssh/config file so that I can connect to machine "final" using simply:

        ssh final

    What I've got so far - which does not work - is, in my .ssh/config file:

        Host inter
            User user1
            HostName inter.com
            IdentityFile ~/.ssh/id_rsa

        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2
            ProxyCommand ssh inter nc %h %p

    The id_rsa file is used to connect to the middle machine (this requires no password typing), and the id_rsa_2 file is used to connect to machine "final" (this one requests a passphrase). I've tried mixing in some LocalForward and/or RemoteForward fields, and putting the id_rsa files on both the first and second machines, but no configuration has succeeded.

    Hope somebody can help me here! Regards!

    P.S.: the thread where I've tried to get some help: http://www.linuxquestions.org/questions/linux-general-1/proxycommand-on-ssh-config-file-4175433750/
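
    One detail that often trips this setup up (a hedged note, not a confirmed diagnosis of this case): with ProxyCommand, the IdentityFile for "final" must live on the local machine, not on "inter" - the second SSH session runs locally and is only tunnelled through the intermediary. And assuming OpenSSH 5.4 or later, netcat isn't needed either:

        Host final
            User user2
            HostName final.com
            IdentityFile ~/.ssh/id_rsa_2       # this path is on the LOCAL machine
            ProxyCommand ssh -W %h:%p inter    # -W replaces "nc %h %p"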

    Read the article
