Search Results

Search found 5123 results on 205 pages for 'gateway timeout'.

Page 157/205 | < Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >

  • Cannot connect to windows server by name over vpn connection

    - by ErocM
    I have a rented dedicated Windows server on a public IP that is acting as a SQL Server and VPN server. I need to connect to this server via computer name to get replication in place. I cannot use an IP address due to this issue: So, due to this, we are going the VPN route.

    That is my primary issue: after I am connected to this server's VPN, I can connect to SQL Server using the IP address but I cannot connect by the computer's name, as you can see below... Right now there is no hardware firewall on it, since I had it removed to test this issue. I am running Windows 2008 Enterprise Server as the VPN server.

    I am not sure if the route print will help any from the workstation trying to connect, but here is the info:

        IPv4 Route Table
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                 10.0.0.0        255.0.0.0         10.0.0.1       10.0.0.2       21
                 10.0.0.2  255.255.255.255          On-link       10.0.0.2      276

    Any other info needed? Thanks for the help!

    ========= CLARIFICATION ON A FEW THINGS #1 =========
    This is the server's info: This is the workstation that is trying to connect: I connect to the server via "Control Panel\Network and Internet\Network and Sharing Center\Connect or Disconnect". You can see here that I am connected:

    ========= CLARIFICATION ON A FEW THINGS #2 =========
    I've tried to connect directly to the SQL Server as I did above but with the computer's name and I couldn't get to it. Here I am trying to net view it from the workstation and it couldn't find it:
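
    Since WINS/NetBIOS name resolution frequently is not carried across a VPN tunnel, one common workaround (an assumption on my part, not something from this post) is a static hosts entry on the workstation mapping the server's VPN-side address to its computer name; 10.0.0.1 and SQLSERVER01 below are placeholders:

        :: run from an elevated command prompt on the workstation
        echo 10.0.0.1    SQLSERVER01>>%SystemRoot%\System32\drivers\etc\hosts
        ping SQLSERVER01

    If the ping answers by name, SQL Server connections by computer name should resolve the same way.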

    Read the article

  • Which MAC address is the right one?

    - by Paul Dinh
    Result of 'getmac':

        C:\>getmac

        Physical Address    Transport Name
        =================== ==========================================================
        72-03-C6-48-59-34   \Device\Tcpip_{8AEB3263-18C4-449E-A80F-BC2541DDC2A9}
        00-21-9B-D5-6F-EE   \Device\Tcpip_{C2F9CE19-D68F-4105-9766-45CBE6D82331}
        00-22-68-D2-9B-F7   \Device\Tcpip_{A2701130-9221-43FE-8F14-7B1114F84DC3}

    Result of 'ipconfig /all':

        C:\>ipconfig /all

        Windows IP Configuration
                Host Name . . . . . . . . . . . . : xps-m1530
                Primary Dns Suffix  . . . . . . . :
                Node Type . . . . . . . . . . . . : Mixed
                IP Routing Enabled. . . . . . . . : No
                WINS Proxy Enabled. . . . . . . . : No

        Ethernet adapter Wireless Network Connection:
                Connection-specific DNS Suffix  . :
                Description . . . . . . . . . . . : Dell Wireless 1395 WLAN Mini-Card
                Physical Address. . . . . . . . . : 00-22-68-D2-9B-F7
                Dhcp Enabled. . . . . . . . . . . : Yes
                Autoconfiguration Enabled . . . . : Yes
                Autoconfiguration IP Address. . . : 169.254.246.4
                Subnet Mask . . . . . . . . . . . : 255.255.0.0
                Default Gateway . . . . . . . . . :

        Ethernet adapter Local Area Connection:
                Connection-specific DNS Suffix  . :
                Description . . . . . . . . . . . : Marvell Yukon 88E8040 PCI-E Fast Ethernet Controller
                Physical Address. . . . . . . . . : 00-21-9B-D5-6F-EE
                Dhcp Enabled. . . . . . . . . . . : Yes
                Autoconfiguration Enabled . . . . : Yes
                IP Address. . . . . . . . . . . . : 192.168.1.112
                Subnet Mask . . . . . . . . . . . : 255.255.255.0
                Default Gateway . . . . . . . . . : 192.168.1.1
                DHCP Server . . . . . . . . . . . : 192.168.1.1
                DNS Servers . . . . . . . . . . . : 8.8.8.8
                                                    8.8.4.4
                Lease Obtained. . . . . . . . . . : 01 November 2012 9:00:36 AM
                Lease Expires . . . . . . . . . . : 04 November 2012 9:00:36 AM

    There was a MAC address on a sticker on the back of my laptop, but the sticker is no longer there, so I use the 'getmac' command to get the MAC addresses. But which address shown by 'getmac' above is the one that matched the sticker on the back of my laptop? Or am I mistaken about something? 00-21-... is the Ethernet adapter, 00-22-... is the wireless adapter, and 72-03-... is what?
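
    A quick way to tie each address to an adapter name without decoding the transport GUIDs is getmac's verbose list output (a hedged aside, not something from the post):

        :: shows Connection Name and Network Adapter next to each physical address
        getmac /v /fo list

    As for 72-03-..., its first octet has the locally administered bit set, which usually indicates a virtual adapter created by VPN or virtualization software rather than a physical NIC - an inference, not something confirmed here.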

    Read the article

  • IIS login / logout interferes with media player

    - by Mark Sowul
    So I'm listening to music with Windows Media Player, going insane because the music randomly stops playback every so often. I finally noticed that it correlates with instances of csrss, winlogon, and logonui repeatedly starting and quitting. I tracked that down to IIS repeatedly logging on and off due to WebDAV requests going through my user account (my laptop syncing up with OneNote over WebDAV). I see tons of spam in my security event log for the logins.

    I am surprised that IIS needs to log in this much. This has only been happening for a couple of months. I'm not sure where the actual problem lies - with IIS, or Media Player, or what - so I figured I'd try to find out whether the IIS login behavior is actually abnormal. Is it normal for IIS to log in this much? And is it normal for that to keep spawning winlogon, csrss, and logonui every second or so?

    I see a constant stream of logon events in the event log every few seconds, presumably while OneNote is syncing:

        Logon (id = 1, source = laptop)
        Special Logon (id = 1, sets up privileges)
        Special Logon (id = 1, seems to set up the same privileges)
        Logon (id = 2, same laptop)
        Logoff (id = 2)
        Logoff (id = 1)

    The DefaultAppPool (the only one apparently in use) has its idle timeout set to 20 minutes, and Load User Profile set to false. Not sure what other settings (if any) might be relevant.

    Read the article

  • Why do lighttpd and fastcgi keep sending me the *.scgi file instead of the website content?

    - by e-satis
    I have the following config:

        server.modules = (
            "mod_compress",
            "mod_access",
            "mod_alias",
            "mod_rewrite",
            "mod_redirect",
            "mod_secdownload",
            "mod_h264_streaming",
            "mod_flv_streaming",
            "mod_accesslog",
            "mod_auth",
            "mod_status",
            "mod_expire",
            "mod_fastcgi"
        )

        [...]

        fastcgi.server = ( ".php" =>
            ((
                "bin-path" => "/usr/bin/php-cgi",
                "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID,
                "max-procs" => 1,
                "kill-signal" => 9,
                "idle-timeout" => 10,
                "bin-environment" => (
                    "PHP_FCGI_CHILDREN" => "200",
                    "PHP_FCGI_MAX_REQUESTS" => "1000"
                ),
                "/pyapps/essai/blondes.fcgi" => (
                    "main" => (
                        "socket" => "/var/tmp/lighttpd/django-fastcgi.socket",
                    ),
                ),
                "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
                "broken-scriptfilename" => "enable"
            ))
        )

        [...]

        $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" {
            server.document-root  = "/home/cam/web/"
            accesslog.filename    = "/home/cam/log/access.log"
            server.errorlog       = "/home/cam/log/error.log"
            server.follow-symlink = "enable"
            # files to check for if .../ is requested
            server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb")
            url.rewrite = (
                "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1"
            )
        }

    I have the following dir tree:

        /home/tv/web/
        `-- pyapps
            `-- essai
                `-- __init__.py
                `-- blondes.fcgi
                `-- blondes.pid
                `-- django-fcgi.py
                `-- manage.py
                `-- manage.pyo
                `-- plop
                `-- settings.py
                `-- urls.py

    No error when restarting lighttpd. Then I run:

        ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid

    No errors either. I then go to http://cam.com/blondes/. It offers me an empty file to download. I checked permissions, but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I get no output in the error logs, nor from the manage.py runfcgi command. I probably missed something obvious, but what?
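
    One thing that stands out (offered as an assumption, not a verified fix): the "/pyapps/essai/blondes.fcgi" mapping is nested inside the ".php" handler's option block, so lighttpd never registers it as its own FastCGI handler. A sketch of the usual layout, with the two handlers declared as separate fastcgi.server entries:

        fastcgi.server = (
            ".php" => ((
                "bin-path" => "/usr/bin/php-cgi",
                "socket"   => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID,
                # ... the rest of the PHP options as before ...
            )),
            "/pyapps/essai/blondes.fcgi" => ((
                "socket"      => "/var/tmp/lighttpd/django-fastcgi.socket",
                "check-local" => "disable",   # the rewritten path does not exist under document-root
            )),
        )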

    Read the article

  • iptables advanced routing

    - by Shamanu4
    I have a CentOS server acting as a NAT in my network. This server has one external interface (later ext1) and three internal ones (later int1, int2 and int3). Egress traffic comes from users via int1 and, after MASQUERADE, goes out via ext1. Ingress traffic comes in on ext1, is MASQUERADEd, and goes out via int2 or int3 according to static routes.

                          | ext1
                          | x.x.x.x/24
        +-----------------|------------------------+
        |                                          |
        |            Centos server (NAT)           |
        |                                          |
        +-----|----------|----------------|--------+
              | int1     | int2           | int3
              | 10.30.1.10/24             |
              |          | 10.30.2.10/24  | 10.30.3.10/24
              ^          v                v
              | 10.30.1.1/24              |
              |          | 10.30.2.1/24   | 10.30.3.1/24
        +-----|----------|----------------|--------+
        |     |          |                |        |
        |     |          v                v        |
        |     ^       -Traffic policer-            |
        |     |______________|                     |
        |                    |                     |
        +--------------------|---------------------+
                             | 192.168.0.1/16
                             |
                    Clients 192.168.0.0/16

    The problem: egress traffic seems to be dropped after the PREROUTING table. Packet counters are not changing on the MASQUERADE rule in POSTROUTING. If I change the routes to the clients so that traffic goes back via int1, everything works perfectly.

    The current iptables configuration is very simple:

        # cat /etc/sysconfig/iptables
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -I INPUT 1 -i int1 -j ACCEPT
        -A FORWARD -j ACCEPT
        COMMIT
        *nat
        -A POSTROUTING -o ext1 -j MASQUERADE
        COMMIT

    Can anyone point out what I'm missing? Thanks.

    UPDATE:

        192.168.100.60 via 10.30.2.1 dev int2  proto zebra   # routes to clients ...
        192.168.100.61 via 10.30.3.1 dev int3  proto zebra   # ... I have a lot of them
        x.x.x.0/24 dev ext1  proto kernel  scope link  src x.x.x.x
        10.30.1.0/24 dev int1  proto kernel  scope link  src 10.30.1.10
        10.30.2.0/24 dev int2  proto kernel  scope link  src 10.30.2.10
        10.30.3.0/24 dev int3  proto kernel  scope link  src 10.30.3.10
        169.254.0.0/16 dev ext1  scope link  metric 1003
        169.254.0.0/16 dev int1  scope link  metric 1004
        169.254.0.0/16 dev int2  scope link  metric 1005
        169.254.0.0/16 dev int3  scope link  metric 1006
        blackhole 192.168.0.0/16
        default via x.x.x.y dev ext1

    Clients have 192.168.0.1 as their gateway, which redirects them to 10.30.1.1.
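
    A frequent cause of exactly this symptom - packets that arrive on one interface while the return route points out a different one, silently dropped between PREROUTING and forwarding - is the kernel's reverse-path filter. A hedged check (an assumption, not a confirmed diagnosis; note that the blackhole 192.168.0.0/16 route also makes the reverse path via int1 invalid):

        # current settings
        sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.int1.rp_filter

        # relax reverse-path filtering on the internal interfaces for a test
        sysctl -w net.ipv4.conf.all.rp_filter=0
        sysctl -w net.ipv4.conf.int1.rp_filter=0
        sysctl -w net.ipv4.conf.int2.rp_filter=0
        sysctl -w net.ipv4.conf.int3.rp_filter=0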

    Read the article

  • What am I doing wrong in my config for MySQL?

    - by Knight Hawk3
    When I load my my.cnf with the config below, MySQL fails to start and prints no errors. I am running Arch Linux (updated) with the latest MySQL (5.5) and the latest nginx (well, the latest in the repository; not sure how to check, I only installed it today). I will give you any info you ask for. Thanks for helping!

        # The following options will be passed to all MySQL clients
        [client]
        #password       = your_password
        port            = 3306
        socket          = /var/run/mysqld/mysqld.sock

        # Here follows entries for some specific programs

        # The MySQL server
        [mysqld]
        port            = 3306
        socket          = /var/run/mysqld/mysqld.sock
        skip-locking
        key_buffer = 16K
        max_allowed_packet = 1M
        table_cache = 4
        sort_buffer_size = 64K
        read_buffer_size = 256K
        read_rnd_buffer_size = 256K
        net_buffer_length = 2K
        thread_stack = 64K

        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (using the "enable-named-pipe" option) will render mysqld useless!
        #
        #skip-networking
        server-id       = 1

        # Uncomment the following if you want to log updates
        #log-bin=mysql-bin

        # Uncomment the following if you are NOT using BDB tables
        skip-bdb

        # Uncomment the following if you are using InnoDB tables
        #innodb_data_home_dir = /var/lib/mysql/
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /var/lib/mysql/
        #innodb_log_arch_dir = /var/lib/mysql/
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        #innodb_buffer_pool_size = 16M
        #innodb_additional_mem_pool_size = 2M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 5M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 1
        #innodb_lock_wait_timeout = 50
        skip-innodb

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [isamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [myisamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [mysqlhotcopy]
        interactive-timeout

    So what is my silly error?
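
    One likely culprit, offered as an assumption: MySQL 5.5 no longer accepts several 5.0-era options - skip-bdb went away with the BDB engine, and skip-locking was renamed skip-external-locking - and an unknown option makes mysqld abort during startup, often before anything useful reaches the log. Running the server in the foreground usually prints the offending option name:

        # run mysqld in the foreground so startup errors reach the terminal
        sudo -u mysql /usr/bin/mysqld

        # the error log is worth checking too (path may differ on Arch)
        tail -n 50 /var/lib/mysql/*.err

    If an "unknown variable" or "unknown option" line names skip-bdb or skip-locking, commenting those out (or switching to skip-external-locking) should let it start.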

    Read the article

  • Sending email to google apps mailbox via exim4

    - by Andrey
    I have a hosting server with several users. One of the customers decided to move his email account to Google Apps and added the corresponding MX records, so he can receive email now. But when it comes to sending email from my server to those email addresses, the messages don't make it. I guess it's because exim still thinks these domains are local. This is what I see in the logs (example.com is my domain, example.net is the customer's domain):

        2010-06-02 14:55:37 1OJmXp-0006yh-UG <= [email protected] U=root P=local S=342 T="lsdjf" from <[email protected]> for [email protected]
        2010-06-02 14:55:38 1OJmXp-0006yh-UG ** [email protected] F=<[email protected]> R=virtual_aliases:
        2010-06-02 14:55:38 1OJmXq-0006yl-2A <= <> R=1OJmXp-0006yh-UG U=mail P=local S=1113 T="Mail delivery failed: returning message to sender" from <> for [email protected]
        2010-06-02 14:55:38 1OJmXp-0006yh-UG Completed
        2010-06-02 14:55:38 1OJmXq-0006yl-2A User 0 set for local_delivery transport is on the never_users list
        2010-06-02 14:55:38 1OJmXq-0006yl-2A == [email protected] R=localuser T=local_delivery defer (-29): User 0 set for local_delivery transport is on the never_users list
        2010-06-02 14:55:38 1OJmXq-0006yl-2A ** [email protected]: retry timeout exceeded
        2010-06-02 14:55:38 1OJmXq-0006yl-2A [email protected]: error ignored
        2010-06-02 14:55:38 1OJmXq-0006yl-2A Completed

    What should I do to fix that?
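
    The guess in the question fits the log: the message is handed to local routers (virtual_aliases, localuser/local_delivery) instead of being routed out via the new MX records. The usual fix is to remove example.net from every list of locally handled domains so exim routes it like any other remote domain. A hedged sketch for a Debian-style exim4 split config (paths and list names are assumptions; a control-panel-generated exim.conf keeps its local/virtual domain lists elsewhere):

        # 1. edit /etc/exim4/update-exim4.conf.conf and remove example.net from
        #    dc_other_hostnames (and from any virtual-domains/local-domains file)
        # 2. regenerate the config and restart:
        update-exim4.conf
        invoke-rc.d exim4 restart
        # 3. confirm the address now routes out to Google's MX hosts:
        exim4 -bt [email protected]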

    Read the article

  • Ubuntu Pound Reverse Proxy Load Balancing based on active server load?

    - by Andrew
    I have Pound installed on a load balancer. It seems to work okay, except that it randomly assigns the backend server to forward the request to. I've put one backend machine under so much load that it went into using swap, and I can't even ssh into it to test this scenario. I would like the load balancer to realize that the machine is overloaded and send requests to a different backend machine, but it doesn't. I've read the man page and it seems like the directive "DynScale 1" is what would monitor this, but it still redirects to the overloaded server. I've also put "HAport 22" on the backends, figuring that since I can't ssh in, neither could the load balancer, and it would consider the backend server dead until it sheds the load and responds - but that didn't help either. If anyone could help with this, I'd appreciate it. My current config is below.

        ######################################################################
        ## global options:

        User        "www-data"
        Group       "www-data"
        #RootJail   "/chroot/pound"

        ## Logging: (goes to syslog by default)
        ##  0   no logging
        ##  1   normal
        ##  2   extended
        ##  3   Apache-style (common log format)
        LogLevel    3

        ## check backend every X secs:
        Alive       5

        DynScale    1
        Client      1200
        TimeOut     1500

        # poundctl control socket
        Control "/var/run/pound/poundctl.socket"

        ######################################################################
        ## listen, redirect and ... to:

        ## redirect all requests on port 80 to SSL
        ListenHTTP
            Address 192.168.1.XX
            Port    80
            Service
                Redirect "https://xxx.com/"
            End
        End

        ListenHTTPS
            Address 192.168.1.XX
            Port    443
            Cert    "/files/www.xxx.com.pem"
            Service
                BackEnd
                    Address 192.168.1.1
                    Port    80
                    HAport  22
                End
                BackEnd
                    Address 192.168.1.2
                    Port    80
                    HAport  22
                End
            End
        End
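
    For what it's worth (a hedged observation, not a tested fix): DynScale only reweights backends by their average response time, and HAport 22 only marks a backend dead when sshd stops accepting TCP connections - a box that is swapping but still completing the TCP handshake stays in rotation. One sketch is to bias the scheduler with per-backend Priority values (1-9, default 5) so the stronger machine takes most of the traffic:

        BackEnd
            Address  192.168.1.1
            Port     80
            Priority 9      # prefer this machine
        End
        BackEnd
            Address  192.168.1.2
            Port     80
            Priority 5
        End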

    Read the article

  • Windows 7 boot problem on a Lenovo Thinkpad Z61m 9450HAG

    - by Matt Taylor
    I recently did a full upgrade of Windows 7 on my Thinkpad. Everything worked fine up until the second reboot (the first reboot, after some updates installed, worked OK). At second reboot time the system would just black-screen before the Windows logo appears. Disk/wireless/power/battery lights are all lit and the disk light is active (flickering). However, if I remove my battery and boot with just mains power, it boots fine and quickly, and everything is OK. Any help on why this won't boot with the battery plugged in is greatly appreciated - I need to take this battery out on the road/trains, etc.

    A little more detail on this story: the battery I had inserted when doing the (failed) boot was a long-life battery. I have not tried inserting this battery while Windows is logged in. I have another (normal-life) battery that I have charged up within Windows. It has just got to 100% and I am about to reboot with it in. I am using the Lenovo power manager to diagnose the battery - all seems OK. I will report back shortly as to the outcome.

    OK, so I chose the reboot option from within Windows. The machine seemed to shut down okay, but then stalled. It didn't turn off completely and didn't reboot, but just sat, with the fan humming, somewhere in between! I had to hold the power button in for a few seconds until the fan stopped and then hit the power button again to boot the machine from fresh. One good thing: with this battery (the normal one) it booted into Windows 7 the first time with a battery!

    So, now I have rebooting issues. I have 3 errors in the event log:

        A timeout was reached (30000 milliseconds) while waiting for the lxdxCATSCustConnectService service to connect.
        The lxdxCATSCustConnectService service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion.
        The following boot-start or system-start driver(s) failed to load: cdrom

    Any thoughts?

    Read the article

  • PHP script not automatically updating when moved to another server

    - by user32007
    A friend built a ranking system on his site and I am trying to host it on mine via WordPress and Go Daddy. It updates for him, but when I load it on my site it works for 6 hours, and then, as soon as the reload is supposed to occur, it errors out and I get a 500 timeout error. His page is at jeremynoeljohnson.com/yakezieclub. My page is currently at http://sweatingthebigstuff.com/yakezieclub, but when you add ?reload=1 it will give the error.

    Any idea why this might be happening? Any settings that I might need to change? Here is the top of the index.php file. I'm not sure which part of any of it is messing up - I literally uploaded the same code as him. Here's the reload part:

        $cachefile = "rankings.html";
        $daycachefile = "rankings_history.xml";
        $cachetime = (60 * 60) * 6; // every 6 hours, the cache refreshes
        $daycachetime = (60 * 60) * 24; // every 24 hours, the history will be written to
        // - or whenever the page is requested after 24 hours has passed
        $writenewdata = false;

        if (!empty($_GET['reload'])) {
            if ($_GET['reload'] == 1) {
                $cachetime = 1;
            }
        }

        if (!empty($_GET['reloadhistory'])) {
            if ($_GET['reloadhistory'] == 1) {
                $daycachetime = 1;
                $cachetime = 1;
            }
        }

        if (file_exists($daycachefile) && (time() - $daycachetime < filemtime($daycachefile))) {
            // Do nothing
        } else {
            $writenewdata = true;
            $cachetime = 1;
        }

        // Serve from the cache if it is younger than $cachetime
        if (file_exists($cachefile) && (time() - $cachetime < filemtime($cachefile))) {
            include($cachefile);
            echo "<!-- Cached ".date('jS F Y H:i', filemtime($cachefile))." -->";
            exit;
        }

        ob_start(); // start the output buffer
        ?>
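
    A quick, hedged way to see whether the regeneration itself is what blows past the new host's limits (URL as in the question; the thresholds mentioned are guesses):

        # time the cached page vs. a forced regeneration
        curl -s -o /dev/null -w 'cached: %{http_code} in %{time_total}s\n' 'http://sweatingthebigstuff.com/yakezieclub'
        curl -s -o /dev/null -w 'reload: %{http_code} in %{time_total}s\n' 'http://sweatingthebigstuff.com/yakezieclub?reload=1'

    If the reload request runs for roughly 30-60 seconds before the 500 appears, the host is probably killing the long-running rebuild (PHP max_execution_time or a front-end proxy timeout); it is also worth confirming that rankings.html and rankings_history.xml are writable by the web server user on the new host.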

    Read the article

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page?

    Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

        mkdir candy-canes-on-the-flannel-board-in-preschool
        cd candy-canes-on-the-flannel-board-in-preschool
        wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
        wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
        rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
        cd ../

    Now I realize the script is not as savvy as it could be, but it is doing what I need at the moment - except that, as you can see from the rm command, I would like to prevent wget from downloading those files in the first place if possible.

    I almost forgot to mention: there are two wget commands because the first one downloads the page as index.html, which for some reason does not open in my browser. However, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page - the same file really, with an alternate name - opens up fine. If I could fix that, it would also help to streamline the process.
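
    wget can be told up front to skip named files with -R/--reject (the complement of the -A list already in use), which should make the rm cleanup unnecessary - a hedged sketch using a few of the filenames from the script:

        wget -p -nd -k -A jpg,html \
             -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg,Math.jpg" \
             http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/

    One caveat: rejected HTML files may still be fetched and then deleted (wget needs them to find links), and the interaction of -R with -p varies a little between versions, so it is worth a test run against one page first.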

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance.

    Basically what I need to do is get the most recent snapshot copy of the databases onto tape via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and write a batch file to do the following:

        1. Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter
        2. Perform a TSM-based incremental backup
        3. Dis-mount the LUN

    This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
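
    On the return-code half of the question: the TSM client CLI sets ERRORLEVEL when it finishes (by convention 0 = success, 4/8 = completed with skipped files or warnings, 12 = failure), and a batch script can capture that and hand it back to the scheduler. A hedged sketch - the drive letter, dsmc options and SnapDrive mount step are placeholders, not the script from the question:

        rem mount the __RECENT snapshot LUN here (e.g. via SnapDrive's sdcli; syntax omitted)

        dsmc incremental S:\ -subdir=yes
        set RC=%ERRORLEVEL%

        rem dismount the LUN here, then hand the backup result back to the TSM scheduler
        exit /b %RC%

    A command schedule run this way should then show up on the TSM server as failed when dsmc reports an error, rather than always "successful".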

    Read the article

  • Local SSL connections are causing redirect loop (after Ubuntu update)

    - by codeinthehole
    Following a recent Ubuntu update, my local websites are no longer serving their pages over SSL. For example, my .htaccess file attempts to ensure /sign-in is always served over HTTPS:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} /sign-in
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [L,QSA,R=301]

    However, when I make a request to /sign-in on the domain site2-local, I get the error "The page isn't redirecting properly" with the following in /var/log/apache2/error.log:

        [Tue Jun 08 12:20:57 2010] [info] [client 127.0.1.1] Connection to child 0 established (server site1-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Seeding PRNG with 656 bytes of entropy
        [Tue Jun 08 12:20:57 2010] [info] Initial (No.1) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.2) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.3) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.4) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.5) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.6) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.7) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.8) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.9) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:20:57 2010] [info] Subsequent (No.10) HTTPS request received for child 0 (server site2-local:443)
        [Tue Jun 08 12:21:12 2010] [info] [client 127.0.1.1] (70007)The timeout specified has expired: SSL input filter read failed.
        [Tue Jun 08 12:21:12 2010] [info] [client 127.0.1.1] Connection closed to child 0 with standard shutdown (server site2-local:443)

    There is a connection to site1-local (another site on my machine which shares the certificate), which I don't understand. Anyone know what is causing this issue?
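
    To see why the redirect keeps firing, mod_rewrite's trace log is usually the quickest check - these are Apache 2.2-era directives (an assumption about the version the update installed), placed in the HTTPS virtual host:

        # inside the site2-local:443 <VirtualHost>
        RewriteLog      /var/log/apache2/rewrite.log
        RewriteLogLevel 3

    If the trace shows %{HTTPS} still evaluating to "off" for requests that arrived on port 443, the request is probably being answered by a different SSL vhost than expected - the first log line above ("Connection to child 0 established (server site1-local:443)") hints that name-based vhost selection over SSL may be picking up site1-local's configuration.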

    Read the article

  • IPv6 Routing / Subnetting

    - by nappo
    Recently I installed Citrix XenServer 6.2 on a machine. My provider (Hetzner) gave me the IPv6 subnet 2a01:4f8:200:xxxx::/64. Following an article in the provider's wiki (1), I got it working and can assign IPs to my guests (CentOS). However, I can't assign a second IP to a single guest - it results in a timeout. I'm not very familiar with IPv6 routing / subnetting - any help or tips for further troubleshooting are welcome!

    My setup:

        XenServer 6.2
          IPv6: 2a01:4f8:200:xxxx::2/112

          ip -6 route:
            2a01:4f8:200:xxxx::/112 dev xenbr0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
            fe80::1 dev xenbr0  metric 1024  mtu 1500 advmss 1440 hoplimit 0
            default via fe80::1 dev xenbr0  metric 1024  mtu 1500 advmss 1440 hoplimit 0

        Guest 1
          IPv6: 2a01:4f8:200:xxxx::3/64
          IPv6: 2a01:4f8:200:xxxx::4/64

          ip -6 route:
            2a01:4f8:200:xxxx::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
            fe80::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
            default via fe80::1 dev eth0  metric 1  mtu 1500 advmss 1440 hoplimit 4294967295

        Guest 2
          IPv6: 2a01:4f8:200:xxxx::5/64

    Guest 1's first IPv6 address is working fine, and Guest 2's too. As suggested by the wiki article (1), I split my /64 network into a /112. Is it right to set the host to /112 and the guests to /64? Why is that?
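
    For completeness, this is how a second address is usually declared persistently on a CentOS guest (a sketch only - it assumes the address simply was not configured on eth0, which may not be the actual cause of the timeout):

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        IPV6INIT=yes
        IPV6ADDR=2a01:4f8:200:xxxx::3/64
        IPV6ADDR_SECONDARIES="2a01:4f8:200:xxxx::4/64"
        IPV6_DEFAULTGW=fe80::1%eth0

    If the address is configured and still times out from outside, the mismatch between the /112 on the host bridge and the /64 inside the guests is worth revisiting, since the host and the upstream router may disagree about which addresses are on-link.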

    Read the article

  • Is it possible to change an "Unidentified Network" into a "Home" or "Work" network on Windows 7

    - by Rhys
    I have a problem with Windows 7 RC (7100). I frequently use a crossover network cable on WinXP with static IP addresses to connect to various industrial devices (e.g. robots, pumps, valves or even other Windows PCs) that have Ethernet network ports. When I do this on Windows 7, the network connection is classed as an "Unidentified Network" in Network and Sharing Center and the public firewall profile is enforced by Windows. I do not want to change the public profile and would prefer to use the Home or Work profile instead. For other networks like Home and Work I'm able to click on them and change the classification, but this is not available for unidentified networks.

    My questions are these:

        1. Is there a way to manually override the "Unidentified Network" classification?
        2. What tests are performed on the network that fail, therefore classifying it as an "Unidentified Network"?

    By googling (hitting mainly Vista issues) it seems that you need to ensure that the default gateway is not 0.0.0.0. I've done this. I've also tried to remove IPv6, but this does not seem possible on Windows 7.

    Read the article

  • Website has become slower on a VPS, was much faster on a shared host. What's wrong?

    - by Arpit Tambi
    My shared host suspended my website stating system overload, so I moved my website to a VPS which has 4GB RAM. But for some reason the website has become very slow. This is the vmstat output - procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 1 0 0 3050500 0 0 0 0 0 1 0 0 0 0 100 0 0 Here's the Apache Benchmark output for a STATIC html page I ran on the server itself - Benchmarking www.ask-oracle.com (be patient)...apr_poll: The timeout specified has expired (70007) Total of 20 requests completed Update: Server Config: List item Centos 5.6 4 cores cpu 4 GB RAM LAMP stack with APC Wordpress Only one website It takes almost double time to load now, same website was much fast on shared hosting. I know I need to tweak some settings but have no clue where to start from? I have already tried to optimize apache, mysql etc. Update 2: CPU usage is low, see uptime output: 11:09:02 up 7 days, 21:26, 1 user, load average: 0.09, 0.11, 0.09 Update 3: When I load any webpage, browser shows "Waiting" for a long time and then page loads quickly. So I suspect server can accept only limited connections and holds extra connections in a waiting state. How to check this? Update 4: Following is the output on executing netperf TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET Recv Send Send Socket Socket Message Elapsed Size Size Size Time Throughput bytes bytes bytes secs. 10^6bits/sec 87380 16384 16384 10.00 9615.40 [root@ip-118-139-177-244 j3ngn5ri6r01t3]# Here are the Apache MPM settings from httpd.conf, do they look okay? <IfModule worker.c> StartServers 5 MaxClients 100 MinSpareThreads 50 MaxSpareThreads 250 ThreadsPerChild 125 MaxRequestsPerChild 10000 ServerLimit 100 </IfModule>

    Read the article

  • CentOS Latency High Troubleshooting

    - by Sarah Weinberger
    I have two CentOS servers connected via a 10 Gb fiber optic cable with a network emulator connected between them. All three units sit on a desk in the lab. There is also a regular 1 Gbit Ethernet cable connected to each of the machines, which provide internet connectivity. When I set the latency to something roughly below 30 ms, all is fine. When the latency gets to 70ms and above, and definitely 130ms, the network layer suspends. For instance, if I set the latency (delay) to 70ms, then launching TeamViewer (or any other application that uses network connectivity) never happens or does not work. There is no timeout message, simply no response. I have to lower to latency back down to zero to see any response and have the box start working. What is the problem and how would I go about fixing it? It seems to me some sort of setting in Linux causes one of the CentOS networking drivers to sit in an infinite loop or something. eth0 is the connection to the internet, all settings are default eth2 is the 10 Gbit fiber optic connection to the other computer with the MTU set to 9600 with all other parameters at default values.

    Read the article

  • Windows 7 mapped drive kicking off OS X users

    - by Collin White
    I've mapped a network drive on my Windows 7 PC at my office. The windows machine has a few TB of storage that is being accessed by my development team (all running mac os 10.7). The share seems to work fine for a little while but will timeout and kick the mac users off and sometimes disallows a connection on the next attempt. Restarting the windows machine fixes the problem. I've tried this tutorial as well as setting the maximum session length in the Local Security Policy section to 99999 (I discovered 0 did not mean unlimited, only a 'reasonable ammount of time') anyway, the setting is now for ~208 days which is sufficient (see attached). I'm having trouble debugging this in general so if anyone has some pointers I'm all ears. This is a intermittent issue which in my opinion are the hardest kinds to debug. If anyone knows of how I might monitor connections from the PC that would also be pretty cool. Previously the files were hosted on a mac mini and everything was working just fine (the mini just didn't have the ability for the storage capacity we needed) so I believe it is some windows setting that is kicking users off. Anyway, thanks for reading.

    Read the article

  • Can I use a Windows Server 2003 Domain Controller but my home router for DNS?

    - by NetworkingWannabie
    Hi All Probably easiest to start with a description of my current setup, which works (oh, and this is a home setup not an office or anything): I have an ADSL modem with a static IP address (192.168.128.1), and its DHCP capability is disabled. I have a permanently powered up Windows Server 2003 machine with a fixed IP (192.168.128.2) which provides my domain controller, dhcp, and dns. The default gateway for everything is my ADSL modem everything is setup to use the WS2003 machine as the primary DNS with the ADSL modem as Secondary DNS just in case the server goes down (everything includes the server itself). Lastly, just in case it's relevant, I have my DHCP leases set to infinite (or whatever the right term is). Everything is pretty hunky dory. Except, that is, for the fact that my server is ALWAYS on, and it isn't always used, so I'm burning juice that I don't need to - my server burns around 120W which isn't immense but isn't irrelevant either, so I'd like to put it into a stand-by state when it isn't being used (the more standby the better) and then get the clients to wake it up. Am I correct in assuming that this won't work at the moment - A given client would need an IP address to wake the machine up, and it needs to machine to be awake to get an IP - catch 22? Assuming I'm correct, can I move to using my router (which is always on) for DHCP? What impact will this have on DC and DNS? Alternatively, does anyone have a better way for me to achieve this? Can I get the server to wake up when it sees clients look for a DHCP server, etc? Wow, that came out longer than expected! Thanks for your help.

    Read the article

  • Windows 2012 RDS Temporary profile for Administrator

    - by Fabio
    I've configured a Windows 2012 RDS farm with two virtual servers (VMware - each one on a different ESX server). Both servers have the Licensing, Web Access, Gateway, Connection Broker and Session Host roles. High Availability is set up and it works fine. RemoteApps are working, and even Windows XP clients have access to the web interface. The user profile path is \\vmfiles1\UserProfileDisks\App\ and almost everyone has full access rights to it.

    The problem I have is that I would like to be able to access both servers at the same time with the Administrator account (console), but each time I try, the second server that I log on to gives me access with a temporary profile. I tried to enable/disable multiple sessions per user and forced admin logoff with the GPO, but nothing changed.

    Another thing is that the server pool is not saved, so each time I restart the RDS server or log off from it, I have to add a server in Server Manager again.

    Do you have any idea? Sorry if my English is not perfect.

    Read the article

  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have a Amazon AWS VPC setup with a public and private subnets. So I all ready get the Internet Gateway and NAT. I was going to setup all my web servers (Apache2 isntances) and DB servers in the private subnet and use a Load Balancer/Reverse Proxy to pick up requests and send them into the private subnets cluster of servers. My question then, is Amazons ELB's a good use for these, or is it better to setup my own custom instance to handle the public requests and run them through the NAT using nginx or pound? I like the second option just for the sake of having a instance I can log into and check. As well as taking advantage of caching and fail2ban ddos prevention, as well as possibly using fail safes to redirect traffic. But I have no experience with their ELB's, so I thought I'd ask your opinions. Also, if you guys have an opinion on this as well, would using the second option allow me to only have 1 public IP address and be able to route SSH connections through port numbers to respective instances? Thanks in advance!

    Read the article

  • Extremely high mysqld CPU usage with no active queries

    - by RadarNyan
    I have a VPS running Ubuntu 12.04 LTS with a LEMP stack, set up following the guide from the Linode Library (since I'm using a Linode), and everything worked fine until now. I don't know what's wrong, but my CPU usage has been climbing since a week ago. Today things got really bad - I saw 74% CPU usage, so I went to check and found mysqld taking too much CPU (somewhere around 30% ~ 80%).

    So I did some Google searching, tried disabling InnoDB, restarting MySQL, resetting ntp / the system clock (isn't this bug supposed to have happened more than a year ago?!) and rebooting my VPS; nothing helped. Even with the MySQL processlist empty, mysqld CPU usage stays very high. I don't know what I missed and have totally no idea; any advice would be appreciated. Thanks in advance.

    Update: I got these from running "strace mysqld":

        write(2, "InnoDB: Unable to lock ./ibdata1"..., 44) = 44
        write(2, "InnoDB: Check that you do not al"..., 115) = 115
        select(0, NULL, NULL, NULL, {1, 0}) = 0 (Timeout)
        fcntl64(3, F_SETLK64, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}, 0xbfa496f8) = -1 EAGAIN (Resource temporarily unavailable)

    Hmm... I did try to disable InnoDB and it didn't fix this problem. Any idea?

    Update 2:

        # ps -e | grep mysqld
        13099 ?        00:00:20 mysqld

    Then, using "strace -p 13099", the following lines appear repeatedly:

        fcntl64(12, F_GETFL) = 0x2 (flags O_RDWR)
        fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        accept(12, {sa_family=AF_FILE, NULL}, [2]) = 14
        fcntl64(12, F_SETFL, O_RDWR) = 0
        getsockname(14, {sa_family=AF_FILE, path="/var/run/mysqld/mysqld.sock"}, [30]) = 0
        fcntl64(14, F_SETFL, O_RDONLY) = 0
        fcntl64(14, F_GETFL) = 0x2 (flags O_RDWR)
        setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0", 8) = 0
        setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0", 8) = 0
        fcntl64(14, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        setsockopt(14, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
        futex(0xb786a584, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xb786a580, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
        futex(0xb7869998, FUTEX_WAKE_PRIVATE, 1) = 1
        poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1) = 1 ([{fd=12, revents=POLLIN}])

    Er... now I totally don't get it x_x. Help?
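
    Two hedged observations on the traces: "strace mysqld" starts a brand-new mysqld rather than tracing the running one, which by itself explains the "InnoDB: Unable to lock ./ibdata1" lines (two servers fighting over the same data files); "strace -p <pid>", as in the second trace, is the right form. It is still worth confirming that only one server process exists and seeing what holds the tablespace open:

        # who has the InnoDB system tablespace open?
        lsof /var/lib/mysql/ibdata1

        # any duplicate server processes?
        ps aux | grep [m]ysqld

    The second trace only shows connections being accepted on the socket, so the CPU time is likely going to work those few syscalls don't show - something connecting and running queries repeatedly - which the general or slow query log could reveal.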

    Read the article

  • Change the order of IP addresses returned by ifconfig?

    - by erikcw
    I have an Ubuntu server with several IP addresses attached to it. 127.0.0.1 is listed under venet0 by ifconfig. I'm using Chef to configure the server. The problem is that Chef is listing 127.0.0.1 as the IP address for the server instead of one of the server's "real" IPs (apparently ohai's ipaddress attribute uses the first IP listed by ifconfig to determine the server's IP). How can I change the order so the server's main IP is listed first instead of 127.0.0.1? Can venet0 be deleted and venet0:0 be "promoted" to take its place, since 127.0.0.1 is already listed on the "lo" interface?

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:334 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:334 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:16700 (16.7 KB)  TX bytes:16700 (16.7 KB)

        venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:127.0.0.1  P-t-P:127.0.0.1  Bcast:0.0.0.0  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
                  RX packets:7622207 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:8183436 errors:0 dropped:1 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:2102750761 (2.1 GB)  TX bytes:2795213667 (2.7 GB)

        venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:XXX.XXX.XXX.XX1  P-t-P:XXX.XXX.XXX.XX1  Bcast:0.0.0.0  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

        venet0:1  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:XXX.XXX.XXX.XX2  P-t-P:XXX.XXX.XXX.XX2  Bcast:0.0.0.0  Mask:255.255.255.255
                  UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

    route -n:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.0.2.1       0.0.0.0         255.255.255.255 UH    0      0        0 venet0
        0.0.0.0         192.0.2.1       0.0.0.0         UG    0      0        0 venet0

    Read the article

  • Apache Server Redirect Subdomain to Port

    - by Matt Clark
    I am trying to setup my server with a Minecraft server on a non-standard port with a subdomain redirect, which when navigated to by minecraft will go to its correct port, or if navigated to by a web browser will show a web-page. i.e.: **Minecraft** minecraft.example.com:25565 -> example.com:25465 **Web Browser** minecraft.example.com:80 -> Displays HTML Page I am attempting to do this by using the following VirtualHosts in Apache: Listen 25565 <VirtualHost *:80> ServerAdmin [email protected] ServerName minecraft.example.com DocumentRoot /var/www/example.com/minecraft <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/example.com/minecraft/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> <VirtualHost *:25565> ServerAdmin [email protected] ServerName minecraft.example.com ProxyPass / http://localhost:25465 retry=1 acquire=3000 timeout=6$ ProxyPassReverse / http://localhost:25465 </VirtualHost> Running this configuration when I browse to minecraft.example.com I am able to see the files in the /var/www/example.com/minecraft/ folder, however if I try and connect in minecraft I get an exception, and in the browser I get a page with the following information: minecraft.example.com:25565 -> Proxy Error The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET /. Reason: Error reading from remote server Could anybody share some insight on what I may be doing wrong and what the best possible solution would be to fix this? Thanks.

    Read the article

  • Need help using a super scope

    - by Vdub
    I have a windows server 2008 r2 standard running our DCHP, DNS, and AD. also I have (3) HP Pro Curve 2510-G switches (J9280A). Right now our LAN is set up 192.168.50.2-192.168.50.254 on our sub-net (A) and another scope with 192.168.51.2-192.168.51.254 sub-net (B) both have sub-net mask of 255.255.255.0. The same server is our DNS which is 192.168.50.242 and our firewall (watchguard) is the gateway at 192.168.50.1. Right now the sub-net (B) does not have DHCP active so only sub-net (A) is giving a pool. My problem is that we are trying to have open WiFi on our network and i am assuming that i can use the sub-net (B) for that if i activate it and use sub-net (A) for our staff only. I have noticed that when i set up a static on a client pc and set it to 192.168.51.x i cannot use the DNS of 192.168.50.242 however i can use 8.8.8.8 and it works fine, i am guessing that because it is on a different sub-net? Forgive me as i am very new at this and dont know a lot. Is there easy way with the equipment i have to a accommodate wifi for hundreds of people without causing problems for our staff? (multiple same IP address assigns) I appreciate any and all info!

    Read the article
