Search Results

Search found 14951 results on 599 pages for 'connect by'.


  • How to balance the root domain using NS records?

    - by Patrick McCurley
    I have two load balancers that balance incoming traffic across multiple data centers. These work fine; I can test them by doing an 'nslookup mydomain.com xIP'. I have now taken out DNS services with DYN.com so I can manage the DNS zone file, so that typing mydomain.com will ask my load balancers which IP address to resolve to. Step 1: the NS record for www. I set up A records (glue) for ns1 & ns2, then the corresponding NS records to delegate the DNS lookup to the balancers instead of DYN.com's nameservers:
        ns1.mydomain.com    A    [IP address of load balancer 1]
        ns2.mydomain.com    A    [IP address of load balancer 2]
        www.mydomain.com    NS   ns1.mydomain.com
        www.mydomain.com    NS   ns2.mydomain.com
    All is well - when I type www.mydomain.com, the request is delegated to my load balancers, which provide the IP address of the endpoint, and the connection is made successfully. Step 2: the NS record for the root. This is where I run into problems. I need customers to be able to type 'mydomain.com' (without the www) and ALSO get delegated to the load balancers for the IP address. However, from the research I have done, and through the DYN control panel, it seems that providing an NS record for the root is not allowed, as this overrides the default NS servers. How can I delegate both the root and the www to my load balancers?
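
    To verify the delegation chain end to end, a sketch using standard dig queries (run from any machine outside the network):

        dig www.mydomain.com NS +short                    # should list ns1/ns2.mydomain.com
        dig @ns1.mydomain.com www.mydomain.com A +short   # ask a balancer directly for the record
        dig mydomain.com NS +short                        # the apex - still answered by DYN's nameservers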

    Read the article

  • Authenticating Active Directory Users to Mac OS X Mavericks Server L2TP VPN Service

    - by dean
    We have a Windows Server 2012 Active Directory infrastructure that consists of two domain controllers. Bound to the Active Directory domain is a Mac OS X Mavericks Server 10.9.3. The server runs Profile Manager and VPN services. My Active Directory users are able to authenticate to the Profile Manager, but not the VPN. I have found several threads on other forums of users reporting similar issues; here is just one of many references: https://discussions.apple.com/thread/5174619 It appears as though the issue is related to a CHAP authentication failure. Can anyone suggest what troubleshooting steps I might take next? Is there a way to liberalize the authentication mechanism to include MSCHAP? Here is an excerpt of the transaction from the logs. Please note the domain has been changed to example.com.
        Jun 6 15:25:03 profile-manager.example.com vpnd[10317]: Incoming call... Address given to client = 192.168.55.217
        Jun 6 15:25:03 profile-manager.example.com pppd[10677]: publish_entry SCDSet() failed: Success!
        Jun 6 15:25:03 --- last message repeated 2 times ---
        Jun 6 15:25:03 profile-manager.example.com pppd[10677]: pppd 2.4.2 (Apple version 727.90.1) started by root, uid 0
        Jun 6 15:25:03 profile-manager.example.com pppd[10677]: L2TP incoming call in progress from '108.46.112.181'...
        Jun 6 15:25:03 profile-manager.example.com racoon[257]: pfkey DELETE received: ESP 192.168.55.12[4500]->108.46.112.181[4500] spi=25137226(0x17f904a)
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP connection established.
        Jun 6 15:25:04 profile-manager kernel[0]: ppp0: is now delegating en0 (type 0x6, family 2, sub-family 0)
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connect: ppp0 <--> socket[34:18]
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: CHAP peer authentication failed for alex
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: Connection terminated.
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnecting...
        Jun 6 15:25:04 profile-manager.example.com pppd[10677]: L2TP disconnected
        Jun 6 15:25:04 profile-manager.example.com vpnd[10317]: --> Client with address = 192.168.55.217 has hung up
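
    A first step worth trying (a sketch only; the exact serveradmin key path is an assumption and should be checked against the dump on your own server) is to inspect which authenticator protocol the L2TP service is configured with:

        # Dump the VPN service settings on OS X Server
        sudo serveradmin settings vpn
        # Look for the L2TP authenticator protocol, under a key similar to:
        #   vpn:Servers:com.apple.ppp.l2tp:PPP:AuthenticatorProtocol:_array_index:0 = "MSCHAP2"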

    Read the article

  • USB To Serial under OpenSuse 11.3

    - by Exsisto
    I have a LogiLink USB-to-Serial adapter with the PL2303 chip inside. When I insert the device:
        [26064.927083] usb 7-1: new full speed USB device using uhci_hcd and address 9
        [26065.076090] usb 7-1: New USB device found, idVendor=067b, idProduct=2303
        [26065.076099] usb 7-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        [26065.076105] usb 7-1: Product: USB-Serial Controller
        [26065.076110] usb 7-1: Manufacturer: Prolific Technology Inc.
        [26065.079181] pl2303 7-1:1.0: pl2303 converter detected
        [26065.091296] usb 7-1: pl2303 converter now attached to ttyUSB0
    So the device is recognized and the converter is attached to ttyUSB0. When I do screen /dev/ttyUSB0 9600 I get the error:
        bash: /dev/ttyUSB0: Permission denied
    So I went looking at the file permissions. ls -l from the /dev folder reports:
        crw-rw---- 1 root dialout 188, 0 2011-04-26 15:47 ttyUSB0
    I added my user lars to the dialout group, and the groups command run as lars shows that I'm in the group. Yet I still receive the permission denied error, as lars and even as root. I'm trying to connect a console cable to configure some Cisco switches. My OS is OpenSuse 11.3 x86_64 with kernel version 2.6.34.7-0.7-desktop.
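
    One generic thing to rule out (not OpenSuse-specific): group membership changes only apply to sessions started after the change, so a shell opened before the usermod still lacks the dialout group:

        id lars                    # confirms the account database has the group
        newgrp dialout             # picks the group up in the current shell without relogging
        screen /dev/ttyUSB0 9600   # then retry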

    Read the article

  • Jboss unreachable/ slow behind apache with ajp

    - by Niels
    I have a Linux server running a JBoss instance behind Apache2, with Apache using an AJP connection to reverse proxy to JBoss. I found these messages in the Apache error.log:
        [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header
        [error] ajp_read_header: ajp_ilink_receive failed
        [error] (120006)APR does not understand this error code: proxy: read response failed from 8.8.8.8:8009 (hostname)
        [error] (111)Connection refused: proxy: AJP: attempt to connect to 8.8.8.8:8009 (hostname) failed
        [error] ap_proxy_connect_backend disabling worker for (hostname)
        [error] proxy: AJP: failed to make connection to backend: hostname
        [error] proxy: AJP: disabled connection for (hostname)
    I googled around but I can't seem to find any related topics. Some people say this behavior can be caused by a misconfiguration between Apache and JBoss: if the maximum number of connections Apache allows is far greater than what JBoss accepts, Apache connections will time out. But I know the app isn't used by thousands of simultaneous connections at a time, not even hundreds, so I don't believe that could be the cause. Does anybody have an idea, or could tell me how to debug this problem? I'm using these versions: Debian 4.3.5-4 64-bit, Apache 2.2.16, JBoss 4.2.3.GA. Thanks
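
    If a thread-pool mismatch is the suspicion, one place to compare against Apache's MaxClients is the AJP connector definition (a sketch; the file path applies to JBoss 4.2's embedded Tomcat, and the numbers are illustrative, not tuned):

        <!-- deploy/jboss-web.deployer/server.xml -->
        <Connector protocol="AJP/1.3" port="8009" address="${jboss.bind.address}"
                   maxThreads="256" connectionTimeout="600000"
                   enableLookups="false" redirectPort="8443"/>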

    Read the article

  • Installing MySQL 5.1 on OS X 10.7 Lion

    - by xisal
    I am trying to install MySQL 5.1. I am on Lion, and when I remove all files associated with MySQL on my machine, it still tells me that I have a newer version installed when I try to install it from the DMG file. Has anyone successfully installed MySQL 5.1 on Lion? I found a solution using Homebrew. Completely remove MySQL from your system (just in case):
        sudo rm /usr/local/mysql
        sudo rm -rf /usr/local/mysql*
        sudo rm -rf /Library/StartupItems/MySQLCOM
        sudo rm -rf /Library/PreferencePanes/My*
        vim /etc/hostconfig        # and remove the line MYSQLCOM=-YES-
        rm -rf ~/Library/PreferencePanes/My*
        sudo rm -rf /Library/Receipts/mysql*
        sudo rm -rf /Library/Receipts/MySQL*
        sudo rm -rf /var/db/receipts/com.mysql.*
    Source: http://stackoverflow.com/questions/1436425/how-do-you-uninstall-mysql-from-mac-os-x
    Install Homebrew:
        /usr/bin/ruby -e "$(curl -fsSL https://raw.github.com/gist/323731)"
    Source: https://github.com/mxcl/homebrew/wiki/installation
    Install MySQL 5.1 via brew:
        brew install mysql51
    If that doesn't work, do this:
        brew install https://raw.github.com/adamv/homebrew-alt/master/versions/mysql51.rb
    Source: http://stackoverflow.com/questions/4359131/brew-install-mysql-on-mac-os/6399627#6399627
    Make MySQL work. Create the mysql.sock file:
        touch /tmp/mysql.sock
    Install the MySQL default tables:
        /usr/local/Cellar/mysql51/5.1.58/bin/mysql_install_db    # ...or your path
    Source: http://stackoverflow.com/questions/4788381/getting-cant-connect-through-socket-tmp-mysql-when-installing-mysql-on-ma/5140849#5140849
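
    To confirm the result (a sketch; the Cellar path varies with the exact 5.1.x revision installed):

        /usr/local/Cellar/mysql51/5.1.58/bin/mysqld_safe &                          # start the server
        /usr/local/Cellar/mysql51/5.1.58/bin/mysql -u root -e "SELECT VERSION();"   # should print 5.1.x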

    Read the article

  • Copying symbolic links and filenames with special characters to NAS

    - by Mr E
    I have a new Western Digital My Book Live NAS. I am trying to copy files from an old drive to the NAS. I'm using Ubuntu 12.04 and I've mounted the drive by browsing the network in Nautilus and choosing a shared folder configured on the NAS. The shared folder is then automatically mounted at .gvfs/files on mybooklive. There are two problems so far. First, file and directory names containing certain characters (e.g. : or |); attempting to copy these results in the error message:
        cp: cannot stat `/path/to/destination.filename': Invalid argument
    Second, symbolic links; in Nautilus I get the error message:
        Symlinks not supported by backend
    My questions are: Can I connect to the NAS or configure the NAS so that I can copy my files without this problem? (In case it matters, I don't need Windows compatibility.) If not, what can I do to identify all the problem files? Can I do anything to automatically fix my filenames? Please let me know if any of this needs clarification. I'm not too familiar with all of this, so I may have left out some useful information.
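
    For the "identify all the problem files" part, a generic sketch with standard find (run against the source drive; extend the character class as needed):

        find /path/to/source -name '*[:|]*'    # names containing characters the share rejects
        find /path/to/source -type l           # all symbolic links in the tree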

    Read the article

  • Error 53 - The network path was not found.

    - by Jack
    I have a machine in my Active Directory domain that I can no longer "net view" from other machines in the domain. This is a Windows XP Pro machine, and it is hosting a VMware virtual machine of my Domain Controller. If I attempt to net view [machine name] I get system error 53, "The network path was not found". This is not a DNS issue; the same thing happens with the machine's IP. I don't think it's a firewall issue either, as I turned the firewall off on this machine. As I mentioned, it worked in the past and then stopped for no reason that I can see. I didn't (intentionally) change the software. I CAN get to the VMs hosted on this machine, can connect to their shares, net view them, etc. All other machines can see each other. In fact, the problem machine can see other machines and access their shares just fine. I tried removing the machine from the domain and re-adding it. I tried deleting the shares and recreating them. Not sure how to troubleshoot this any further. Any ideas?
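
    A few standard checks that narrow Error 53 down, run from another machine against the problem host (the address is a placeholder):

        ping 192.168.1.10                      :: basic IP reachability
        nbtstat -A 192.168.1.10                :: does the host still publish a NetBIOS name table?
        sc \\192.168.1.10 query lanmanserver   :: is the Server service running on it?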

    Read the article

  • Win Server 2k and Win 7 client

    - by Ray Kruse
    I have a Windows 2000 Server system with AD configured. The network consists of an OKI printer, a network server, a wifi router, a Win 2k client, and the server. I'm trying to connect a Win 7 client. The purpose of the network, besides sharing equipment, is to move files from client to client and scatter backups over more than one machine. The Win 7 client is configured for DHCP and does in fact receive its IP and DNS configuration from the server, and it sees the printer, wifi router, and network drive, but does not see the Win 2k client nor the Win 2k server. I have tried setting the LAN Manager authentication level to 'Send LM & NTLM responses' with the 128-bit encryption requirement removed. I've also done the registry hack on the 'LmCompatibilityLevel' key. Neither of these has helped. I have two questions: Is there a fix, or is Win 2k totally incompatible? Is the best (or quickest/cheapest) fix to upgrade the server to Win 2k3 and not worry about the Win 2k client? Thanks for any help. Ray Kruse, Buffalo, KY
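
    For reference, the LmCompatibilityLevel change mentioned above, applied from an elevated prompt on the Windows 7 client (level 1 sends LM & NTLM, using NTLMv2 session security only if negotiated):

        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v LmCompatibilityLevel /t REG_DWORD /d 1 /f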

    Read the article

  • Godaddy domain and Bluehost web hosting

    - by Digital site
    I have a domain from GoDaddy and web hosting at Bluehost. I want to make this work, since some people say there is no need to transfer the domain from GoDaddy to Bluehost. I tried to work this out by adding the Bluehost name servers ns1.bluehost.com and ns2.bluehost.com at GoDaddy. This works fine, but I'm not sure it's 100% OK yet. The reason I say that is that when I type my address in any browser as mydomain.com, it doesn't work; instead I get an error message stating that the server was not found or couldn't be connected to. However, when I write the domain name with the www. prefix, it works fine. The other problem is that when I search in Google or Yahoo, the domain shows like this: mydomain.com, which is not really good, because my clients think my site is down due to the error message, and most people don't know they have to add www. for the domain to work. I just want to make the domain work without the prefix, like this: mydomain.com
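
    A quick way to see the difference between the two names (standard dig queries; run from any Unix shell):

        dig mydomain.com A +short        # apex record - empty output means no A record at the root
        dig www.mydomain.com A +short    # the www record that already works
        dig mydomain.com NS +short       # confirms the Bluehost nameservers answer for the zone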

    Read the article

  • LAMP server VM issues

    - by nullArray
    After getting a recommendation to salvage a wiki by installing a LAMP server, I went on the prowl for a good virtualized one and used the VMware Player version. Since the Windows box has Bonjour, I can, for example, go to http://lamp.local. and see the web client. The problem is, I can't ssh in to scp the files I need, can't mount a USB thumbdrive (usbfs is unsupported), nor get Samba working. I can't even update the Ubuntu installation; it fails. I've tried bridged, NAT, and host-only networking settings in VMware Player. Bridged gives me an undefined IP, while the other two each have different IPs. All three settings allow me to access the web config, but none of them give me Samba access. Windows usually freezes, then reports that it cannot connect. I'd rather not wipe a box to do a dedicated install. Is there a way I can get this VM working, or are there better LAMP VMs out there? This one came already working and set up with VMware Player, so I thought it would be perfect... Thanks
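
    Some in-guest checks that separate a networking problem from a missing-services problem (generic commands; the package names assume a Debian/Ubuntu-based appliance):

        ifconfig                      # does the guest have an address on the expected interface?
        ping -c 3 8.8.8.8             # raw connectivity, bypassing DNS
        sudo netstat -tlnp            # are sshd and smbd actually listening?
        sudo apt-get install openssh-server samba    # install them if they are missing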

    Read the article

  • OSX: Mimic Ubuntu IP Masquerading via iptables with ipfw

    - by Dogbert
    Good day, I am attempting to replicate, on my MacBook (10.6, Snow Leopard), a setup I have between a router and an Ubuntu PC. First, I have a router that has a USB port. When I plug it into my Ubuntu PC, it creates an RNDIS connection, allowing me to connect to the router over the USB cable via an IP connection. When I plug it in via USB, it gets assigned an IP address of 172.16.84.1, and a new adapter appears when I type ifconfig. I can then SSH into the device via ssh admin@172.16.84.1. When I log in to the device, I flush the routes, then create the default route:
        admin@localhost> route -f
        admin@localhost> route add default 172.16.84.2
    Now, on my Ubuntu machine, I use iptables to enable IP masquerading:
        root@Valhalla> sudo iptables -t nat -A POSTROUTING -s 172.16.84.2 -j MASQUERADE
    Once this is all done, the router has internet access over the USB connection to my PC. I am trying to replicate this exact setup on my MacBook (Snow Leopard), but iptables does not exist for OSX; not even a MacPorts version exists. I have scoured other questions on StackOverflow that cover the usage of the ipfw command, which apparently works as a drop-in replacement for iptables. However, the syntax is significantly different, and I'm pretty much lost. Does anyone with experience with ipfw have suggestions on how I could accomplish this and create a NAT connection via IP masquerading, like I could with my Ubuntu PC? Thank you for your assistance.
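
    On Snow Leopard the usual counterpart of the MASQUERADE rule is natd plus an ipfw divert rule (a sketch assuming en0 is the internet-facing interface and the USB adapter carries the 172.16.84.x link; adjust interface names to match ifconfig):

        sudo sysctl -w net.inet.ip.forwarding=1                   # let the Mac forward packets
        sudo natd -interface en0                                  # NAT daemon on the outbound interface
        sudo ipfw add divert natd all from any to any via en0     # push traffic crossing en0 through natd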

    Read the article

  • Why does adding a router hide all shared folders?

    - by user1285419
    I have several computers running WinXP installed in my office, all connecting to the WAN provided by the building (wall socket, DHCP, mask 255.255.252.0). I set up a shared folder on my computer so all other computers in the same group could access it. This configuration has been in use for a long time. Recently, I tried to set up a router: the WAN port of the router goes to the wall socket, the NIC connects to a LAN port of the router, and the router runs in DHCP mode (192.168.0.100 to 192.168.0.110, mask 255.255.255.0). I turned off all the firewalls (both the Windows one and the router's built-in one), and the NIC is set to DHCP. If I run ipconfig /all, I see that the NIC was assigned IP 192.168.0.100. I can access the internet, email, whatever. However, the shared folder can no longer be accessed by other computers in the group. I think it is an IP problem. What's really weird is that if I turn off the DHCP function in the router, ipconfig /all always gives 0.0.0.0/255.255.255.255 and I cannot access the internet. I have no idea what's going on. Does anyone know how to fix this and keep the shared folder accessible while using the router? Thanks.
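
    One thing to confirm (consistent with the router's NAT putting the PC on its own 192.168.0.x subnet, out of reach of the building LAN) is to try the share from another office machine by address rather than by browsing:

        ping 192.168.0.100           :: from a PC outside the router - expected to fail through NAT
        net view \\192.168.0.100     :: likewise blocked unless the router forwards the SMB ports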

    Read the article

  • How to set up Git on remote instance using keys from local machine?

    - by Lucas
    I have a setup where I can ssh into my remote server (a Google Compute instance) from my local machine. I used to be able to clone, push, and pull from a repository on my remote instance without adding any keys to the remote instance, and without adding any new keys to my repository online (just the public key from my local machine). I believe the remote instance was using the keys from my local machine to authenticate my Git pushes and pulls. However, the system broke when I reinstalled the OS on my local machine. Now when I try to connect to the GitHub server from my remote instance, I get the following. Cannot clone:
        [lucas@ecoinstance]~/node$ git clone git@github.com:lucasExample/test.git test
        Cloning into 'test'...
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly
    Cannot push:
        [lucas@ecoinstance]~/node/nodetest1$ git status
        # On branch master
        # Your branch is ahead of 'origin/master' by 1 commit.
        # nothing to commit (working directory clean)
        [lucas@ecoinstance]~/node/nodetest1$ git push
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly
    Additional info:
        [lucas@ecoinstance]~/node/nodetest1$ ssh-add -l
        Could not open a connection to your authentication agent.
        [lucas@ecoinstance]~/.ssh$ ls
        authorized_keys  known_hosts
    As you can see, I have no keys on my remote instance. I have never had keys on the remote, and it would push and pull just fine until I reinstalled my local OS. I can still clone, push, and pull on my local machine; it is just my remote machine that cannot get authentication. My local OS is Ubuntu 14.04 and my remote OS is Debian Wheezy. Any suggestions would be great. I am not sure how to search for this concept of authenticating from a remote instance via my local machine, so any references are appreciated as well.
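
    The pattern described (local keys being honored on the remote host) is SSH agent forwarding, which has to be set up again on a freshly installed local machine. A sketch of the local-side steps (host name and key path are placeholders):

        ssh-add ~/.ssh/id_rsa        # load the key into the local agent
        ssh -A lucas@ecoinstance     # -A forwards the agent for this session
        # or make it permanent in ~/.ssh/config on the local machine:
        #   Host ecoinstance
        #       ForwardAgent yes
        # afterwards, ssh-add -l on the remote should list the local key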

    Read the article

  • Linux: Force fsck of a read-only mounted filesystem?

    - by Timothy Miller
    I'm developing for a headless embedded appliance running CentOS 6.2. The user can connect a keyboard, but not a monitor, and a serial console would require opening the case, something we don't want the user to have to do. This pretty much rules out booting from a recovery USB drive, unless all it does is blindly reimage the hard drive. I would like to provide some recovery facilities, and I have written a tool that comes up on /dev/tty1 in place of getty to provide these functions. One such function is fsck. I have found out how to remount the root and other file systems read-only. Now that they are read-only, it should be safe to fsck them and then reboot. Unfortunately, fsck complains that the filesystems are mounted and refuses to do anything. How can I force fsck to run on a read-only mounted partition? Based on my research, this is going to have to be something obscure: "-f" just means to force repair of a clean (but unmounted) partition, whereas I need to repair a clean or unclean mounted partition. From what I've read, this is something "only experts" should do, but no one has bothered to explain how the experts do it. I'm hoping someone can reveal this to me. BTW, I've noticed that e2fsck 1.42.4 on Gentoo will let you fsck a mounted partition, even one mounted read-write, but it seems only to do so if fsck is run from a terminal, so it can ask the user if they're sure they want to do something so dangerous. I'm not sure if the CentOS version does the same thing, but it appears that fsck CAN repair a mounted partition, yet it flatly refuses to when not run from a terminal. One last-resort option is to compile my own hacked fsck, but I'm afraid I'll mess it up in some unexpected way. Thanks!
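
    One escape hatch worth noting (a sketch; it schedules the check at the next boot rather than running it on the live mount, which sidesteps the safety refusal entirely):

        touch /forcefsck    # rc.sysinit on CentOS 6 sees this flag and fscks every filesystem at boot
        reboot
        # the classic SysV shutdown equivalent, where supported:
        #   shutdown -rF now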

    Read the article

  • Changing IP address in IIS for SharePoint site results in Directory listing error

    - by Dan
    I have a server here that has two roles: one is Exchange 2007 and the other is MOSS 2007. In IIS I have a site, go.domain.com, which is our OWA, and another, internal.domain.com, which is the MOSS site. I have given the NIC local IPs and each site is using host headers. The GO site has an SSL cert from NetSol, and the MOSS site has a self-signed one. Right now, going to either site presents the NetSol certificate, which browsers complain about when going to internal.domain.com, obviously, since both sites are on the same IP in IIS. Both sites have always run off the original IP of 10.0.0.3 in IIS. When I added the second IP (10.0.0.6) to the NIC and changed the SharePoint site in IIS to use it for http and https access, I started getting this message in a browser when trying to connect:
        Directory Listing Denied
        This Virtual Directory does not allow contents to be listed.
    Changing the IP back to 10.0.0.3 brings the internal site back up. What am I missing here? Do I need to fool around with Alternate Access Mappings in Central Admin? Am I completely missing the point with multiple SSL certs and host headers?

    Read the article

  • SocketException (Timeout) only when running as scheduled task

    - by BVartin
    I'm running a C# web-scraper application (that I wrote) on a Windows Server 2003 instance, under a user belonging to the local Administrators group. When I run it within a desktop/remote-desktop session, the application runs successfully, but when I schedule it to run under the same user/security context outside of a desktop session, all socket connections time out. The scheduled task calls a batch file which in turn calls the application. The Windows Server 2003 instance has a very basic configuration and isn't even connected to a domain. I cannot find anything in any firewall or security configuration which would prevent this, but maybe I have overlooked something. Can anyone be of any assistance? The exception is:
        System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond X.X.X.X:443
           at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
           at System.Net.ServicePoint.ConnectSocketInternal(Boolean connectFailure, Socket s4, Socket s6, Socket& socket, IPAddress& address, ConnectSocketState state, IAsyncResult asyncResult, Int32 timeout, Exception& exception)
           --- End of inner exception stack trace ---
           at System.Net.HttpWebRequest.GetResponse()
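
    One environmental difference worth ruling out on Server 2003: WinHTTP/WinINET proxy settings are per-user and may not apply in a non-interactive session. A quick check from the same account (proxycfg ships with Server 2003; the -u switch copies the current user's IE proxy settings to the machine-wide WinHTTP configuration):

        proxycfg         :: show the current WinHTTP proxy settings
        proxycfg -u      :: import them from the interactive user's IE configuration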

    Read the article

  • PPTP server connection closes - Too much data?

    - by Sebastian Hoitz
    I set up a PPTP server for my company. However, every time another computer (i.e. our backup server) is connected to it and a lot of data gets transferred, the connection to that computer closes. In the syslog on the PPTP server I find these messages:
        Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Reaping child PPP[2583]
        Apr 22 12:44:34 komola-chase pppd[2583]: MPPE disabled
        Apr 22 12:44:34 komola-chase pppd[2583]: Connection terminated.
        Apr 22 12:44:34 komola-chase pppd[2583]: Exit.
        Apr 22 12:44:34 komola-chase pptpd[2581]: CTRL: Client 192.168.0.11 control connection finished
        Apr 22 12:55:11 komola-chase pptpd[2674]: GRE: xmit failed from decaps_hdlc: No buffer space available
        Apr 22 12:55:11 komola-chase pptpd[2674]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Apr 22 12:55:11 komola-chase pppd[2675]: Modem hangup
        Apr 22 12:55:11 komola-chase pppd[2675]: Connect time 23.0 minutes.
    Hopefully you can help me figure out what is wrong. As far as I can tell, compression is disabled on the PPTP server (nobsdcomp option). Thank you!
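
    The "No buffer space available" GRE error is commonly attacked by raising the kernel's socket buffer limits (a generic sketch; the values are illustrative starting points, not tuned numbers):

        # append to /etc/sysctl.conf, then apply with: sysctl -p
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.core.rmem_default = 262144
        net.core.wmem_default = 262144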

    Read the article

  • How could this happen to my FTP server?

    - by srisar
    Hi, it's me again. I just don't get why I'm getting this message. When I try to connect to my FTP server through my WAN address (112.135.26.115), it says 530 user access denied, but when I give the same data to http://ftptest.net the result is as follows:
        Status: Resolving address of 112.135.26.115
        Status: Connecting to 112.135.26.115
        Status: Connected, waiting for welcome message
        Reply: 220-FileZilla Server version 0.9.34 beta
        Reply: 220-written by Tim Kosse ([email protected])
        Reply: 220 Please visit http://sourceforge.net/projects/filezilla/
        Status: CLNT http://ftptest.net on behalf of 112.135.26.115
        Reply: 200 Don't care
        Status: USER saravana
        Reply: 331 Password required for saravana
        Status: PASS *********
        Reply: 230 Logged on
        Status: SYST
        Reply: 215 UNIX emulated by FileZilla
        Status: FEAT
        Reply: 211-Features:
        Reply: MDTM
        Reply: REST STREAM
        Reply: SIZE
        Reply: MLST type*;size*;modify*;
        Reply: MLSD
        Reply: AUTH SSL
        Reply: AUTH TLS
        Reply: UTF8
        Reply: CLNT
        Reply: MFMT
        Reply: 211 End
        Status: PWD
        Reply: 257 "/" is current directory.
        Status: Current path is /
        Status: TYPE I
        Reply: 200 Type set to I
        Status: PASV
        Reply: 227 Entering Passive Mode (112,135,26,115,43,9)
        Status: MLSD
        Reply: 150 Connection accepted
        Listing: type=dir;modify=20100322113235; it_is_working!_Andrejs_Cainikovs_from_serverfault.com
        Listing: type=file;modify=20100322110559;size=5; New Text Document.txt
        Reply: 226 Transfer OK
        Status: Success
    Can anyone say why this happens to me? Please, I've been trying the whole day!

    Read the article

  • Apache no longer starts at Windows boot up

    - by w3d
    I have Apache installed as part of XAMPP, a local test server, and it is configured as a Windows (XP) service with startup type "Automatic". For a long time it always started when Windows booted, but recently this has stopped happening; I now need to start it manually via the XAMPP Control Panel, at which point it appears to start up perfectly OK. The only recent updates to the machine (that I recall) are Windows Updates, none of which appear to have "known issues" relating to this, and updates to Google Chrome. Any ideas what could prevent Apache from starting automatically at Windows (XP) boot up? EDIT#1: There are two related errors in my system event log regarding the Service Control Manager:
        Timeout (30000 milliseconds) waiting for the Apache2.2 service to connect.
        The Apache2.2 service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion.
    When I manually start the Apache server after boot up, there are two "information" events stating that it was "sent a start control" and that it "entered the running state". I notice it appears to take 19 seconds between the start control being sent and entering a running state, according to the event log. So maybe 30 seconds during boot up isn't long enough (anymore) for Apache to start?
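
    If the 30-second Service Control Manager timeout really is the culprit, Windows exposes it as a registry value (documented SCM behaviour; the 60000 ms below is just an example, and a reboot is required for it to take effect):

        reg add HKLM\SYSTEM\CurrentControlSet\Control /v ServicesPipeTimeout /t REG_DWORD /d 60000 /f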

    Read the article

  • Wireless router blocking some sites while using ethernet is fine

    - by Micke
    I'm using Windows 7 and my router is a wireless Apple AirPort Express that is approximately two years old. Suddenly I can't access some sites (for example www.sthlm.friskissvettis.se, www.vegetarian-shoes.co.uk, some streamed TV shows on svtplay.se, and a number of other random sites) when connecting to the internet through this router. It worked fine until recently, and I'm fairly sure the problem emerged when my ISP upgraded from 10/10 mbit to 100/10 mbit speed. Most other sites, like Facebook and Google, work fine. When using my network cable to connect to the internet, everything works and I can access these sites. Firmware is current and I've tried resetting the router to factory defaults. I've tried different browsers, and I can't ping the "blocked" sites either. Tracert www.sthlm.friskissvettis.se starts with 10.0.0.1 and continues through a number of hops until it times out; the last responding hop was sth-tcy-ipcore01-ge-0-2-0.neq.dgcsystems.net [83.241.252.13], if it matters. Tracert www.vegetarian-shoes.co.uk also eventually times out. With the network cable plugged in, I still get a tracert timeout for www.sthlm.friskissvettis.se, even though I can access the site in Chrome. Weird. www.vegetarian-shoes.co.uk doesn't give me a tracert timeout when the cable is plugged in, and I can access the site as usual. I've tried changing DNS servers to the OpenDNS servers instead, but to no avail. I've tried pinging these two sites with a lower MTU packet size (with this method: http://www.richard-slater.co.uk/archives/2009/10/23/change-your-mtu-under-vista-or-windows-7/), but I still can't access them through ping... I don't know what to do anymore. Any suggestions?
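
    A systematic way to probe for an MTU/fragmentation problem from Windows (standard ping switches: -f sets don't-fragment, -l sets the payload size; 1472 bytes of payload plus 28 bytes of headers equals a full 1500-byte packet):

        ping -f -l 1472 www.sthlm.friskissvettis.se
        :: if this reports "Packet needs to be fragmented but DF set",
        :: step -l down until it succeeds; the working path MTU is that size + 28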

    Read the article

  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up OpenVPN, but now I want to integrate user/password authentication. Even though I haven't added auth-nocache to the server config, whenever I try to connect, the client side returns the following message:
        ERROR: could not read Auth username from stdin
    My server.conf file contains basic stuff; everything works until I try to implement this form of authentication:
        mode server
        dev tun
        proto tcp
        port 1194
        keepalive 10 120
        plugin /usr/lib/openvpn/openvpn-auth-pam.so login
        client-cert-not-required
        username-as-common-name
        auth-user-pass-verify /etc/openvpn/auth.pl via-env
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        user nobody
        group nogroup
        server 10.8.0.0 255.255.255.0
        persist-key
        persist-tun
        #persist-local-ip
        status openvpn-status.log
        verb 3
        client-to-client
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.8.0.1"
        log-append /var/log/openvpn
        comp-lzo
    I searched all over the net for a solution, and all answers seem to be related to the auth-nocache parameter, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication; a failed authentication should result in exit 1, while a successful one should result in exit 0. For testing, auth.pl returns exit 0 no matter what the input is, but it seems the file is never executed before the error is raised. auth.pl file contents:
        #!/usr/bin/perl
        my $user = $ENV{username};
        my $passwd = $ENV{password};
        printf("$user : $passwd\n");
        exit 0;
    Any ideas?
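
    For what it's worth, that client-side error appears when the client's auth-user-pass directive tries to prompt on stdin but no terminal is attached (for example, when the client runs as a daemon or service). Two common client-side workarounds (the file path is a placeholder):

        # 1) run the client from a terminal so it can prompt interactively, or
        # 2) read credentials from a file: first line username, second line password
        auth-user-pass /etc/openvpn/creds.txt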

    Read the article

  • LCD monitor flicker when connected to a laptop using VGA

    - by Björn Lindqvist
    I have a dual-screen setup with two AOC e2450Sw monitors connected to a laptop. The laptop has one HDMI and one VGA output. When one of the monitors is connected using VGA, it flickers or displays static noise. The flickering is fairly subtle, only visible on darker colors, but it is there and noticeable, and appears as horizontal lines. The problem only affects the monitor connected to the laptop with the VGA cable: if I swap the monitors, the one connected using VGA displays the flicker but not the one connected using HDMI. The simple solution would of course be to connect both monitors using HDMI, but since the laptop only has one VGA and one HDMI out, that isn't possible. I've tried tweaking the monitor settings using the OSD menu, but it had little or no effect. Update: After several more troubleshooting hours, it seems the problem is not related to the monitor or the VGA cable, as it persists even if I swap in a display of another brand and different cables. So it may be the graphics card? Intel HD Graphics 4000. The laptop is an Acer Aspire E1-571.

    Read the article

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveler software on the server to add ActiveSync support, but I removed that and the problem persists. Basically, new instances of "nchronos.exe" keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. Last time, my process count was up at about 330, and when I killed and restarted Domino the process count went to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server was also acting as a Sametime server. At around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and the Domino log keeps telling me "stpolicy.exe" has terminated. I've googled that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

    Read the article

  • How to check DVD region settings in Window 7

    - by jmatthias
    I have installed Windows 7 on my laptop. When I put a movie in the DVD drive, the video player starts up and tells me 'THIS DISC IS NOT FORMATTED TO PLAY IN THIS REGION'. I have changed the DVD region a few times in the past, but while I was in Windows XP I set the region to Region 1. I cannot find how to check the DVD region setting in Windows 7 (it's no longer located on the hardware properties page of the DVD drive). Does anybody know how to check the DVD region in Windows 7? Update: If I connect an external USB DVD drive, I can see the DVD region tab on the hardware properties dialog, so I guess there is some compatibility problem between my internal DVD drive and Windows 7 (as I said, I was able to inspect/change the DVD region in Windows XP). Solution: I think I had used LtnRPC in the past to remove the region from my DVD drive. It looks like Windows 7 does not like region-free DVD drives (at least mine, anyway). I was able to use LtnRPC to set the region back to 1, and I can now see the region tab on the DVD drive's hardware properties.

    Read the article

  • rsync not copying hard links

    - by A.Ellett
    I have two computers (both MacBook Airs) between which I sync one directory tree, but not the entire hard drive or any other directories. Let's say on computer A the directory is /Users/aellett/projects and on computer B it is /Users/bellett/projects. Generally, I'll log into computer B and then remotely connect to computer A as user 'aellett'. As super user I sync the two project directories as follows:
        rsync -av /Volumes/aellett/projects/ /Users/bellett/projects/
    and this works as expected. On both computers I have another file, letter.txt, in a different directory which is not getting synced. On computer A the file is found in /Users/aellett/letters, and on computer B in /Users/bellett/correspondence. Generally, I don't want to share what's not included in /Users/<username>/projects, but I do want to share this particular file. So on both computers I made a correspondence directory in projects, and then made hard links as follows. On computer A:
        ln /Users/aellett/letters/letter.txt /Users/aellett/projects/correspondence/letter.txt
    On computer B:
        ln /Users/bellett/correspondence/letter.txt /Users/bellett/projects/correspondence/letter.txt
    The next time I synced the two computers I did the following:
        rsync -av -H /Volumes/aellett/projects/ /Users/bellett/projects/
    When I checked on computer B, /Users/bellett/projects/correspondence/letter.txt was correctly synced. But the hard link to /Users/bellett/correspondence/letter.txt was no longer there. In other words, /Users/bellett/projects/correspondence/letter.txt was identical to /Users/aellett/projects/correspondence/letter.txt, but it differed from /Users/bellett/correspondence/letter.txt. Since these two files were hard linked on both computers, I expected them to still have the hard link. Why are my hard links not being preserved?
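
    Worth noting: rsync -H can only preserve a hard link when every path sharing the inode is part of the same transfer. Here /Users/bellett/correspondence/letter.txt lies outside the synced projects tree, so rsync rewrites projects/correspondence/letter.txt as a fresh file and the out-of-tree link is left behind. A sketch of one workaround, using --relative to pull both source directories into a single transfer (the destination layout would need adjusting, since the directory names differ between machines):

        # the /./ marker sets where the relative path starts on the receiver
        rsync -avH --relative /Volumes/aellett/./projects /Volumes/aellett/./letters /Users/bellett/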

    Read the article
