Search Results

Search found 20336 results on 814 pages for 'connection strings'.


  • memory tuning with rails/unicorn running on ubuntu

    - by user970193
    I am running Unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7. It is an 8-core EC2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely. My question concerns memory usage, and what concerns I should have with what I am seeing (if any). Here is the scenario: under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour, each server in the 3-server cluster loses about 100MB/hour. This is a linear slope for about 6 hours, then it appears to level out, but still seems to lose maybe 10MB/hour. If I drop my page caches using the Linux command echo 1 > /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the Unicorns, and the memory loss pattern begins again over the hours.
    Before:
                   total       used       free     shared    buffers     cached
      Mem:       7130244    5005376    2124868          0     113628     422856
      -/+ buffers/cache:    4468892    2661352
      Swap:     33554428          0   33554428
    After:
                   total       used       free     shared    buffers     cached
      Mem:       7130244    4467144    2663100          0        228      11172
      -/+ buffers/cache:    4455744    2674500
      Swap:     33554428          0   33554428
    My Ruby code does use memoization and I'm assuming Ruby/Rails/Unicorn is keeping its own caches... what I'm wondering is: should I be worried about this behaviour? FWIW, my Unicorn config:
      worker_processes 15
      listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024
      listen 8080, :tcp_nopush => true
      timeout 180
      pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid"
      GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true
      before_fork do |server, worker|
        STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK"
        print_gemfile_location
        defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
        defined?(Resque) and Resque.redis.client.disconnect
        old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin"
        if File.exists?(old_pid) && server.pid != old_pid
          begin
            Process.kill("QUIT", File.read(old_pid).to_i)
          rescue Errno::ENOENT, Errno::ESRCH
            # already killed
          end
        end
        File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)}
      end
      after_fork do |server, worker|
        defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
        defined?(Resque) and Resque.redis.client.connect
      end
    Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, and when/as the system needs more memory, it will empty the caches by itself, without me manually running that cache command? Basically, is this normal, expected behaviour? Thanks in advance.
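    If you do want to try out-of-band GC, the unicorn gem ships it as a Rack middleware wired up in config.ru. A minimal sketch follows; the interval of 5 and the application constant are assumptions to adapt to your own app and request rate:
      # config.ru -- hedged sketch: run GC out of band after every 5th request,
      # so collection happens between requests rather than during them
      require ::File.expand_path('../config/environment', __FILE__)
      require 'unicorn/oob_gc'
      use Unicorn::OobGC, 5
      run YourApp::Application   # placeholder; use your actual Rails application constant
    This mainly trades a little CPU for steadier request latency; it does not by itself shrink steady-state memory, so the slow growth you describe may still just be Ruby 1.8.7 heap behaviour plus the page cache refilling.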

    Read the article

  • Is it possible to have DisplayLink USB display hotplugging with Xorg 1.13 on kernel 3.4?

    - by lkraav
    keithp seems to be the only one on the interwebs to have written anything about the subject and he worked with 3.5_rc. I don't want to go above 3.4 at the moment for various stability reasons and am trying to see whether I can get this to work. Xorg 1.13 recognizes the display on connection, "udl" module is loaded, xorg-video-modesetting driver also loads, display lights up. So everything seems to be good. I emerged xrandr-9999 (not many changes on top of 1.3.5): $ xrandr --listproviders Providers: number : 2 Provider 0: id: 69 cap: 0x0 crtcs: 2 outputs: 4 associated providers: 0 name:Intel Provider 1: id: 338 cap: 0x0 crtcs: 1 outputs: 1 associated providers: 0 name:modesetting But I can't get any further, just like this guy: $ xrandr --setprovideroutputsource 338 69 X Error of failed request: BadValue (integer parameter out of range for operation) Major opcode of failed request: 139 (RANDR) Minor opcode of failed request: 35 () Value in failed request: 0x152 Serial number of failed request: 11 Current serial number in output stream: 12 $ xrandr --setprovideroutputsource 1 0 X Error of failed request: 148 Major opcode of failed request: 139 (RANDR) Minor opcode of failed request: 35 () Serial number of failed request: 11 Current serial number in output stream: 12 Any thoughts?

    Read the article

  • When would a persistent route not be an active route?

    - by alnorth29
    I've added a persistent route to our Windows Server 2003 box using "route -p add". After a reboot, "route print" gave this:
      Active Routes:
      Network Destination          Netmask          Gateway      Interface  Metric
                0.0.0.0          0.0.0.0      10.91.131.1    10.91.131.9      20
              10.88.0.0  255.255.255.252        10.88.0.1      10.88.0.1      30
              10.88.0.1  255.255.255.255        127.0.0.1      127.0.0.1      30
            10.91.131.0    255.255.255.0      10.91.131.9    10.91.131.9      20
            10.91.131.9  255.255.255.255        127.0.0.1      127.0.0.1      20
         10.255.255.255  255.255.255.255        10.88.0.1      10.88.0.1      30
         10.255.255.255  255.255.255.255      10.91.131.9    10.91.131.9      20
              127.0.0.0        255.0.0.0        127.0.0.1      127.0.0.1       1
              224.0.0.0        240.0.0.0        10.88.0.1      10.88.0.1      30
              224.0.0.0        240.0.0.0      10.91.131.9    10.91.131.9      20
        255.255.255.255  255.255.255.255        10.88.0.1      10.88.0.1       1
        255.255.255.255  255.255.255.255      10.91.131.9    10.91.131.9       1
      Default Gateway:  10.91.131.1
      ===========================================================================
      Persistent Routes:
        Network Address          Netmask  Gateway Address  Metric
              10.88.0.0    255.255.255.0        10.88.0.2       1
    The route I added is listed as a persistent route, but not an active one. Why might this be the case? The route in question is for an OpenVPN connection; would that have anything to do with it?
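    One thing worth noting is that the persistent entry (10.88.0.0/255.255.255.0 via 10.88.0.2) does not match the active entry (10.88.0.0/255.255.255.252 via 10.88.0.1), and a persistent route whose gateway is unreachable at boot, for example because the OpenVPN TAP adapter is not yet up, tends to stay inactive. A hedged sketch of removing the stale entry and re-adding it bound to the TAP adapter; the interface index 0x10003 is a placeholder you would read from the interface list at the top of "route print":
      rem hedged sketch: re-create the persistent route pinned to the TAP adapter
      route delete 10.88.0.0
      route -p add 10.88.0.0 MASK 255.255.255.0 10.88.0.2 METRIC 1 IF 0x10003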

    Read the article

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one lvm physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both home and root filesystems show similar problems. Checking a backup superblock doesn't help. e2fsck 1.41.12 (17-May-2010) lithe_root was not cleanly unmounted, check forced. Resize inode not valid. Recreate? no Pass 1: Checking inodes, blocks, and sizes Root inode is not a directory. Clear? no Root inode has dtime set (probably due to old mke2fs). Fix? no Inode 2 is in use, but has dtime set. Fix? no Inode 2 has a extra size (4730) which is invalid Fix? no Inode 2 has compression flag set on filesystem without compression support. Clear? no Inode 2 has INDEX_FL flag set but is not a directory. Clear HTree index? no HTREE directory inode 2 has an invalid root node. Clear HTree index? no Inode 2, i_size is 9581392125871137995, should be 0. Fix? no Inode 2, i_blocks is 40456527802719, should be 0. Fix? no Reserved inode 3 (<The ACL index inode>) has invalid mode. Clear? no Inode 3 has compression flag set on filesystem without compression support. Clear? no Inode 3 has INDEX_FL flag set but is not a directory. Clear HTree index? no .... Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, neither does the kernel log. Running a write-mode badblocks across the swap lv doesn't find problems either. So the disk may be failing, but not in an obvious way. At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?
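    If you do go ahead and image the unencrypted volumes before rebuilding, plain dd of the opened device-mapper nodes is enough. A hedged sketch; the partition, volume group and output paths are placeholders for your actual LUKS/LVM names:
      # hedged sketch: image the decrypted logical volumes to external storage first
      sudo cryptsetup luksOpen /dev/sda2 crypt          # adjust the partition to yours
      sudo vgchange -ay                                 # activate the LVM volumes inside
      sudo dd if=/dev/vg0/root of=/mnt/usb/root.img bs=4M conv=noerror,sync
      sudo dd if=/dev/vg0/home of=/mnt/usb/home.img bs=4M conv=noerror,sync
    conv=noerror,sync keeps the copy going past any unreadable sectors, which also gives you a rough read-error count for judging whether the SSD itself is failing.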

    Read the article

  • How do I install mod_dav_svn module on an Apache / MAMP server?

    - by fettereddingoskidney
    How do I install additional modules into my server configuration? Currently all of the other modules are installed in /Applications/MAMP/Library/modules... and I see that they are mod_*.so files, but I cannot seem to get mine to end up there. I am trying to set up an SVN repository and use my Apache (MAMP) server to serve the repository. I am using the Subversion installation that came (pre-installed?) on Mac OS X 10.5. The repository is working, but I cannot access it remotely through my MAMP server using a client program (Dreamweaver CS5). When I try, I get an error from Dreamweaver saying: Cannot connect to host xxx: Connection refused. This, I believe, is because I have not properly configured my Apache server to serve the SVN repository. So, I added the following lines to my httpd.conf file:
      <Location /subversion>
        DAV svn
        SVNPath /Applications/MAMP/htdocs/svn/
        AuthType Basic
        AuthName "Subversion repository"
        AuthUserFile /applications/mamp/htdocs/.htpasswd
        Require ServerAdmin
      </Location>
    Then I restarted the server with the command
      $ /Applications/MAMP/Library/bin/apachectl -k restart
    I used this path because otherwise the default apachectl path is set to /usr/sbin/apachectl, which is the location of the pre-installed command on Mac OS X, since the OS comes packaged with a built-in Apache server. And I get the error:
      Syntax error on line 1153 of /Applications/MAMP/conf/apache/httpd.conf: Unknown DAV provider: svn
    I checked the upper portion of httpd.conf and see that dav_module (mod_dav.so) is loaded and is in fact in the modules directory of my server. However, mod_dav_svn is not installed in that directory, nor is it in the LoadModule portion of httpd.conf. So I need to install it, right? I have tried installing modules into my MAMP server before but was never successful... because I don't know how to do it. Can someone please walk me through how to install that module? Thanks for your time!
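    You're right that the "Unknown DAV provider: svn" error just means mod_dav_svn was never loaded. The module has to be built against (or obtained for) MAMP's own Apache, not the system one. A hedged sketch of building it from the Subversion source tree using MAMP's apxs; the Subversion version and prefix are assumptions to adjust:
      # hedged sketch: build the Subversion Apache modules against MAMP's Apache
      cd subversion-1.6.x            # unpacked Subversion source tree
      ./configure --with-apxs=/Applications/MAMP/Library/bin/apxs \
                  --prefix=/Applications/MAMP/Library --disable-mod-activation
      make && sudo make install      # should drop mod_dav_svn.so and mod_authz_svn.so into .../modules
    Then load them in httpd.conf above your <Location /subversion> block:
      LoadModule dav_svn_module   modules/mod_dav_svn.so
      LoadModule authz_svn_module modules/mod_authz_svn.so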

    Read the article

  • How to create a static IP on Windows Server 2008 R2 so I can access the server remotely

    - by Aesir
    I have just purchased a HP Proliant N40L which I am intending to use as a NAS, learning tool and just in general something to mess around with. As a student via the Microsoft dreamspark program I can get a free copy of Windows Server 2008 R2 which I am using as the OS. So that I can remote to the box from outside of my local network and so that I can stream media from it to my PS3, I have read that I need to create a static IP for the server and use port forwarding to forward to this IP so I can remote in. Is this correct? I am not really sure how to do this and if I need to make these changes on my router configuration, on the OS or both. I am a novice when it comes to networking however most resources for Windows server 2008 R2 seem to assume a fair amount of experience already. I realise that using this particular OS may seem like overkill for what I currently wish to do with it (stream content to other devices and backup) but as I can get a copy for free it seems sensible. Edit: From reading answers posted I feel I should give more information. I have now tried to add a static IP address using my router configuration settings. I have used the getmac command to get the mac address of the server. My ISP is Virgin Media and I have gone to the LAN IP section and I have added an IP address to the DHCP Reservation Lease Info. I can now use remote desktop connection internally to remote to the server (so I am assuming assigning this IP has worked). How do I configure this on the OS as well? I am also unsure on how I would remote to this machine outside of my local network?
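    If you also want to pin the address on the OS side rather than relying only on the router's DHCP reservation, netsh can do it from an elevated command prompt. A hedged sketch; the adapter name and the addresses are placeholders to replace with the values your router hands out:
      rem hedged sketch: assign a static address on the server itself
      netsh interface ipv4 set address name="Local Area Connection" static 192.168.0.50 255.255.255.0 192.168.0.1
    For reaching the box from outside your LAN, you would additionally forward TCP 3389 (Remote Desktop) on the Virgin Media router to this address, and ideally restrict which source addresses may connect.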

    Read the article

  • Problems installing GIT on Ubuntu through SSH

    - by jamadri
    I'm having trouble installing git using this command: sudo apt-get install git-core. It's giving me the problems below and I'm not quite sure how to get this to work correctly. I tried running sudo apt-get update, and afterwards it just gives me problems. If anyone knows how to solve this, or a possible way of getting Git onto the machine differently, it would be of much help. I've never had a problem with using apt-get before.
      Do you want to continue [Y/n]? y
      WARNING: The following packages cannot be authenticated!
        liberror-perl git-core patch
      Install these packages without verification [y/N]? y
      Err http://us.archive.ubuntu.com jaunty/main git-core 1:1.6.0.4-1ubuntu2
        404 Not Found [IP: 91.189.92.183 80]
      Err http://us.archive.ubuntu.com jaunty/main patch 2.5.9-5
        404 Not Found [IP: 91.189.92.183 80]
      Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/g/git-core/git-core_1.6.0.4-1ubuntu2_amd64.deb  404 Not Found [IP: 91.189.92.183 80]
      Failed to fetch http://us.archive.ubuntu.com/ubuntu/pool/main/p/patch/patch_2.5.9-5_amd64.deb  404 Not Found [IP: 91.189.92.183 80]
      E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
    Any reply that can help fix this would be appreciated. I'm not sure if it's the git servers or my connection that might be the problem. I've used apt-get to pull other things; it's just failing with git.
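    The 404s point at the release rather than at git: the sources reference Jaunty, which is end-of-life, so its packages have been moved off the main mirrors and apt-get update alone won't help. A hedged sketch of repointing the sources at the old-releases archive before retrying (take a backup of sources.list first):
      # hedged sketch: repoint an EOL Ubuntu release at the old-releases archive
      sudo sed -i 's/us\.archive\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
      sudo sed -i 's/security\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
      sudo apt-get update
      sudo apt-get install git-core
    Upgrading to a supported release is the longer-term fix, since old-releases receives no security updates.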

    Read the article

  • Setting up port forwarding for web server

    - by reyjavikvi
    This could belong on Super User, but I thought this place was more appropriate. I want to run Apache on my computer and make it available to the outside world to test a couple of things. Apparently, I have to go into my router's (a TP-LINK TD 8910G) settings and forward port 80 to my PC's IP. So far so good. Thing is, since the router uses a web-based interface and it's kind of stupid, it told me that since I was using port 80 for this, I should access its settings through port 8080. Maybe it can't detect requests coming from the LAN, I don't know. Point is, now I can't access the configuration through either port, and I can't access the Internet. Specifically, trying to access anything (including 192.168.1.1, the router's settings) through port 80 turns up a blank page (maybe if I had the server running on my computer I'd get something, but I don't want to risk trying; I had to reset the router and restore the settings), and port 8080 gives a "Can't establish connection" error in Firefox (and similar ones in other browsers). Is there a way to configure the router to not redirect requests coming from inside the network? I'm a beginner with this stuff, so please try to explain in a simple way. If this is more appropriate for Super User, I'm sorry.

    Read the article

  • NAS is intermittently inaccessible

    - by Natalie
    Model: QNAP TS-410 Turbo NAS Firmware version: 3.2.5 Build 0409T Issue: Each day, users connect to share folders on the NAS system and have read/write permissions for the share folders to which they need access. However, it often asks them for their log-in details and - when provided with right (or wrong) credentials for a user with read/write permissions - it denies them access. I've checked the logs and I keep seeing the following warnings: 2011-11-23 16:26:29 System 127.0.0.1 localhost Re-launch process [rpc.mountd]. 2011-11-23 16:26:16 System 127.0.0.1 localhost Re-launch process [proftpd]. 2011-11-23 16:25:30 System 127.0.0.1 localhost Re-launch process [rpc.mountd]. 2011-11-23 16:25:15 System 127.0.0.1 localhost Re-launch process [proftpd]. 2011-11-23 16:24:33 System 127.0.0.1 localhost Re-launch process [rpc.mountd]. 2011-11-23 16:24:21 System 127.0.0.1 localhost Re-launch process [proftpd]. 2011-11-23 16:23:37 System 127.0.0.1 localhost Re-launch process [rpc.mountd]. 2011-11-23 16:23:25 System 127.0.0.1 localhost Re-launch process [proftpd]. They seem to occur per minute but I am uncertain about whether or not they are relevant to this issue. The "Login failed" warning has also displayed in the system connection logs which tells me when and which user was unable to log in, as shown below: 2011-11-22 16:11:07 Administrator 192.168.0.xx computer-01 SAMBA --- Login Fail 2011-11-22 16:11:07 Administrator 192.168.0.xx computer-01 SAMBA --- Login Fail 2011-11-22 16:11:06 Administrator 192.168.0.xx computer-01 SAMBA --- Login Fail 2011-11-22 13:46:14 administrator 192.168.0.yy --- HTTP Administration Login Fail 2011-11-22 13:46:09 administrator 192.168.0.yy --- HTTP Administration Login Fail 2011-11-21 15:17:22 user 192.168.0.zz computer-02 SAMBA --- Login Fail 2011-11-21 15:17:18 user 192.168.0.zz computer-02 SAMBA --- Login Fail 2011-11-21 15:17:17 user 192.168.0.zz computer-02 SAMBA --- Login Fail I've researched this on Google and the QNAP forums and have not come up with a resolution as yet.

    Read the article

  • Configure IIS to pass-through CGI output without any conditioning

    - by Daniel Watrous
    I'm building a web service on Windows 2008 R2 with IIS 7.5 and Python 2.5. Right now I have the Handler Mappings and everything else set up just fine, except that IIS is modifying what it gets back from the CGI script before sending it along to the client. Here's an example. I wrote the following CGI script:
      # hello.py
      print "Status: 400 Bad Request"
      print "Content-Type: text/html"
      print
      print "Error Message"
    According to the HTTP spec this should be fine, and a status of 400 should allow for a description of the error in the body of the response. When the server response actually comes back to me I get the following:
      Status: 400 Bad Request
      Date: Fri, 11 Feb 2011 17:58:30 GMT
      X-Powered-By: ASP.NET
      Connection: close
      Content-Length: 11
      Server: Microsoft-IIS/7.5
      Content-Type: text/html

      Bad Request
    I've seen on this forum and others that I can change or eliminate the X-Powered-By header element, but I would like IIS to leave the response alone altogether. I'm not sure why it takes my response, deletes "Error Message" from the body, replaces it with "Bad Request", and then adds all that other junk in. Is there some way to tell IIS to just send the response along without making any changes at all?
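    The body replacement on error statuses usually comes from IIS's custom-error handling (the httpErrors section) rather than from the CGI layer itself; setting existingResponse to PassThrough tells IIS to leave a response it already has untouched. A hedged sketch, run from an elevated prompt; "Default Web Site" is a placeholder for your site name:
      rem hedged sketch: stop IIS custom errors from replacing the CGI response body
      %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" /section:httpErrors /existingResponse:PassThrough
    Note this only addresses the body substitution; headers such as X-Powered-By and Server are added separately and would need their own configuration to remove.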

    Read the article

  • VNC from Windows to OS X Lion: App stuck in fullscreen mode

    - by Jonny
    I'm connecting to a remote Mac through a Windows machine. Ah, it gets more complicated than that. I'm sitting at my iMac. In it I use VirtualBox to launch Windows 7. In that I have a VPN connection to a remote Windows network, which allows me to use Remote Desktop to one of the Windows (Vista!) boxes over there. From that Vista box I VNC into a Mac running OS X Lion. (Don't ask me why, but that Mac doesn't have a public IP, which prevents me from accessing it in the first place.) So: OS X Lion - (virtual) Windows 7 - Windows Vista - OS X Lion. That last Mac was recently upgraded from Snow Leopard. Now with Lion, apps sometimes run in fullscreen mode, and somehow I can't get out of that fullscreen. Normally you'd move the mouse pointer to the top of the screen and the menu bar drops down, letting you reach the fullscreen button at the top right. In my current setup, that menu bar never drops down on the remote Mac at the end of the line. Any ideas?

    Read the article

  • How to make Virtualbox, OpenVPN, and Win2008 Web R2 like one another?

    - by Aquitaine
    Back with web developer guy wearing net admin hat. Hopefully this is an easy one. We have two servers on a public network at a hosted facility. Server A is our public-facing web server and server B is our database server. Both are running Windows 2008 Server R2 Web Edition. We want Server B isolated from everything except Server A, such that anyone who has to connect to server B goes through the VPN on Server A. It's not perfect since we have no access to do this on the router side, but it's what we've got. We've set up VirtualBox and OpenVPN Access Server on Server A. It has one network interface set to 'NAT' mode, such that OpenVPN gets its IP at 10.0.2.x, and to connect to the OpenVPN interface, I go to the local IP for the Virtualbox network adapter, 192.168.56.x, which works as I configured the appropriate ports using VBoxManage. My question is, do I need to be using Bridged Networking and give the VPN server its own IP, or is there some way to tell the server (either Windows or the Virtualbox OpenVPN) that 'any public connection on the real external IP on port X should be directed to this internal LAN address of 192.168.1.x on port Y'? OpenVPN itself doesn't seem to be aware of the server's real external IP unless we put it in Bridged networking mode; is that necessary or advisable? We're without RRAS since this is Web edition, but I feel like what we're going for is pretty simple. Thanks! Aq
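    You shouldn't strictly need bridged mode for this: with NAT networking, the host can forward the public ports straight into the guest and OpenVPN Access Server never has to know the external IP. A hedged sketch with VBoxManage; the VM name is a placeholder and the ports are assumptions based on Access Server's usual defaults (TCP 443 and 943, UDP 1194):
      rem hedged sketch: forward Server A's public VPN ports into the guest
      rem (run with the VM powered off, or use "controlvm ... natpf1" while it is running)
      VBoxManage modifyvm "openvpn-as" --natpf1 "vpn-tcp,tcp,,443,,443"
      VBoxManage modifyvm "openvpn-as" --natpf1 "admin-ui,tcp,,943,,943"
      VBoxManage modifyvm "openvpn-as" --natpf1 "vpn-udp,udp,,1194,,1194"
    Bridged mode is mainly useful if the VPN appliance needs to appear as its own machine on the LAN with its own address; for simple "external port X reaches the VM" behaviour, the NAT port-forward rules above are usually enough.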

    Read the article

  • iptables to block non-VPN-traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program. We installed OpenVPN and registered with a VPN provider. The connection works well and we have a virtual tun0 interface for tunneling. To achieve the goal of tunneling only a single program through the VPN, we start the program with
      sudo -u username -g groupname command
    and added an iptables rule to mark all traffic coming from groupname:
      iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42
    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets. Problem: if the VPN fails, there is a chance that the to-be-tunneled program communicates over the normal eth0 interface. Desired solution: marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first. I tried the following commands, which didn't work:
      iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
      iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT
    The problem might be that the above iptables rules didn't work because the packets are first marked, then put into tun0, and then transmitted by eth0 while they are still marked. I don't know how to de-mark them once they have been through tun0, or how to tell iptables that marked packets may pass eth0 if they have already been through tun0 or are going to the gateway of my VPN provider. Does someone have any idea for a solution? Some config info:
      iptables -nL -v --line-numbers -t mangle
      Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
      num   pkts bytes target    prot opt in  out  source     destination
      1     591K   50M MARK      all  --  *   *    0.0.0.0/0  0.0.0.0/0    owner GID match 1005 MARK set 0x2a
      2    82812 6938K CONNMARK  all  --  *   *    0.0.0.0/0  0.0.0.0/0    owner GID match 1005 CONNMARK save

      iptables -nL -v --line-numbers -t nat
      Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
      num   pkts bytes target    prot opt in  out  source     destination
      1       15  1052 SNAT      all  --  *   tun0 0.0.0.0/0  0.0.0.0/0    mark match 0x2a to:VPN_IP

      ip rule add from all fwmark 42 lookup 42
      ip route show table 42
      default via VPN_IP dev tun0

    Read the article

  • Postfix enable SSL 465 failed

    - by user221290
    I have installed Postfix and enabled SSL/TLS. I have just tested it: I can send email on ports 25 and 587, but cannot send email on port 465. The log is:
      May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server hello A
      May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write certificate A
      May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 write server done A
      May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:SSLv3 flush data
      May 26 17:24:06 mail postfix/smtpd[28721]: SSL3 alert read:fatal:certificate unknown
      May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept:failed in SSLv3 read client certificate A
      May 26 17:24:06 mail postfix/smtpd[28721]: SSL_accept error from unknown[10.155.36.240]: 0
      May 26 17:24:06 mail postfix/smtpd[28721]: warning: TLS library problem: 28721:error:14094416:SSL routines:SSL3_READ_BYTES:sslv3 alert certificate unknown:s3_pkt.c:1197:SSL alert number 46:
      May 26 17:24:06 mail postfix/smtpd[28721]: lost connection after CONNECT from unknown[10.155.36.240]
      May 26 17:24:06 mail postfix/smtpd[28721]: disconnect from unknown[10.155.36.240]
    My email server is 10.155.34.117 and the email client is 10.155.36.240. The client error is: Could not connect to SMTP host: 10.155.34.117, port: 465. My master.cf:
      smtps     inet  n       -       n       -       -       smtpd
        -o smtpd_tls_wrappermode=yes
    My main.cf:
      smtpd_use_tls = yes
      smtpd_tls_auth_only = no
      smtpd_tls_key_file = /etc/pki/myca/mail.key
      smtpd_tls_cert_file = /etc/pki/myca/mail.crt
      smtpd_tls_CAfile = /etc/pki/myca/cacert_new.pem
      smtpd_tls_loglevel = 2
      smtpd_tls_received_header = yes
      smtpd_tls_session_cache_timeout = 3600s
      smtpd_tls_session_cache_database = btree:/etc/postfix/smtpd_scache
    It seems to be a certificate issue, but I have tried granting access to the certificate files many times... I have no idea what else to check. Please help!
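    The "certificate unknown" alert in that log is sent by the client, which means the client is rejecting the server certificate (typically because it does not trust your private CA), not that the wrapper-mode setup on 465 is wrong. A hedged way to confirm from the client machine, plus, if the client is a Java application as the "Could not connect to SMTP host" wording suggests, a way to trust the CA in its JRE (paths and alias are assumptions):
      # hedged sketch: inspect the chain the server presents on 465
      openssl s_client -connect 10.155.34.117:465 -CAfile cacert_new.pem
      # hedged sketch: import the private CA into a Java client's trust store
      keytool -importcert -trustcacerts -alias myca -file cacert_new.pem \
              -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit
    Ports 25 and 587 likely work because the client only upgrades to TLS opportunistically there, whereas 465 requires the TLS handshake to succeed before anything else happens.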

    Read the article

  • Netbook (Samsung N220) on Ubuntu 10.04 slows down WiFi for other computers

    - by Joachim
    I encountered a really odd problem with my new netbook. I am running Ubuntu 10.04 on a Samsung N220 Mito. So far everything worked fine. Now I tried the machine for the first time in our work group where we have a wifi (with internet access) for all laptops. The wifi is controlled by a computer running Suse 9.3 which provides a DHCP server and imposes a firewall. At the moment there is only a macbook in the wifi, where no problems with the internet or wifi connection are encountered. Now coming to my actual problem: In addition to the macbook i connect the Samsung N220 to the Wifi. Problem: My download speed is for some reason limited to 70KB/s max. This is neither a limitation of the server/website i browse on, nor a configuration of the netbook: at home i have 500KB/s download speeds. Furthermore, it is not a default limitation for "untrusted" or "new machines" in the wifi, as for instance other new laptops get full speed internet with our wifi. Problem: Once the Samsung N220 is generating traffic in the wifi, the wifi is slowed down dramatically for all other machines: I run a ping to the router from the macbook. The ping times with the N220 ideling are 2-6ms. When I start downloading or browsing in the web with the N220 the ping speed drops to 800ms. Vice versa, when the macbook is generating the traffic the ping of the N220 to the rooter stays constant at around 2-6ms. So clearly, it is some problem originating from my netbook or maybe its treatment in the wifi. Thanks for any help

    Read the article

  • Windows 7 remote desktop encryption error every few minutes

    - by rfrankel
    Because of an error in data encryption, this session will now end. This is the error I've been getting more and more frequently over the past few days, to the point that I can't ignore it because it's happening consistently within 5 minutes of connecting - sometimes within a few seconds. Both the remote and local machines are Windows 7 Pro x64. The remote machine is behind a Linksys RV082, and I'm using UPnP to forward a remote port to the correct local port. This setup had been working fine for several months, and I can't think of any recent relevant changes that might have been made. Things I've already tried: Disabling unnecessary components of the network connection on the remote machine, until only IPv4 and Client for Microsoft Networks remain. Disabling TCP large send offload on both the remote and local machines. Confirming that the remote machine is not mentioned anywhere in any DMZ settings on the Linksys router. Confirming that there are no x509-related registry keys screwing things up (this is the suggested fix for a slightly different error anyway). These are the only solutions I've been able to find after about an hour of searching, and most of them apply to XP or Server 2003 in any case. If anyone could suggest something else, it would be much appreciated.
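    Two workarounds commonly suggested for this exact "error in data encryption" message are disabling TCP auto-tuning and the NIC offload features, on the theory that a flaky offload path corrupts the encrypted RDP stream; whether that is what is happening behind your RV082 is an assumption to verify. A hedged sketch to run in an elevated prompt on the client (and optionally the remote machine), then retest:
      rem hedged sketch: commonly suggested workarounds, reversible by setting the values back
      netsh interface tcp set global autotuninglevel=disabled
      netsh interface tcp set global chimney=disabled
      netsh interface tcp set global rss=disabled
    If the drops stop, you can re-enable the settings one at a time to find the actual culprit rather than leaving all three off permanently.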

    Read the article

  • Issues with VSFTPD / FTP on Linux Ubuntu server - Steps for Troubleshooting?

    - by jnolte
    I am dealing with an issue I am unclear on how to resolve and have been pulling my hair out for some time. I have been trying to configure an FTP user using the following steps (we use this same documentation on all servers):
      1. Install the FTP server: apt-get install vsftpd
      2. Set local_enable and write_enable to YES and the anonymous user to NO in /etc/vsftpd.conf
      3. Restart ("service vsftpd restart") to allow the changes to take place
      4. Add a WordPress user for FTP access in WP Admin
      5. Create a fake shell for the user: add "/usr/sbin/nologin" to the bottom of the /etc/shells file
      6. Add an FTP user account: useradd username -d /var/www/ -s /usr/sbin/nologin, then passwd username
      7. Add these lines to the bottom of /etc/vsftpd.conf:
           userlist_file=/etc/vsftpd.userlist
           userlist_enable=YES
           userlist_deny=NO
      8. Add username to the list at the top of /etc/vsftpd.userlist
      9. Restart vsftpd: "service vsftpd restart"
      10. Make sure the firewall is open for FTP: "ufw allow ftp"
      11. Allow username to modify the /var/www directory: "chown -R /var/www"
    I have also gone through everything listed on this post, with no luck. I am getting connection refused. Sorry for the poor text formatting above; I think you get the idea. This is something we do over and over, and for some reason it is not cooperating here. The setup is Ubuntu 12.04 LTS and vsftpd 2.3.5. Thank you in advance.
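    "Connection refused" means nothing is accepting on port 21 at all, so before revisiting the user setup it is worth confirming the daemon is actually running and listening. A hedged diagnostic sketch; note as an assumption worth checking that the 12.04 vsftpd build is known to refuse to start when both listen=YES and listen_ipv6=YES are set in vsftpd.conf:
      # hedged diagnostic sketch: is vsftpd up and bound to port 21?
      sudo status vsftpd                     # upstart job status on Ubuntu 12.04
      sudo netstat -tlnp | grep ':21 '       # should show vsftpd listening
      sudo ufw status                        # confirm 21/tcp is actually allowed
      grep -E '^listen' /etc/vsftpd.conf     # listen=YES and listen_ipv6=YES together can prevent startup
      sudo tail -n 50 /var/log/syslog | grep -i vsftpd
    If the daemon is listening locally but remote clients still get refused, the problem moves to the firewall or to passive-mode data ports rather than the vsftpd user configuration.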

    Read the article

  • White Screen, No Errors.

    - by GruffTech
    So... interesting problem for you guys, as I'm completely lost as to what to do or where to take the next step. Server and application environment:
      CentOS release 5.3 (Final)
      Apache 2.2.3-22
        EnableSendfile off
        EnableMMAP off
        ErrorLog logs/error_log
        LogLevel debug
      PHP 5.2.6-2
        error_reporting = E_ALL
        display_errors = on
        log_errors = on
        max_execution_time=300
        max_input_time=60
        memory_limit=512mb
      Kohana 2.3 PHP environment
      HAProxy 1.3.15.6-2
      MemCacheD 1.2.6-1
    Our application is split between 3 web servers mounting an NFS storage server, with sticky load balancing between the 3 web servers. The application seemingly runs great, but every so often, instead of loading, the application just shows a pure white page. Not a 404 error, or a 500 server error: a clean white page. And it returns instantly, so it's not an execution-time error. Nothing in the error log or server error log; the proxy log shows a standard proxied connection, and the access log just shows the standard 200 status with 256 bytes transferred. To me, this says that the application itself is having a problem: a rare, unexplainable, seemingly random problem that causes what we've now called the "White Screen of Death." Our developers all say that since there is nothing going to our error logs, it must be a server problem. But I say the same thing: there's nothing going to ANY of our logs (relevant to this anyway), and we're not having httpd children crash from what I can tell. Any ideas on how I can increase my logging, or somehow prove that it's not a bug in PHP, Apache, CentOS, etc.? Or, if it is somehow a bug, identify it?
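    One cheap way to prove or rule out PHP is to log fatals from inside the request itself: a shutdown function still runs after most fatal errors, even when nothing reaches the normal error log. A hedged, PHP 5.2-compatible sketch you could wire in with php.ini's auto_prepend_file; the log path is an assumption. Separately, it is worth double-checking that memory_limit value: PHP's documented shorthand suffixes are K, M and G, so "512mb" may not parse as you intend.
      <?php
      // hedged diagnostic sketch: record fatals that would otherwise leave a blank page
      function wsod_log_fatal() {
          $err = error_get_last();   // available since PHP 5.2.0
          if ($err && in_array($err['type'], array(E_ERROR, E_PARSE, E_CORE_ERROR, E_COMPILE_ERROR))) {
              error_log(sprintf('[WSOD] %s in %s on line %d',
                  $err['message'], $err['file'], $err['line']), 3, '/tmp/wsod.log');
          }
      }
      register_shutdown_function('wsod_log_fatal');
    If the white screens never produce an entry here, that points back toward something outside PHP (Apache, APC shared memory, or the proxy) rather than application code.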

    Read the article

  • Solaris 10 invalid ARP requests from 0.0.0.0? Link up/down every hour or 2

    - by JWD
    The guys at the data center where I'm hosting a server running Solaris 10 are telling me that my server is making a lot of invalid arp requests. This is an example of a portion of what was sent to me from the logs (with Mac addresses and IP addresses changed). [mymacaddress]/0.0.0.0/0000.0000.0000/[myipaddress]/[Datestamp]) It's being logged every hour. I don't see anything in the arp tables (arp -a) or routing tables (netstat -r) and I don't see anything relating to 0.0.0.0 when snoping the arp requests. The only place I see any reference to 0.0.0.0 is if I do netstat -a for the SCTP SCTP: Local Address Remote Address Swind Send-Q Rwind Recv-Q StrsI/O State ------------------------------- ------------------------------- ------ ------ ------ ------ ------- ----------- 0.0.0.0 0.0.0.0 0 0 102400 0 32/32 CLOSED But not really sure what that means. Doesn't seem like I can disable SCTP. There are some tunable SCTP parameters but it's not something I'm familiar with. Do I have to add changes to /etc/system? Looks like sctp_heartbeat_interval might be what I need to change? If it makes any difference, I have a few solaris zones running on this server, each with their own IP address on a virtual interface. eth0:0, eth0:1, etc. Does anyone have any idea what might be causing this and how to stop it? I think the switch I'm connected to doesn't like it and momentarily drops the connection. Is there anyway to at least block those requests using ipfilter or something else? Update: This was happening more frequently but now it seems to be happening roughly every hour or every two hours. It's not consistent. I tried setting setting the link speed and duplex to match the switch port and that seemed to make it stop happening for a few hours but then it started again.

    Read the article

  • Encrypted WiFi with no password?

    - by Ian Boyd
    Is there any standard that allows a WiFi connection to be encrypted, but not require a password? I know that (old, weak) WEP and newer WPA/WPA2 require a password (i.e. a shared secret). Meanwhile my own wireless connections are "open", and therefore unencrypted. There is no technical reason why I can't have an encrypted link that doesn't require the user to enter any password. Such technology exists today (see public-key encryption and HTTPS). But does such a standard exist for WiFi? Note: I only want to protect communications, not limit internet access. I get the sense that no such standard exists (since I'm pretty capable with Google), but I'd like it confirmed. Clarification: I want to protect communications, not limit internet access. That means users are not required to have a password (or its moral equivalent). This means users are not required:
      - to know a password
      - to know a passphrase
      - to enter a CAPTCHA
      - to draw a secret
      - to have a key fob
      - to know a PIN
      - to use a pre-shared key
      - to have a pre-shared file
      - to possess a certificate
    In other words: it has the same accessibility as before, but is now encrypted.

    Read the article

  • Why is my apache2, mod_fcgid, php configuration causing 100% cpu usage?

    - by Scott Lundgren
    Page load makes a quick initial connection, then hangs about 10 seconds before the page renders. When the server load goes up I start watching top, and I see that both CPUs get pegged at times to 100% by between 4 and 8 php-cgi processes. My theory is that since I never see RAM usage go above 50%, Apache is able to handle the requests coming in, but is queueing them for PHP to process. What is wrong with my mod_fcgid/PHP configuration?
      RHEL 5.4, 2 x Xeon E5420 @ 2.50 GHz, 4 GB RAM
      Apache 2.2.3
        Timeout 30
        KeepAlive On
        MaxKeepAliveRequests 0
        KeepAliveTimeout 5
        <IfModule worker.c>
          StartServers         2
          MaxClients         300
          MinSpareThreads     25
          MaxSpareThreads     75
          ThreadsPerChild     25
          MaxRequestsPerChild  0
        </IfModule>
      mod_fcgid 2.2.10
        LoadModule fcgid_module modules/mod_fcgid.so
        <IfModule !mod_fastcgi.c>
          AddHandler fcgid-script fcg fcgi fpl php
        </IfModule>
        SocketPath run/mod_fcgid
        SharememPath run/mod_fcgid/fcgid_shm
        DefaultInitEnv PHPRC "/etc/"
        FCGIWrapper /usr/bin/php-cgi .php
        MaxRequestsPerProcess 1500
        MaxProcessCount 20
        IPCCommTimeout 240
        IdleTimeout 240
      APC 3.0.19
        extension = apc.so
        apc.enabled=1
        apc.shm_segments=1
        apc.optimization=0
        apc.shm_size=32
        apc.ttl=7200
    The APC cache is 43% used with a 99% hit rate.
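    One mismatch that can produce this stall-then-render pattern with mod_fcgid is PHP's own request limit: php-cgi exits after PHP_FCGI_MAX_REQUESTS requests (500 by default), while MaxRequestsPerProcess here allows 1500, so workers may be dying and respawning under load. A hedged sketch of aligning the two in the mod_fcgid config; the exact numbers are assumptions, the point is to keep PHP's limit at or above mod_fcgid's:
      # hedged sketch: keep php-cgi alive at least as long as mod_fcgid expects it to be
      DefaultInitEnv PHP_FCGI_MAX_REQUESTS 1500
      # let mod_fcgid manage the process pool rather than php-cgi forking its own children
      DefaultInitEnv PHP_FCGI_CHILDREN 0
    If the hang persists after that, watching how many php-cgi processes exist versus MaxProcessCount during the 10-second stall would show whether requests really are queueing for a free PHP worker.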

    Read the article

  • Nginx + php-fpm - recv() error

    - by Ilya Biryukov
    I get the following error in the nginx log:
      [error] 17734#0: *6643 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: [cut], server: [cut], request: "GET /venues HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "[cut]"
    I have a dedicated box with 8 GB RAM and a quad-core chip. Good server. Nginx, php-fpm and MySQL, all latest versions, running under Ubuntu 10.04. I only get this when I stress test the server with siege. If I increase the number of concurrent connections to 100, I can get up to 20% of all requests to fail. Furthermore, I don't get this on pages that have no MySQL queries, and only a few failures on pages with a moderate number of queries. But I'm not sure if that has anything to do with it. I have a feeling this is something to do with PHP, but I can't figure it out. Any suggestions of where to even start looking? Update: the PHP error log is silent. No record of anything going wrong.
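    A reset from 127.0.0.1:9000 under concurrency usually means the php-fpm worker died or the pool ran out of children, and php-fpm records that in its own log (commonly /var/log/php5-fpm.log, or whatever error_log points to in php-fpm.conf; the path is an assumption), not in PHP's error log, which is why the latter stays silent. If that log shows "max_children reached" or workers exiting on signals, a hedged pool sketch to experiment with follows; the numbers are assumptions to size against your per-worker memory:
      ; hedged sketch for the [www] pool section of the php-fpm configuration
      pm = dynamic
      pm.max_children = 50
      pm.start_servers = 10
      pm.min_spare_servers = 5
      pm.max_spare_servers = 15
    The correlation with query-heavy pages fits this picture: those requests hold workers longer, so the pool empties sooner at 100 concurrent connections.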

    Read the article

  • Can't find PC on network

    - by Simon Verbeke
    I just got myself a new laptop and set it up. It is connected to the wireless internet in my home. I then wanted to create a homegroup between the laptop and my desktop, but they can't find each other. Probably because the desktop has a wired connection to the router and the laptop is connected to a wireless access point. The router and the AP are connected to a switch in the middle by cable. A sketch of the network:
      Laptop - - - Wireless Access Point ----- Switch ----- Router ----- Desktop
         (Wireless)                     (Wired)      (Wired)      (Wired)
    They both point to the same gateway and DHCP server (on 192.168.0.1), and I can ping that address from both PCs. When I try to ping either of the PCs, the pings time out. The subnets are also the same (255.255.255.0) and the IPs are in the same range (192.168.0.114 laptop, 192.168.0.205 desktop). So I don't really understand what I need to do to be able to access either computer from the other. The weird thing is that Synergy (to use mouse and keyboard over the network) works, just by using the IPs assigned to both PCs. The access point is a Linksys WAP54G, but I'm unsure of the router; it has a custom casing from our ISP and hides any clues for identifying the product. I'm going to google a bit so I can add that info later. Both PCs are Windows 7 64-bit: the desktop is Ultimate, the laptop Professional.
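    Since Synergy's own TCP traffic gets through while ping and discovery time out in both directions, the usual suspect is the Windows Firewall profile on one or both machines rather than the AP/switch path. A hedged sketch to run in an elevated prompt on both PCs; these rule-group names are the built-in English ones:
      rem hedged sketch: allow discovery, file sharing and ping on the active profile
      netsh advfirewall firewall set rule group="Network Discovery" new enable=Yes
      netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=Yes
    Also worth checking for HomeGroup specifically: both machines need their network location set to Home, and IPv6 left enabled, since HomeGroup relies on it.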

    Read the article

  • VMWare steals IP addresses

    - by Ishan Amin
    I'm having a peculiar problem that I think I have narrowed down to VMware. For the past year, every once in a while we lose our internet connection, and not all users (about 10 users) go down at the same time; it's usually one by one. First someone will call me and say "Internet is down", and then we go reset the router and modem and switch, and it works again for a while, then goes down again without any pattern or replicable sequence. We go repeat the steps again to get everyone in the office running again. We called our Internet Service Provider and they constantly say, "We see your modem and we see your router, and from their end everything is OK." We replaced our router and switch and modem, twice! Last Friday it dawned on me that every time we turn on a VMware machine, this sequence of taking everyone down starts, which also explains the "IP Conflict Found" message my users get. So we did a lot of VMware testing, and lo and behold, it takes my internet down. My Yahoo and Gtalk keep working, but the web is down when the VMware machines are started. I use bridged networking for all the VMware machines, but I don't know what else to set it to. Now, sorry for the long rambling, but does anyone have any clue on how to stop this? Thanks, IA

    Read the article

  • Is it possible to change an "Unidentified Network" into a "Home" or "Work" network on Windows 7

    - by Rhys
    I have a problem with Windows 7 RC (7100). I frequently use a crossover network cable on WinXP with static IP addresses to connect to various industrial devices (e.g. robots, pumps, valves or even other Windows PCs) that have Ethernet network ports. When I do this on Windows 7, the network connection is classed as an "Unidentified Network" in the Network and Sharing Center and the public firewall profile is enforced by Windows. I do not want to change the public profile and would prefer to use the Home or Work profile instead. For other networks, like Home and Work, I'm able to click on them and change the classification. This is not available for unidentified networks. My questions are these:
      1. Is there a way to manually override the "Unidentified Network" classification?
      2. What tests are performed on the network that fail, therefore classifying it as an "Unidentified Network"?
    By googling (hitting mainly Vista issues) it seems that you need to ensure that the default gateway is not 0.0.0.0. I've done this. I've also tried to remove IPv6, but this does not seem possible on Windows 7.

    Read the article
