Search Results

Search found 6517 results on 261 pages for 'localhost'.

  • iptables port forwarding works only for localhost

    - by Venki
     Below is my iptables config. I used it to access a Node.js website running on port 9000 through port 80. It works fine only if I access the website through localhost / loopback. When I try to use the IP of eth0, which is assigned by my router through DHCP (e.g. 192.168.0.103), it does not work. I am not able to figure out what is wrong here; I've already burnt a day on this :( Edit (more information): Earlier I was using this configuration to develop the website, with the domain name pointed to 127.0.0.1 in the /etc/hosts file, and it was working fine. Now I am trying to deploy the website on a VPS with a static IP, and this configuration does not work with the static IP either.

         # redirect port 80 to port 9000
         *nat
         :PREROUTING ACCEPT [57:3896]
         :INPUT ACCEPT [0:0]
         :OUTPUT ACCEPT [4229:289686]
         :POSTROUTING ACCEPT [4239:290286]
         -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000
         -A OUTPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9000
         COMMIT

         # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL).
         -A INPUT -p tcp --dport 80 -j ACCEPT
         -A INPUT -p tcp --dport 443 -j ACCEPT
         -A INPUT -p tcp --dport 9000 -j ACCEPT
         -A INPUT -j REJECT
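
     One thing worth ruling out first (an assumption, since the post doesn't say how the Node app binds): if the app listens only on 127.0.0.1:9000, the PREROUTING REDIRECT delivers external packets to an address nothing is listening on, while the loopback test still works. Checking is quick:

         # show which address the Node app is bound to
         sudo netstat -tlnp | grep :9000
         # 127.0.0.1:9000 = loopback only; 0.0.0.0:9000 = all interfaces

     If it shows 127.0.0.1:9000, make the app listen on all interfaces (in Node, server.listen(9000) with no host argument) and retest against 192.168.0.103.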

  • Not able to connect to perforce server outside of localhost

    - by bobber205
     My setup is a Qwest PK5000 router with a Linksys router running Tomato behind it. I have the DMZ pointed at the Linksys router (the server is on the Tomato router's network). For my applications that open sockets, and for uTorrent (port 6883), I ended up having to do advanced port forwarding and forward specific ports in addition to having the DMZ on my router. The problem is that I cannot connect to Perforce from another machine, whether on the LAN or off it. Any ideas? :) Thanks!
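
     A hedged guess, since the post doesn't say which port the server uses: p4d listens on TCP 1666 by default, so it would need the same explicit forwarding on both routers that the other applications did. From a client, it's quick to confirm what you're pointing at:

         p4 -p <server-ip>:1666 info    # prints server info if the connection works
         telnet <server-ip> 1666        # cruder check that the port is reachable

     (<server-ip> is a placeholder; use the Tomato-side address from the LAN and the public address from outside.)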

  • Getting Windows (VMware) to load from OSX's localhost without an Internet Connection

    - by Jonah Goldstein
     I'm using MAMP to host my local sites, and VirtualHostX so that I can access sites during local development via a convenient URL like mysite.dev. I'm also running Windows XP via VirtualBox, and it would be great to be able to load any of my local sites within Windows while offline, as I'm currently often working without access, on the move, unfortunately. I know that I can add my IP and a nice domain name to the hosts file in C:/WINDOWS/system32/drivers/etc, and I can find my IP through the terminal with "ifconfig" while I'm online. The problem is that when I'm not online, there's no IP. Even when there is one (when I have a connection), I still have to grab it and update the Windows hosts file all the time, since I'm developing from a laptop and get a new IP at the drop of a hat. I found a tutorial where the author is able to get a permanent IP. He uses VMware Fusion as his virtual machine, which is the only difference between his setup and mine. By running the terminal command "ifconfig vmnet1" he gets a secret IP the virtual machine uses to talk to OS X - and that doesn't change, which is awesome. I'm assuming it exists even if he's offline. His tutorial is here: http://bit.ly/U2lq It would be pretty fantabulous if I could replicate this with VirtualBox. Anyone have ideas? Thanks :)
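
     VirtualBox has a direct analogue of VMware's vmnet1: a host-only network. A sketch (the interface name and subnet are VirtualBox's defaults, so verify them on your install): create a host-only adapter under VirtualBox Preferences > Network, attach a second network adapter of type "Host-only" to the XP VM, and OS X gains a vboxnet0 interface, by default 192.168.56.1, that exists with or without an internet connection:

         ifconfig vboxnet0    # on the Mac; should show 192.168.56.1

     Then the XP guest's hosts file can point at it permanently:

         # C:\WINDOWS\system32\drivers\etc\hosts (in the XP guest)
         192.168.56.1    mysite.dev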

  • Open mysql only to localhost and a particular address

    - by Rodrigo Asensio
     My config: Ubuntu Server 9 and MySQL 5. In my.cnf:

         bind-address = 0.0.0.0

     My iptables script:

         iptables -A INPUT -i eth0 -s 99.88.77.66 -p tcp --destination-port 3306 -j ACCEPT

     I can connect to MySQL from any place, not only from that IP. I did an iptables-save and an /etc/init.d/networking restart, but I can still connect from any IP. Any clue?
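
     The likely gap, assuming the line above is the whole ruleset: an ACCEPT rule on its own blocks nothing, and with the INPUT chain's default policy of ACCEPT every other source sails through too. A minimal sketch of the intended behavior - allow the one address, then refuse everyone else on 3306:

         iptables -A INPUT -i eth0 -s 99.88.77.66 -p tcp --dport 3306 -j ACCEPT
         iptables -A INPUT -i eth0 -p tcp --dport 3306 -j DROP

     Local connections are unaffected because the rules match only eth0, and bind-address = 0.0.0.0 keeps MySQL listening on all interfaces as before.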

  • Cannot connect to a Cassandra DB from localhost

    - by DJYod
     Hello, I don't know if I'm on the right site. I installed a single Cassandra node on OpenSolaris; I don't have any other nodes. On the same server I installed Ruby 1.8 with the cassandra gem. If I connect from my own computer to the Cassandra node through the gem, it works perfectly; if I try the same from the gem on the server itself, it says there is nothing listening on 127.0.0.1. Yet I can connect locally to the instance using telnet 127.0.0.1 9160 and it works... any idea? Thank you!
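
     Since telnet to 127.0.0.1 9160 works, the Thrift listener itself is fine, so a hedged guess is name resolution on the OpenSolaris box: if the gem's default server string uses "localhost" and that resolves to something other than 127.0.0.1 (for example an IPv6 entry listed first), the local connect fails while remote clients are unaffected. Two quick checks:

         getent hosts localhost    # what does the box think localhost is?

     and constructing the client with an explicit '127.0.0.1:9160' address instead of relying on the gem's default, to see whether the error goes away.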

  • How to fake an IP at localhost without a loopback adapter

    - by sexer
     How can I fake an IP on my own PC? For example, say there is an IP address, 201.91.81.71; that host is somewhere outside of my network and is hosting a webserver. How can I set up a website on my own PC so that when I go to the browser and try to open 201.91.81.71, it actually opens the website on my own PC? PS: I need this with IP addresses, not domain names, since I need to implement it for a non-web service. My first guess was installing a loopback adapter with 201.91.81.71 as its IP, but since the subnet sometimes works and sometimes doesn't, that isn't a stable solution. My second guess was adding a route to the route table:

         route add 201.91.81.71 mask 255.255.255.255 192.168.1.2

     192.168.1.2 is the IP address of my NIC. If I could add this route it would work, but Windows doesn't let me do so.

         route add 201.91.81.71 mask 255.255.255.255 127.0.0.1

     It doesn't let me set 127.0.0.1 as the gateway if 201.91.81.71 isn't set on a NIC; that's why I sometimes set up the loopback adapter, after which this route add happens automatically - but the adapter needs a subnet mask, the mask doesn't match the IP, and I cannot set 255.255.255.255. I'm in real trouble here. Can I get some help? Thx.
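
     One avenue worth testing (an assumption drawn from similar setups such as direct-server-return load balancing, not something confirmed here): the Windows GUI refuses a host mask on an adapter, but netsh has been known to accept 255.255.255.255 on the Microsoft Loopback Adapter, which would give the stable /32 binding the route needs:

         netsh interface ip add address "Loopback" 201.91.81.71 255.255.255.255

     ("Loopback" must match the adapter's actual name in Network Connections.) With the address bound, traffic to 201.91.81.71 from the same machine should terminate locally without any route table edits.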

  • Accessing resources on localhost using domain credentials

    - by jas
     I'm trying to set up Team Foundation Server 2010, SharePoint Server 2010 and Report Server 2008 R2. I apologize for how long my question/problem is, but I'm really lost on where to even look, so I'm being as descriptive as possible in hopes that I'm making sense.

     The goal: since developers can be inside or outside the firewall, there needs to be a single HTTP point of entry to TFS that works regardless of which side of the firewall you are on, and it needs to work with external access to SharePoint and Report Server. Meaning we have it set up in DNS so that buildserver.mydomain.com points to the build server box, which contains all of the services listed at the top of this post, with specific services located by port number. This is working great on every machine, inside and out - except from the build server itself. All services must be able to work using external URLs.

     If I use http://buildserver.mydomain.com:4800/tfs (the external URL) from my notebook, which is behind the firewall, I'm able to log in with my domain credentials as expected. If the other developer points to the same URL from their home, which isn't on the domain, they are also able to log in using their domain credentials. However, if I am directly on buildserver and call SharePoint, TFS or Reporting Server from the server itself using the external URL (i.e. http://buildserver.mydomain.com:4800), I am prompted for a username and password. Entering my domain credentials results in another prompt to enter my credentials again. It prompts three times regardless of which credentials are used (I have rights as a domain admin), and after the third prompt it directs me to a blank white page as though access was denied. There are no errors displayed on the page and nothing ends up in the event viewer. From buildserver, if I use just the host name (the internal URL), then I'm prompted a single time for credentials and it works; i.e. http://buildserver:4800/tfs works from the server itself. The behavior is identical for any service requiring authentication: from the box itself, SharePoint Central Admin, the SharePoint web app, TFS, TFS Web Access, Report Server and Report Manager all fail using the external URL but succeed using the internal URL.

     So the problem comes into play when configuring all of the services to work together. The only way to configure TFS is locally from the server, which means I must point to the internal Report Server URLs (http://buildserver:4800/reports and /reportserver respectively, instead of http://buildserver.mydomain.com:4800 like they need to be), since external URLs aren't working from the server itself. If I configure TFS to use the internal URL for Report Server, then creating team projects or working in the SharePoint site for a team project fails for anyone not inside the domain, since their machines have no idea who http://buildserver:4800/reports even is or how to resolve it. I have configured SharePoint with Alternate Access Mappings and set up Report Server to listen for external URLs. The external URLs simply aren't working when called from the server itself. I hope this makes sense. Thanks for taking the time to read this rather verbose plea for help.
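
     The pattern described - the external FQDN failing only from the server itself, three authentication prompts, then a blank page with nothing in the event log - matches Windows' NTLM loopback check (Microsoft KB896861): the server rejects NTLM authentication to itself under any name that isn't its own computer name. If that is what's happening here, the documented fix is to whitelist the external names in BackConnectionHostNames (disabling the check entirely also works, but is blunter):

         reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v BackConnectionHostNames /t REG_MULTI_SZ /d buildserver.mydomain.com
         rem or, less safely:
         rem reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1

     followed by a reboot. That would let TFS be configured against the external Report Server URLs from the console of buildserver itself.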

  • nginx server over https using up all available file handles (upd: infinite loop?)

    - by mmr
     Hi all, so I have an nginx server that's working over HTTPS with Sinatra. When I try to download a jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Checking the logs shows that this error is due to overflowing the available number of file handles, i.e. "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of files, all looking like:

         nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED)
         nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED)
         nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED)
         nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED)
         nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED)
         nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED)
         nginx 1771 nobody 17u IPv4 10868256 0t0 TCP localhost:44707->localhost:https (ESTABLISHED)
         nginx 1771 nobody 18u IPv4 10868330 0t0 TCP localhost:https->localhost:44707 (ESTABLISHED)
         nginx 1771 nobody 19u IPv4 10868331 0t0 TCP localhost:44708->localhost:https (ESTABLISHED)
         nginx 1771 nobody 20u IPv4 10868434 0t0 TCP localhost:https->localhost:44708 (ESTABLISHED)

     Increasing the number of files that can be opened is no help, because nginx just blows right past that limit too. And no wonder - it looks like it's in some kind of loop grabbing every available file handle. Any idea what's going on, and how to fix it?

     EDIT: nginx 0.7.63, Ubuntu Linux, Sinatra 1.0.

     EDIT 2: Here's the offending code. It's Sinatra serving the jnlp, which I finally figured out:

         get '/uploader' do
           # read in the launch.jnlp file
           theJNLP = ""
           File.open("/launch.jnlp", "r+") do |file|
             while theTemp = file.gets
               theJNLP = theJNLP + theTemp
             end
           end
           content_type :jnlp
           theJNLP
         end

     If I serve this with Sinatra via Mongrel and http, everything works fine. If I serve it with Sinatra and nginx via https, I get the above error. All other parts of the website appear to be equivalent.

     EDIT 3: I have since upgraded to Passenger 2.2.14, Ruby 1.9.1, nginx 0.8.40, OpenSSL 1.0.0a - no change.

     EDIT 4: The culprit appears to be infinite redirects due to using SSL. I don't know how to fix this, other than hosting the jnlp file in the root directory of the server (which I'd rather not do, since it limits me to one jnlp-based app at a time). The relevant lines from nginx.conf:

         # HTTPS server
         #
         server {
             listen 443;
             server_name MyServer.org;
             root /My/Root/Dir;
             passenger_enabled on;
             expires 1d;
             proxy_set_header X-FORWARDED_PROTO https;
             proxy_set_header X_FORWARDED_PROTO https; # the almighty google is not clear on which to use

             location /upload {
                 proxy_pass https://127.0.0.1:443;
             }
         }

     The funny thing about this is, first, I was putting the jnlp into a directory called 'uploader', not 'upload', but that still appeared to trigger the problem, since the proxy_pass directive appeared in the logs. Second, moving the jnlp into root avoided the problem, because there wasn't any of this proxying due to SSL. So, how can I avoid the infinite proxy_pass loop in nginx?
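
     On the final question: the loop is visible in the config itself. Requests under /upload get proxy_passed to https://127.0.0.1:443 - which is this very server block - so each request re-enters nginx, matches /upload again, and opens two more descriptors (one per direction, exactly the localhost:https pairs lsof shows) until the fd limit is hit. A sketch of the fix, assuming the Sinatra app is the intended backend: since passenger_enabled on already hands requests to the app, the location block can simply be removed; if a standalone backend really does live behind /upload, proxy to its own port rather than 443:

         location /upload {
             # 9292 is a hypothetical backend port (Rack's usual default) -
             # the key is that it must not be this server block's own 443
             proxy_pass http://127.0.0.1:9292;
         }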

  • localhost name error with linux machines

    - by coderex
     Hi. CASE 1: I have an Ubuntu machine with the name midhun.local. I can access http://midhun.local/svn on it, but it can't be accessed from other machines (both Windows and Linux) through this hostname - although http://192.168.1.192/svn works. CASE 2: I have another machine (Windows) with the hostname myname, serving on port 555. In this case I can access https://myname:555/svn from other Windows machines with the same URL, but from a Linux machine the same URL does not work; https://192.1.168.111:555/svn works instead. How can I solve this? I need to access each machine via the same name from everywhere on the LAN. How is that possible? Thanks in advance!!
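
     Both cases look like the same underlying issue, name resolution that only some OSes share (a hedged diagnosis, since it can't be confirmed from here): .local names are resolved via mDNS (Avahi/Bonjour), which stock Windows doesn't speak, and bare Windows hostnames resolve via NetBIOS, which Linux ignores unless winbind/samba is configured. The simplest LAN-wide fix is static entries on every machine, or a small local DNS server if the machine count grows:

         # /etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows
         192.168.1.192      midhun.local
         <ip-of-myname>     myname

     (<ip-of-myname> is left as a placeholder because the post's spelling of that address is ambiguous.)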

  • Loading guest OS's (Windows) localhost through my host's (Mountain Lion) browsers

    - by Jonah Goldstein
     For work, I have to develop in Visual Studio, which I run via VMware Fusion 5. I really want to test via my Mac's native browsers, for a multitude of reasons - that is, view the IIS web stuff that my Windows VM exposes in my Mac's own native Firefox, Chrome, etc. If I could expose a pretty URL, that would be even better, but I would certainly settle for an ugly IP :) I got a decent number of views but no response when I asked on VMware's own boards. Everyone seems to want to go the other direction (developing in SublimeText/TextMate, serving it up through MAMP and exposing it to Windows browsers to test), and there seem to be tried and true solutions for that. Unfortunately (or fortunately, depending on your preference) my startup is pretty entrenched in the Visual Studio development tools. I'm really hoping that someone knows the answer to this. Thanks :)
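
     With Fusion's default NAT networking (an assumption - bridged mode changes the details but not the idea), the guest has an IP on the private vmnet8 subnet that the Mac can reach directly. Find it inside Windows, open Windows Firewall for port 80 there, and the Mac's browsers can hit IIS straight away:

         C:\> ipconfig    (note the guest's IPv4 address, e.g. 192.168.x.y)

     For the pretty URL, map a name to that address on the Mac (both names below are placeholders):

         # /etc/hosts on OS X
         192.168.x.y    mywinapp.dev

     If the guest's address changes after a reboot, re-check ipconfig and update the hosts entry.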

  • Apache going straight to 100% mem usage on localhost

    - by Dennis Pedrie
     Hi, I'm running XAMPP on an OS X testing server, and I'm the only person sending requests to it. I've never messed with Apache config before, so I'm kinda without a paddle here. When I start Apache, I get ~10 httpd processes started and 95% idle CPU. When I request a WordPress page, the CPU usage goes to 50%, and the page loads in about five seconds. Once the page has finished loading, the CPU usage jumps to 100%, almost all of it httpd. A ton of processes get started, they don't go away, and their CPU usage stays the same. I've changed the MaxRequestsPerChild setting and so forth, but nothing seems to solve the problem. Even now, having not sent any requests for about 15 minutes, the CPU usage is at 100%. Here are the applicable settings:

         Timeout 10
         KeepAlive On
         MaxKeepAliveRequests 0
         KeepAliveTimeout 3

         <IfModule mpm_prefork_module>
             StartServers          5
             MinSpareServers       0
             MaxSpareServers       2
             MaxClients           20
             MaxRequestsPerChild  50
         </IfModule>

     I had always thought that once the request was made, Apache killed the process. Is there anything I can do to bring down the CPU usage, or is this just something I'll have to deal with? Thanks for helping out an Apache idiot.
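
     A few hedged notes rather than a definitive diagnosis: idle keep-alive children sleep, they don't burn CPU, so the settings alone shouldn't pin the cores - something in those children is actually spinning, and it's worth catching one in the act before tuning further:

         top -o cpu                     # confirm which httpd PIDs are hot
         sudo sample <hot-httpd-pid>    # OS X: sample the process to see where it spins

     Also worth knowing for later tuning: MaxKeepAliveRequests 0 means "unlimited requests per keep-alive connection", and MinSpareServers 0 with MaxRequestsPerChild 50 makes Apache churn children constantly; saner starting points would be MaxKeepAliveRequests 100, MinSpareServers 2 and MaxRequestsPerChild 1000.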

  • Routing and Remote Access Port Mapping not applied to localhost

    - by Computer Guru
     Hi, I've set up Routing and Remote Access (Windows Server 2003) to forward publicip:80 to a server on the private internal network, and that's working great. Incoming requests from the internet to port 80 are correctly forwarded to our internal web server and everything is fine. However, requests from the server itself are not forwarded. That is, if I open a console window on the RRAS machine and type "telnet publicip 80", the request is not forwarded to the private server. I understand that in RRAS I've mapped port 80 on the public interface to the private server, and that's why it's not working; what I don't know is how to configure it so that requests from the local PC are also forwarded to the private server. I'd appreciate any help or feedback on the matter. Thanks!
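
     What this asks for is NAT hairpinning (NAT loopback), which RRAS's port mappings don't do: the mapping applies to packets arriving on the public interface, and locally generated traffic never crosses it. Assuming the real goal is just that software on the RRAS box reaches the internal web server by its public name, the standard workaround is to bypass NAT on that one machine:

         # C:\WINDOWS\system32\drivers\etc\hosts on the RRAS server (values are placeholders)
         192.168.0.10    www.example.com

     i.e. point the public hostname straight at the private address, or simply use the private address in local tools and tests.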

  • MySQL command appends '@localhost' to username

    - by Mikee
     I just can't seem to figure this one out. I want to use the command line to connect to a MySQL database residing on another server. I went ahead and created the username and password for the user, and I have granted all privileges on that database to the user. When using the command:

         mysql -h <hostname> -u <username> -p

     I get the following error:

         ERROR 1045 (28000): Access denied for user '<username>'@'<local_machine_hostname>' (using password: YES)

     The problem is that it keeps appending the current machine's hostname to the username. Obviously, user@<local_machine_hostname> is not correct. It doesn't matter what I type. For instance, if I type:

         mysql -h <hostname> -u '<username>'@'<hostname>' -p

     it does the same, only the error output says:

         Access denied for user '<username>@<hostname>'@'<local_machine_hostname>'

     Is there a setting in a configuration file which is allowing this to happen? It's really quite annoying. I need to set up a TikiWiki server, and it cannot connect because during the step where you set up MySQL, it keeps appending the local machine's hostname to the MySQL login name.
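
     This is expected MySQL behavior rather than a misconfiguration: accounts are 'user'@'host' pairs, where host is the machine the client connects from, and the error message always reports that connecting host. The -u value should stay a bare username; what has to change is the account's host part on the server. A sketch (all names are placeholders):

         -- run on the MySQL server as root
         CREATE USER 'username'@'%' IDENTIFIED BY 'password';   -- '%' = any client host; tighten to one hostname if preferred
         GRANT ALL PRIVILEGES ON dbname.* TO 'username'@'%';
         FLUSH PRIVILEGES;

     An account created as 'username'@'localhost' exists only for connections made on the server itself, which is why remote logins are denied.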

  • Port scanning from localhost

    - by Jaels
     I see lots of TCP connections on different ports on my server with TIME_WAIT status. It looks like a simple port scan, but I can't see the IP address of this bastard because the connections are going through my nginx. Can you please give me a tip on how to see the IP address? Here is an example:

         [root@vh9 ~]# netstat
         tcp        0      0 srv:http    srv:53280    TIME_WAIT
         tcp        0      0 srv:http    srv:53536    TIME_WAIT
         tcp        0      0 srv:http    srv:52768    TIME_WAIT
         tcp        0      0 srv:http    srv:53024    TIME_WAIT
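
     Worth noting before hunting anyone: both ends of those connections are srv, i.e. local-to-local traffic, which usually means nginx talking to a backend on the same host (or something polling it locally) rather than an outside scanner. To see genuine remote clients, ask netstat for numeric addresses and ignore loopback, and check the access log, which records the original client IP for every request:

         netstat -ntu | grep -v '127.0.0.1' | grep TIME_WAIT
         tail -f /var/log/nginx/access.log    # default path; adjust to your config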

  • postfix: force server to send mail outside of localhost

    - by LoneWolfPR
     I have a PHP file that sends mail using the mail() function. The problem is that one of the forms sends to a domain that is registered on my server while having its mail handled on a different server. Postfix looks locally only; when it doesn't find the email address, it rejects the message. How can I configure postfix to send mail for all domains through the internet and not locally? Update: OK, so it wasn't a postfix issue at all. I simply needed to turn off mail for that domain from the command line. For anyone who needs that command, it is (at least on my system):

         /usr/local/psa/bin/domain --update example.com -mail_service false
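
     For reference on plain Postfix (the command above is Plesk-specific), the equivalent fix is to stop listing the domain as local, so Postfix relays to the domain's real MX instead of "looking locally" - a sketch:

         postconf mydestination    # see whether the domain is listed as local
         # remove it from mydestination (or virtual_mailbox_domains) in /etc/postfix/main.cf, then:
         sudo postfix reload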

  • MySQL stops accepting connections over 3306, still working on localhost

    - by Ben Dilts
     I have a MySQL database that stopped accepting connections from my web server altogether. So I SSH'ed into the server and started checking its vitals. The hard disks had plenty of free space, and there was plenty of available memory and swap. Nothing was eating up the CPU (close to 100% idle). I even connected to MySQL locally and ran a few queries without any issues, but SHOW PROCESSLIST showed only my own connection, no others. Worst of all, no errors in the MySQL log even remotely coincided with the unavailability. On the web server, I got an error saying "Lost connection to MySQL server during query" at the moment the unavailability started, followed by a bunch of "MySQL server has gone away" errors. There's only one other application on the server that accepts network connections, and I killed it (in case it was holding too many open connections or something), but that didn't help. Finally I just restarted the MySQL process, and everything is (for now) working again. What else should I check in these circumstances? Any idea what the problem might be, and how might I verify that it is in fact the problem?
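
     Given that SHOW PROCESSLIST was empty and local connections worked, the server itself looks healthy, which points suspicion at the network path. Two hedged checks for next time: MySQL's connection counters, and the firewall's connection-tracking table, which, when full, silently drops TCP flows while loopback traffic stays fine:

         SHOW STATUS LIKE 'Aborted_connects';        -- climbing fast = clients failing mid-handshake
         SHOW STATUS LIKE 'Max_used_connections';    -- equal to max_connections = the limit was hit

     and on the shell:

         dmesg | grep -i conntrack                          # look for "table full, dropping packet"
         cat /proc/sys/net/netfilter/nf_conntrack_count     # compare against nf_conntrack_max

     (Paths assume a Linux server with netfilter connection tracking; older kernels call it ip_conntrack.)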

  • Accessing localhost:8080 through local network

    - by Theron Luhn
     I'm developing a Python WSGI website. I'm running a Paste development server on my Mac (OS X 10.7) on port 8080. I want to test the website on some other devices and OSes I have connected to the local network (Windows 7 VM, iPad, iPhone, etc.), but am having trouble. I turned on Web Sharing, and am able to access that (port 80) without a problem on all my devices. Port 8080 still doesn't work. An excerpt from my Paste configuration:

         [server:main]
         use = egg:waitress#main
         host = 127.0.0.1
         port = 8080

     The OS X firewall (Settings - Security - Firewall) is off. I have no other firewall software installed. My network is through a Linksys WRT160N router. I haven't done much with the settings, so most of them are at their defaults. I've been Googling all morning, but can't find a solution.
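
     The config excerpt contains the answer: host = 127.0.0.1 binds the dev server to the loopback interface only, so nothing off the Mac can ever reach it (Web Sharing works because Apache listens on all interfaces). Binding to all interfaces should fix it:

         [server:main]
         use = egg:waitress#main
         host = 0.0.0.0
         port = 8080

     Then browse from the other devices to http://<the Mac's LAN IP>:8080/.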

  • wamp alias appearing on localhost instead of another

    - by tournskeud
     I created various aliases on Wamp to be able to work on my different projects. Strangely, one of my sites is visible when I call the other. They have the same structure in their ".conf" files. Example:

         #####
         ## x.dev
         ## DOMAIN x
         #####
         NameVirtualHost x.dev
         <VirtualHost x.dev>
             DocumentRoot C:/wamp/www/x/
             ServerName x.dev
             ServerAlias www.x.dev en.x.dev
         </VirtualHost>

     I also have a hosts file including both of the aliases. Wamp config: PHP 5.4.12, Apache 2.4.4. Does anyone have an idea of what is going on? Thanks a lot in advance.
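
     Two things stand out for Apache 2.4 (stated as likely causes, not certainties): NameVirtualHost is deprecated and ignored in 2.4, and putting a hostname inside <VirtualHost> ties matching to whatever x.dev resolves to at startup. When a request matches no vhost, Apache silently serves the first vhost defined - which is exactly "one site appearing when calling the other". The 2.4-idiomatic form matches on *:80 and selects by ServerName/ServerAlias:

         <VirtualHost *:80>
             DocumentRoot "C:/wamp/www/x/"
             ServerName x.dev
             ServerAlias www.x.dev en.x.dev
         </VirtualHost>

         <VirtualHost *:80>
             DocumentRoot "C:/wamp/www/y/"
             ServerName y.dev
         </VirtualHost>

     (y.dev stands in for the other project; one such block per site, all on *:80.)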

  • Exchange 2013 really slow outside of localhost

    - by ItsJustJP
     We've got a 12-core Xeon server with 24GB of RAM running Server 2012. We've recently migrated from Exchange 2010 (which was on another server) to Exchange 2013, which resides on the new 12-core server. Accessing OWA on the Exchange server itself is fine; it's very quick and responsive. However, accessing it from any other computer connected to the domain via a 1 Gbps connection takes 10-15 seconds to load. Also running slow are public calendars that people here need to access, again taking 10-15 seconds to open, which can sometimes cause Outlook to stop responding. Further to that, we have phones that connect via the internet (of course) to Exchange so people can get work email when they are out of the office. Guess what - this is also slow. I have searched for many solutions and have tried changing Outlook authentication methods, but there is no change in speed. The old Exchange 2010 server no longer exists, but there was no problem before the migration. Has anyone got any suggestions? Thanks :) I should also mention that the Server 2012 box Exchange 2013 is installed on is also the DC. Update: it would appear that any connection via HTTPS is slow. It took more than 15 minutes for an Outlook client to download 50MB of email (Outlook Anywhere).
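
     Since localhost is fast and everything remote-over-HTTPS is slow, a cheap way to narrow it down (a measurement to try, not a diagnosis) is to time the TLS handshake separately from the response, from an affected client (here using curl on a *nix box or Git Bash; mail.example.com is a placeholder):

         curl -vk -o /dev/null -w "tcp: %{time_connect}s  tls: %{time_appconnect}s  total: %{time_total}s\n" https://mail.example.com/owa/

     If the tls figure alone accounts for the 10-15 seconds, look at client-side certificate revocation checking and proxy auto-detection; if total is slow with a fast handshake, look server-side - noting that running Exchange 2013 on a domain controller is itself a configuration Microsoft recommends against, which complicates both troubleshooting and the fix.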

  • .htaccess redirect working on localhost but not on server

    - by Thread7
     I want users who hit my web site's root directory to be sent to a subdirectory. So anyone going to http://MyDomain.com or /index.php would be sent to http://MyDomain.com/subdir. I used the .htaccess file to successfully do this on my local machine (with Apache 2), but it doesn't work on the server: users still see the default index.php in the root directory. Here is my simple .htaccess file. Any ideas?

         RewriteEngine On
         Redirect /index.php http://MyDomain.com/subdir/

     Now my httpd.conf file:

         AccessFileName .htaccess
         <Directory />
             Options FollowSymLinks
             AllowOverride All
         </Directory>
         <Directory "/var/www/icons">
             Options Indexes MultiViews FollowSymLinks
             AllowOverride None
             Order allow,deny
             Allow from all
         </Directory>
         <Directory "/var/www/cgi-bin">
             AllowOverride None
             Options None
             Order allow,deny
             Allow from all
         </Directory>
         <IfModule mod_negotiation.c>
             <IfModule mod_include.c>
                 <Directory "/var/www/error">
                     AllowOverride None
                     Options IncludesNoExec
                     AddOutputFilter Includes html
                     AddHandler type-map var
                     Order allow,deny
                     Allow from all
                     LanguagePriority en es de fr
                     ForceLanguagePriority Prefer Fallback
                 </Directory>
             </IfModule>
         </IfModule>
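
     Two hedged observations. First, the .htaccess mixes modules: RewriteEngine On belongs to mod_rewrite and does nothing for the Redirect line, which is mod_alias - and Redirect /index.php also doesn't catch bare requests for /. A single-module version covering both cases:

         RewriteEngine On
         RewriteRule ^(index\.php)?$ http://MyDomain.com/subdir/ [R=302,L]

     Second, confirm the server reads the file at all: temporarily add a garbage line to .htaccess and request any page. If there is no 500 error, .htaccess is being ignored - typically because the vhost or the DocumentRoot's own <Directory> block (not shown above) sets AllowOverride None, overriding the <Directory /> setting.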

  • Warning: Memcache::connect() [memcache.connect]: Can't connect to localhost:11211, Connection refused

    - by Stick it to THE MAN
     I am using Symfony 1.3.2 with the Propel ORM on Ubuntu 9.10, and I am incorporating memcache into the website. I have modified the setup() method in apps/frontend/ProjectConfiguration.class.php like this:

         class ProjectConfiguration
         {
           public function setup()
           {
             // original SF generated code here ..
             require_once sfConfig::get('sf_lib_dir').'/MyCache.class.php';
             MyCache::init();
           }
         }

     My cache singleton is implemented something like this:

         class MyCache
         {
           private static $memcache = null;
           private static $inited = false;

           public static function init()
           {
             if (self::$inited) return;
             self::$memcache = new Memcache();
             if (self::$memcache->connect('localhost', 11211)) {
               // Do some stuff ..
               self::$inited = true;
             }
           }
         }

     This produces:

         Warning: Memcache::connect() [memcache.connect]: Can't connect to localhost:11211, Connection refused (111) in /path_to_class/MyCache.class.php

     This happens both from the CLI (e.g. running Symfony tasks) and on web access through the browser. Does anyone know how to resolve this? (I suspect it's something to do with Linux user privileges.) As an aside, I am aware that Symfony provides an sfAPCache wrapper class for caching. I am intentionally not using it for two reasons: I cannot find any comprehensive (and up-to-date) docs on the class, and I want to learn the memcache API directly, since I will be accessing it from other languages.
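
     "Connection refused" on 11211 almost always means nothing is listening on that port - the PHP Memcache class is only a client - rather than a permissions problem. Worth verifying that the memcached daemon is installed and running (a sketch for Ubuntu 9.10):

         ps aux | grep '[m]emcached'         # is the daemon up?
         sudo apt-get install memcached      # if it isn't installed
         sudo /etc/init.d/memcached start
         netstat -tln | grep 11211           # confirm it listens on

     The same cause would explain the CLI failing identically: both paths hit the same absent listener.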

  • Remote JMS connection still using localhost

    - by James
     I have created a JMS connection factory on a remote GlassFish server and want to use that server from a Java client app on my local machine. I use the following configuration to get the context and connection factory:

         Properties env = new Properties();
         env.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
         env.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
         env.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");
         env.setProperty("org.omg.CORBA.ORBInitialHost", JMS_SERVER_NAME);
         env.setProperty("org.omg.CORBA.ORBInitialPort", "3700");
         initialContext = new InitialContext(env);
         TopicConnectionFactory topicConnectionFactory = (TopicConnectionFactory) initialContext.lookup("jms/MyConnectionFactory");
         topicConnection = topicConnectionFactory.createTopicConnection();
         topicConnection.start();

     This seems to work, and when I delete the connection factory from the GlassFish server I get an exception indicating that it can't find jms/MyConnectionFactory, as expected. However, when I subsequently use my topicConnection to get a topic, it tries to connect to localhost:7676 (this fails, as I am not running GlassFish locally). If I dynamically create a topic:

         TopicSession pubSession = topicConnection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
         Topic topic = pubSession.createTopic(topicName);
         TopicPublisher publisher = pubSession.createPublisher(topic);
         Message mapMessage = pubSession.createTextMessage(message);
         publisher.publish(mapMessage);

     and the GlassFish server is not running locally, I get the same connection refused. However, if I start my local GlassFish server, the topics are created locally and I can see them in the GlassFish admin console. In case you ask: I do not have jms/MyConnectionFactory on my local GlassFish instance; it is only available on the remote server. I can't see what I am doing wrong here or why it is trying to use localhost at all. Any ideas? Cheers, James
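
     A hedged explanation for the localhost:7676 behavior: the JNDI lookup only fetches the factory's configuration, while the actual broker connection is opened later using the factory's AddressList property - and when AddressList is empty, a GlassFish/Open MQ factory falls back to localhost:7676 on the client machine. If that's the case here, setting the property on jms/MyConnectionFactory (Admin Console, under the factory's Additional Properties) to point at the remote broker should fix it:

         AddressList = remotehost:7676    (remotehost is a placeholder for the JMS server)

     With that set, createTopicConnection() should connect to the remote broker, and dynamically created topics should appear on the remote server instead of the local one.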

  • Slow Python HTTP server on localhost

    - by Abiel
     I am experiencing some performance problems when creating a very simple Python HTTP server. The key issue is that performance varies depending on which client I use to access it, where the server and all clients are being run on the local machine. For instance, a GET request issued from a Python script (urllib2.urlopen('http://localhost/').read()) takes just over a second to complete, which seems slow considering that the server is under no load. Running the GET request from Excel using MSXML2.ServerXMLHTTP also feels slow. However, requesting the data from Google Chrome, or from RCurl, the curl add-in for R, yields an essentially instantaneous response, which is what I would expect. Adding further to my confusion is that I do not experience any performance problems for any client when I am on my computer at work (the performance problems are on my home computer). Both systems run Python 2.6, although the work computer runs Windows XP instead of 7. Below is my very simple server example, which simply returns 'Hello world' for any GET request:

         from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

         class MyHandler(BaseHTTPRequestHandler):
             def do_GET(self):
                 print("Just received a GET request")
                 self.send_response(200)
                 self.send_header("Content-type", "text/html")
                 self.end_headers()
                 self.wfile.write('Hello world')
                 return

             def log_request(self, code=None, size=None):
                 print('Request')

             def log_message(self, format, *args):
                 print('Message')

         if __name__ == "__main__":
             try:
                 server = HTTPServer(('localhost', 80), MyHandler)
                 print('Started http server')
                 server.serve_forever()
             except KeyboardInterrupt:
                 print('^C received, shutting down server')
                 server.socket.close()

     Note that in MyHandler I override the log_request() and log_message() functions. The reason is that I read that a fully-qualified domain name lookup performed by one of these functions might be a reason for a slow server. Unfortunately, setting them to just print a static message did not solve my problem. Also, notice that I have put a print() statement as the first line of the do_GET() routine in MyHandler. The slowness occurs prior to this message being printed, meaning that none of the stuff that comes after it is causing the delay.
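
     The client-dependence plus the Windows 7 vs. XP split fits a name-resolution quirk rather than the server code (offered as a hypothesis to test, not a confirmed diagnosis): on Windows 7, "localhost" resolves to the IPv6 address ::1 first, while BaseHTTPServer binds an IPv4 socket, so clients that dutifully try ::1 before falling back to 127.0.0.1 (urllib2, MSXML) burn time on the failed attempt, whereas clients with smarter fallback (Chrome, curl) feel instant. A one-line test from the Python side:

         import urllib2
         print(urllib2.urlopen('').read())   # fast? then the delay was the ::1 detour

     If that is fast, requesting 127.0.0.1 everywhere (or adjusting the hosts file's localhost entries) is the fix, and the log_request/log_message overrides can be reverted.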
