Search Results

Search found 21500 results on 860 pages for 'ajax request'.


  • CPU, memory, network, and I/O resources are underutilized during various JMeter load tests

    - by Jaiganesh
    CPU, memory, network, and I/O resources are underutilized when I run various JMeter load tests. Details below.

    Hardware: 1 core with 2 GB RAM
    OS: Ubuntu 12.04 LTS Server Edition
    Application: PHP (using jQuery, Ajax)
    JMeter parameters: 10, 20, 30, 40 hits per minute; 220 test cases per hit; 2.03 MB per hit

    I am not clear why these resources are underutilized. Please help me resolve this.
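
    As an illustration, a minimal sketch of driving the test from the JMeter command line while sampling resource usage on the server, so utilization figures can be tied to specific runs (the file names are assumptions, and sar needs the sysstat package):

        # run the test plan in non-GUI mode
        jmeter -n -t plan.jmx -l results.jtl

        # in a second shell on the server: CPU/memory/IO once per second
        vmstat 1
        # per-interface network throughput
        sar -n DEV 1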

    Read the article

  • Windows 2003 under Hyper-V - can't send/receive ping

    - by glaucon
    I've installed Windows 2003 x64 R2 SP2 under Hyper-V (the Windows 8 Pro edition). I have a NIC configured but I can't move any traffic on it. In particular I can't send or receive pings.

    Scoreboard: There is a second VM running Ubuntu under the Windows 8 host which is able to send and receive pings from the host OS. When I try to ping from the Windows 2003 guest to the Windows 8 host I get 'Request Timed Out'. When I try to ping from the Windows 8 host to the Windows 2003 guest I get 'Reply from 192.168.10.107 Destination Host Unreachable'. There's no problem pinging from the Ubuntu guest to the Windows 8 host and no problem pinging from the Windows 8 host to the Ubuntu guest.

    Environment: Integration services are installed on Windows 2003. The Windows 2003 guest needs a static IP address of 192.168.10.15. The Windows 2003 ipconfig output looks like this : While the host OS ipconfig output looks like this :

    Event Logs: The only thing I can see in the event logs which (a) looks significant and (b) is not related to the lack of networking is this : I'm not sure if that's significant or not.

    Hyper-V and NICs: When the Windows 2003 guest was first booted it had no NIC; I subsequently added a 'Legacy Network Connector' which I couldn't get Windows 2003 to recognise; I then removed that and added a 'Standard Network Connector' and at least on the surface this works ... only it doesn't. 'Virtual Network Type' is external. Although I've only mentioned ping, there's no other evidence of network activity either. 'Allow incoming echo request' is enabled on the Windows 2003 guest.

    HELP? What else should I look at or do to resolve this problem?

    EDIT 1: I should have said that I turned off the firewall on the W2003 server for a while and retested the pings; same result.

    Read the article

  • Hudson authentication via wget returns HTTP error 302

    - by Rafael
    I'm trying to make a script to authenticate to Hudson using wget and store the authentication cookie. The contents of the script:

        wget \
          --no-check-certificate \
          --save-cookies /home/hudson/hudson-authentication-cookie \
          --output-document "-" \
          'https://myhudsonserver:8443/hudson/j_acegi_security_check?j_username=my_username&j_password=my_password&remember_me=true'

    Unfortunately, when I run this script, I get:

        --2011-02-03 13:39:29-- https://myhudsonserver:8443/hudson/j_acegi_security_check?j_username=my_username&j_password=my_password&remember_me=true
        Resolving myhudsonserver... 127.0.0.1
        Connecting to myhudsonserver|127.0.0.1|:8443... connected.
        WARNING: cannot verify myhudsonserver's certificate, issued by `/C=Unknown/ST=Unknown/L=Unknown/O=Unknown/OU=Unknown/CN=myhudsonserver':
          Self-signed certificate encountered.
        HTTP request sent, awaiting response... 302 Moved Temporarily
        Location: https://myhudson:8443/hudson/;jsessionid=087BD0B52C7A711E0AD7B8BD4B47585F [following]
        --2011-02-03 13:39:29-- https://myhudsonserver:8443/hudson/;jsessionid=087BD0B52C7A711E0AD7B8BD4B47585F
        Reusing existing connection to myhudsonserver:8443.
        HTTP request sent, awaiting response... 404 Not Found
        2011-02-03 13:39:29 ERROR 404: Not Found.

    There's no error log in any of Hudson's Tomcat log files. Does anyone have any idea about what might be happening? Thanks.
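
    For illustration only, a hedged variant of the same login call that submits the credentials as form POST data rather than in the query string, which is closer to what the browser login form would send to j_acegi_security_check (the parameter names come from the script above; everything else is an assumption):

        wget \
          --no-check-certificate \
          --save-cookies /home/hudson/hudson-authentication-cookie \
          --keep-session-cookies \
          --post-data 'j_username=my_username&j_password=my_password&remember_me=true' \
          --output-document "-" \
          'https://myhudsonserver:8443/hudson/j_acegi_security_check'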

    Read the article

  • Apache times out after 2 minutes, when Apache TimeOut is 1200

    - by Robert Gowland
    We have a setup where the browser makes an HTTP request to Box A, which in turn makes an HTTP request to Box B. What we're encountering is that Box A waits for Box B to respond for two minutes, then the user sees:

        The page cannot be displayed
        Explanation: There is a problem with the page you are trying to reach and it cannot be displayed.
        Try the following:
          * Refresh page: Search for the page again by clicking the Refresh button. The timeout may have occurred due to Internet congestion.
          * Check spelling: Check that you typed the Web page address correctly. The address may have been mistyped.
          * Access from a link: If there is a link to the page you are looking for, try accessing the page from that link.
        Technical Information (for support personnel)
          * Error Code: 404 Not Found. The requested item could not be located. (12028)

    Watching the logs on Box B we see that it takes 5 minutes to do the work requested. The problem is that the Apache TimeOut values on both boxes are set to 1200 (20m), not 120 (2m). Any ideas where to look?
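
    A hedged sketch of the kind of setting worth checking on Box A, assuming the request to Box B goes through mod_proxy, which has its own timeout separate from the core TimeOut directive (the values and backend URL below are illustrative). The error page shown also looks like one generated by a client-side or intermediate proxy rather than by Apache itself, which may be worth ruling out:

        # httpd.conf on Box A
        TimeOut 1200
        # mod_proxy's own limit for backend connections
        ProxyTimeout 1200
        # or per mapping (hypothetical backend URL)
        ProxyPass /app http://boxb.internal/app timeout=1200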

    Read the article

  • Java max heap size: how much is too much?

    - by brad
    I'm having issues with a JRuby (Rails) app running in Tomcat. Occasionally page requests can take up to a minute to return (even though the Rails logs show the request processed in seconds, so it's obviously a Tomcat issue). I'm wondering what settings are optimal for the Java heap size. I know there's no definitive answer, but I thought maybe someone could comment on my setup.

    I'm on a small EC2 instance which has 1.7 GB RAM. I have the following JAVA_OPTS:

        -Xmx1536m -Xms256m -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled

    My first thought is that Xmx is too high. If I only have 1.7 GB and I allocate 1.5 GB to Java, I feel like I'll get a lot of paging. Typically my Java process shows (in top) 1.1 GB resident memory and 2 GB virtual. I also read somewhere that setting Xms and Xmx to the same size will help, as it eliminates time spent on memory allocation. I'm not a Java person but I've been tasked with figuring out this problem and I'm trying to find out where to start. Any tips are greatly appreciated!

    Update: I've started analyzing the garbage collection dumps using -XX:+PrintGCDetails. When I notice these occasional long load times, the GC logs go nuts. In the last one I did (which took 25s to complete) I had GC log lines such as:

        1720.267: [GC 1720.267: [DefNew: 27712K->16K(31104K), 0.0068020 secs] 281792K->254096K(444112K), 0.0069440 secs]
        1720.294: [GC 1720.294: [DefNew: 27728K->0K(31104K), 0.0343340 secs] 281808K->254080K(444112K), 0.0344910 secs]

    About 300 of them on a single request! Now, I don't totally understand why it's always GC'ing from ~28m down to 0 over and over.
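
    For illustration, a hedged sketch of a more conservative JAVA_OPTS for a 1.7 GB instance. The numbers are assumptions meant to show the idea of leaving headroom for the OS and keeping Xms equal to Xmx, plus flags for writing GC activity to a file so slow requests can be correlated with collections:

        JAVA_OPTS="-Xms768m -Xmx768m -XX:MaxPermSize=128m \
          -XX:+CMSClassUnloadingEnabled \
          -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
          -Xloggc:/var/log/tomcat/gc.log"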

    Read the article

  • Public-to-Public IPSec tunnel: NAT confusion

    - by WuckaChucka
    I know this is possible -- and apparently fairly common with larger companies that don't/can't route private addresses for overlap reasons -- but I can't wrap my head around how to get this to work. I'm playing around with pfSense, Vyatta and a Cisco 5505 right now, hardware-wise. So here's my setup:

        WEST: Vyatta
          outside: 10.0.0.254/24
          inside:  172.16.0.1/24
          machine a: 172.16.0.200/24

        EAST: Cisco 5505
          outside: 10.0.0.210/24
          inside:  192.168.10.1
          machine b (webserver): 192.168.10.2

    So what we're trying to do is this: route traffic across the tunnel from machine A to machine B without using private addresses. i.e. 172.16.0.200 makes a TCP request to 10.0.0.210:80, and as far as EAST is concerned, it sees a src IP of 10.0.0.254. On WEST, I have your typical many-to-one source NAT to translate 172.16.0.0/24 to 10.0.0.254, and that's confirmed to be working. Also on WEST, I have the following IPSec config:

        Local IP: 10.0.0.254
        Peer IP: 10.0.0.210
        local subnet: 10.0.0.254/32
        remote subnet: 10.0.0.210/32

    I have the reversed configuration on EAST. What happens when I make a request from machine A to 10.0.0.210:80 is that the SNAT translates the private address of machine A to 10.0.0.254 and it's routed out (and discarded at the other end) without establishing the tunnel. What I'm assuming is happening is that the inside interface on WEST receives a packet from 172.16.0.200 and, since this doesn't match the local subnet defined in the tunnel configuration, it's not processed by the IPSec engine and the tunnel is not established. How do you make this work? It seems like a chicken-and-egg thing with the NAT and IPSec, and I just can't wrap my head around how this can be done: can I say, "if a packet is received on the inside interface with a destination of 10.0.0.210, translate it to 10.0.0.254 before the IPSec engine inspects it"?

    Read the article

  • Varnish configuration, NameVirtualHosts, and IP Forwarding

    - by Brent
    I currently have a bunch of NameVirtualHost-based websites, load balanced between 3 Apache 2 servers using ldirectord. I would like to insert Varnish as a reverse web proxy between ldirectord and Apache in the following way:

      - a request comes in to ldirectord
      - it is then load balanced between the 3 Apache servers and Varnish, with a weight of 1 for the webservers and 99 for Varnish (so if Varnish is rebooted, the webservers will take over seamlessly)
      - Varnish will then load balance its requests between my Apache servers.

    However, the Varnish part is not working. I wonder whether this has to do with the fact that my Apache servers use x.x.x.x:80 for their NameVirtualHosts, instead of *:80? (They have to do this, since each server hosts multiple IP addresses.) Or perhaps it has to do with the need for IP forwarding to be set up on the Varnish server? (I did echo 1 > /proc/sys/net/ipv4/ip_forward on this server; is that sufficient?) How can I debug this problem? ldirectord doesn't produce logs of what it does with each request (and if it did, I would be overwhelmed with information since I'm serving hundreds of requests per second). The Varnish log shows the ldirectord server connecting to it every 5 seconds, but nothing else. I have set up a test site using this configuration, but it fails - no Apache access logs, no applicable Varnish logs.
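
    As a point of comparison, a minimal sketch of a Varnish VCL that balances across three Apache backends (the addresses are assumptions; the syntax is the VCL 2.x/3.x director form). Worth noting that Varnish proxies at the HTTP layer, so kernel IP forwarding shouldn't be required for it:

        # /etc/varnish/default.vcl (hypothetical backend addresses)
        backend web1 { .host = "192.168.0.11"; .port = "80"; }
        backend web2 { .host = "192.168.0.12"; .port = "80"; }
        backend web3 { .host = "192.168.0.13"; .port = "80"; }

        director apache round-robin {
            { .backend = web1; }
            { .backend = web2; }
            { .backend = web3; }
        }

        sub vcl_recv {
            set req.backend = apache;
        }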

    Read the article

  • Using secure proxies with Google Chrome

    - by cYrus
    Whenever I use a secure proxy with Google Chrome I get ERR_PROXY_CERTIFICATE_INVALID. I tried a lot of different scenarios and versions.

    The certificate: I'm using a self-signed certificate:

        openssl genrsa -out key.pem 1024
        openssl req -new -key key.pem -out request.pem
        openssl x509 -req -days 30 -in request.pem -signkey key.pem -out certificate.pem

    Note: this certificate works (with a warning, since it's self-signed) when I try to set up a simple HTTPS server.

    The proxy: Then I start a secure proxy on localhost:8080. There are several ways to accomplish this; I tried: a custom Node.js script; stunnel; node-spdyproxy (OK, this involves SPDY too, but later... the problem is the same); [...]

    The browser: Then I run Google Chrome with:

        google-chrome --proxy-server=https://localhost:8080 http://superuser.com

    to load, say, http://superuser.com.

    The issue: All I get is:

        Error 136 (net::ERR_PROXY_CERTIFICATE_INVALID): Unknown error.

    in the window, and something like:

        [13633:13639:1017/182333:ERROR:cert_verify_proc_nss.cc(790)] CERT_PKIXVerifyCert for localhost failed err=-8179

    in the console. Note: this is not the big red warning that complains about insecure certificates. Now, I have to admit that I'm quite a n00b for what concerns certificates and such; if I'm missing some fundamental points, please let me know.
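
    A hedged sketch of one thing to try on Linux: the console error comes from NSS certificate verification, so importing the self-signed certificate into the per-user NSS database that Chrome uses may make the proxy's certificate trusted (this needs the certutil tool from libnss3-tools; the nickname is arbitrary):

        # trust the proxy certificate in Chrome's per-user NSS store
        certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n secure-proxy -i certificate.pem
        # list what is now in the store
        certutil -d sql:$HOME/.pki/nssdb -L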

    Read the article

  • Windows 7 machine, can't connect remotely until after ping

    - by rjohnston
    I have a Windows 7 (Home Premium) machine that doubles as a media centre and Subversion server. There are a couple of problems with this setup when connecting to the server from an XP (SP3) machine. Firstly, the machine won't respond to its machine name until after its IP address has been pinged. Here's an example:

        Microsoft Windows XP [Version 5.1.2600]
        (C) Copyright 1985-2001 Microsoft Corp.

        C:\Documents and Settings\Rob>ping damascus
        Ping request could not find host damascus. Please check the name and try again.

        C:\Documents and Settings\Rob>ping 192.168.1.17
        Pinging 192.168.1.17 with 32 bytes of data:
        Reply from 192.168.1.17: bytes=32 time=2ms TTL=128
        ...
        Ping statistics for 192.168.1.17:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 1ms, Maximum = 2ms, Average = 1ms

        C:\Documents and Settings\Rob>ping damascus
        Pinging damascus [192.168.1.17] with 32 bytes of data:
        Reply from 192.168.1.17: bytes=32 time<1ms TTL=128
        ....
        Ping statistics for 192.168.1.17:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 1ms, Average = 0ms

        C:\Documents and Settings\Rob>

    Likewise, Subversion commands with either the machine name or IP address will fail until the machine's IP address is pinged. Occasionally, the machine won't respond to pings on its IP address; it'll just come back with "Request timed out". The svn server is VisualSVN, if that helps... Any ideas?
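
    As a sketch of where to look, two things that can be checked from the XP box; the name and address are the ones shown above, and the hosts-file line is a workaround rather than a fix for the underlying name-resolution problem:

        rem does NetBIOS name resolution find the Windows 7 machine at all?
        nbtstat -a damascus

        rem workaround: pin the name to the address in the local hosts file
        echo 192.168.1.17    damascus>>%SystemRoot%\system32\drivers\etc\hosts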

    Read the article

  • APC not caching many files

    - by tetranz
    Hello I have a Drupal site running on a VPS at Linode with PHP 5.2.10 and APC 3.1.6. It never caches more than about 25 files and barely uses any of its available memory. Drupal has hundreds of php files. I have another server where APC seems to work well and does indeed cache hundreds of files. The only difference with that site is that it runs Ubuntu 10.04 and php 5.3.2. The config settings are the same. What could be wrong? I'll paste the config from apc.php below. This is after hitting multiple parts of Drupal. Thanks

        APC Version 3.1.6
        PHP Version 5.2.10-2ubuntu6.5
        APC Host xxx.example.com
        Server Software Apache/2.2.12 (Ubuntu)
        Shared Memory 1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
        Start Time 2010/12/02 11:32:17
        Uptime 3 minutes
        File Upload Support 1

        File Cache Information
        Cached Files 21 ( 1.4 MBytes)
        Hits 169
        Misses 21
        Request Rate (hits, misses) 1.00 cache requests/second
        Hit Rate 0.89 cache requests/second
        Miss Rate 0.11 cache requests/second
        Insert Rate 0.17 cache requests/second
        Cache full count 0

        User Cache Information
        Cached Variables 0 ( 0.0 Bytes)
        Hits 0
        Misses 0
        Request Rate (hits, misses) 0.00 cache requests/second
        Hit Rate 0.00 cache requests/second
        Miss Rate 0.00 cache requests/second
        Insert Rate 0.00 cache requests/second
        Cache full count 0

        Runtime Settings
        apc.cache_by_default 1
        apc.canonicalize 1
        apc.coredump_unmap 0
        apc.enable_cli 0
        apc.enabled 1
        apc.file_md5 0
        apc.file_update_protection 2
        apc.filters
        apc.gc_ttl 3600
        apc.include_once_override 0
        apc.lazy_classes 0
        apc.lazy_functions 0
        apc.max_file_size 1M
        apc.mmap_file_mask
        apc.num_files_hint 1000
        apc.preload_path
        apc.report_autofilter 0
        apc.rfc1867 0
        apc.rfc1867_freq 0
        apc.rfc1867_name APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix upload_
        apc.rfc1867_ttl 3600
        apc.shm_segments 1
        apc.shm_size 32M
        apc.slam_defense 1
        apc.stat 1
        apc.stat_ctime 0
        apc.ttl 0
        apc.use_request_time 1
        apc.user_entries_hint 4096
        apc.user_ttl 0
        apc.write_lock 1

    Read the article

  • How to enable error log in lighttpd properly?

    - by Tomaszs
    I have a CentOS 5 system with lighttpd and FastCGI enabled. It logs access but does not log errors: I get Internal Server Error 500 with no info in the log, and when I try to open a non-existent file there is also no info in the error log. How do I enable it properly? Below is the list of modules that I've enabled:

        server.modules = (
          "mod_rewrite",
          "mod_redirect",
          "mod_alias",
        # "mod_access",
        # "mod_cml",
        # "mod_trigger_b4_dl",
        # "mod_auth",
          "mod_status",
          "mod_setenv",
          "mod_fastcgi",
        # "mod_webdav",
        # "mod_proxy_core",
        # "mod_proxy_backend_fastcgi",
        # "mod_proxy_backend_scgi",
        # "mod_proxy_backend_ajp13",
        # "mod_simple_vhost",
        # "mod_evhost",
        # "mod_userdir",
        # "mod_cgi",
        # "mod_compress",
        # "mod_ssi",
        # "mod_usertrack",
        # "mod_expire",
        # "mod_secdownload",
        # "mod_rrdtool",
          "mod_accesslog" )

    Here are the debugging settings:

        ## enable debugging
        #debug.log-request-header = "enable"
        #debug.log-response-header = "enable"
        #debug.log-request-handling = "enable"
        debug.log-file-not-found = "enable"
        #debug.log-condition-handling = "enable"

    Paths to the error and access logs:

        ## where to send error-messages to
        server.errorlog = "/home/lxadmin/httpd/lighttpd/error.log"

        #### accesslog module
        accesslog.filename = "/home/lxadmin/httpd/lighttpd/ligh.log"

    FastCGI settings:

        fastcgi.debug = 1
        fastcgi.server = ( ".php" =>
          (( "bin-path" => "/usr/bin/php-cgi",
             "socket" => "/tmp/php.socket",
             "max-procs" => 12,
             "bin-environment" => (
               "PHP_FCGI_CHILDREN" => "2",
               "PHP_FCGI_MAX_REQUESTS" => "500" )
          )))

    And in an included config file I have:

        server.errorlog = "/home/httpd/mywebsite.com/stats/mywebsite.com-error_log"

    As for the log files themselves:

        /home/httpd/mywebsite.com/stats/
        -rw-r--r-- 1 apache apache 5173239 May 16 11:34 mywebsite.com-custom_log
        -rwxrwxrwx 1 root   root         0 Mar 27  2009 mywebsite.com-error_log

        /home/lxadmin/httpd/lighttpd/
        -rwxrwxrwx 1 apache apache    2184 Apr 22 22:59 error.log
        -rwxrwxrwx 1 apache apache 6088621 May 16 11:26 ligh.log

    I gave the error logs chmod 777 to check whether permissions were the issue, but apparently they're not. So my question is: what do I need to do to get the error log enabled?
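
    A small hedged sketch of checks that can narrow this down. The config path is an assumption for CentOS; lighttpd's -t flag tests the configuration and -p prints the parsed result, which should show which of the two server.errorlog settings actually wins:

        lighttpd -t -f /etc/lighttpd/lighttpd.conf
        lighttpd -p -f /etc/lighttpd/lighttpd.conf | grep errorlog

        # confirm the winning log file is writable by the user lighttpd runs as
        ls -l /home/httpd/mywebsite.com/stats/mywebsite.com-error_log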

    Read the article

  • IIS 6 + ASP.NET web service - DW20 and stackoverflow exception

    - by pcampbell
    Consider an ASP.NET SOAP web service that starts up fine, but craters hard when receiving its first hit. Please note that this deployment works in the Test environment, but not in the PreProd environment. Both are Windows 2003 SP3 + IIS 6 + ASP.NET 3.5, all up to date. The behaviour that we're seeing is:

      - restart the site & app pool; the app pool is configured to run under Network Service
      - browsing to the .asmx and .wsdl responds normally, as expected
      - send a normal, well-formed SOAP request / normal payload to the web service
      - 100% CPU usage
      - after 5 seconds, the page request / site returns "Service Unavailable"
      - no entry is created in the IIS log file (i.e. c:\windows\system32\logfiles\W3C-foo)
      - the app pool ends up being stopped

    The processes that hit the CPU hard are dw20.exe. I am unsure why Dr. Watson is involved here. The Event Log shows an ASP.NET Runtime error:

        EventType clr20r3, P1 w3wp.exe, P2 6.0.3790.3959, P3 45d6968e, P4 errormanagement, P5 1.0.0.0, P6 4b86a13f, P7 24, P8 0, P9 system.stackoverflowexception, P10 NIL.

    Questions: Any thoughts on what this System.StackOverflowException might be? Given that the code is the same between environments, might it be a payload problem? Could it be a configuration issue? You can see the name of my .NET assembly there in the exception message: "ErrorManagement".

    Read the article

  • Apache ProxyPass with SSL

    - by BBonifield
    I have a QA setup that consists of multiple internal development servers and one world-accessible provisioning machine that is set up to proxy-pass the web traffic. Everything works fine for non-SSL requests, but I'm having a hard time getting the SSL logic working as well. Here are a few example vhost blocks:

        <VirtualHost 192.168.168.101:443>
            ProxyPreserveHost On
            SSLProxyEngine On
            ProxyPass / https://192.168.168.111/
            ServerName dev1.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:80>
            ProxyPreserveHost On
            ProxyPass / http://192.168.168.111/
            ServerName dev1.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:443>
            ProxyPreserveHost On
            SSLProxyEngine On
            ProxyPass / https://192.168.168.111/
            ServerName dev2.site.com
        </VirtualHost>

        <VirtualHost 192.168.168.101:80>
            ProxyPreserveHost On
            ProxyPass / http://192.168.168.111/
            ServerName dev2.site.com
        </VirtualHost>

    I end up seeing the following error in the provisioner's error log:

        [Fri Jan 28 12:50:59 2011] [warn] [client 1.2.3.4] proxy: no HTTP 0.9 request (with no host line) on incoming request and preserve host set forcing hostname to be dev1.site.com for uri /

    As well as the following entry in the destination QA machine's access log:

        192.168.168.101 - - [22/Feb/2011:08:34:56 -0600] "\x16\x03\x01 / HTTP/1.1" 301 326 "-" "-"
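
    One thing the posted blocks don't show is SSL termination on the provisioning machine itself: there is no SSLEngine on or certificate configuration in the *:443 vhosts, and the \x16\x03\x01 in the backend's access log looks like the start of a raw TLS handshake being treated as plain HTTP. A hedged sketch of what the front-end 443 vhost might need, with hypothetical certificate paths:

        <VirtualHost 192.168.168.101:443>
            ServerName dev1.site.com
            SSLEngine on
            SSLCertificateFile    /etc/httpd/ssl/dev1.site.com.crt
            SSLCertificateKeyFile /etc/httpd/ssl/dev1.site.com.key
            SSLProxyEngine On
            ProxyPreserveHost On
            ProxyPass / https://192.168.168.111/
            ProxyPassReverse / https://192.168.168.111/
        </VirtualHost>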

    Read the article

  • Subdomains and address bar

    - by Priednis
    I have a fairly noob question about how subdomains work. As I understand it, at first the DNS server specifies that a request for a certain subdomain.domain.com has to go to the IP address of domain.com, and the webserver at domain.com further processes the request and displays the needed subdomain page. It is not entirely clear to me how the server (for example Apache) does it. As I understand it, there can be entries in the vhosts.conf file which specify folders that contain the subdomain data. Something like:

        <VirtualHost *>
            ServerName www.domain.com
            DocumentRoot /home/httpd/htdocs/
        </VirtualHost>

        <VirtualHost *>
            ServerName subdomain.domain.com
            DocumentRoot /home/httpd/htdocs/subdomain/
        </VirtualHost>

    and there can also be redirect entries in .htaccess files like:

        rewritecond %{http_host} ^subdomain.domain.com [nc]
        rewriterule ^(.*)$ http://www.domain.com/subdomain/ [r=301,nc]

    however in this case the user gets directed to the directory which contains the subdomain data, but the user gets "out" of the subdomain. I would like to know: when going to subdomain.domain.com, how does subdomain.domain.com remain visible at the beginning of the address in the browser's address bar? Can it be done by an alternate entry in the .htaccess file? If a VirtualHost entry is specified in the vhosts.conf file, does it mean that a new user account has to be specified for access to this directory?

    Read the article

  • Apache 2.4, mod_proxy_fcgi not honouring .htaccess, workaround needed

    - by user229874
    I am using Apache 2.4.7 with mod_proxy_fcgi for the purpose of passing PHP through to php-fpm (this will be used for a shared hosting environment). The .htaccess works fine for non-PHP files, but once it hits the rewrite rule that proxies through the PHP requests, the .htaccess is ignored. I know why it is happening. The question is: how do I work around it? How do I force Apache to treat the request to a PHP file as a request to a local file, and then proxy it through? I have spent substantial time researching this problem, and the following "answers" were given as solutions:

      1) "Use Apache configuration instead of .htaccess" - it is a valid solution, but not for a shared hosting environment (I am not going to give access to the Apache configuration to shared hosting customers ;)).
      2) "Don't use .htaccess, as it has performance/security/other issues" - well, how else would shared hosting customers control access/URL rewriting on their site? Besides, if .htaccess were not a requirement I would simply use nginx.
      3) "Put the rewrite rule for the proxy inside of " - this is incorrect, and it does not work.

    This behaviour appears to be not a bug but a "feature" as per https://issues.apache.org/bugzilla/show_bug.cgi?id=54887
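
    For reference, a hedged sketch of the handler-based alternative: from Apache 2.4.10 onwards mod_proxy_fcgi can be driven by SetHandler instead of a rewrite/ProxyPassMatch rule, and in that arrangement per-directory .htaccess processing happens before the request is handed to PHP-FPM. The socket path below is an assumption, and it would require upgrading from 2.4.7:

        <FilesMatch "\.php$">
            SetHandler "proxy:unix:/var/run/php5-fpm.sock|fcgi://localhost/"
        </FilesMatch>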

    Read the article

  • Firefox is very slow when establishing SSL sessions

    - by yanglei
    Using Wireshark, I discovered that Firefox v3.0 gets stuck every time before the "client key exchange, change cipher spec" stage when establishing an SSL session. Specifically, it takes 0.8~1.8 seconds before Firefox sends the "Client Key Exchange" request. This is unacceptable since our application is HTTPS only. I tested this on IE6 and IE8; both work well. Any clues?

    [Update] Finally, I found the reason for the 1~2 second stall by displaying all captured packets in Wireshark. After the "server hello" stage, Firefox makes a request to ocsp.verisign.com, combined with an additional DNS lookup for that domain. Firefox must wait for the revocation status from OCSP before entering the next stage of SSL. Depending on whether the DNS cache is in effect, this process takes 1~2 seconds. An interesting observation is that the IP packet containing "client key exchange" has a high probability of getting lost, and thus a TCP retransmission is necessary. When this happens, the process can take 3 seconds at worst. I'm not sure if this is a coincidence or a bug. Anyway, here is the result from Wireshark:

        (delta-time)
        0.369296 src-ip dst-ip TCP [ACK] Seq=161 Ack=2741 Win=65340 Len=0
        2.538835 src-ip dst-ip TLSv1 Client Key Exchange, Change Cipher Spec, Finished
        2.987034 src-ip dst-ip TLSv1 [TCP Retransmission] Client Key Exchange, Change Cipher Spec, Finished

    The difference between Firefox and IE is this: Firefox 3 enables OCSP checking by default, whereas IE only supports it. So, there is no problem with either IE6 or IE8. This is indeed a "certificate revoke" problem. Thanks

    Read the article

  • Nginx with HTTP/HTTPS - HTTP seems to be redirected to HTTPS all the time

    - by dwarfy
    I'm seeing really weird behaviour with my Ubuntu 10.04 / nginx 1.2.3 server. Basically I changed the SSL certificates this morning, and ever since it has been behaving weirdly on all apps. GoDaddy reports that the HTTPS/SSL setup is correct. When I try a page it still works correctly when I'm using HTTPS, but when I try using HTTP nginx reports this error:

        400 Bad Request
        The plain HTTP request was sent to HTTPS port

    After looking around on Google for hours and trying different setups (while originally my setup was working correctly for a long time; I just renewed the certificates), I kind of found a half solution by adding this to my config:

        error_page 497 $request_uri;

    The really weird thing is that when I use this setup:

        server {
            listen 80;
            server_name john.johnrocks.eu;

            access_log /home/john/envs/john_prod/nginx_access.log;
            error_log /home/john/envs/john_prod/nginx_error.log;

            location / {
                uwsgi_pass unix:///home/john/envs/john_prod/john.sock;
                include uwsgi_params;
            }

            location /media {
                alias /home/john/envs/john_prod/johntab/www;
            }

            location /adminmedia {
                alias /home/john/envs/john_prod/johntab/www/adminmedia;
            }
        }

    I still have the same error when using HTTP (while nothing is set up for HTTPS here)?? I'm going crazy over this!
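
    The 497 code is nginx's internal "plain HTTP request was sent to HTTPS port" condition, so something is treating port 80 as an SSL listener; for example, an "ssl on;" directive pulled in by another include for this server would trigger exactly this error. As a hedged sketch only (the certificate paths are assumptions), one common arrangement is to keep both listeners in one server block and mark only 443 as ssl, rather than enabling SSL at the server level:

        server {
            listen 80;
            listen 443 ssl;
            server_name john.johnrocks.eu;

            ssl_certificate     /etc/nginx/ssl/johnrocks.eu.crt;
            ssl_certificate_key /etc/nginx/ssl/johnrocks.eu.key;

            # locations as in the block above
        }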

    Read the article

  • Looking for equivalent of ProxyPassReverseMatch in Apache to fix missing trailing forward slash issue

    - by Alex Man
    I have two web servers, www.example.com and www.userdir.com. I'm trying to make www.example.com the front-end proxy server to serve requests in the format http://www.example.com/~username, such as http://www.example.com/~john/, so that it sends an internal request of http://www.userdir.com/~john/ to www.userdir.com. I can achieve this in Apache with:

        ProxyPass /~john http://www.userdir.com/~john
        ProxyPassReverse /~john http://www.userdir.com/~john

    The ProxyPassReverse is necessary because without it a request like http://www.example.com/~john without the trailing forward slash will be redirected to http://www.userdir.com/~john/, and I want my users to stay in the example.com space. Now, my problem is that I have a lot of users and I cannot list all those user names in httpd.conf. So, I use:

        ProxyPassMatch ^(/~.*)$ http://www.userdir.com$1

    but there is no such thing as ProxyPassReverseMatch in Apache. Without it, whenever the trailing forward slash is missing in the URL, one will be directed to www.userdir.com, and that's not what I want. I also tried the following to add the trailing forward slash:

        RewriteCond %{REQUEST_URI} ^/~[^./]*$
        RewriteRule ^/(.*)$ http://www.userdir.com/$1/ [P]

    but then it will render a page with broken images and CSS because they are linked to http://www.example.com/images/image.gif while it should be http://www.example.com/~john/images/image.gif. I have been googling for a long time and still can't figure out a good solution for this. Would really appreciate it if anyone can shed some light on this issue. Thank you!
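
    A hedged sketch of an alternative: issue the trailing-slash redirect on www.example.com itself, before the request is proxied, so the client never sees www.userdir.com and relative image/CSS paths then resolve under /~john/. The rules below are an illustration, not a tested configuration:

        RewriteEngine On
        # redirect /~john to /~john/ on this same host (relative substitution)
        RewriteRule ^/(~[^/]+)$ /$1/ [R=301,L]

        # then proxy everything under /~ to the userdir server
        ProxyPassMatch ^(/~.*)$ http://www.userdir.com$1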

    Read the article

  • Bad network performance on KVM guest

    - by Devator
    I have a dedicated server connected to a 1000 Mbit port. However, the Debian guest is only getting half to a quarter of that speed.

    On the node itself (Linux node 2.6.32-279.9.1.el6.x86_64 #1 SMP Tue Sep 25 21:43:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux):

        wget http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin -O /dev/null
        --2012-11-11 23:10:11-- http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin
        Resolving www.bbned.nl... 62.177.144.181
        Connecting to www.bbned.nl|62.177.144.181|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 1048576000 (1000M) [application/octet-stream]
        Saving to: '/dev/null'
        100%[====================================>] 1,048,576,000 100M/s in 10s
        2012-11-11 23:10:21 (100 MB/s) - '/dev/null'

    On the guest (Debian 6.0.5, x64: Linux debian 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux):

        wget http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin -O /dev/null
        --2012-11-11 23:10:41-- http://www.bbned.nl/scripts/speedtest/download/file1000mb.bin
        Resolving www.bbned.nl... 62.177.144.181
        Connecting to www.bbned.nl|62.177.144.181|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 1048576000 (1000M) [application/octet-stream]
        Saving to: '/dev/null'
        100%[====================================>] 1,048,576,000 16.5M/s in 42s
        2012-11-11 23:11:23 (23.8 MB/s) - '/dev/null'

    I use the virtio NIC. I tried some more NICs: e1000 and the Realtek 8139, but those yield even worse results. Does anyone have an idea how to improve these speeds?
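
    As a sketch of things worth checking (interface names and the qemu invocation are assumptions), the usual suspects for poor virtio throughput are the vhost-net accelerator not being used on the host and offload settings on the guest/bridge path:

        # on the host: is vhost-net loaded, and is qemu started with vhost=on?
        lsmod | grep vhost_net
        ps aux | grep 'vhost=on'

        # in the guest: confirm the NIC really uses the virtio driver,
        # and look at the current offload settings
        ethtool -i eth0
        ethtool -k eth0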

    Read the article

  • How can I connect an integrated webcam with VirtualBox?

    - by Mike Stumpf
    I am trying to use a Windows XP VM in VirtualBox on my Windows 8.1 laptop. I have tried the usual way of attaching a USB device, but I get an error saying "USB device is busy with previous request". My webcam is not active in any applications, and this happens after a clean reboot of the host, the guest, and VirtualBox. Here are the details:

    Host:
      - HP Pavilion 17 Notebook PC (stock)
      - Windows 8.1
      - AMD A10-5750M APU
      - HP Truevision HD (integrated webcam)

    VM: I got the VM here: http://www.modern.ie/en-us/virtualization-tools

    VirtualBox:
      - VirtualBox 4.3.12 installed
      - VirtualBox Extension Pack installed
      - Guest Additions are installed for 4.3.12
      - Enable USB Controller is checked
      - It does not matter if Enable USB 2.0 Controller is checked or not
      - It does not matter if a USB device filter is set up for the webcam or not

    Here is the error message:

        Failed to attach the USB device DDFEQ01G45BFBV HP Truevision HD [0004] to the virtual machine IE8 - WinXP.
        USB device 'DDFEQ01G45BFBV HP Truevision HD' with UUID {7a2e2a45-974d-482b-9b4e-9f9abbcd0ebb} is busy with a previous request. Please try again later.
        Result Code: E_INVALIDARG (0x80070057)
        Component: HostUSBDevice
        Interface: IHostUSBDevice {173b4b44-d268-4334-a00d-b6521c9a740a}
        Callee: IConsole {8ab7c520-2442-4b66-8d74-4ff1e195d2b6}

    I read on some VirtualBox forums that disabling USB 2.0 support in the host BIOS solved the issue for others, but I wanted to know if there were any other ideas before I muck around in there. Thanks
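
    A hedged sketch of an alternative worth trying: VirtualBox 4.3 added webcam pass-through via VBoxManage (it needs the Extension Pack, which is installed here), which avoids attaching the camera as a raw USB device. The VM name is taken from the error message; ".0" refers to the default host webcam:

        VBoxManage list webcams
        VBoxManage controlvm "IE8 - WinXP" webcam attach .0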

    Read the article

  • SSH X11 forwarding does not work. Why?

    - by Ole Tange
    This is a debugging question. When you ask for clarification, please make sure it is not already covered below.

    I have 4 machines: Z, A, N, and M. To get to A you have to log into Z first. To get to M you have to log into N first. The following works:

        ssh -X Z xclock
        ssh -X Z ssh -X Z xclock
        ssh -X Z ssh -X A xclock
        ssh -X N xclock
        ssh -X N ssh -X N xclock

    But this does not:

        ssh -X N ssh -X M xclock
        Error: Can't open display:

    The $DISPLAY is clearly not set when logging in to M. The question is why?

    Z and A share the same NFS home dir. N and M share the same NFS home dir. N's sshd runs on a non-standard port.

        $ grep X11 <(ssh Z cat /etc/ssh/ssh_config)
        ForwardX11 yes
        #   ForwardX11Trusted yes
        $ grep X11 <(ssh N cat /etc/ssh/ssh_config)
        ForwardX11 yes
        #   ForwardX11Trusted yes

    N:/etc/ssh/ssh_config == Z:/etc/ssh/ssh_config and M:/etc/ssh/ssh_config == A:/etc/ssh/ssh_config. /etc/ssh/sshd_config is the same for all 4 machines (apart from Port and login permissions for certain groups).

    If I forward M's ssh port to my local machine it still does not work:

        terminal1$ ssh -L 8888:M:22 N
        terminal2$ ssh -X -p 8888 localhost xclock
        Error: Can't open display:

    A:.Xauthority contains A, but M:.Xauthority does not contain M. xauth is installed in /usr/bin/xauth on both A and M. xauth is being run when logging in to A but not when logging in to M. ssh -vvv does not complain about X11 or xauth when logging in to A and M. Both say:

        debug2: x11_get_proto: /usr/bin/xauth list :0 2>/dev/null
        debug1: Requesting X11 forwarding with authentication spoofing.
        debug2: channel 0: request x11-req confirm 0
        debug2: client_session2_setup: id 0
        debug2: channel 0: request pty-req confirm 1
        debug1: Sending environment.

    I have a feeling the problem may be related to M missing in M:.Xauthority (caused by xauth not being run) or that $DISPLAY is somehow being disabled by a login script, but I cannot figure out what is wrong.
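
    As a sketch of one more check (paths are assumptions, and sshd -T needs a reasonably recent OpenSSH): dump the effective server configuration on M, since what sshd actually applies can differ from what the file appears to say, for example because of a Match block:

        /usr/sbin/sshd -T | grep -i -E 'x11forwarding|x11uselocalhost|xauthlocation'
        which xauth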

    Read the article

  • Security for university research lab systems

    - by ank
    Being responsible for security in a university computer science department is no fun at all. Let me explain: it is often the case that I get a request for installation of new hardware or software systems that are really so experimental that I would not dare put them even in the DMZ. If I can avoid it and force an installation in a restricted inside VLAN, that is fine, but occasionally I get requests that need access to the outside world. And actually it makes sense to have such systems access the world for testing purposes. Here is the latest request: a newly developed system that uses SIP is in the final stages of development. This system will enable communication with outside users (that is its purpose and the research proposal), actually hospital patients not so well versed in technology. So it makes sense to open it to the rest of the world. What I am looking for is anyone who has experience with dealing with such highly experimental systems that need wide outside network access. How do you secure the rest of the network and systems from this security nightmare without hindering research? Is placement in the DMZ enough? Any extra precautions? Any other options or methodologies?

    Read the article

  • HAProxy not properly passing on the X-Forwarded-For header

    - by JesseP
    I have backend web servers that receive requests by way of haproxy -> nginx -> fastcgi. The web app used to see multiple IPs coming through in the X-Forwarded-For header, chained together with commas (most original IP on the left). At some point in the recent past (just noticed, so not sure what caused it) something changed, and now I'm only seeing a single IP passed in the header to my web application. I've tried HAProxy 1.4.21 and 1.4.22 (recent upgrade) with the same behaviour. HAProxy has the forwardfor option set:

        option forwardfor

    The nginx fastcgi_params config defines this header to be passed to the app:

        fastcgi_param HTTP_X_FORWARDED_FOR $http_x_forwarded_for;

    Anyone have any ideas on what might be going wrong here?

    EDIT: I just started logging the $http_x_forwarded_for variable in the nginx logs, and nginx is only ever seeing a single IP, which shouldn't ever be the case, as we should always see our haproxy IP added in there, right? So the issue must either be in nginx's handling of the variable coming in, or haproxy not building it properly. I'll keep digging...

    EDIT #2: I enabled request and response header logging in HAProxy, and it is not spitting anything out for X-Forwarded-For, which seems very odd:

        Oct 10 10:49:01 newark-lb1 haproxy[19989]: 66.87.95.74:47497 [10/Oct/2012:10:49:01.467] http service/newark2 0/0/0/16/40 301 574 - - ---- 4/4/3/0/0 0/0 {} {} "GET /2zi HTTP/1.1" O

    Here are the options I set for this in my frontend:

        mode http
        option httplog
        capture request header X-Forwarded-For len 25
        capture response header X-Forwarded-For len 25
        option httpclose
        option forwardfor

    EDIT #3: It really seems like haproxy is munging the header and just passing a single one on to the backend. This is fairly impacting to our production service, so if anyone has any ideas it would be greatly appreciated. I'm stumped... :(
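
    As a hedged diagnostic sketch (interface and port are assumptions), capturing the traffic between HAProxy and nginx shows exactly which X-Forwarded-For value crosses the wire, which separates "HAProxy builds the header wrong" from "nginx mishandles it":

        # run on the nginx host; adjust the port to whatever nginx listens on
        tcpdump -A -s0 -i any 'tcp port 80' | grep -i 'x-forwarded-for'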

    Read the article
