Search Results

Search found 5793 results on 232 pages for 'requests'.

  • HAProxy access list using path_dir having issues with Firefox

    - by user11243
    I'm trying to route all requests containing a path directory of /socket.io/ to a separate port with HAProxy. Here is my config file:

        global
            maxconn 4096   # Total Max Connections. This is dependent on ulimit
            nbproc 2

        defaults
            mode http

        frontend all 0.0.0.0:80
            timeout client 86400000
            default_backend web_servers
            acl is_stream path_dir socket.io
            use_backend stream_servers if is_stream

        backend web_servers
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout server 30000
            timeout connect 4000
            server web1 127.0.0.1:4000 weight 1 maxconn 1024 check

        backend stream_servers
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout queue 5000
            timeout server 86400000
            timeout connect 86400000
            server stream1 127.0.0.1:5100 weight 1 maxconn 1024 check

    URL paths containing /socket.io/ get correctly directed to port 5100 in Chrome and Safari, but not in Firefox. I'm running HAProxy locally on my Mac for development; I'm not sure whether that has anything to do with it. I'm using HAProxy 1.4.8 and Firefox 3.6.15. I've tried clearing the cache in Firefox and it didn't help, so I'm thinking there's something wrong with the way HAProxy parses the Firefox request headers.
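
    For reference: by default, HAProxy 1.4 only inspects the first request of a keep-alive connection, and Firefox reuses connections aggressively, which would explain content switching working in Chrome and Safari but not in Firefox. A minimal sketch of the usual workaround, assuming per-request routing is the goal:

        defaults
            mode http
            option http-server-close   # close the server side after each response so every request is inspected

    With this set, the path_dir ACL is evaluated on every request rather than only on the first one sent over each connection.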

    Read the article

  • Why does mod_security require an Accept HTTP header field?

    - by ripper234
    After some debugging, I found that the core ruleset of mod_security blocks requests that don't have the (optional!) Accept header field. This is what I find in the logs:

        ModSecurity: Warning. Match of "rx ^OPTIONS$" against "REQUEST_METHOD" required. [file "/etc/apache2/conf.d/modsecurity/modsecurity_crs_21_protocol_anomalies.conf"] [line "41"] [id "960015"] [msg "Request Missing an Accept Header"] [severity "CRITICAL"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER"] [hostname "example.com"] [uri "/"] [unique_id "T4F5@H8AAQEAAFU6aPEAAAAL"]

        ModSecurity: Access denied with code 400 (phase 2). Match of "rx ^OPTIONS$" against "REQUEST_METHOD" required. [file "/etc/apache2/conf.d/modsecurity/optional_rules/modsecurity_crs_21_protocol_anomalies.conf"] [line "41"] [id "960015"] [msg "Request Missing an Accept Header"] [severity "CRITICAL"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER"] [hostname "example.com"] [uri "/"] [unique_id "T4F5@H8AAQEAAFU6aPEAAAAL"]

    Why is this header required? I understand that "most" clients send it, but why is its absence considered a security threat?
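
    For anyone who decides this check is too strict for their clients, the CRS allows disabling individual rules by ID; a minimal sketch using the standard mod_security directive (placed after the CRS includes in the Apache config):

        <IfModule security2_module>
            SecRuleRemoveById 960015   # CRS "Request Missing an Accept Header"
        </IfModule>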

    Read the article

  • Solaris disk I/O spike... which monitoring tools do I need?

    - by bonez05
    Hi all, my Solaris 10 web server / database server's disk I/O keeps spiking at intermittent times. Using iostat -xtc 5, the reads per second jump from 3.0 to 1450.0 and %busy jumps to 98%. The Apache access log doesn't indicate anything out of the ordinary; in other words, requests aren't any higher than usual. top doesn't show anything useful either: CPU usage is fine, with MySQL using about 20% and nothing else to speak of. What monitoring tool should I be using to see which process is generating the excessive disk I/O? If there are any other suggestions, I'm all ears. Thanks
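
    Since this is Solaris 10, the DTrace io provider can attribute disk activity to processes directly, which iostat cannot. Two illustrative one-liners (run as root; press Ctrl-C to print the aggregation):

        # count disk I/O operations by process name
        dtrace -n 'io:::start { @[execname] = count(); }'

        # or sum the bytes moved by each process
        dtrace -n 'io:::start { @[execname] = sum(args[0]->b_bcount); }'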

    Read the article

  • Running a webserver behind a firewall I have no access to

    - by reijin
    I'm having a bad time in my student apartment: I want to run a web server on my laptop which should be reachable from outside the network. I'm sitting behind a proxy server that passes outgoing packets to the matching server, but it doesn't route incoming connections to my PC. (It seems packets only get passed if some PC within the student flat is already connected to the sending server.) In the past I had a small virtual private server that forwarded incoming website requests over a reverse tunnel to my PC, which then returned the website content so the visitor could see my website. Sadly I don't have that server anymore... Do you have any idea that might solve my problem? Greetings, Benedikt
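
    For reference, that old setup can be recreated with plain OpenSSH and any outside host that allows SSH logins (even the cheapest VPS); the hostname below is a placeholder:

        # run on the laptop behind the proxy: expose local port 80 on the VPS's port 8080
        ssh -N -R 8080:localhost:80 user@vps.example.com
        # visitors then browse to http://vps.example.com:8080/
        # (the VPS needs "GatewayPorts yes" in its sshd_config for outside visitors to connect)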

    Read the article

  • Configuring Nginx for Wordpress and Rails

    - by Michael Buckbee
    I'm trying to set up a single website (domain) that contains both a front-end Wordpress installation and a Ruby on Rails application in a single directory. I can get either one to work successfully on its own, but I can't sort out the configuration that would let them coexist. The following is my best attempt, but it results in all Rails requests being picked up by the try_files directive and redirected to "/".

        server {
            listen 80;
            server_name www.flickscanapp.com;
            root /var/www/flickscansite;
            index index.php;

            try_files $uri $uri/ /index.php;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/flickscansite$fastcgi_script_name;
            }

            passenger_enabled on;
            passenger_base_uri /rails;
        }

    An example request to the Rails app would be http://www.flickscan.com/rails/movies/upc/025192395925
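
    One commonly suggested arrangement, sketched here without having been tested against this exact Passenger version: scope the Wordpress try_files into its own location block so it never captures /rails requests, and give Passenger a dedicated location.

        server {
            listen 80;
            server_name www.flickscanapp.com;
            root /var/www/flickscansite;
            index index.php;

            location /rails {
                passenger_enabled on;
                passenger_base_uri /rails;
            }

            location / {
                try_files $uri $uri/ /index.php;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME /var/www/flickscansite$fastcgi_script_name;
            }
        }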

    Read the article

  • How to cache streaming video and Silverlight with a Squid reverse proxy on Windows

    - by V. Romanov
    We have an intranet web server running a Silverlight application (ACTUS media monitor, if anyone cares to know). The server is used to record video and stream it to clients through a CDN solution. We want to put a reverse proxy between the server and the CDN provider in order to remove the office network bottleneck that's currently strangling us. I've set up Squid for Windows on a separate machine outside the network using the BasicAccelerator configuration. It seems to work as far as the reverse proxying is concerned: requests are forwarded and the application works, but nothing seems to be cached (no space is used on the drive where Squid is installed). I found no explicit setting to turn caching on in Squid, so I assume it's on by default. Perhaps I need some other trick to make the video and/or Silverlight content cacheable? Any help will be appreciated; any info you need will be provided at once. Thanks in advance!
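
    Two squid.conf settings are worth checking before anything fancier, since either can silently prevent video from being cached; paths and sizes below are illustrative:

        cache_dir ufs c:/squid/var/cache 20000 16 256   # define a disk cache of useful size explicitly
        maximum_object_size 512 MB                      # the default cap (4 MB) is far too small for video files
        cache_store_log c:/squid/var/logs/store.log     # logs what actually gets stored, useful for verifying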

    Read the article

  • Apache doesn't autostart because the VPN isn't up yet

    - by Col. Shrapnel
    I have a FreeBSD 8 server and a VPN connection to my ISP; I use mpd5 and it works fine. I have an Apache server which works fine if I start it manually after the VPN is up. But when I add it to rc.conf for autostart, it fails to start, saying:

        (49) can't assign requested address: make_sock could not bind to address

    I suppose this is because the VPN isn't up yet, so no IP address is assigned to the interface that I set in the Listen directive in httpd.conf. If I set Listen to the always-present 127.0.0.1, it fails to serve WAN requests. Is there a solution, either to delay the Apache autostart or to configure it differently?
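
    One workaround that sidesteps the ordering problem entirely: bind Apache to the wildcard address, so make_sock succeeds even before mpd5 brings the tunnel up; a firewall rule can still restrict which interfaces actually get answered.

        # httpd.conf: bind to all addresses instead of the VPN-assigned IP
        Listen 0.0.0.0:80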

    Read the article

  • Java Applet Deployment, ClassNotFoundException (primary class)

    - by Matt
    This is driving me up the wall. I have checked and rechecked spelling and paths. I have tried just about every combination of paths, including relative, absolute, and full HTTP paths. I continue to get the following error when trying to load a Java applet:

        java.lang.ClassNotFoundException: AppletClient.class
            at sun.plugin2.applet.Applet2ClassLoader.findClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at java.lang.ClassLoader.loadClass(Unknown Source)
            at sun.plugin2.applet.Plugin2ClassLoader.loadCode(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source)
            at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Exception: java.lang.ClassNotFoundException: AppletClient.class

    The HTML used to load the applet:

        <applet width="100" height="100"
                archive="applet/myapplet.jar, applet/applet_dependency.jar"
                code="AppletClient.class">
            <param value="blahblah" name="username">
            <param value="false" name="codebase_lookup">
        </applet>

    The applet is in a directory "applet", relative to the path of the current page. I have unzipped the jar file and can see AppletClient.class; in the project source it is spelled exactly that way (casing and all). I have tried with and without the parameters. I have changed the names of the archive jars in the applet tag just to see if I get a different error for bad file names (same error). I have manually done GETs on the jars to make sure the server responds to the requests (it does). I have tried with and without the codebase attribute, with all different varieties of paths (I start getting bad "magic number" errors on those). I know this error sometimes pops up when a dependency fails to load, so it can be misleading, but all dependencies are present, accounted for, and fetchable via manual GETs. Between each and every attempt I clear my cache in Firefox. The problem reproduces in IE8 and Chrome as well. Per the Java Console in the browser, I am running Java Plug-in 1.6.0_20. This is the same machine I develop the applet on, where it runs fine via Eclipse. Finally, I turned on Fiddler2, and I don't see a single request for the jar files anywhere. The host site is running from my Visual Studio debugger, so it's running on localhost, and I see the requests for all the other resources in Fiddler; just no jars, anywhere. I cleared the log, cleared my browser cache, and did a Ctrl-R refresh. Still, not a single jar request in the Fiddler log. I even did a delayed write (with JS) of the applet tag after the page loaded, once all the Fiddler activity slowed down. The element gets written to the document (and I can see the 100x100 Java error window), but not a single request shows up in Fiddler. Any suggestions, before I go crawl into the corner and cry myself to sleep?

    EDIT: From the Java console, if I hit "l" (el) to "dump classloader list", I see something that looks like this:

        Live entry: key=http://localhost:55446/BaseWebSite/,http://localhost:55446/BaseWebSite/applet/myappliet.jar, http://localhost:55446/BaseWebSite/applet/applet_dependency.jar, refCount=1, threadGroup=sun.plugin2.applet.Applet2ThreadGroup[name=http://localhost:55446/BaseWebSite/-threadGroup,maxpri=4]

    Read the article

  • Help with Apache RewriteEngine rules

    - by Vinay
    Hello - I am trying to write a simple rule using Apache's RewriteEngine. I want to redirect all traffic destined to a website unless the traffic originates from a specific IP address and the URI contains two specific strings.

        RewriteEngine On
        RewriteLog /var/log/apache2/rewrite_kudithipudi.log
        RewriteLogLevel 1
        RewriteCond %{REMOTE_ADDR} ^199\.27\.130\.105
        RewriteCond %{REQUEST_URI} !/StringOne [NC, OR]
        RewriteCond %{REQUEST_URI} !/StringTwo [NC]
        RewriteRule ^/(.*) http://www.google.com [R=302,L]

    I put these statements in my virtual host configuration, but the RewriteEngine seems to redirect all requests, whether they match the conditions or not. Am I missing something? Thank you. Vinay.
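
    Two things stand out in the rules as posted: the space inside `[NC, OR]` is not valid flag syntax (Apache expects `[NC,OR]`), and the REMOTE_ADDR condition is not negated, so it selects the trusted IP rather than excluding it. A sketch of the inverted logic ("redirect unless the trusted IP is requesting both strings"):

        RewriteCond %{REMOTE_ADDR} !^199\.27\.130\.105 [OR]
        RewriteCond %{REQUEST_URI} !/StringOne [NC,OR]
        RewriteCond %{REQUEST_URI} !/StringTwo [NC]
        RewriteRule ^/(.*) http://www.google.com [R=302,L]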

    Read the article

  • What settings need to be changed to allow EC2 instances to use Amazon's Route 53 for DNS?

    - by ks78
    I have a number of Amazon EC2 instances, all running Ubuntu, which I'd like to configure to use Amazon's Route 53. I set up a script, following Shlomo Swidler's article, but ran into script-related issues, which were answered here. Now I have the script working, but my instances are still not able to access Route 53's DNS; by this I mean they are not able to resolve hostnames to IP addresses. My instances are currently configured with the DNS server IP address Amazon pushes out to them by default; does that need to be changed when using Route 53? I'm also IP-restricting my instances using security groups. Could that be the problem? Is there a certain IP address or port I should open to allow communication with Route 53? It seems that DNS requests should be originating from my instances, so the security groups shouldn't be an issue, but I've been wrong before. If anyone has any ideas, I'd really appreciate it.
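
    A distinction that may untangle this: Route 53 is an authoritative DNS service for zones you host, not a recursive resolver, so instances should keep using the Amazon-provided resolver for ordinary lookups; Route 53's job is answering the world's queries for your domain. Security groups are stateful, so replies to outbound DNS queries are not blocked. To test whether the hosted zone itself answers, query one of its delegated name servers directly (the server name below is illustrative; use one from the zone's NS records):

        dig @ns-123.awsdns-15.com example.com A +short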

    Read the article

  • HTTP traffic through PIX VPN from outside site

    - by fwrawx
    I have a remote site with a website that only allows access from the outside IP assigned to our local PIX. I have users connecting to the local network over a VPN who need to be able to view this remote site. I don't think this works because the packets would have to come in and go out over the same (external) interface. So I'm looking for a way to make this work using the PIX, or by setting up a service on a server on the local network to act as a middleman for the HTTP requests. The remote site doesn't support setting up a VPN to our PIX, and the remote website is dishing out pages over a non-standard port. Can I use Squid or something similar to proxy just one site?
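
    Squid on a LAN server can act as that middleman, since the remote site would then see requests coming from the PIX's outside IP. A minimal accelerator-style sketch (hostname, port, and LAN range illustrative):

        acl localnet src 192.168.0.0/16                       # adjust to the local LAN and VPN pool
        http_port 3128 accel defaultsite=remote.example.com
        cache_peer remote.example.com parent 8081 0 no-query originserver
        http_access allow localnet

    VPN users would then browse to http://<lan-server>:3128/ instead of the remote site directly.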

    Read the article

  • Unexpected network traffic?

    - by robwalker
    My internet connection is via a fixed wireless link using a 900 MHz Motorola Canopy module. The router reports a fairly consistent 32-64 Kbps of incoming traffic on the WAN port. When I attach a PC directly to the port and run Wireshark, I get a dump showing a lot of chatter from other machines that I presume are connected to the same tower. This doesn't include end-to-end traffic, but there are a lot of ARP requests, SSDP traffic, ICMP, and other network-discovery type stuff. Is this 'normal', or does it suggest a misconfiguration somewhere? As far as I can tell there is no need for my modem to be receiving any of this traffic (other than wanting to know the names of my neighbours' machines and printers!). Since the internet connection is slow at the best of times, this amount of background noise seems very wasteful.

    Read the article

  • What are the benefits of running an app server in user space, like Unicorn, as opposed to as root?

    - by dan
    I've been using Phusion Passenger + Rails/Sinatra for a lot of projects. Passenger runs under the main Nginx or Apache process. But I'm interested in Unicorn, partly because it runs in user space: you just set up Nginx to proxy_pass requests to a Unix socket connected to the Unicorn processes you fire up under a normal user account. Is there anything to be said about the advantages and disadvantages of these two approaches to running a web app, in terms of ease of administration, stability, simplicity, etc.?
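
    For context, the user-space wiring amounts to only a few lines on each side; a typical sketch with illustrative paths:

        # nginx.conf
        upstream app_server {
            server unix:/home/deploy/app/tmp/unicorn.sock fail_timeout=0;
        }
        server {
            listen 80;
            location / {
                proxy_set_header Host $http_host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://app_server;
            }
        }

        # unicorn.rb, started as a normal user: unicorn -c unicorn.rb -E production
        listen "/home/deploy/app/tmp/unicorn.sock"
        worker_processes 4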

    Read the article

  • Windows replacement for HAProxy

    - by GrayWizardx
    It does not appear that a similar question has already been posted, so I will go ahead and ask. I am working on a project that could benefit from having two to four servers handling incoming requests to a backend web service. The service does not require SSL but does need to support occasional long-running requests (up to 120 seconds). This project does not at present have the funding to purchase a hardware load-balancing solution. I have previously used HAProxy for this and found it very simple and straightforward. Is there a similar product for Windows (Server 2003 or 2008) that provides similar configuration options and runs as a lightweight service? For reasons outside my control I cannot set up a Linux machine (physical or virtual), so I am looking for behaviour that can be deployed on a Windows machine. I can only find Perlbal, which appears to fall into this category. So as not to keep this open indefinitely, I will give credit to the only answer.

    Read the article

  • Server 2012 R2 DNS Conditional forwarding not working reliably, possible caching issue?

    - by Matt
    I have a bit of a home-lab setup with a domain controller that is acting as the DNS server for my network. It works fine and forwards external DNS requests to my ISP. The household recently wanted to get Netflix going, and a DNS option seemed better than a VPN for getting around the region locking, so I signed up for unblock-us.com. Since I have a Windows DNS server, I thought I'd be clever and make use of conditional forwarders, and added the Netflix domain to the list. Initially this worked well and all devices on the network could access Netflix. However, after about an hour, going to the Netflix site would result in "page cannot be found", and an nslookup of netflix.com from my PC returned no IP addresses. As a test, I deleted the Netflix domain from the DNS server's cache and things started working again; devices could get to the site, but the same thing happens again after around half an hour to an hour. Have I missed something here that's causing it to stop working?
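
    A guess at the mechanism: unblock-us serves answers with short TTLs so it can steer clients, and the Windows DNS server caches whatever the conditional forwarder returns, including any failure responses. Clearing the cache by hand confirms the theory; inspecting what is actually cached narrows it down:

        dnscmd /clearcache        # flush the server-side cache (what you did manually via the console)
        Show-DnsServerCache       # Server 2012 R2 cmdlet: list what the server currently has cached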

    Read the article

  • Sending from alternative addresses in Exchange

    - by Sam Cogan
    One of the most frequent requests I get from Exchange users is to be able to send from one of their alternative email addresses, that is, one of the addresses their account is configured with in Exchange that is not their primary address. Unfortunately, as far as I am aware, Microsoft has not yet come up with a solution to this. I've used a number of hacks to get around it (separate accounts with POP3 access, using the From field in Outlook), but each has its drawbacks. What have you used in these situations to allow the use of these alternative addresses?

    Read the article

  • How to automatically restart php-fpm?

    - by alfish
    I am using nginx + php-fpm on Debian Squeeze for a busy server and have had great difficulty dealing with the maximum number of connections being reached. The problem is that PHP processes sometimes just die randomly under high load and leave the server with no PHP processes at all, and then I need to manually restart the php5-fpm service to bring the server back to life. I am wondering how to prevent this from happening, or at least treat the symptom by restarting php5-fpm automatically whenever no PHP process is left to listen for incoming requests. My relevant configs are:

        pm = dynamic
        pm.max_children = 1400
        pm.start_servers = 10
        pm.max_spare_servers = 20
        pm.process_idle_timeout = 1s;   # not sure it will be useful when pm=dynamic
        pm.max_requests = 100000
        request_terminate_timeout = 30

    I appreciate your suggestions for coping with this nasty problem.
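
    Two complementary measures, sketched on the assumption that the underlying crashes can't be fixed right away: php-fpm's built-in emergency restart settings (these go in the global section, not the pool), plus a cron watchdog as a last resort.

        ; php-fpm.conf, global section
        emergency_restart_threshold = 10    ; if 10 children exit abnormally...
        emergency_restart_interval = 1m     ; ...within one minute, restart the master
        process_control_timeout = 10s

        # root crontab: resurrect the service if no php5-fpm process is left
        * * * * * pgrep -x php5-fpm >/dev/null || /etc/init.d/php5-fpm restart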

    Read the article

  • How to make VirtualBox headless answer on the RDP port?

    - by stiv
    I'd like to run Windows XP over RDP:

        $ VBoxManage modifyvm winxp32 --vrdeport 3389
        $ VBoxHeadless -s winxp32 -v on
        Oracle VM VirtualBox Headless Interface 4.1.18_Debian
        (C) 2008-2012 Oracle Corporation
        All rights reserved.
        (waiting)

    In another window:

        $ telnet localhost 3389
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    Yes, I've read about the extension pack:

        $ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.1.20-80170.vbox-extpack
        0%...
        Progress state: NS_ERROR_FAILURE
        VBoxManage: error: Failed to install "Oracle_VM_VirtualBox_Extension_Pack-4.1.20-80170.vbox-extpack": Extension pack 'Oracle VM VirtualBox Extension Pack' is already installed. In case of a reinstallation, please uninstall it first

    I've looked through all the manuals and help requests with no success. What's wrong? Any ideas?
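
    Since the error says the extension pack is already installed, the missing piece may simply be that the VRDE server is off for this VM; setting --vrdeport alone does not switch it on. A plausible next check:

        $ VBoxManage modifyvm winxp32 --vrde on          # enable the VRDE server (it is off by default)
        $ VBoxManage showvminfo winxp32 | grep -i vrde   # should now report VRDE as enabled on port 3389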

    Read the article

  • SSL to SSL Redirects in IIS - Possible?

    - by Eric
    We have a situation where we would like to redirect https://service1.domain.com to https://service2.domain.com. I know this is very simple with HTTP endpoints, but I'm not so sure about HTTPS. We have some legacy Windows web-service clients that will not be updating their software version soon, and we cannot update their web references to https://service2.domain.com. Is there any way to leave these web-service clients pointing to https://service1.domain.com but have their requests forwarded to (and responded to by) https://service2.domain.com? The old server is running IIS 6.0; the new server is running IIS 7.0 (we could probably upgrade it to 7.5 if needed, but I'm not certain). We could also probably make a seamless transition of the old web service to a new server using public DNS, but we cannot change the DNS name "service1.domain.com". Thanks ServerFault!
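
    One caveat that applies to any approach here: TLS is negotiated before the client sees a redirect, so service1.domain.com must keep presenting a valid certificate either way, and legacy SOAP clients often do not follow redirects at all (in which case a reverse proxy such as IIS ARR would be needed rather than a redirect). If the clients do follow redirects and service1 moves onto the IIS 7 box, a minimal web.config sketch:

        <configuration>
          <system.webServer>
            <httpRedirect enabled="true"
                          destination="https://service2.domain.com"
                          httpResponseStatus="Permanent" />
          </system.webServer>
        </configuration>

    On IIS 6.0 the equivalent is the "A redirection to a URL" option on the site's Home Directory tab.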

    Read the article

  • How is MySQL's "net_buffer_length" config viewed and reset?

    - by blunders
    An attempt to see the net_buffer_length config before resetting it:

        mysql> show variables like "net_buffer_length";
        +-------------------+-------+
        | Variable_name     | Value |
        +-------------------+-------+
        | net_buffer_length | 16384 |
        +-------------------+-------+
        1 row in set (0.00 sec)

    An attempt to reset the net_buffer_length config:

        mysql> set global net_buffer_length=1000000;
        Query OK, 0 rows affected (0.00 sec)

    An attempt to confirm the net_buffer_length config has been reset:

        mysql> show variables like "net_buffer_length";
        +-------------------+-------+
        | Variable_name     | Value |
        +-------------------+-------+
        | net_buffer_length | 16384 |
        +-------------------+-------+
        1 row in set (0.00 sec)

    What's wrong with the commands I'm using that makes the config not update? MySQL server version: 5.1.53-community. Database engine: InnoDB. Questions, feedback, requests -- just comment, thanks!
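
    This looks like the session-versus-global distinction rather than a failed command: plain SHOW VARIABLES reports the session value, which was fixed when the connection was opened, while SET GLOBAL only affects connections opened afterwards. Standard ways to see both values:

        mysql> SHOW GLOBAL VARIABLES LIKE 'net_buffer_length';
        mysql> SELECT @@global.net_buffer_length, @@session.net_buffer_length;

    Reconnecting after the SET GLOBAL should show the new value in the session as well.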

    Read the article

  • Google Chrome suspicious connections

    - by Poni
    I'm using Chrome on Windows, and with TCPView (from the SysInternals freeware suite) I see that chrome.exe establishes connections to these IPs: 173.194.37.104 and 209.85.146.138. Using http://www.ipaddresslocation.org/ I checked these IPs and they're related to Google. Now, to clarify, these are the exact things I do: I open Chrome, whose default page is set to blank (i.e. no homepage whatsoever), then I go to my website, which has a blank page, so no other HTTP requests are made. Right from this point there is a persistent connection, usually to 173.194.37.104. What are these? Very suspicious. Edit #1: I'm in 'incognito' mode right from the start, launching Chrome using a shortcut with the '-incognito' switch, and I've turned off all phishing protections and other "advanced" features in order to reduce Chrome's network activity.

    Read the article

  • RackSpace Cloud Strips $_SESSION if URL Has Certain File Extensions

    - by macinjosh
    The Situation

    I am creating a video training site for a client on the RackSpace Cloud using the traditional LAMP stack (RackSpace's cloud has both Windows and LAMP stacks). The videos and other media files I'm serving on this site need to be protected, as my client charges money for access to them. There is no DRM or funny business like that; essentially we store the files outside of the web root and use PHP to authenticate users before they are able to access the files, using mod_rewrite to run the request through PHP. So let's say the user requests a file at this URL:

        http://www.example.com/uploads/preview_image/29.jpg

    I am using mod_rewrite to rewrite that URL to:

        http://www.example.com/files.php?path=%2Fuploads%2Fpreview_image%2F29.jpg

    Here is a simplified version of the files.php script:

        <?php
        // Sets up the environment and sets $logged_in
        // This part requires $_SESSION
        require_once('../../includes/user_config.php');

        if (!$logged_in) {
            // Redirect non-authenticated users
            header('Location: login.php');
        }

        // This user is authenticated, continue
        $content_type = "image/jpeg";

        // getAbsolutePathForRequestedResource() takes
        // a query parameter called path and uses DB
        // lookups and some string manipulation to get
        // an absolute path. This part doesn't have
        // any bearing on the problem at hand
        $file_path = getAbsolutePathForRequestedResource($_GET['path']);
        // At this point $file_path looks something like
        // this: "/path/to/a/place/outside/the/webroot"

        if (file_exists($file_path) && !is_dir($file_path)) {
            header("Content-Type: $content_type");
            header('Content-Length: ' . filesize($file_path));
            echo file_get_contents($file_path);
        } else {
            header('HTTP/1.0 404 Not Found');
            header('Status: 404 Not Found');
            echo '404 Not Found';
        }
        exit();
        ?>

    The Problem

    Let me start by saying this works perfectly for me on local test machines; it works like a charm. However, once deployed to the cloud it stops working. After some debugging, it turns out that if a request to the cloud has certain file extensions, like .jpg, .png, or .swf (i.e. extensions of typically static media files), the request is routed to a cache system called Varnish. The end result of this routing is that by the time the whole process reaches my PHP script, the session is not present. If I change the extension in the URL to .php, or if I even add a query parameter, Varnish is bypassed and the PHP script can get the session. No problem, right? I'll just add a meaningless query parameter to my requests! Here is the rub: the media files I am serving through this system are requested by compiled SWF files that I have zero control over. They are generated by third-party software, and I have no hope of adding to or changing the URLs they request. Are there any other options I have on this?

    Update: I have verified this behavior with RackSpace support and they have said there is nothing they can do about it.

    Read the article

  • Load balancing without a load balancer?

    - by Tom
    I have 4 nginx-powered image servers on their own subdomains, which users access at random. I decided to put them all behind an HAProxy load balancer to improve reliability and to see traffic statistics from a single location. It seemed like a no-brainer. Unfortunately, the move was a complete failure: the load balancer's 100 Mbit port was completely saturated, with all requests now going through it. I was wondering what to do about this. I could get a port upgrade ($$) or return to 4 separate, randomly accessed image servers. I also thought about putting HAProxy on each image server, which would in turn route to another image server if that server's nginx service was having trouble. What would you do? I would like to not have to spend too much additional money.
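
    For what it's worth, the original random-access behaviour can be kept, with no single 100 Mbit chokepoint, by using plain round-robin DNS across the four servers; statistics then have to be aggregated from the four nginx logs instead of one place. Illustrative zone records:

        ; one name, four A records: clients pick an address, traffic never funnels through one box
        images.example.com.  300  IN  A  192.0.2.10
        images.example.com.  300  IN  A  192.0.2.11
        images.example.com.  300  IN  A  192.0.2.12
        images.example.com.  300  IN  A  192.0.2.13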

    Read the article

  • Make a socket as a user but make it readable and writable by another

    - by user1598585
    I have software that runs as user A. It creates a socket in /sockets, and the socket should be readable and writable by user B. I have tried setting the directory's ownership to A:A or A:B, but when user A creates the socket, it ends up with uid A and gid A. Using ACLs has not helped so far: the default mask prevents the rights from being effective, so rw permissions for B always turn into just r. If what I create is not a socket, it works fine. How can I best accomplish this? (It is for a web server where the web application creates the socket and the web server software forwards requests to it.)
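
    Two sketches that usually cover this pattern, assuming the socket lives in /sockets and B is (or shares) a group; the names are illustrative:

        # Option 1: arrange inheritance before the socket is created.
        # In the app's startup, relax the umask so the socket is group-writable,
        # and make the directory setgid so new sockets inherit its group.
        umask 0007                  # sockets are then created mode 0770
        chgrp B /sockets
        chmod 2770 /sockets         # setgid: new entries get group B

        # Option 2: fix the socket up after it exists
        chgrp B /sockets/app.sock && chmod 660 /sockets/app.sock

    The umask route tends to be the robust one, since bind() honours the process umask when it creates the socket file.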

    Read the article

  • DHCP for Multiple Subnets

    - by TheD
    So this is the current setup; essentially I would like to get my DHCP server serving DHCP requests for two separate subnets.

        Netgear DG834G acting as a modem, connected to a SonicWall Pro 2040:
        X0 - LAN  - 192.168.1.0/24
        X1 - WAN  - <WAN-IP>
        X2 - WLAN - 192.168.10.0/24

    At the moment I have a 2008 R2 server with DHCP installed, with an IP address in the 192.168.1.0/24 range, handling DHCP fine for this subnet. The SonicWall is configured correctly: anything connected to the WLAN has full allow to anything in the LAN, and vice versa, but WLAN clients will not lease an IP from my server. I've also added another IP address to the server, so the physical NIC now has two IPs, 192.168.1.2 and 192.168.10.2, with a DHCP scope configured for each. Still no luck! Any ideas? Thanks!
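
    The usual explanation for exactly this pattern: a DHCP DISCOVER is a broadcast, and broadcasts never cross the SonicWall from X2 to X0, so the second IP on the server's NIC changes nothing because the requests never arrive. The standard fix is DHCP relay (IP Helper in SonicOS) on the WLAN interface, pointed at 192.168.1.2; the relay stamps each forwarded request with a 192.168.10.x gateway address, which is how the server picks the matching scope. The server side then just needs one scope per subnet, e.g.:

        netsh dhcp server add scope 192.168.1.0 255.255.255.0 "LAN"
        netsh dhcp server add scope 192.168.10.0 255.255.255.0 "WLAN"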

    Read the article
