Search Results

Search found 4238 results on 170 pages for 'proxy pac'.

Page 62/170

  • HMVC or PAC - how to handle shared abstractions/models?

    - by fig-gnuton
    In HMVC/PAC, what's the recommended way to code if two or more triads/agents share a common model/abstraction? Do you instantiate a new instance of that model wherever needed, and propagate a change in one to all the other instances via the controllers? Or do you instantiate one model at some common upper level, and inject that instance wherever needed? (Or neither, if I'm missing something fundamental about these patterns?)
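
    One way to picture the second option is a single model created at a common parent and handed to each agent, with the agents observing it for changes. A minimal sketch of that idea in C# (all class and member names here are hypothetical, purely for illustration):

      // A shared model injected into two PAC-style agents; both are notified of changes.
      using System;

      class SharedModel
      {
          public event Action Changed;              // agents subscribe to change notifications
          private int current;
          public int Value
          {
              get { return current; }
              set { current = value; Changed?.Invoke(); }
          }
      }

      class Agent
      {
          public Agent(string name, SharedModel model)   // the instance is injected, not created locally
          {
              model.Changed += () => Console.WriteLine(name + " sees " + model.Value);
          }
      }

      class Program
      {
          static void Main()
          {
              var model = new SharedModel();        // created once at the common upper level
              var agentA = new Agent("AgentA", model);
              var agentB = new Agent("AgentB", model);
              model.Value = 42;                     // both agents observe the same change
          }
      }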

    Read the article

  • How to parse a raw HTTP response?

    - by Ed
    If I have a raw HTTP response as a string: HTTP/1.1 200 OK Date: Tue, 11 May 2010 07:28:30 GMT Expires: -1 Cache-Control: private, max-age=0 Content-Type: text/html; charset=UTF-8 Server: gws X-XSS-Protection: 1; mode=block Connection: close <!doctype html><html>...</html> Is there an easy way I can parse it into an HttpListenerResponse object? Or at least some kind of .NET object, so I don't have to work with raw responses. What I'm doing currently is extracting the header key/value pairs and setting them on the HttpListenerResponse. But some headers can't be set, and then I have to cut out the body of the response and write it to the OutputStream. But the body could be gzipped, or it could be an image, which I can't get to work yet. And some responses contain random characters everywhere, which looks like an encoding problem. It's a lot of trouble. I'm getting a raw response because I'm using SOCKS to send an HTTP request. The program I'm working on is basically an HTTP proxy that can route requests through a SOCKS proxy, like Privoxy does.
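
    A minimal C# sketch of the header/body split, assuming CRLF line endings; it only illustrates the parsing step, and a gzipped or binary body would still need to be kept as raw bytes and decoded separately rather than treated as a string:

      // Split a raw HTTP response string into status line, headers and body.
      using System;
      using System.Collections.Generic;

      static class RawResponseParser
      {
          public static void Parse(string raw)
          {
              // Headers end at the first blank line (CRLF CRLF).
              int split = raw.IndexOf("\r\n\r\n", StringComparison.Ordinal);
              string headerBlock = split >= 0 ? raw.Substring(0, split) : raw;
              string body = split >= 0 ? raw.Substring(split + 4) : string.Empty;

              string[] lines = headerBlock.Split(new[] { "\r\n" }, StringSplitOptions.None);
              string statusLine = lines[0];         // e.g. "HTTP/1.1 200 OK"
              var headers = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
              for (int i = 1; i < lines.Length; i++)
              {
                  int colon = lines[i].IndexOf(':');
                  if (colon > 0)
                      headers[lines[i].Substring(0, colon).Trim()] = lines[i].Substring(colon + 1).Trim();
              }

              Console.WriteLine(statusLine + " with " + headers.Count + " headers, " + body.Length + " body chars");
          }
      }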

    Read the article

  • Reusing service proxies

    - by Hadi Eskandari
    I have a set of web services that I connect to using a Silverlight client. I use proxies generated by "Add service reference" or the SLSVCUTIL.exe tool to connect to these services. So far, I have only used one single service. Now I want to use another service on the same server. The problem is that the first service generated a set of proxy classes for me, and the second service will reuse the same set of classes (plus extra services/classes), e.g. CustomerService.SaveCustomer(Customer customer); OrderService.CheckCustomerLevel(Customer customer); When I add a reference to the second service, I cannot reuse the same namespace for the second one (VS error), and when I use a different namespace, the generated classes, although essentially the same, reside in a different namespace and are therefore distinct, so I end up with two Customer classes in two different namespaces. Any way around this? I just need to have two sets of services reusing the Customer class. I have already tried the "reuse types in assembly / all assemblies" check mark when generating proxy classes, but it seems to have no effect. Any help is greatly appreciated.
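
    One approach worth trying is to move Customer and the other shared data contracts into a class library that the Silverlight project references, then point the proxy generator at that assembly so it reuses those types instead of regenerating them. A rough sketch, assuming SLsvcUtil accepts the same /reference and /namespace switches as svcutil (check your version's help output; the file names and URL are placeholders):

      SLsvcUtil.exe http://server/OrderService.svc?wsdl ^
          /reference:SharedContracts.dll ^
          /namespace:*,MyApp.Services ^
          /out:OrderServiceProxy.cs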

    Read the article

  • Why do HttpWebRequest and IWebProxy work at weird times

    - by Mike Webb
    Another question about Web proxy. Here is my code: IWebProxy Proxya = System.Net.WebRequest.GetSystemWebProxy(); Proxya.Credentials = CredentialCache.DefaultNetworkCredentials; HttpWebRequest rqst = (HttpWebRequest)WebRequest.Create(targetServer); rqst.Proxy = Proxya; rqst.Timeout = 5000; try { rqst.GetResponse(); } catch(WebException wex) { connectErrMsg = wex.Message; proxyworks = false; } This code fails the first time it is called. After that, on successive calls it works sometimes, but not others. It also never hits the catch block. Now the weird part. If I add a MessageBox.Show(msg) call in the first section of code before the GetResponse() call, this all works every time. Here is an example: try { // ========Here is where I make the call and get the response======== System.Windows.Forms.MessageBox.Show("Getting Response"); // ========This makes the whole thing work every time======== rqst.GetResponse(); } catch(WebException wex) { connectErrMsg = wex.Message; proxyworks = false; } I'm baffled about why it is behaving this way. I don't know if the timeout is not working (it's in milliseconds, not seconds, so it should time out after 5 seconds, right?...) or what is going on. The most confusing thing is that the message box call makes it all work. So any help and suggestions on what is happening are appreciated. These are the kind of bugs that drive me absolutely out of my mind.
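
    One common cause of exactly this intermittent pattern is never disposing the WebResponse: each undisposed response keeps a pooled connection open, so later requests stall until earlier ones time out, and the pause introduced by the MessageBox can hide that. A sketch under that assumption, reusing the variables from the question (requires using System.Net;):

      IWebProxy proxy = System.Net.WebRequest.GetSystemWebProxy();
      proxy.Credentials = CredentialCache.DefaultNetworkCredentials;

      HttpWebRequest rqst = (HttpWebRequest)WebRequest.Create(targetServer);
      rqst.Proxy = proxy;
      rqst.Timeout = 5000;   // milliseconds, i.e. 5 seconds

      try
      {
          // Dispose the response so its pooled connection is released immediately.
          using (WebResponse resp = rqst.GetResponse())
          {
              // read (or at least close) the response stream here
          }
          proxyworks = true;
      }
      catch (WebException wex)
      {
          connectErrMsg = wex.Message;
          proxyworks = false;
      }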

    Read the article

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example: 1) The user's browser requests http://www.example.com/files/file1.zip 2) Request goes to server A, based on the DNS A record for example.com. 3) Server A analyzes the request and works out that /files/file1.zip is stored on server B. 4) Server A forwards the request to server B. 5) Server B returns file1.zip directly to the user without going through server A. Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC Address Translation and then pass the request back onto the network and for servers outside the network of server A tunneling will be required. For step 5, I simply need to configure server B, as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4? I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.net. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.
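
    For step 5, the real-server side is well documented in LVS direct-routing setups. A sketch of what server B typically needs, assuming 203.0.113.10 stands in for server A's public IP (a placeholder); interface names vary, and the sysctls should be made persistent in whatever way your distribution expects:

      # Run on server B: accept traffic addressed to the shared IP without answering ARP for it.
      ip addr add 203.0.113.10/32 dev lo
      sysctl -w net.ipv4.conf.all.arp_ignore=1
      sysctl -w net.ipv4.conf.all.arp_announce=2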

    Read the article

  • Squid proxy not serving modified html content

    - by Matthew
    I'm trying to use squid to modify the page content of web page requests. I followed the upside-down-ternet tutorial which showed instructions for how to flip images on pages. I need to change the actual html of the page. I've been trying to do the same thing as in the tutorial, but instead of editing the image I'm trying to edit the html page. Below is a php script I'm using to try to do it. All jpg images get flipped, but the content on the page does not get edited. The edited index.html files written contain the edited content, but the pages the users receive don't contain the edited content. #!/usr/bin/php <?php $temp = array(); while ( $input = fgets(STDIN) ) { $micro_time = microtime(); // Split the output (space delimited) from squid into an array. $temp = split(' ', $input); //Flip jpg images, this works correctly if (preg_match("/.*\.jpg/i", $temp[0])) { system("/usr/bin/wget -q -O /var/www/cache/$micro_time.jpg ". $temp[0]); system("/usr/bin/mogrify -flip /var/www/cache/$micro_time.jpg"); echo "http://127.0.0.1/cache/$micro_time.jpg\n"; } //Don't edit files that are obviously not html. $temp[0] contains url of file to get elseif (preg_match("/(jpg|png|gif|css|js|\(|\))/i", $temp[0], $matches)) { echo $input; } //Otherwise, could be html (e.g. `wget http://www.google.com` downloads index.html) else{ $time = time() . microtime(); //For unique directory names $time = preg_replace("/ /", "", $time); //Simplify things by removing the spaces mkdir("/var/www/cache/". $time); //Create unique folder system("/usr/bin/wget -q --directory-prefix=\"/var/www/cache/$time/\" ". $temp[0]); $filename = system("ls /var/www/cache/$time/"); //Get filename of downloaded file //File is html, edit the content (this does not work) if(preg_match("/.*\.html/", $filename)){ //Get the html file contents $contentfh = fopen("/var/www/cache/$time/". $filename, 'r'); $content = fread($contentfh, filesize("/var/www/cache/$time/". $filename)); fclose($contentfh); //Edit the html file contents $content = preg_replace("/<\/body>/i", "<!-- content served by proxy --></body>", $content); //Write the edited file $contentfh = fopen("/var/www/cache/$time/". $filename, 'w'); fwrite($contentfh, $content); fclose($contentfh); //Return the edited page echo "http://127.0.0.1/cache/$time/$filename\n"; } //Otherwise file is not html, don't edit else{ echo $input; } } } ?>

    Read the article

  • Webservice proxy class generation

    - by kaivalya
    I include the XSD file below: <?xml version="1.0" encoding="utf-8"?> <xs:schema xmlns="http://www.xxxx.com/schemas/2005/06/messages" attributeFormDefault="unqualified" elementFormDefault="qualified" targetNamespace="http://www.xxxx.com/schemas/2005/06/messages" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:include schemaLocation="xxxxCommonTypes.xsd" /> <xs:element name="HotelDetailRQ"> <xs:annotation> <xs:documentation>Request data to obtain detailed information for the specified hotel product.</xs:documentation> </xs:annotation> <xs:complexType> <xs:complexContent mixed="false"> <xs:extension base="CoreRequest"> <xs:sequence> <xs:element name="HotelCode"> <xs:annotation> <xs:documentation>Hotel code to obtain detailed information.</xs:documentation> </xs:annotation> <xs:simpleType> <xs:restriction base="xs:string"> <xs:minLength value="1" /> <xs:maxLength value="10" /> </xs:restriction> </xs:simpleType> </xs:element> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> </xs:element> </xs:schema> into a WSDL file via: <xsd:schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://axis.frontend.hydra.xxxx.com"> <xsd:import schemaLocation="C:\Users\xxxx\HotelDetailRQ.xsd" namespace="http://www.xxxx.com/schemas/2005/06/messages" /> </xsd:schema> The problem is that when I add the WSDL file to Visual Studio as a web reference, it does not generate the HotelDetailRQ proxy class in the reference.cs file, so I am unable to use a generated HotelDetailRQ class. I am not experienced in using XSD or WSDL files. Can you point me to where I might be making a mistake here?
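
    As a cross-check, the classes can be generated straight from the schemas with xsd.exe, which quickly shows whether the schema set itself (including xxxxCommonTypes.xsd, where CoreRequest must be defined) resolves cleanly. A sketch, with an arbitrarily chosen output namespace:

      rem Run from a Visual Studio command prompt with both .xsd files in the current directory.
      xsd.exe xxxxCommonTypes.xsd HotelDetailRQ.xsd /classes /namespace:Hydra.Messages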

    Read the article

  • ssh + tinyproxy: poor performance

    - by Paul
    I am currently in China and I would like to still visit some blocked websites (facebook, youtube). I have a VPS in the USA and I have installed tinyproxy on it. I log in on my VPS with SSH port-forwarding and I have configured my browser appropriately. Everything works more or less: I can surf to those websites, but everything is unusually slow and sometimes data transfer stops abruptly. This probably has to do with the fact that I see some errors in my shell on the VPS like: channel 6: open failed: connect failed: Also in the log-file of tinyproxy I see some bad things: ERROR Sep 06 14:52:14 [28150]: getpeer_information: getpeername() error: Transport endpoint is not connected ERROR Sep 06 14:52:15 [28153]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28168]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28151]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:15 [28143]: readbuff: recv() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:17 [28147]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:23 [28137]: writebuff: write() error "Connection reset by peer" on file descriptor 7 ERROR Sep 06 14:52:26 [28168]: getpeer_information: getpeername() error: Transport endpoint is not connected ERROR Sep 06 14:52:27 [28186]: read_request_line: Client (file descriptor: 7) closed socket before read. ERROR Sep 06 14:52:31 [28160]: getpeer_information: getpeername() error: Transport endpoint is not connected
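
    For reference, the tunnel described here usually amounts to something like the following, assuming tinyproxy listens on its default port 8888 on the VPS and the browser is configured to use localhost:8888 as its HTTP proxy (user, host and ports are placeholders):

      # Forward local port 8888 to tinyproxy on the VPS over SSH.
      ssh -N -L 8888:localhost:8888 user@vps.example.com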

    Read the article

  • https root webapp in jboss 5 through apache mod_proxy with ajp

    - by Jesus
    Hi, I have Apache 2.2.3 and JBoss 5.1 installed on my server. In Apache I have 2 apps in PHP+MySQL, and in JBoss the root app (/) is a Liferay portal. I used mod_proxy to reach the JBoss app: <VirtualHost server_ip:80> ServerName intranet.mycompany.com ProxyPreserveHost On ProxyPass / balancer://jbosscluster/ ProxyPassReverse / http://server_ip:8080 </VirtualHost> But now I have to enable HTTPS only on intranet.mycompany.com, and I don't know where to configure the SSL: in Apache, in JBoss, or both. I tried in JBoss in server.xml, generating a self-signed certificate with keytool, but Apache doesn't forward to https://server_ip:8443. I would appreciate your help.
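
    One common arrangement is to terminate SSL in Apache and keep proxying plain HTTP to JBoss, so JBoss needs no certificate at all. A sketch of the additional virtual host, assuming mod_ssl is loaded and using placeholder certificate paths (the balancer://jbosscluster definition stays wherever it is defined today):

      <VirtualHost server_ip:443>
          ServerName intranet.mycompany.com
          SSLEngine On
          # Placeholder certificate paths; point these at your real certificate and key.
          SSLCertificateFile /etc/pki/tls/certs/intranet.crt
          SSLCertificateKeyFile /etc/pki/tls/private/intranet.key
          ProxyPreserveHost On
          ProxyPass / balancer://jbosscluster/
          ProxyPassReverse / http://server_ip:8080
      </VirtualHost>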

    Read the article

  • ProxyPass: discard body data

    - by Kay
    I have some rules like <Location /xyz> ProxyPass http://example.com/abc ... </Location> I want to accept requests to http://mypage.lan/xyz/123 and deliver the data of http://example.com/abc/123. I need to accept POST requests, but I don't want to send the body content to example.com. I would like to send a GET request instead, but a POST request with Content-Length: 0 would be fine, too. Is it possible to configure Apache 2 not to pass the request body on to the backend?

    Read the article

  • Varnish VCL Reload Fails After Adding Second Backend

    - by Andy
    I have been running Varnish on my production server successfully for several weeks now. Now I'm trying to configure Varnish to use a second backend for certain requests. My original working VCL (/etc/varnish/default.vcl) begins like this: backend default { .host = "127.0.0.1"; .port = "8080"; } ...rest of VCL... And I'm changing it to: backend default { .host = "127.0.0.1"; .port = "8080"; } backend backend2 { .host = "12.34.56.78"; .port = "80"; } ...rest of VCL... When I reload the VCL file, I get the following: Command failed with error code 106 Failed to reload /etc/varnish/default.vcl. Any idea what the error could be, or how I can get more information on the problem?
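
    Error code 106 from the management interface generally means the new VCL failed to compile or load, and the compiler's own message says why. A quick way to see it is to compile the file by hand (the path matches the question; -C prints the generated C code, or the full error if compilation fails):

      varnishd -C -f /etc/varnish/default.vcl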

    Read the article

  • Nginx, Varnish, ESI - Will that work?

    - by Roland
    I've got several backends (one is nginx+passenger) to combine via ESI. Since I don't want to go without gzip/deflate and SSL, Varnish can't do the job out of the box. So I thought about the following setup: http://img693.imageshack.us/img693/38/esinginx.png What do you think? Overkill?
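
    Since the linked diagram isn't described in the text, for concreteness this is the usual shape of such a stack: nginx terminating SSL and handling gzip in front of Varnish, which does the ESI assembly against the backends. A sketch with assumed ports and placeholder certificate paths:

      server {
          listen 443 ssl;
          server_name example.com;
          ssl_certificate     /etc/nginx/ssl/example.crt;   # placeholder
          ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder
          gzip on;

          location / {
              proxy_pass http://127.0.0.1:6081;   # Varnish, assumed to listen on 6081
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }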

    Read the article

  • API Management Solutions

    - by Mike
    I'm currently building an API and am looking for a tool to allow me to monitor (in a GUI) and rate limit usage. I've come across a few enterprise solutions including: http://apigee.com/ http://mashery.com/ http://www.layer7tech.com/ http://www.3scale.net/ The Apigee enterprise plan is exactly what I'm looking for, but plans start at $3000 / month, which is out of my price range. The other solutions are all either too expensive or do not provide the solution I'm looking for. This led me to look at some open source options including: http://apiaxle.com/ https://code.google.com/p/varnish-apikey/wiki/UsageManual Varnish seems like a fairly complete solution; however, I would need to build a GUI to visualise the data. My final option would be to build a solution from scratch using EventMachine and Ruby. Any advice?

    Read the article

  • using squid for apache?

    - by ajsie
    So I have set up Apache serving my PHP pages. I read about Squid but don't understand why/how I should use it to speed up my web server. From what I've learned, Squid is located in the same network (or another) and caches content requested by the web browsers; when another web browser wants the same page, Squid returns the locally cached copy, so it never sends a request to the Apache server (faster response time for the client, and reduced load for the server). So it seems that Squid is for the client side (web browser) and has nothing to do with the server side (Apache). But then some people tell others how they have sped up Apache using Squid, so I'm confused. Could Squid be used on the server side too? And how would it work?
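
    Yes: run as a reverse proxy ("accelerator"), Squid sits in front of Apache on the server side and caches responses for every client, not just one browser. A minimal squid.conf sketch, assuming Apache has been moved to port 8080 on the same host (the directives are the standard accelerator-mode ones, but check them against your Squid version):

      # Squid answers on port 80 and fetches cache misses from Apache on 8080.
      http_port 80 accel defaultsite=www.example.com
      cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apacheOrigin
      acl our_site dstdomain www.example.com
      http_access allow our_site
      cache_peer_access apacheOrigin allow our_site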

    Read the article

  • httpd.conf configuration - for internal/external access

    - by tom smith
    hey. after a lot of trail/error/research, i've decided to post here in the hopes that i can get clarification on what i've screwed up... i've got a situation where i have multiple servers behind a router/firewall. i want to be able to access the sites i have from an internal and external url/address, and get the same site. i have to use portforwarding on the router, so i need to be able to use proxyreverse to redirect the user to the approriate server, running the apache/web app... my setup the external urls joomla.gotdns.com forge.gotdns.com both of these point to my router's external ip address (67.168.2.2) (not really) the router forwards port 80 to my server lserver6 192.168.1.56 lserver6 - 192.168.1.56 lserver9 - 192.168.1.59 lserver6 - joomla app lserver9 - forge app i want to be able to have the httpd process (httpd.conf) configured on lserver6 to be able to allow external users accessing the system (foo.gotdns.com) be able to access the joomla app on lserver6 and the same for the forge app running on lserver9 at the same time, i would also like to be able to access the apps from the internal servers, so i'd need to be able to somehow configure the vhost setup/proxyreverse setup to handle the internal access... i've tried setting up multiple vhosts with no luck.. i've looked at the different examples online.. so there must be something subtle that i'm missing... the section of my httpd.conf file that deals with the vhost is below... if there's something else that's needed, let me know and i can post it as well.. thanks -tom ##joomla - file /etc/httpd/conf.d/joomla.conf Alias /joomla /var/www/html/joomla <Directory /var/www/html/joomla> </Directory> # Use name-based virtual hosting. #NameVirtualHost *:80 # NOTE: NameVirtualHost cannot be used without a port specifier # (e.g. :80) if mod_ssl is being used, due to the nature of the # SSL protocol. # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for requests without a known # server name. #<VirtualHost *:80> # ServerAdmin [email protected] # DocumentRoot /www/docs/dummy-host.example.com # ServerName dummy-host.example.com # ErrorLog logs/dummy-host.example.com-error_log # CustomLog logs/dummy-host.example.com-access_log common #</VirtualHost> NameVirtualHost 192.168.1.56:80 <VirtualHost 192.168.1.56:80> #ServerAdmin [email protected] #DocumentRoot /var/www/html #ServerName lserver6.tmesa.com #ServerName fforge.tmesa.com ServerName fforge.gotdns.com:80 #ErrorLog logs/dummy-host.example.com-error_log #CustomLog logs/dummy-host.example.com-access_log common #ProxyRequests Off ProxyPass / http://192.168.1.81:80/ ProxyPassReverse / http://192.168.1.81:80/ </VirtualHost> <VirtualHost 192.168.1.56:80> #ServerAdmin [email protected] DocumentRoot /var/www/html/joomla #ServerName lserver6.tmesa.com #ServerName fforge.tmesa.com ServerName 192.168.1.56:80 #ErrorLog logs/dummy-host.example.com-error_log #CustomLog logs/dummy-host.example.com-access_log common #ProxyRequests Off </VirtualHost>
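
    A name-based layout along these lines is the usual fix: one NameVirtualHost address, one VirtualHost per public hostname, each either serving locally or proxying to the box that runs the app, plus ServerAlias entries so the internal names hit the same vhosts. This is only a sketch: it assumes the forge app answers on plain HTTP on lserver9, while the config above points at 192.168.1.81, so adjust the backend address to whichever is actually correct:

      NameVirtualHost *:80

      # forge: proxied to the server that runs it, reachable by external and internal names.
      <VirtualHost *:80>
          ServerName forge.gotdns.com
          ServerAlias lserver9 lserver9.tmesa.com
          ProxyPass / http://192.168.1.59/
          ProxyPassReverse / http://192.168.1.59/
      </VirtualHost>

      # joomla: served locally on lserver6.
      <VirtualHost *:80>
          ServerName joomla.gotdns.com
          ServerAlias lserver6 lserver6.tmesa.com
          DocumentRoot /var/www/html/joomla
      </VirtualHost>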

    Read the article

  • How to configure apache and mod_proxy_ajp in order to forward ssl client certificate

    - by giovanni.cuccu
    I've developed a Java application that needs an SSL client certificate, and in the staging environment with Apache 2.2 and mod_jk it is working fine. In production the configuration is not using mod_jk but mod_proxy_ajp. I'm looking for an Apache configuration example that configures SSL and mod_proxy_ajp to send the SSL client certificate to the Java application server (which listens with the AJP protocol). Thanks a lot
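
    mod_proxy_ajp does not have an equivalent of mod_jk's certificate-forwarding options, so a common workaround is to export the client certificate into a request header on the Apache side and read that header in the Java application. A sketch of the Apache side, assuming mod_ssl and mod_headers are loaded; the header name is arbitrary, and depending on the Apache version the %{...}e environment-variable form may be needed instead of %{...}s:

      <Location /myapp>
          SSLVerifyClient require
          SSLOptions +StdEnvVars +ExportCertData
          # Copy the PEM client certificate into a request header for the backend to parse.
          RequestHeader set X-Client-Cert "%{SSL_CLIENT_CERT}s"
          ProxyPass ajp://localhost:8009/myapp
      </Location>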

    Read the article

  • Load balancing with different nginx location context and backend server context

    - by robinmag
    Hi, I used nginx and the upstream module for load balancing with the following config upstream lb { server 127.0.0.1:8080; server 127.0.0.1:8081; } server { listen 88; server_name localhost; location /cas/ { proxy_pass http://lb; proxy_redirect off; proxy_connect_timeout 2; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } The problem is that the "location /context/" has to match the context of the backend server, so when I request localhost/context/index.html, nginx routes it to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html. Is it possible to have a different backend context and nginx location? For example, with "location /", nginx would route the request to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html. Thank you.
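
    nginx can rewrite the prefix itself: when proxy_pass carries a URI part, the portion of the request matching the location is replaced by that URI. A sketch for the example in the question, mapping / on nginx to /context/ on the backends:

      location / {
          # "/" is replaced by "/context/", so /index.html becomes /context/index.html upstream.
          proxy_pass http://lb/context/;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }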

    Read the article

  • What is the purpose of netcat's "-w timeout" option when ssh tunneling?

    - by jrdioko
    I am in the exact same situation as the person who posted another question: I am trying to tunnel SSH connections through a gateway server instead of having to SSH into the gateway and manually SSH again to the destination server from there. I am trying to set up the solution given in the accepted answer there, a ~/.ssh/config that includes: host foo User webby ProxyCommand ssh a nc -w 3 %h %p host a User johndoe However, when I try to ssh foo, my connection stays alive for 3 seconds and then dies with a Write failed: Broken pipe error. Removing the -w 3 option solves the problem. What is the purpose of that -w 3 in the original solution, and why is it causing a Broken pipe error when I use it? What is the harm in omitting it?
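
    For what it's worth, with OpenSSH 5.4 or newer the nc hop (and the -w question) can be avoided entirely by letting ssh forward the connection itself; a sketch of the equivalent ~/.ssh/config:

      Host foo
          User webby
          ProxyCommand ssh -W %h:%p a
      Host a
          User johndoe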

    Read the article

  • Varnish cached 'MISS status' object?

    - by Hesey
    My site uses nginx, varnish, jboss. And some url will be cached by varnish, it depends a response header from jboss. The first time, jboss tells varnish doesn't cache this url. Then the second request, jboss tells varnish to cache, but varnish won't cache it. I used varnishstat and found that 1 object is cached in Varnish, is that the 'MISS status' object? I remove grace code and the problem still exists. When I PURGE this url, varnish works fine and cache the url then. But I can't PURGE so much urls every startup time, how can I fix this? The configuration: acl local { "localhost"; } backend default { .host = "localhost"; .port = "8080"; .probe = { .url = "/preload.htm"; .interval = 3s; .timeout = 1s; .window = 5; .threshold = 3; } } sub vcl_deliver { if (req.request == "PURGE") { remove resp.http.X-Varnish; remove resp.http.Via; remove resp.http.Age; remove resp.http.Content-Type; remove resp.http.Server; remove resp.http.Date; remove resp.http.Accept-Ranges; remove resp.http.Connection; set resp.http.keeplive="true"; } else { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } } } sub vcl_recv { if(req.url ~ "/check.htm"){ error 404 "N"; } if( req.http.host ~ "store." || req.request == "POST"){ return (pipe); } if (req.backend.healthy) { set req.grace = 30s; } else { set req.grace = 10m; } set req.http.x-cacheKey = "0"; if(req.url ~ "/shop/view_shop.htm" || req.url ~ "/shop/viewShop.htm" || req.url ~ "/index.htm"){ if(req.url ~ "search=y"){ set req.http.x-cacheKey = req.http.host + "/search.htm"; }else if(req.url !~ "bbs=y" && req.url !~ "shopIntro=y" && req.url !~ "shop_intro=y"){ set req.http.x-cacheKey = req.http.host + "/index.htm"; } }else if(req.url ~ "/search"){ set req.http.x-cacheKey = req.http.host + "/search.htm"; } if( req.http.x-cacheKey == "0" && req.url !~ "/i/"){ return (pipe); } if (req.request == "PURGE") { if (client.ip ~ local) { return (lookup); } else { error 405 "Not allowed."; } } if (req.url ~ "/i/") { set req.http.x-shop-url = req.original_url; }else { unset req.http.cookie; } } sub vcl_fetch { set beresp.grace = 10m; #unset beresp.http.x-cacheKey; if (req.url ~ "/i/" || req.url ~ "status" ){ set beresp.ttl = 0s; /* ttl=0 for dynamic content */ } else if(beresp.http.x-varnish-cache != "1"){ set beresp.do_esi = true; /* Do ESI processing */ set beresp.ttl = 0s; unset beresp.http.set-cookie; } else { set beresp.do_esi = true; /* Do ESI processing */ set beresp.ttl = 1800s; unset beresp.http.set-cookie; } } sub vcl_hash { hash_data(req.http.x-cacheKey); return (hash); } sub vcl_error { if (req.request == "PURGE") { return (deliver); } else { set obj.http.Content-Type = "text/html; charset=gbk"; synthetic {"<!--ve-->"}; return (deliver); } } sub vcl_hit { if (req.request == "PURGE") { set obj.ttl = 0s; error 200 "Purged."; } } sub vcl_miss { if (req.request == "PURGE") { error 404 "N"; } }

    Read the article

  • apache2 mod_proxy configuration for single threaded servers

    - by The Doctor What
    I have multiple instances of thin running behind Apache 2.2's mod_proxy. The problem is that a couple of pages, by design, take a while to run. If I just configure Apache the obvious way (add the thin URLs as BalancerMember lines with no other configuration), then when someone requests a long-running page and enough other requests arrive while it is running, someone eventually gets routed to that same thin server and has to wait. Does anyone have best practices or a suggested configuration for mod_proxy and thin? Ciao!
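
    Since each thin worker handles one request at a time, one approach is to cap Apache at a single connection per worker so the balancer prefers an idle member instead of queueing behind a busy one. A sketch with assumed ports; max, acquire and timeout are standard mod_proxy worker parameters, but the values need tuning (and max is tracked per Apache process under prefork):

      <Proxy balancer://thincluster>
          # max=1: one connection per thin worker; acquire: ms to wait for a free connection.
          BalancerMember http://127.0.0.1:3000 max=1 acquire=100 timeout=60
          BalancerMember http://127.0.0.1:3001 max=1 acquire=100 timeout=60
          BalancerMember http://127.0.0.1:3002 max=1 acquire=100 timeout=60
      </Proxy>
      ProxyPass / balancer://thincluster/
      ProxyPassReverse / balancer://thincluster/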

    Read the article

  • Passing IP address with mod_proxy

    - by Konrad Garus
    I have Apache with mod_proxy passing requests to Tomcat. The trouble is, when I get the client IP address associated with a request in the web app hosted on Tomcat, it always returns 127.0.0.1. Is it possible to have Apache pass the original IP address on to Tomcat?
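
    When proxying over HTTP, mod_proxy adds the original client address to the X-Forwarded-For request header, so the simplest option is to read that header in the web app; alternatively, Tomcat's RemoteIpValve can be configured so getRemoteAddr() itself returns the forwarded address. A Java sketch of the header approach (request is the HttpServletRequest; the header may hold a comma-separated chain of proxies, so take the first entry):

      // Fall back to getRemoteAddr() when no proxy header is present.
      String forwarded = request.getHeader("X-Forwarded-For");
      String clientIp = (forwarded != null && !forwarded.isEmpty())
              ? forwarded.split(",")[0].trim()   // first hop is the original client
              : request.getRemoteAddr();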

    Read the article

  • Varnish configuration to only cache for non-logged in users

    - by davidsmalley
    I have a Ruby on Rails application fronted by varnish+nginx. As most of the sites content is static unless you are a logged in user, I want to cache the site heavily with varnish when a user is logged out but only to cache static assets when they are logged in. When a user is logged in they will have the cookie 'user_credentials' present in their Cookie: header, in addition I need to skip caching on /login and /sessions in order that a user can get their 'user_credentials' cookie in the first place. Rails by default does not set a cache friendly Cache-control header, but my application sets a "public,s-max-age=60" header when a user is not logged in. Nginx is set to return 'far future' expires headers for all static assets. The configuration I have at the moment is totally bypassing the cache for everything when logged in, including static assets — and is returning cache MISS for everything when logged out. I've spent hours going around in circles and here is my current default.vcl director rails_director round-robin { { .backend = { .host = "xxx.xxx.xxx.xxx"; .port = "http"; .probe = { .url = "/lbcheck/lbuptest"; .timeout = 0.3 s; .window = 8; .threshold = 3; } } } } sub vcl_recv { if (req.url ~ "^/login") { pipe; } if (req.url ~ "^/sessions") { pipe; } # The regex used here matches the standard rails cache buster urls # e.g. /images/an-image.png?1234567 if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") { unset req.http.cookie; lookup; } else { if (req.http.cookie ~ "user_credentials") { pipe; } } # Only cache GET and HEAD requests if (req.request != "GET" && req.request != "HEAD") { pipe; } } sub vcl_fetch { if (req.url ~ "^/login") { pass; } if (req.url ~ "^/sessions") { pass; } if (req.http.cookie ~ "user_credentials") { pass; } else { unset req.http.Set-Cookie; } # cache CSS and JS files if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") { unset req.http.Set-Cookie; } if (obj.status >=400 && obj.status <500) { error 404 "File not found"; } if (obj.status >=500 && obj.status <600) { error 503 "File is Temporarily Unavailable"; } } sub vcl_deliver { if (obj.hits > 0) { set resp.http.X-Cache = "HIT"; } else { set resp.http.X-Cache = "MISS"; } }

    Read the article

  • Two Ubuntu 12.04 servers, one Netgear router

    - by RussellHarrower
    OK, so I have 192.168.x.14 with port 80 open, and I have set the Netgear router to forward there. I also have 192.168.x.12 with Apache running, but when I try to connect with Chrome to the website hosted on .12, it can't be found. How do I set up .14 to also serve .12? /nagios is running on .12 while /munin is running on .14. Thank you for helping. I know about Squid, but I don't know whether to use that or something else.
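
    One straightforward option is to make the Apache on .14 a reverse proxy for the paths that live on .12, so everything stays reachable through the single box the router forwards to. A sketch that keeps the question's 192.168.x.12 placeholder (substitute the real third octet) and assumes mod_proxy and mod_proxy_http are enabled:

      # On 192.168.x.14: forward /nagios to the Apache instance on .12.
      ProxyPass /nagios http://192.168.x.12/nagios
      ProxyPassReverse /nagios http://192.168.x.12/nagios
      # /munin is served locally on .14, so it needs no proxy rule.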

    Read the article
