Search Results

Search found 37180 results on 1488 pages for 'proxy pass request failed'.


  • How to log size of cookies in request header with apache

    - by chrisst
    We have an issue on our site with cookies growing too large. We have already expanded the acceptable header size and throttled the cookie sizes for now, but I'd like to figure out what the average client's header sizes are, specifically the cookies. I've created an apache log that captures the cookies set on each request:

        LogFormat "%{Cookie}i" cookies

    But this just spits out the entire contents of all cookies in the header. Is there a way to have apache log just the size (or just the length of the string) per request?
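
    As far as I know, mod_log_config has no built-in length specifier, so one option is to keep the cookie-only log above and post-process it. A minimal sketch (the script name is hypothetical):

        # cookie_sizes.py -- summarise the cookie-only log written by
        # LogFormat "%{Cookie}i" cookies; each input line is one Cookie header.
        import sys

        sizes = [len(line.rstrip("\n")) for line in sys.stdin if line.strip()]
        if sizes:
            print(f"requests:         {len(sizes)}")
            print(f"avg cookie bytes: {sum(sizes) / len(sizes):.1f}")
            print(f"max cookie bytes: {max(sizes)}")

    Run it as, for example: python3 cookie_sizes.py < cookies.log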

    Read the article

  • Wake On LAN on request [closed]

    - by honzas
    Hi, I have a small home network with a router capable of running OpenWRT. Is there some utility or firewall rule that can be used to Wake On LAN on request? What I mean: if I want to access my media centre (using, for example, SSH or HTTP) and it is suspended, is it possible to catch the ICMP packet (saying the machine is offline), send the WOL packet to wake the machine, and resend the SSH or HTTP request? Thanks
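
    For reference, the wake itself is just a UDP "magic packet": six 0xFF bytes followed by the target MAC repeated sixteen times. The trigger-on-traffic part is router-specific, but a minimal sketch of the sender (assuming the OpenWRT box can run Python; the MAC is a placeholder for the media centre's NIC):

        # wol.py -- send a Wake-on-LAN magic packet (sketch)
        import socket

        def send_wol(mac, broadcast="255.255.255.255", port=9):
            mac_bytes = bytes.fromhex(mac.replace(":", ""))
            packet = b"\xff" * 6 + mac_bytes * 16   # 6x 0xFF, then MAC x16
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
                s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
                s.sendto(packet, (broadcast, port))

        send_wol("00:11:22:33:44:55")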

    Read the article

  • CNAME - what will the URL be in the HTTP request

    - by Traveller
    A newbie question regarding DNS records. Let's say I've configured:

        abc.example.com - A 10.x.x.x
        xyz.example.com - CNAME abc.example.com

    When a user makes an HTTP request for xyz.example.com, what happens when the request reaches the 10.x.x.x server? Will the URL be abc.example.com or xyz.example.com? (Trying to find out if the virtual host in apache needs to be updated.) Thanks much
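
    Worth noting: CNAME resolution happens entirely in DNS, so the browser still sends Host: xyz.example.com; the server never sees the A-record name unless the user typed it. That means the Apache vhost does need to match the CNAME name, e.g. (a sketch using the names from the question):

        <VirtualHost *:80>
            ServerName abc.example.com
            ServerAlias xyz.example.com   # clients arriving via the CNAME send this in Host:
            DocumentRoot /var/www/example
        </VirtualHost>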

    Read the article

  • Only Execute Code on Certain Requests Java

    - by BillPull
    I am building a little API for class, and the teacher supplied us with a link to a tutorial that provided a simple webserver implementing Runnable. I have already written some code that parses the arguments (or at least gets me the request string) and some code that returns some simple XML. However, I think certain requests, like the one for the favicon, are messing up my code. I wrapped that in an if/else but it does not seem to be working.

        package server;

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.Socket;
        import java.util.*;
        import java.io.*;
        import java.net.*;
        import parkinglots.*;

        public class WorkerRunnable implements Runnable {

            protected Socket clientSocket = null;
            protected String serverText = null;

            public WorkerRunnable(Socket clientSocket, String serverText) {
                this.clientSocket = clientSocket;
                this.serverText = serverText;
            }

            public Boolean authenticateAPI(String key) {
                // Authenticate Key against Stored Keys
                // TODO: Create Stored Keys and Compare
                return true;
            }

            public void run() {
                try {
                    InputStream input = clientSocket.getInputStream();
                    OutputStream output = clientSocket.getOutputStream();
                    long time = System.currentTimeMillis();
                    // TODO: Parse args and output different formats and Authentication
                    // Parse URL Arguments
                    BufferedReader in = new BufferedReader(
                        new InputStreamReader(clientSocket.getInputStream(), "8859_1"));
                    String request = in.readLine();
                    // Server gets Favicon Request so skip that and goto args
                    System.out.println(request);
                    if (request != "GET /favicon.ico HTTP/1.1" && request != "GET / HTTP/1.1" && request != null) {
                        String format = "", apikey = "";
                        System.out.println("I am Here");
                        String request_location = request.split(" ")[1];
                        String request_args = request_location.replace("/", "");
                        request_args = request_args.replace("?", "");
                        String[] queries = request_args.split("&");
                        System.out.println(queries[0]);
                        for (int i = 0; i < queries.length; i++) {
                            if (queries[i] == "format") {
                                format = queries[i].split("=")[1];
                            } else if (queries[i] == "apikey") {
                                apikey = queries[i].split("=")[1];
                            }
                        }
                        if (apikey == "") { apikey = "None"; }
                        if (format == "") { format = "xml"; }
                        Boolean auth = authenticateAPI(apikey);
                        if (auth) {
                            if (format == "xml") {
                                // Retrieve XML Document
                                String xml = LotFromDB.getParkingLotXML();
                                output.write((xml).getBytes());
                            } else {
                                // Retrieve JSON
                                String json = LotFromDB.getParkingLotJSON();
                                output.write((json).getBytes());
                            }
                        } else {
                            output.write(("Access Denied - User is Not Authenticated").getBytes());
                        }
                    } else {
                        output.write(("Access Denied Must Pass API Key").getBytes());
                    }
                    output.close();
                    input.close();
                    System.out.println("Request processed: " + time);
                } catch (IOException e) {
                    // report exceptions
                    e.printStackTrace();
                }
            }
        }

    Console output I get:

        I am Here
        format=json
        Request processed: 1333516648331
        GET /favicon.ico HTTP/1.1
        I am Here
        favicon.ico
        Request processed: 1333516648332

    It always returns the XML as well. This is my first exposure to writing a web server and dealing with networking in Java, which frustrates me a lot in general, so any suggestions here are very appreciated.
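
    Not from the original post, but the likely root cause: in Java, == and != on String compare object references, not contents, so request != "GET /favicon.ico HTTP/1.1" is effectively always true and queries[i] == "format" is always false (which also explains why format always stays "xml"). Note too that queries[i] holds "format=json", so the key has to be split out before comparing. A minimal sketch of the corrected checks:

        // Compare string contents with equals()/startsWith(), never ==/!=
        if (request != null
                && !request.startsWith("GET /favicon.ico")
                && !request.startsWith("GET / HTTP")) {
            for (String q : queries) {
                String[] kv = q.split("=", 2);   // "format=json" -> ["format", "json"]
                if (kv.length == 2 && kv[0].equals("format")) {
                    format = kv[1];
                } else if (kv.length == 2 && kv[0].equals("apikey")) {
                    apikey = kv[1];
                }
            }
            // ... rest of the handling unchanged
        }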

    Read the article

  • Ubuntu 12.10 not updating. Failed to download repository information

    - by vinay
    I recently tried to update using the update manager. The update stopped and I received the error message:

        Failed to download repository information
        W: Failed to fetch http://ppa.launchpad.net/deluge-team/ppa/ubuntu/dists/quantal/main/source/Sources 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/deluge-team/ppa/ubuntu/dists/quantal/main/binary-i386/Packages 404 Not Found
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    What is the problem?
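
    Judging by the 404s, the deluge-team PPA simply has no packages built for quantal, so apt cannot fetch its index. Assuming that is the cause, disabling the PPA should let the update proceed:

        sudo add-apt-repository --remove ppa:deluge-team/ppa
        sudo apt-get update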

    Read the article

  • Are you a SQLBits attendee? Get a discount for the PASS Europe SQL 2008 R2 Launch

    - by simonsabin
    PASS have given us a number of prizes for SQLBits. We have registrations for PASS Europe and PASS North America, as well as DVD sets of the sessions from last year's North America summit, to give away. Not only that: if you want to go to PASS Europe and you are a SQLBits user, you can get a promotion code that not only gives you the best price but also raises money for SQLBits. To get the promotion code, log in and then visit the community page http://www.sqlbits.com/about/Community.aspx

    Read the article

  • Update jQuery Progressbar with JSON Response in an Ajax Request

    - by Vincent
    All, I have an AJAX request which makes a JSON request to a server to get the sync status. The JSON request and responses are as follows. I want to display a jQuery UI progressbar and update its value according to the percentage returned in the getStatus JSON response. If the status is "insync", the progressbar should not appear and a message should be displayed instead, e.g. "Server is in Sync". How can I do this?

        //JSON Request to getStatus
        {
          "header": { "type": "request" },
          "payload": [
            {
              "data": null,
              "header": { "action": "load" }
            }
          ]
        }

        //JSON Response of getStatus (when status is not 100%)
        {
          "header": { "type": "response", "result": 400 },
          "payload": [
            {
              "header": { "result": 400 },
              "data": { "status": "pending", "percent": 20 }
            }
          ]
        }

        //JSON Response of getStatus (when percent is 100%)
        {
          "header": { "type": "response", "result": 400 },
          "payload": [
            {
              "header": { "result": 400 },
              "data": { "status": "insync" } 
            }
          ]
        }
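
    A sketch of the polling side, assuming the response shape above, a #progressbar div, a #status div, and a /getStatus endpoint (all element and endpoint names are illustrative), using the standard jQuery UI progressbar widget:

        var statusRequest = {
            header: { type: 'request' },
            payload: [{ data: null, header: { action: 'load' } }]
        };

        function pollStatus() {
            $.ajax({
                url: '/getStatus',                    // placeholder endpoint
                type: 'POST',
                contentType: 'application/json',
                data: JSON.stringify(statusRequest),
                dataType: 'json',
                success: function (resp) {
                    var data = resp.payload[0].data;
                    if (data.status === 'insync') {
                        $('#progressbar').hide();     // no bar when in sync
                        $('#status').text('Server is in Sync');
                    } else {
                        $('#status').empty();
                        $('#progressbar').show().progressbar({ value: data.percent });
                        setTimeout(pollStatus, 2000); // keep polling while pending
                    }
                }
            });
        }
        pollStatus();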

    Read the article

  • How to fix Jersey POST request parameters warning?

    - by Brabster
    I'm building a very simple REST API using Jersey, and I've got a warning in my log files that I'm not sure about. WARNING: A servlet POST request, to the URI http://myserver/mycontext/myapi/users/12345?action=delete, contains form parameters in the request body but the request body has been consumed by the servlet or a servlet filter accessing the request parameters. Only resource methods using @FormParam will work as expected. Resource methods consuming the request body by other means will not work as expected. My webapp only has the Jersey servlet defined, mapped to /myapi/* How can I stop these warnings?

    Read the article

  • Flex: HTTP request error #2032

    - by alexey
    In a Flex 3 application I use the HTTPService class to make requests to the server:

        var http:HTTPService = new HTTPService();
        http.method = 'POST';
        http.url = hostUrl;
        http.resultFormat = 'e4x';
        http.addEventListener(ResultEvent.RESULT, ...);
        http.addEventListener(FaultEvent.FAULT, ...);
        http.send(params);

    The application has a Comet architecture, so it makes long-running requests; while waiting for a response to such a request, other requests can be made concurrently. The application works in most cases, but sometimes some clients get an HTTP request error executing the long-running request:

        faultCode: Server.Error.Request
        faultString: 'HTTP request error'
        faultDetail: 'Error: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032"]. URL: 'http://example.com/ws'

    I think it depends on the user's browser. Any ideas?

    Read the article

  • How to cancel/abort jquery ajax request

    - by user556673
    I have an ajax request which happens every 5 seconds. The problem is that if the previous request has not completed by the time the next one is due, I need to abort it and make a new request. My code is something like this; how can I resolve this issue?

        $(document).ready(function () {
            var fn = function () {
                $.ajax({
                    url: 'ajax/progress.ftl',
                    success: function (data) {
                        // do something
                    }
                });
            };
            var interval = setInterval(fn, 5000); // every 5 seconds
        });

    Thank you.
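
    One common pattern, sketched here: keep the jqXHR handle that $.ajax() returns and call .abort() on it before starting the next poll (abort also fires the complete callback):

        var xhr = null;
        var fn = function () {
            if (xhr) {
                xhr.abort();              // drop the still-running request, if any
            }
            xhr = $.ajax({
                url: 'ajax/progress.ftl',
                success: function (data) {
                    // do something
                },
                complete: function () {
                    xhr = null;           // finished (or aborted), nothing to cancel
                }
            });
        };
        setInterval(fn, 5000);            // every 5 seconds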

    Read the article

  • Detect aborted connection during ASIO request

    - by Tim Sylvester
    Is there an established way in the asio framework to determine whether the other end of a TCP connection has closed, without sending any data? Using Boost.asio for a server process: if the client times out or otherwise disconnects before the server has responded to a request, the server doesn't find this out until it has finished the request and generated a response to send, at which point the send immediately produces a connection-aborted error. For some long-running requests, this can lead to clients cancelling and retrying over and over, piling up many instances of the same request running in parallel, making them take even longer and "snowballing" into an avalanche that makes the server unusable. Essentially, hitting F5 over and over becomes a denial-of-service attack. Unfortunately I can't start sending a response until the request is complete, so "streaming" the result out is not an option; I need to be able to check at key points during the request processing and stop that processing if the client has given up.
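
    One technique sometimes used for this, sketched under two assumptions: the client sends nothing further after its request, and all handlers run on a single io_service thread. Post a small read on the socket while the request is being processed; the read only ever completes when the connection dies, and the handler sets a flag the worker can poll at its checkpoints:

        #include <boost/asio.hpp>

        // Watch for a disconnect while the long request is computed elsewhere.
        char probe[1];
        bool client_gone = false;

        void watch_for_disconnect(boost::asio::ip::tcp::socket& sock) {
            sock.async_read_some(boost::asio::buffer(probe),
                [](const boost::system::error_code& ec, std::size_t /*n*/) {
                    if (ec == boost::asio::error::eof ||
                        ec == boost::asio::error::connection_reset) {
                        client_gone = true;   // checked at key points in processing
                    }
                });
        }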

    Read the article

  • SOAPUI Extract data from SOAP Response and use in REST request

    - by Adrian
    I have been looking at the answer to this question: "Pulling details from response to new request SoapUI", which is similar to what I am looking for, but I can't get it to work. I have a small SoapUI testsuite and I need to extract a value from the response of a SOAP request and then use this value in a subsequent REST request. The response to my SOAP request is:

        <ns0:session xmlns:ns0="http://www.someurl.com/la/la/v1_0">
            <token>AQIC5wM2xAAIwMg==#</token>
        </ns0:session>

    so I need the token to use in my REST request. I know it involves using Property Transfer and some XPath/XQuery, but I just can't get it right. At the moment my Property Transfer window points to Source: SOAP test, Property: Response, and has data(/session/token/text()) in the text box. In the target it has Target: REST testcase, Property: newProp, and I have "Use XQuery" checked. Any help greatly appreciated. Thanks, Adrian
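
    An observation that may be the whole problem: the session element is in the ns0 namespace, so an unqualified path like /session/token matches nothing. Declaring the namespace in the transfer expression is usually enough; a sketch (the namespace URI is taken from the response above, and token itself is unprefixed, so it stays unqualified):

        declare namespace ns0='http://www.someurl.com/la/la/v1_0';
        data(//ns0:session/token)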

    Read the article

  • How to Bypass Request Validation

    - by GIbboK
    Hi, I have a GridView and I need to update some data by inserting HTML code; I need this data to be stored encoded and decoded on request. I cannot in any way disable "Request Validation" globally, nor at page level, so I need a solution to disable "Request Validation" at control level. At the moment I am using a script which should Html.Encode every value being updated, but it seems that "Request Validation" starts its job before the RowUpdating event, so I get the error "A potentially dangerous Request.Form ..." on the page. Any idea how to solve it? Thanks

        protected void GridView1_RowUpdating(object sender, GridViewUpdateEventArgs e)
        {
            foreach (DictionaryEntry entry in e.NewValues)
            {
                e.NewValues[entry.Key] = Server.HtmlEncode(entry.Value.ToString());
            }
        }

    PS: I use Web Controls, not MVC.
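
    One possibility, if the project can target ASP.NET 4.5 or later (an assumption; it does not help on older frameworks): HttpRequest.Unvalidated exposes the posted values without triggering request validation, so the raw HTML can be read, encoded, and stored. A sketch with a hypothetical field name:

        // ASP.NET 4.5+: read the raw value without tripping request validation,
        // then HTML-encode it before it is stored.
        string raw  = Request.Unvalidated.Form["txtHtmlContent"];  // hypothetical control name
        string safe = Server.HtmlEncode(raw);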

    Read the article

  • URL to HTTP request object

    - by takeshin
    I need to convert a string like this:

        $url = 'module/controller/action/param1/param1value/paramX/paramXvalue';

    to a URL respecting the current router (including translation and so on). Usually I generate the target URLs using the url view helper, but for this I would need to specify all params, so I would need to manually explode the string. I tried to use the request object, like this:

        $request = new Zend_Controller_Request_Http();
        // some code here passing the $url
        Zend_Debug::dump($request->getControllerName()); // null instead of 'controllers'
        Zend_Debug::dump($request->getParams());         // null instead of array

    but this seems suspect. Do I need to dispatch this request? How do I handle this case well?
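
    A sketch of one way this is often done in ZF1 (untested here; the host is a placeholder, only the path part matters): build the request from a full URI and push it through the front controller's router, which populates module/controller/action and the params without dispatching:

        $request = new Zend_Controller_Request_Http('http://example.com/' . $url);
        Zend_Controller_Front::getInstance()->getRouter()->route($request);

        Zend_Debug::dump($request->getControllerName());
        Zend_Debug::dump($request->getParams());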

    Read the article

  • ASP.NET request extension type

    - by Krishna
    Hello, I am working on a large web application from which I have recently removed tons of .aspx pages. To avoid "page not found" errors, I added these entries to an XML file; they come to around 300+ in count. I wrote an HTTP module that checks the request URL against the XML entries and, if a match is found, redirects the request to the respective new page. Everything works great, but my collection gets iterated for every single request, including each .jpg, .css, .js, .ico, .pdf etc. Is there any object or property in .NET that can tell the type of request the user asked for, something like HttpContext.request.type, so that I can avoid checking the request for all unwanted file types?
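
    There is no single "request type" property, but the extension is easy to check up front; a sketch (the allowed-extension test and the "redirects" map are illustrative), with the side benefit that a dictionary keyed by URL turns the 300+-entry scan into a single lookup:

        // Inside the module's BeginRequest handler: skip static assets early.
        string path = HttpContext.Current.Request.Url.AbsolutePath;
        string ext = System.IO.Path.GetExtension(path).ToLowerInvariant();
        if (ext != "" && ext != ".aspx")
            return;   // .jpg/.css/.js/.ico/.pdf etc. never reach the lookup

        // Hypothetical Dictionary<string, string> loaded once from the XML file:
        // one hash lookup instead of iterating 300+ entries per request.
        string newUrl;
        if (redirects.TryGetValue(path, out newUrl))
            HttpContext.Current.Response.Redirect(newUrl, true);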

    Read the article

  • Pattern to iterate Request Params

    - by NOOBie
    My view is not strongly typed, and I need to iterate through the request params in the controller action to determine the values posted. Is there a better way than iterating through the NameValueCollection's AllKeys? I am currently looping through Request.Params and setting values appropriately:

        foreach (var key in Request.Params.AllKeys)
        {
            if (key.Equals("CustomerId"))
                queryObject.CustomerId = Request.Params[key];
            else if (key.Equals("OrderId"))
                queryObject.OrderId = Request.Params[key];
            // and so on
        }

    I see a considerable amount of repetition in this code. Is there a better way to handle this?
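
    Since NameValueCollection is keyed, the loop can simply be replaced with direct indexing (missing keys come back as null); in ASP.NET MVC the same thing can usually be done with model binding. A sketch using the names from the question:

        // Direct lookups: no iteration, null when the key was not posted.
        queryObject.CustomerId = Request.Params["CustomerId"];
        queryObject.OrderId    = Request.Params["OrderId"];

        // Or let MVC's model binder populate the object from the request:
        // TryUpdateModel(queryObject);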

    Read the article

  • nginx proxypass content 404s when adding caching location block

    - by Thermionix
    Below is my nginx conf - the location block adding "expires max" to content is causing issues with content from the /internal proxied sites.

    nginx error log:

        2011/11/22 15:51:23 [error] 22124#0: *2 open() "/var/www/internal/static/javascripts/lib.js" failed (2: No such file or directory), client: 127.0.0.1, server: example.com, request: "GET /internal/static/javascripts/lib.js?0.6.11RC1 HTTP/1.1", host: "example.com", referrer: "https://example.com/internal/"

    browser error:

        lib.js Failed to load resource: the server responded with a status of 404 (Not Found)

    Commenting out the "expires max" location block allows the proxied sites to work as intended. Config files:

    proxy.conf

        location /internal {
            proxy_pass http://localhost:10001/internal/;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main

        server {
            listen 80;
            include www.conf;
        }
        server {
            listen 443;
            include proxy.conf;
            include www.conf;
            ssl on;
        }

    www.conf

        root /var/www;
        server_name example.com;
        location / {
            autoindex off;
            allow all;
            rewrite ^/$ /mainsite last;
        }
        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
            expires max;
        }
        # hide protected files
        location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ {
            deny all;
        }
        location ~ \.php$ {
            fastcgi_index index.php;
            include fastcgi_params;
            if (-f $request_filename) {
                fastcgi_pass 127.0.0.1:9000;
            }
        }

    proxy.inc

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
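
    For what it's worth, the error log points at the cause: nginx gives a matching regex location priority over an ordinary prefix location, so "\.(...|js|...)$" captures /internal/static/...lib.js and serves it from root /var/www instead of proxying. One common fix, sketched against the config above, is to mark the proxy prefix with ^~ so regex matching is skipped for /internal:

        # proxy.conf -- '^~' makes this prefix final: the expires-max regex
        # location is never consulted for /internal/... requests.
        location ^~ /internal {
            proxy_pass http://localhost:10001/internal/;
            include proxy.inc;
        }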

    Read the article

  • tproxy squid bridge very slow when cache is full

    - by Roberto
    I have installed a bridging tproxy proxy on a fast server with 8GB RAM. The traffic is around 60Mb/s. When I first start the proxy (with the cache empty) it works very well, but when the cache becomes full (a few hours later) the bridge becomes very slow, traffic drops below 10Mb/s and the proxy server becomes unusable. Any hints as to what may be happening? I'm using:

        linux-2.6.30.10
        iptables-1.4.3.2
        squid-3.1.1

    compiled with these options:

        ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --localstatedir=/var/lib --sysconfdir=/etc/squid --libexecdir=/usr/libexec/squid --localstatedir=/var --datadir=/usr/share/squid --enable-removal-policies=lru,heap --enable-icmp --disable-ident-lookups --enable-cache-digests --enable-delay-pools --enable-arp-acl --with-pthreads --with-large-files --enable-htcp --enable-carp --enable-follow-x-forwarded-for --enable-snmp --enable-ssl --enable-async-io=32 --enable-linux-netfilter --enable-epoll --disable-poll --with-maxfd=16384 --enable-err-languages=Spanish --enable-default-err-language=Spanish

    My squid.conf:

        cache_mem 100 MB
        memory_pools off
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32
        acl localhost src ::1/128
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
        acl to_localhost dst ::1/128
        acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
        acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
        acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
        acl localnet src fc00::/7       # RFC 4193 local private network range
        acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
        acl net-g1 src xxx.xxx.xxx.xxx/24
        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow net-g1        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost
        http_access deny all
        http_port 3128
        http_port 3129 tproxy
        hierarchy_stoplist cgi-bin ?
        cache_dir ufs /var/spool/squid 8000 16 256
        access_log none
        cache_log /var/log/squid/cache.log
        coredump_dir /var/spool/squid
        refresh_pattern ^ftp: 1440 20% 10080
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern .

    I have this issue when the cache is full, but do not really know if it is because of that. Thanks in advance, and sorry for my English.
    roberto
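
    One guess worth checking (an assumption, not a diagnosis from the post): the ufs cache_dir type performs disk I/O synchronously in squid's main event loop, and once the cache fills, every request adds replacement/eviction writes, which can stall the whole proxy. Since the build already includes --enable-async-io, switching the store type to aufs moves that I/O into background threads:

        # squid.conf -- aufs uses worker threads for disk I/O instead of
        # blocking the event loop (same directory, size and L1/L2 as before)
        cache_dir aufs /var/spool/squid 8000 16 256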

    Read the article

  • Having trouble getting "RTSP over HTTP"

    - by Muhammad Adeel Zahid
    There is an Axis camera connected to our site (camba.tv) through the Axis One-Click Connection Component (which acts as a proxy). We can communicate with this camera only through HTTP, by setting the proxy to our OCCC server's address. If we want to get RTSP streams (H.264) we are only left with the "RTSP over HTTP" option. For this I have followed the Axis VAPIX 3 documentation, section 3.3. I issue requests through Fiddler but don't get any response. But when I put the URL (axrtsphttp://1.00408CBEA38B/axis-media/media.amp) in Windows Media Player (with the proxy set to OCCC server 212.78.237.156:3128) the player is able to get the RTSP stream over HTTP after logging in. I have created a trace of the communication between the camera and Windows Media Player with Wireshark, and the request that brings the stream looks like:

        http://1.00408cbea38b/axis-media/media.amp HTTP/1.1
        x-sessioncookie: 619
        User-Agent: Axis AMC
        Host: 1.00408CBEA38B
        Proxy-Connection: Keep-Alive
        Pragma: no-cache
        Authorization: Digest username="root",realm="AXIS_00408CBEA38B",nonce="000a8b40Y0100409c13ac7e6cceb069289041d8feb1691",uri="/axis-media/media.amp",cnonce="9946e2582bd590418c9b70e1b17956c7",nc=00000001,response="f3cab86fc84bfe33719675848e7fdc0a",qop="auth"

        HTTP/1.0 200 OK
        Content-Type: application/x-rtsp-tunnelled
        Date: Tue, 02 Nov 2010 11:45:23 GMT

        RTSP/1.0 200 OK
        CSeq: 1
        Content-Type: application/sdp
        Content-Base: rtsp://1.00408CBEA38B/axis-media/media.amp/
        Date: Tue, 02 Nov 2010 11:45:23 GMT
        Content-Length: 410

        v=0
        o=- 1288698323798001 1288698323798001 IN IP4 1.00408CBEA38B
        s=Media Presentation
        e=NONE
        c=IN IP4 0.0.0.0
        b=AS:50000
        t=0 0
        a=control:*
        a=range:npt=0.000000-
        m=video 0 RTP/AVP 96
        b=AS:50000
        a=framerate:30.0
        a=transform:1,0,0;0,1,0;0,0,1
        a=control:trackID=1
        a=rtpmap:96 H264/90000
        a=fmtp:96 packetization-mode=1; profile-level-id=420029; sprop-parameter-sets=Z0IAKeNQFAe2AtwEBAaQeJEV,aM48gA==

        RTSP/1.0 200 OK
        CSeq: 2
        Session: 3F4763D8; timeout=60
        Transport: RTP/AVP/TCP;unicast;interleaved=0-1;ssrc=060922C6;mode="PLAY"
        Date: Tue, 02 Nov 2010 11:45:24 GMT

        RTSP/1.0 200 OK
        CSeq: 3
        Session: 3F4763D8
        Range: npt=0-
        RTP-Info: url=rtsp://1.00408CBEA38B/axis-media/media.amp/trackID=1;seq=7392;rtptime=4190934902
        Date: Tue, 02 Nov 2010 11:45:24 GMT

        [Binary Stream Content]

    But when I copy this request to Fiddler, I only get a 200 status code with Content-Type set to application/x-rtsp-tunnelled, and there is no stream data. The only thing I do differently with the stream is to use Basic in the Authorization header instead of Digest, and I do not get a 401 (Unauthorized) status code. Can anyone explain what's happening here? How can I write request sequences to get the stream in Fiddler? If needed, I can upload the Wireshark request dump somewhere.

    Read the article

  • Stop squid caching 302 and 307 with deny_info

    - by 0xception
    TLDR: 302, 307 and error pages are being cached. Need to force a refresh of the content.

    Long version: I've set up a very minimal squid instance running on a gateway which shouldn't cache ANYTHING but needs to be used solely as a domain-based web filter. I'm using another application which redirects un-authenticated users to the proxy, which then uses the deny_info option to redirect any non-whitelisted request to the login page. After the user has authenticated, the firewall rule gets placed so they no longer get sent to the proxy. The problem is that when a user hits a website (xkcd.com) while unauthenticated, they get redirected via the firewall:

        iptables -A unknown-user -t nat -p tcp --dport 80 -j REDIRECT --to-port 39135

    to the proxy, at which point squid redirects the user to the login page using a 302 (I've also tried 307, and I've also made sure the headers are set to no-cache and/or no-store for Cache-Control and Pragma). Then, when the user logs into the system, they get the firewall rule which no longer directs them to the squid proxy. But if they go to xkcd.com again they will have the original redirection page cached and will once again get the login page. Any idea how to force these redirects NOT to be cached by the browser? Perhaps this is a problem with the browsers and not squid, but I'm not sure how to get around it. Full squid config below.

        #
        # Recommended minimum configuration:
        #
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
        acl localnet src 192.168.182.0/23  # RFC1918 possible internal network
        acl localnet src fc00::/7          # RFC 4193 local private network range
        acl localnet src fe80::/10         # RFC 4291 link-local (directly plugged) machines
        acl https port 443
        acl http port 80
        acl CONNECT method CONNECT

        #
        # Disable Cache
        #
        cache deny all
        via off
        negative_ttl 0 seconds
        refresh_all_ims on
        #error_default_language en

        # Allow manager access only from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny access to anything other than http
        http_access deny !http

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !https

        visible_hostname gate.ovatn.net

        # Disable memory pooling
        memory_pools off

        # Never use neigh cache objects for cgi-bin scripts
        hierarchy_stoplist cgi-bin ?

        #
        # URL rewrite Test Settings
        #
        #acl whitelist dstdomain "/etc/squid/domains-pre.lst"
        #url_rewrite_program /usr/lib/squid/redirector
        #url_rewrite_access allow !whitelist
        #url_rewrite_children 5 startup=0 idle=1 concurrency=0
        #http_access allow all

        #
        # Deny Info Error Test
        #
        acl whitelist dstdomain "/etc/squid/domains-pre.lst"
        deny_info http://login.domain.com/ whitelist
        #deny_info ERR_ACCESS_DENIED whitelist
        http_access deny !whitelist
        http_access allow whitelist

        http_port 39135 transparent

        ## Debug Values
        access_log /var/log/squid/access-pre.log
        cache_log /var/log/squid/cache-pre.log

        # Production Values
        #access_log /dev/null
        #cache_log /dev/null

        # Set PID file
        pid_filename /var/run/gatekeeper-pre.pid

    SOLUTION: I believe I might have found a solution to this. After days and days of trying to figure it out, only through a random stumble did I find:

        client_persistent_connections off
        server_persistent_connections off

    This did the trick. So it wasn't so much caching as it was a single persistent connection messing things up. W000T!

    Read the article

  • Transparent Squid : Logging client ip problem

    - by llazzaro
    Hello, I am using the following rules in iptables on my network to use a transparent proxy:

        iptables -t nat -A PREROUTING -i eth0 -s ! squid-box -p tcp --dport 80 -j DNAT --to squid-box:3128
        iptables -t nat -A POSTROUTING -o eth0 -s local-network -d squid-box -j SNAT --to iptables-box
        iptables -A FORWARD -s local-network -d squid-box -i eth0 -o eth0 -p tcp --dport 3128 -j ACCEPT

    But my squid log always logs the gateway IP (172.16.0.1). Do you know an alternative that does not lose the client IP? (And of course, avoid saying "manual proxy setup"!)
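
    A hunch, stated as an assumption rather than a verified fix: the second rule is what hides the clients, since SNAT rewrites every forwarded packet's source to iptables-box before it reaches squid. That SNAT exists only so squid-box's replies route back through the gateway; if squid-box's default route already points at iptables-box, the rule can be dropped and squid will log real client addresses:

        # On iptables-box: remove the source rewrite (sketch).
        iptables -t nat -D POSTROUTING -o eth0 -s local-network -d squid-box -j SNAT --to iptables-box

        # On squid-box: make sure replies still route back via the gateway.
        ip route add default via 172.16.0.1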

    Read the article
