Search Results

Search found 3420 results on 137 pages for 'precompiled headers'.


  • haproxy access list using path_dir having issues with firefox

    - by user11243
    I'm trying to route all requests whose path contains /socket.io/ to a separate port with HAProxy. Here is my config file:

        global
            maxconn 4096   # Total Max Connections. This is dependent on ulimit
            nbproc 2

        defaults
            mode http

        frontend all 0.0.0.0:80
            timeout client 86400000
            default_backend web_servers
            acl is_stream path_dir socket.io
            use_backend stream_servers if is_stream

        backend web_servers
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout server 30000
            timeout connect 4000
            server web1 127.0.0.1:4000 weight 1 maxconn 1024 check

        backend stream_servers
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout queue 5000
            timeout server 86400000
            timeout connect 86400000
            server stream1 127.0.0.1:5100 weight 1 maxconn 1024 check

    URL paths containing /socket.io/ are correctly directed to port 5100 in Chrome and Safari, but not in Firefox. I'm running HAProxy locally on my Mac for development; I'm not sure whether that matters. I'm using HAProxy 1.4.8 and Firefox 3.6.15. Clearing the Firefox cache didn't help, so I'm thinking there's something wrong with the way HAProxy parses the Firefox request headers.
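
    A hedged sketch of one variation worth testing (not a confirmed fix): HAProxy also provides a path_beg criterion, which matches on the URL prefix rather than on a slash-delimited path component, so the ACL could be written as:

        # sketch only: match the /socket.io/ prefix explicitly
        acl is_stream path_beg /socket.io/
        use_backend stream_servers if is_stream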

    Read the article

  • Is Winpcap able to capture all packets going through a Gigabit NIC without missing any packets?

    - by Patrick L
    I want to use WinPcap to capture all network packets going through a Gigabit NIC on a server. Assuming I can utilize the network link up to 100%, the maximum network speed is 1000 Mbps. If we exclude the TCP/IP headers, the maximum TCP data rate should be roughly 940 Mbps. Let's say I send a 1 GB file through the NIC at 940 Mbps using TCP destination port 6000. I use WinPcap to capture all network packets going through the NIC and dump them to a pcap file. If I use Wireshark to analyze the pcap file and check the sum of packet sizes for all packets sent to TCP port 6000, will I get exactly 1 GB from the pcap file? Thanks.
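
    As a back-of-the-envelope sketch (not a measurement), the captured total should come out somewhat above 1 GB even with zero packet loss, because the per-packet size Wireshark reports includes the Ethernet/IP/TCP headers that the 940 Mbps payload figure excludes:

        # rough Python sketch: on-the-wire bytes for a 1 GiB payload, assuming a 1500-byte
        # MTU, no TCP options, no VLAN tags and no retransmissions
        payload = 1 * 1024**3           # 1 GiB of file data
        mss = 1460                      # TCP payload per full-size frame
        frames = -(-payload // mss)     # ceiling division
        overhead = 14 + 20 + 20         # Ethernet + IPv4 + TCP header bytes per frame
        print(frames * overhead + payload)   # roughly 1.11 GB recorded, headers included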

    Read the article

  • Allow from referer for HTTP-basic protected SSL apache site

    - by user64204
    I have an Apache site protected by HTTP basic authentication. The authentication is working fine. Now I would like to bypass authentication for users coming from a particular website, by relying on the HTTP Referer header. Here is the configuration:

        SetEnvIf Referer "^http://.*.example\.org" coming_from_example_org
        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Deny from all
            Allow from env=coming_from_example_org
            AuthName "login required"
            AuthUserFile /opt/http_basic_usernames_and_passwords
            AuthType Basic
            Require valid-user
            Satisfy Any
        </Directory>

    This is working fine for HTTP, but failing for HTTPS. My understanding is that in order to inspect the HTTP headers, the SSL handshake must be completed, but Apache seems to evaluate the <Directory> directives before doing the SSL handshake, even if I place them at the bottom of the configuration file. Q: How could I work around this issue? PS: I'm not obsessed with the HTTP Referer header; I could use other options that would allow users from a known website to bypass authentication.
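
    For what it's worth, a minimal sketch (assuming Apache 2.2 and default paths, certificate directives omitted) of the same rules declared inside the :443 virtual host, which is one way to confirm they are actually applied to the SSL requests; note the escaped dot before example.org, which the original pattern leaves unescaped:

        <VirtualHost *:443>
            SSLEngine on
            SetEnvIf Referer "^http://.*\.example\.org" coming_from_example_org
            <Directory /var/www/>
                Order deny,allow
                Deny from all
                Allow from env=coming_from_example_org
                AuthName "login required"
                AuthUserFile /opt/http_basic_usernames_and_passwords
                AuthType Basic
                Require valid-user
                Satisfy Any
            </Directory>
        </VirtualHost>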

    Read the article

  • Large keepalive_requests values are severely slowing-down Nginx

    - by Gil
    When running a beacon (43-byte transparent pixel) load test on Nginx, we have tried several keepalive_requests values (from 10 to 100,000), and the optimal value seems to be 10. Here are the server HTTP headers of this tiny reply:

        HTTP/1.1 200 OK
        Server: nginx/1.5.6
        Date: Wed, 23 Oct 2013 12:39:45 GMT
        Content-Type: image/gif
        Content-Length: 43
        Last-Modified: Mon, 28 Sep 1970 06:00:00 GMT
        Connection: keep-alive

    Nginx is twice as slow with keepalive_requests 100000 as with keepalive_requests 10. Can you help us understand that result, or tell us what we are doing wrong? For reference, here is the nginx.conf file.

    Read the article

  • Grant relay to servers based on AD security group membership

    - by john
    We're moving our relay from an Exchange 2003 server to an Exchange 2010 server. I was hoping the "Grant or deny relay permissions to specific users or groups" option would still be available in some form, but I can't find out how to do it. I've read up on receive connectors and so far I can't get it to work. I have edited the security on the Receive Connector to grant the following extended rights to the group, and added computer accounts to that group:

        Accept Routing Headers
        Bypass Anti-spam
        Submit to Server
        Accept any Sender
        Accept any Recipient

    Then I suddenly realised while testing: how would the receive connector resolve the permission to a particular AD object, maybe a reverse DNS lookup? What I'd like to know is whether what I'm trying to achieve is possible, and how. I would rather not revert to an IP-based list, as that is less manageable, and I'm trying to avoid creating static IPs/reservations for a number of workstations that would otherwise not need them.

    Read the article

  • How to Merge Data From Multiple Excel Files into a Single Excel File or Access Database?

    - by lalabeans
    I have a few dozen Excel files which all have the same format (i.e. 4 worksheets per Excel file). I need to combine all the files into 1 master file which must have just 2 of the 4 worksheets. The corresponding worksheets in each Excel file are named exactly the same, as are the column headers. While each file is structured the same, the information within sheets 1 and 2 (for example) is different, so it can't simply be combined into one file with everything in one sheet! I've never used VBA before, and I'm wondering where I might start this task.
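
    For what it's worth, a sketch of one way to approach this in Python with pandas rather than VBA (the sheet names, paths, and the openpyxl dependency are assumptions, not part of the question):

        import glob
        import pandas as pd   # requires pandas and openpyxl

        sheets_to_keep = ["Sheet1", "Sheet2"]        # the 2 of 4 worksheets wanted
        merged = {name: [] for name in sheets_to_keep}

        for path in glob.glob("input/*.xlsx"):       # the few dozen source files
            for name in sheets_to_keep:
                merged[name].append(pd.read_excel(path, sheet_name=name))

        with pd.ExcelWriter("master.xlsx") as writer:
            for name, frames in merged.items():      # one combined sheet per name
                pd.concat(frames, ignore_index=True).to_excel(writer, sheet_name=name, index=False)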

    Read the article

  • Configure spamassassin to mark very similar messages as spam

    - by Caleb Gray
    Users on my server have been receiving spam, consistently, for some time now. I have most of SpamAssassin's plugins enabled, and I have made sure to enable verbose logging, where I can see that all of the plugins are working. Why is it, then, that my users are able to receive the exact same junk mail several times in a day without the message being flagged in some way? Here are the relevant headers of an email that I personally have received several copies of:

        X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on sub.domain.tld
        X-Spam-Level: ***
        X-Spam-Status: No, score=3.0 required=4.0 tests=BAYES_50,FSL_HELO_NON_FQDN_1,
            HELO_NO_DOMAIN,HTML_MESSAGE,RCVD_IN_BRBL_LASTEXT,RDNS_NONE autolearn=no
            version=3.3.1

    Have I accidentally missed the plugin that can see that I, and others, have received the same message multiple times in a short period of time?
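
    One hedged thing to double-check (a local.cf sketch, assuming the plugins are installed): the checksum-based bulk-mail plugins (Razor2, Pyzor, DCC) are the usual way SpamAssassin recognises a message that is being sent out identically in bulk, and none of their rule names appear in the tests= list above:

        # sketch only - enable the collaborative checksum checks if they are not already on
        use_razor2 1
        use_pyzor  1
        use_dcc    1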

    Read the article

  • Date header returned by IIS7 is wrong

    - by James Hollingworth
    I am serving an ASP.NET application from IIS 7, but we are experiencing some weird cookie issues. The code works fine in other environments, so we are assuming this is specific to this server (related question). We have been looking at the HTTP headers returned, and someone pointed out that the Date HTTP header shows the 1st of Jan rather than today's date (so far it always shows that date, regardless of what the current date is). The system clock is set correctly (and we can print out the current time/date via DateTime.Now correctly as well), so we can't work out why it's not working. Does anyone have any ideas? Is this a red herring? Thanks, James

    Read the article

  • NGINX 301 and 302 serving small nginx document body. Any way to remove this behaviour?

    - by anonymous-one
    We have noticed that when using nginx's internal 301 and 302 handling, nginx serves a small document body along with the appropriate Location: ... header, something along the lines of (in HTML): "301 redirect - nginx". As part of this behaviour, Content-Type: text/html and Content-Length headers are also sent. We do a lot of 302 and some 301 redirects, so in our opinion this is wasted bandwidth. Is there any way to disable it? One idea that crossed our mind was to point error_page 301 302 at an empty text file. We have not tested this yet, but I am assuming that even then, the Content-Type and Content-Length (0) headers will be sent. So, is there a clean way to send a "body-less" 301/302 redirect with nginx?
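
    For reference, an untested sketch of the idea mentioned above (the file name and path are placeholders); whether error_page actually intercepts nginx-generated 301/302 responses is exactly the open question here:

        error_page 301 302 /empty.html;
        location = /empty.html {
            internal;
            root /var/www/empty;   # hypothetical directory holding a zero-byte empty.html
        }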

    Read the article

  • How to prove that an email has been sent?

    - by bguiz
    Hi, I have a dispute on my hands in which the other party (the landlord's real estate agent) dishonestly claims not to have received an email that I truly did send. My question is: what are the ways to prove that the email was indeed sent? Thus far, the methods I have already thought of are:

        Screenshot of the mail in the outbox
        Forwarding a copy of the original email

    I am aware of other things, like HTTP/SMTP headers etc., that would exist as well. Are these useful for my purposes, and if so, how do I extract them? The email in question was sent using Yahoo webmail ( http://au.mail.yahoo.com/ ). Edit: I am not seeking legal advice here, just technical advice as to how to gather this information.

    Read the article

  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache mirror to sit in front of a file server, but it seems to be endlessly re-downloading data from the file server. There's about 100GB of data in total, but so far Varnish has downloaded 800GB from the file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:

        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592

    I'm starting varnishd with the following options:

        varnish/sbin/varnishd -a 0.0.0.0:80 -f varnish/etc/varnish/default.vcl -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
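
    One thing commonly behind this, offered as a hedged sketch rather than a diagnosis: the default VCL passes straight to the backend whenever the client request carries a Cookie header, no matter how cacheable the response looks. A minimal override that forces image requests into the cache (assuming Varnish 2.1 VCL syntax):

        sub vcl_recv {
            if (req.url ~ "\.(jpe?g|png|gif)$") {
                unset req.http.Cookie;       # cookies make the default VCL pass
                return (lookup);
            }
        }

        sub vcl_fetch {
            if (req.url ~ "\.(jpe?g|png|gif)$") {
                unset beresp.http.Set-Cookie;
                return (deliver);
            }
        }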

    Read the article

  • How do I find out what a Spam Custom Rule is?

    - by SoaperGEM
    We use a Barracuda Spam Filter at work, and we also provide a mass-emailing program to some of our clients who send out newsletters. Lately one of them has been composing his latest company newsletter and trying to send preview messages to himself, but they've actually been quarantined by Barracuda as potential spam, even though they aren't. I can see the breakdown of the spam scoring headers in Barracuda, but I'm not sure what certain rules mean. Here's the breakdown:

        pts  rule name                      description
        ---- ------------------------------ --------------------------------------------------
        0.00 FUZZY_CPILL                    BODY: Attempt to obfuscate words in spam
        2.21 HTML_IMAGE_ONLY_24             BODY: HTML: images with 2000-2400 bytes of words
        0.00 HTML_MESSAGE                   BODY: HTML included in message
        0.50 BSF_SC0_SA_TO_FROM_ADDR_MATCH  Sender Address Matches Recipient Address
        1.00 BSF_SC0_SA392f                 Custom Rule SA392f

    What is "Custom Rule SA392f"? Where do I find descriptions of these custom rules? And what does "images with 2000-2400 bytes of words" mean? Is that referring to the file size of the image, or to something about the attributes on the <img> tag?

    Read the article

  • Demoted domain controller and now IIS has permissions issues

    - by tladuke
    I have a machine that was a domain controller and everything else, including an IIS site. I built 2 new domain controllers, transferred the FSMO roles, waited a day, and then demoted the original domain controller. Now the IIS site says:

        HTTP Error 401.2 - Unauthorized
        You are not authorized to view this page due to invalid authentication headers.

    I have a call in with the web app vendor, but maybe someone can guess what I need to fix now. I haven't looked at IIS since this was installed and am pretty lost. I thought about restoring the machine from a backup, but I think that would be an Active Directory disaster, right? The server is Windows 2008 (not R2). The new DCs are 2008 R2.

    Read the article

  • auspex LFS backups

    - by user1250465
    I have some backup tapes which came from an Auspex file server. The backups were written to tape with the SunOS version of the cpio command. Now that I need to restore them (of course there are no more Auspex servers in existence), the backups won't restore because the headers are not standard. I have dumped the tape images to disk. pax, cpio, and tar cannot read the images; I've tried all of the cpio format options. The errors I get are "name too long", "byte swapped in header", or just junk output. I can open up the images and read the contents of the files, but I cannot restore the images. I have found that SunOS had a special header in CPIO v2.5 images, and I have found the source for cpio; now I need the definition of the SunOS header inside CPIO.
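
    In case it hasn't been tried yet (the format options mentioned above are separate from the byte-swapping ones), a hedged sketch for the "byte swapped in header" error specifically, with placeholder file names:

        dd if=tape_image.cpio conv=swab of=tape_image.swapped
        cpio -idv < tape_image.swapped

        # or let GNU cpio do the swapping itself
        cpio -idv --swap < tape_image.cpio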

    Read the article

  • Rails time stamps on images in CSS

    - by brad
    Just posted this on Stack but realized it may be more appropriate here. Rails timestamping is great: I'm using it to add Expires headers to all files that end in the 10-digit timestamp. Most of my images, however, are referenced in my CSS. Has anyone come across a method that allows timestamps to be added to images referenced from CSS, or some funky rewrite rule that achieves this? I'd love for ALL images in my site, both inline and in CSS, to have this timestamp so I can tell the browser to cache them but refresh any time the file itself changes. I couldn't find anything on the net regarding this, and I can't believe it isn't a more frequently discussed topic. I don't think my setup matters, because the actual expiring will hopefully happen the same way, based on the 10-digit timestamp, but I'm using Apache to serve all static content if that is relevant.

    Read the article

  • How to pipe differently the body of the curl answer and the printed output?

    - by Antoine Lizée
    I would like to print on the command line some output of curl, like the HTTP headers, followed by the body of the answer processed by a stdin/stdout program. For instance, print the status code:

        curl -s -w "%{http_code} \n" -o "/dev/null" http://myURL.com

    and then process the output with a JSON parsing tool:

        curl -s http://myURL.com | python -mjson.tool

    I would like to do both with one command, and I have the feeling that it may be possible thanks to the -o option, which separates curl's own output from the actual answer to the query. The problem is that -o writes directly to a file. Does somebody have a hack?
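
    Two hedged sketches of the usual hacks: -D (--dump-header) can send the response headers to stderr so that only the body reaches the pipe, and in bash, process substitution lets -o feed the body to the JSON tool while -w still prints the status code:

        curl -s -D /dev/stderr http://myURL.com | python -mjson.tool

        # bash-only variant that keeps the -w status code
        curl -s -o >(python -mjson.tool) -w "%{http_code}\n" http://myURL.com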

    Read the article

  • HTTP resource caching / fetching

    - by Bobby Jack
    I'm trying to optimise a page, and I'm seeing some strange behaviour. Each time I click on a link to the page, all resources are fetched from the server, responding with 200s. However, when I refresh the page (specifically, F5 in Firefox), all resources return a 304 and - of course - the page loads much faster as a result. The main page returns a 200 in both cases. In the refresh case, If-Modified-Since headers are sent with the requests to the resources. However, in the 'clicking a link' case, they are not. What's the reason for that, and can I control it?
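
    A quick way to narrow this down (a sketch; the URL is a placeholder) is to check whether the resources carry explicit freshness information, since without Cache-Control or Expires headers the browser is left to its own heuristics when deciding whether to reuse, revalidate, or refetch:

        curl -sI http://example.com/static/style.css | grep -iE 'cache-control|expires|last-modified|etag'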

    Read the article

  • HTTP Caching Server that supports POST

    - by Jeroen
    I am hosting a REST service which is sending appropriate cache-control headers. I use Varnish as a caching server in front of my webserver. However, a limitation of varnish is that it doesn't support caching HTTP POST and HTTP PUT. Is there any alternate caching server that will be able to cache these requests? I understand that caching POST is a bit tricky because you cannot just cache based on the url as a key like for GET; it needs to actually inspect the request body. In case of multipart/form-data requests, there should probably be a limit on the size of the request body for it to be cached (so that big file uploads, etc won't be cached). Nevertheless I really want to be able to cache short HTTP POST, or at least the application/x-www-form-urlencoded ones.

    Read the article

  • Migrating websites from IIS 6 to Apache Tomcat

    - by Cheeky
    I have 4 CFML-coded websites currently running on Railo 3 (a CFML engine using Resin) and IIS. I would like to upgrade my server and websites to Railo 4, which no longer uses Resin but runs on Apache Tomcat. I have quite limited knowledge when it comes to web servers, and I would like to know what the process would be to move the websites from Railo 3 to Railo 4 without disrupting the live sites. Would there still be a need for IIS after the installation of Apache Tomcat? Does Tomcat run through IIS, or are they two completely different things, making this a complete move from IIS to Tomcat? If so, where would one then manage all the things one normally does in IIS, like default documents and host headers? Any advice would be extremely appreciated.

    Read the article

  • How can I anonymize my browser useragent, yet still be counted as a FF/Ubuntu user?

    - by Rory
    I read about the EFF's Panopticlick project to see how unique your web browser's headers are, and I would like to anonymize mine a bit. My current User Agent is:

        Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.7) Gecko/20100106 Ubuntu/9.10 (karmic) Firefox/3.5.7

    I would like to make that more anonymous, but I still want to be counted as a Firefox and Ubuntu user. How can I change my User Agent in Firefox? What should I change it to so that it's less unique, but will still be counted as a Firefox user and an Ubuntu user by web analytics software? I know there is no guarantee that I will be counted as a Firefox/Ubuntu user; something that 'works most of the time' would be sufficient.
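
    For reference, a sketch of the override mechanism: Firefox reads a general.useragent.override preference set in about:config or user.js. The value below is only an illustration built from the string above with the patch-level details trimmed; it is not guaranteed to match a common fingerprint:

        // user.js sketch - keep the Firefox and Ubuntu tokens, drop the patch-level detail
        user_pref("general.useragent.override",
                  "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1) Gecko/20100106 Ubuntu/9.10 (karmic) Firefox/3.5");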

    Read the article

  • CGI error from PHP when running exec() on IIS

    - by Patrick
    Environment: Windows Server 2003 x64, PHP 5.2, IIS 6.0. The program Ink2Png.exe is set with Everyone Read and Execute permissions, as is its dependency (microsoft.ink.dll). PHP Safe Mode is off. exec() is passed [the full exe path], a space, then [the full path to another file]. This other file also has full read permissions, and the output directory has full write permissions. As soon as exec() is hit, the connection dies; the browser does not even receive a full set of HTTP headers, and it reports a CGI error. Examining the output, it appears the program was not even run. Any ideas? How can I figure out what exactly is happening and get it running again? EDIT: Also, it is a .NET application, if that is significant in any way.
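
    A hedged debugging sketch, with placeholder paths: capturing stderr and the exit code from within PHP often shows why the executable never starts, for example a missing dependency or an access-denied error for the IIS worker identity:

        <?php
        // sketch only - run the converter and surface its stderr output and exit code
        $exe  = 'C:\\path\\to\\Ink2Png.exe';
        $file = 'C:\\path\\to\\input_file';
        $cmd  = escapeshellarg($exe) . ' ' . escapeshellarg($file) . ' 2>&1';
        exec($cmd, $output, $exitCode);
        echo "exit code: $exitCode\n";
        print_r($output);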

    Read the article

  • Force users to access SSL site using specific host header

    - by mwillmott
    Hi, so I am running IIS7 with one SSL site on it. I have a few different domains and subdomains that all point to my external IP. When using HTTP they all direct to their respective sites using host headers. Whenever someone uses HTTPS on any of the domains, they all end up at my SSL site. I only want people who type in https://sub.domain.com (for example) to end up at my secure site; anything else should just not go there - it can throw an error or redirect to the HTTP version, it doesn't matter. Is there a way of getting IIS7 to check the host header and throw an error if it doesn't match my specific subdomain? Thanks, Michael

    Read the article

  • PHP/mail : server sends email originating from wrong domain

    - by Niro
    I have a MediaTemple (dv) server (Plesk) with two domains, each with a static IP. I had domain1 as the main domain and domain2 as secondary. When a PHP script on domain2 sends email, the headers show the IP address of domain1 as the origin: Received: from domain2.com (domain1.com [70.ipof domain1]). I want only domain2 to be mentioned, so I did the following:

        Changed the server name to domain2.com
        Made domain2.com the primary domain (about 30 hours ago)
        Made the fixed IP address of domain2.com the default address for the server

    Still, when the script sends emails I see the same info as above in the header. What do I need to do to make the email origin domain2.com?

    Read the article

  • Error on error log

    - by Ryan Murphy
    I am trying to use Zend Framework 2. I followed these instructions on CentOS 6 via SSH: http://framework.zend.com/manual/2.0/en/user-guide/skeleton-application.html When I try to start my website, it gives an error, and the error log shows this:

        [Sun Jun 30 16:02:17 2013] [error] [client 109.217.190.75] SoftException in Application.cpp:357: UID of script "/home/mydomain/public_html/public/index.php" is smaller than min_uid
        [Sun Jun 30 16:02:17 2013] [error] [client 109.217.190.75] Premature end of script headers: index.php

    What do these errors mean, and how do I fix them?
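
    The first error usually means the script files are owned by a low-UID system account (often root), while suPHP/suEXEC insists on the hosting account's UID. A hedged sketch of the usual fix, where the user and group names are placeholders for the actual account:

        chown -R siteuser:siteuser /home/mydomain/public_html
        find /home/mydomain/public_html -type d -exec chmod 755 {} \;
        find /home/mydomain/public_html -type f -exec chmod 644 {} \;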

    Read the article

  • nginx connection reset

    - by Steve
    When first visiting my site after not visiting it for a few minutes, the connection is "reset" 100% of the time. I get this message when debug logging is turned on, along with a 400 Bad Request status:

        client prematurely closed connection while reading client request line

    I've read that this could be caused by the large_client_header_buffers setting. I have Google Analytics on my site. Using Live HTTP Headers, I get this as the request:

        GET /__utm.gif?utmwv=5.3.7&utms=35&utmn=745612186&utmhn=domain.com&utmcs=UTF-8&utmsr=1920x1080&utmvp=1841x903&utmsc=24-bit&utmul=en-us&utmje=1&utmfl=11.4%20r402&utmdt=2006Scape%20Forums%20-%20General&utmhid=2004697163&utmr=0&utmp=%2Fservices%2Fforums%2Fboard.ws%3F3%2C4&utmac=UA-25674897-2&utmcc=__utma%3D68455186.1647889527.1351640625.1352446442.1352451659.100%3B%2B__utmz%3D68455186.1352097329.64.2.utmcsr%3Ddomain.com%7Cutmccn%3D(referral)%7Cutmcmd%3Dreferral%7Cutmcct%3D%2Fservices%2Fforums%2Fboard.ws%3B&utmu=q~ HTTP/1.1

    large_client_header_buffers in my nginx config is set to 4 8k, so I don't know if that is the problem. Requests made immediately after the first "reset" request are all successful.

    Read the article
