Search Results

Search found 14831 results on 594 pages for 'header fields'.


  • SMTP server closes connection unexpectedly

    - by janin
    I'm writing a python program to send emails, and when trying to send to yopmail, hotmail and some other hosts the connection gets closed by the server without a message. I tried connecting directly with netcat and the same thing happens. Here's what the exchange looks like:

        $ nc smtp.yopmail.com 25
        220 mx.yopmail.com ESMTP ***
        ehlo mx.myhost.com
        250 SIZE 2048000
        mail FROM:<[email protected]>
        250 OK
        rcpt TO:<[email protected]>

    The connection is just closed abruptly at this point. On other hosts, like my ISP's, everything goes fine. I've checked the blacklists but my IP is not listed. Any idea what's going on?

    Edit: My IP is not listed in any blacklist. I own myhost.com, but I don't have an SPF record. I'll add one and update this post when the record has propagated.

    Edit 2: With the SPF record added, the email is now accepted and Hotmail adds an Authentication-Results: hotmail.com; sender-id=pass header to the email. However it gets classified as spam, but I guess that's another matter. Thanks for your help.
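
    For reference, a minimal SPF record of the kind described above might look like the following DNS TXT entry (the IP address is a documentation placeholder, not taken from the post):

        myhost.com.   IN TXT   "v=spf1 mx a ip4:203.0.113.10 ~all"

    Receivers such as Hotmail check the connecting IP against this record, which is consistent with the sender-id=pass result mentioned in the second edit.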

    Read the article

  • Nginx + PHP-FPM Timeouts, almost zero load consumption?

    - by javipas
    I've got a server running on a Linode with Ubuntu 10.04 LTS, Nginx 0.7.65, MySQL 5.1.41 and PHP 5.3.2 with PHP-FPM. There is a WordPress blog on it, updated to WordPress 3.2.1 recently. I have made no changes to the server (except updating WordPress) and while it was running fine, a couple of days ago I started having downtimes. I tried to solve the problem, and checking the error_log I saw many timeouts and messages that seemed to be related to timeouts. The server is currently logging this kind of errors:

        2011/07/14 10:37:35 [warn] 2539#0: *104 an upstream response is buffered to a temporary file /var/lib/nginx/fastcgi/2/00/0000000002 while reading upstream, client: 217.12.16.51, server: www.mydomain.com, request: "GET /page/2/ HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.mydomain.com/"
        2011/07/14 10:40:24 [error] 2539#0: *231 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 46.24.245.181, server: www.mydomain.com, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mydomain.com", referrer: "http://www.google.es/search?sourceid=chrome&ie=UTF-8&q=mydomain"

    I even saw a previous serverfault discussion with a possible solution: edit /etc/php/etc/php-fpm.conf and set request_terminate_timeout=30s instead of ;request_terminate_timeout=0. The server worked for some hours, and then broke again. I edited the file again to leave it as it was, and restarted php-fpm again (service php-fpm restart), but no luck: the server worked for a few minutes and then went back to the problem over and over. The strange thing is, although the services are running, htop shows there is no CPU load (see image) and I really don't know how to solve the problem. The config files are on pastebin: the php-fpm.conf file is here, the /etc/nginx/nginx.conf is here, and the /etc/nginx/sites-available/www.mydomain.com is here. Please help :(
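
    As a rough sketch of where those knobs live (values are illustrative, not taken from the post), the PHP-side limit goes in the FPM pool configuration and the nginx-side limits in the fastcgi location:

        ; php-fpm pool config (e.g. /etc/php/etc/php-fpm.conf)
        request_terminate_timeout = 30s

        # nginx fastcgi location
        fastcgi_read_timeout 60s;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;

    request_terminate_timeout kills a stuck PHP worker, while fastcgi_read_timeout controls how long nginx waits before logging the "upstream timed out" error shown above; the buffer sizes relate to the "buffered to a temporary file" warning.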

    Read the article

  • Requesting better explanation for expires headers

    - by syn4k
    I have successfully implemented expires headers, however, for several days I have been stumped by one thing. This article: http://www.tipsandtricks-hq.com/how-to-add-far-future-expires-headers-to-your-wordpress-site-1533 states: "Keep in mind that when you use expires header the files are cached in the browser until it expires so do not use this on files that changes frequently." Other sites indicate the same in my reading. But this doesn't seem to be true. I have updated an image, using the same name, several times. Each time I update and refresh my browser, the new image (with the same name) displays. I understand from this article that the old image should display unless I use a new name. Do you happen to know where the misunderstanding is? I have verified that the image in question has expires headers set on it:

        Request Headers:
        Host              domain.com
        User-Agent        Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.28) Gecko/20120306 Firefox/3.6.28 FirePHP/0.5
        Accept            image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language   en-us,en;q=0.5
        Accept-Encoding   gzip,deflate
        Accept-Charset    ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive        115
        Connection        keep-alive
        Referer           http://domain.com/index.php
        Cookie            __utma=1.61479883.1332439113.1332783348.1332796726.4; __utmz=1.1332439113.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=lv2hun9klt2nhrdkdbqt8abug7; __utmb=1.33.10.1332796726; __utmc=1; ck_authorized=true
        x-insight         activate
        If-Modified-Since Mon, 26 Mar 2012 21:55:33 GMT
        Cache-Control     max-age=0

        Response Headers:
        Date              Mon, 26 Mar 2012 22:06:50 GMT
        Server            Apache/2.2.3 (CentOS)
        Connection        close
        Expires           Wed, 25 Apr 2012 22:06:50 GMT
        Cache-Control     max-age=2592000
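
    One detail visible in the captured request is relevant here: a manual browser refresh sends Cache-Control: max-age=0 and If-Modified-Since, which makes the browser revalidate with the server even though the cached copy has not yet expired, so an updated image shows up after a refresh. For reference, a typical way such expires headers are configured on Apache (a sketch, values are examples only) is mod_expires:

        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png  "access plus 1 month"
            ExpiresByType image/jpeg "access plus 1 month"
        </IfModule>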

    Read the article

  • What is the best way to recover from a mysql replication fail?

    - by Itai Ganot
    Today, the replication between our master mysql db server and the two replication servers dropped. I have a procedure here which was written a long time ago and I'm not sure it's the fastest method to recover from this issue. I'd like to share the procedure with you and I'd appreciate if you could give your thoughts about it and maybe even tell me how it can be done quicker.

    At the master:

        RESET MASTER;
        FLUSH TABLES WITH READ LOCK;
        SHOW MASTER STATUS;

    And copy the values of the result of the last command somewhere. Without closing the connection to the client (because it would release the read lock), issue the command to get a dump of the master:

        mysqldump mysq

    Now you can release the lock, even if the dump hasn't ended. To do it, perform the following command in the mysql client:

        UNLOCK TABLES;

    Now copy the dump file to the slave using scp or your preferred tool. At the slave, open a connection to mysql and type:

        STOP SLAVE;

    Load the master's data dump with this console command:

        mysql -uroot -p < mysqldump.sql

    Sync slave and master logs:

        RESET SLAVE;
        CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;

    Where the values of the above fields are the ones you copied before. Finally type:

        START SLAVE;

    And to check that everything is working again, if you type SHOW SLAVE STATUS; you should see:

        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes

    That's it! At the moment I'm in the stage of copying the db from the master to the other two replication servers and it takes more than 6 hours to that point, isn't it too slow? The servers are connected through a 1Gb switch.
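
    One commonly used variant of this procedure (a sketch, not taken from the post) is to let mysqldump record the binary log coordinates itself, which shortens the window during which tables stay locked:

        mysqldump --all-databases --master-data=2 --single-transaction > masterdump.sql

    --master-data writes the matching CHANGE MASTER TO statement into the dump, and --single-transaction avoids holding the global read lock for the whole dump when the tables are InnoDB; whether that applies here depends on the storage engines in use.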

    Read the article

  • gcc built by crosstool-ng gives undefined reference

    - by netvope
    I've successfully built a toolchain using crosstool-ng with the default configuration named x86_64-unknown-linux-gnu. The documentation says:

        Using the toolchain is as simple as adding the toolchain's bin directory in your PATH, such as:
            export PATH="${PATH}:/your/toolchain/path/bin"
        and then using the target tuple to tell the build systems to use your toolchain:
            ./configure --target=your-target-tuple
        or
            make CC=your-target-tuple-gcc
        or
            make CROSS_COMPILE=your-target-tuple-
        and so on...

    I followed the instructions and attempted to build GNU tar (tar-1.25.tar.bz2) with the toolchain. The commands ./configure --target=x86_64-unknown-linux-gnu and make CROSS_COMPILE=x86_64-unknown-linux-gnu- do not work (the build will succeed, but it uses the host system's gcc). The command make CC=x86_64-unknown-linux-gnu-gcc works, but in the very last step when it tries to link, it returns errors like this:

        compare.o: In function `openat':
        /dev/shm/x-tools/x86_64-unknown-linux-gnu/x86_64-unknown-linux-gnu/sys-root/usr/include/bits/fcntl2.h:134: undefined reference to `__openat_2'

    What could be the problem? Was the toolchain not properly set up? Perhaps x86_64-unknown-linux-gnu-gcc is using the header files from the host system but could not find the libraries in the target's sys-root?
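
    For Autoconf-based packages such as GNU tar, the tuple is normally passed as --host rather than --target (--target only matters for packages that themselves emit code, such as binutils or gcc). A minimal sketch of a cross build, assuming the toolchain's bin directory is on PATH:

        export PATH="${PATH}:/your/toolchain/path/bin"
        ./configure --host=x86_64-unknown-linux-gnu
        make

    With --host set, configure looks for x86_64-unknown-linux-gnu-gcc and the matching binutils on its own, which avoids the mixed host/target headers that typically lead to link errors like the __openat_2 one above.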

    Read the article

  • Enable gzip on Nginx

    - by Rob Wilkerson
    Yes, I know that there are a lot of other questions that seem exactly like this out there. I think I must've looked at all of them. Twice. In desperation, I'm adding another in case my specific configuration is the issue. Bear with me. First, the question: what do I need to do to get gzip compression to work?

    I have an Ubuntu 12.04 server running nginx 1.1.19. Nginx was installed with the following packages: nginx, nginx-common, nginx-full. The http block of my nginx.conf looks like this:

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_disable "msie6";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Both PageSpeed and YSlow are reporting that I need to enable compression. I can see that the request headers indicate Accept-Encoding: gzip,deflate,sdch, but the response headers do not have the corresponding Content-Encoding header. I've tried various other config values (gzip_vary on, gzip_http_version 1.0, etc.), but no joy. As far as I know, I can only assume that nginx was compiled with compression support, but if there's any way to verify that, I'd love to know. If anyone sees anything I'm missing or can suggest further debugging, please let me know. I'm no sysadmin and I'm new to Nginx so I've exhausted everything I can think of or have read. Thanks.
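
    Two checks that follow directly from the question (a sketch; the MIME types and test URL are examples): nginx -V prints the compile-time configure arguments, and by default the gzip module only compresses text/html, so other content types have to be listed explicitly with gzip_types.

        # compile-time check: gzip support is built in unless --without-http_gzip_module appears here
        nginx -V

        # config: add the content types PageSpeed/YSlow complain about
        gzip on;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml;

        # response check (use GET, not HEAD, so there is a body to compress)
        curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://localhost/style.css | grep -i content-encoding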

    Read the article

  • Cache-control for permanent 301 redirects nginx

    - by gansbrest
    I was wondering if there is a way to control the lifetime of redirects in Nginx? We would like to cache 301 redirects in the CDN for a specific amount of time, say 20 minutes, and the CDN is controlled by the standard caching headers. By default there is no Cache-Control or Expires directive with the Nginx redirect, and that could cause the redirect to be cached for a really long time. By having a specific redirect lifetime the system could have a chance to correct itself, knowing that even "permanent" redirects change from time to time. The other thing is that those redirects are included from the server block, which according to the nginx documentation should be evaluated before locations.

    I tried to add

        add_header Cache-Control "max-age=1200, public";

    to the bottom of the redirects file, but the problem is that Cache-Control gets added twice - the first comes, say, from the backend script and the other one is added by the add_header directive.

    In Apache there is the environment variable trick to control headers for rewrites:

        RewriteRule /taxonomy/term/(\d+)/feed /taxonomy/term/$1 [R=301,E=expire:1]
        Header always set Cache-Control "store, max-age=1200" env=expire

    But I'm not sure how to accomplish this in Nginx.
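
    One possible shape for this in nginx (a sketch, assuming the redirects can be expressed as locations; the path is a placeholder) is to issue the redirect from a location block, where expires or add_header then applies only to that response rather than to whatever the backend returns:

        location = /taxonomy/term/123/feed {
            expires 20m;
            return 301 /taxonomy/term/123;
        }

    expires adds both Expires and Cache-Control: max-age to the 301, and because the backend never sees the request there is no duplicate Cache-Control header.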

    Read the article

  • Deciphering Seagate drive model numbers?

    - by Stefan Lasiewski
    I'm comparing Seagate's Enterprise and Desktop drives for a variety of old and new servers. These servers come from different generations, so options like size (73GB, 2TB) and interface (SATA vs SAS 3.0Gbps vs SAS 6Gbps vs SCSI Ultra320) are widely variable. I'm trying to compare the sizes, speeds and interfaces, but I'm getting thrown off by different models. Also, their website is not the best. Does anyone know of a documented explanation of the Seagate model numbers? And is there a single spreadsheet which compares the features for all drives (or all 'Enterprise' drives)? Seagate drives have model numbers like this:

        Model ST3600057SS    6-Gb/s SAS      600 GB   None       at Cheetah® 15K Hard Drives
        Model ST373455LW     Ultra320 SCSI   73.4 GB  68-Pin LW  at Cheetah® 15K Hard Drives
        Model ST32000644NS   SATA 3Gb/s      2 TB     None       at Constellation™ ES Hard Drives
        Model ST973452SS     6-Gb/s SAS      73 GB    None       Savvio® 15K Hard Drives
        Model ST9200011FS    SATA 3Gb/s      200 GB              Pulsar™ Solid State Drives

    I understand the model numbers read something like this:

        ST - SOMETHING1 - SIZE - SOMETHING2 - INTERFACE

    Where the fields mean something like this:

        ST         - For 'Seagate'? 'Seagate Technologies'?
        SOMETHING1 - This field has a number, but I'm not sure what it represents.
        SIZE       - Size in gigabytes. This is a number like '73' or '300' or '2000'.
        SOMETHING2 - This field also has a number, but I'm not sure what it means.
        INTERFACE  - This field seems to indicate the interface. 'SS' means SAS, 'FC' means Fibre Channel, but I don't see how to distinguish between 6Gbps SAS and 3Gbps SAS, or different SATA or FC speeds.

    I don't see a field which indicates the RPM (15K, 10K, 7.2K) etc. Is this part of the model number?

    Read the article

  • Installing mysqlnd for php 5.4.9 on CentOs 6.3

    - by kira423
    Okay, let me get straight to the point: I am a complete noob and have never done stuff like this at all. I have read tutorial after tutorial but I can't get anything to work. When I tried to install the rpm file I got this error:

        rpm -Uvh ftp://ftp.pbone.net/mirror/rpms.famillecollet.com/enterprise/6/test/x86_64/php-mysqlnd-5.4.9-1.el6.remi.x86_64.rpm
        Retrieving ftp://ftp.pbone.net/mirror/rpms.famillecollet.com/enterprise/6/test/x86_64/php-mysqlnd-5.4.9-1.el6.remi.x86_64.rpm
        warning: /var/tmp/rpm-tmp.ez4vvd: Header V3 DSA/SHA1 Signature, key ID 00f97f56: NOKEY
        error: Failed dependencies:
            php-pdo(x86-64) = 5.4.9-1.el6.remi is needed by php-mysqlnd-5.4.9-1.el6.remi.x86_64

    So I tried installing that rpm file and got this error:

        rpm -ivh ftp://ftp.pbone.net/mirror/rrpms.famillecollet.com/enterprise/6/test/x86_64/php-pdo-5.4.6-1.el6.remi.x86_64.rpm
        Retrieving ftp://ftp.pbone.net/mirror/rrpms.famillecollet.com/enterprise/6/test/x86_64/php-pdo-5.4.6-1.el6.remi.x86_64.rpm
        curl: (9) Server denied you to change to the given directory
        error: skipping ftp://ftp.pbone.net/mirror/rrpms.famillecollet.com/enterprise/6/test/x86_64/php-pdo-5.4.6-1.el6.remi.x86_64.rpm - transfer failed

    I used the ftp links because I have no idea how else to get them to the server. I think I am getting overly frustrated with this, but I have to get this driver installed for any of my scripts to function correctly. Any help would be greatly appreciated!
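
    Two details stand out in the transcripts above: the failing URL contains "rrpms.famillecollet.com" where the working one uses "rpms.famillecollet.com", which on its own would explain the FTP "transfer failed" error, and the dependency message asks for php-pdo 5.4.9 while the second command fetches 5.4.6. A sketch of one way through it, downloading both matching packages and installing them in a single rpm transaction so the interdependency can be satisfied (URLs shortened with "...", exact versions must match what the mirror carries):

        wget ftp://ftp.pbone.net/mirror/rpms.famillecollet.com/.../php-pdo-5.4.9-1.el6.remi.x86_64.rpm
        wget ftp://ftp.pbone.net/mirror/rpms.famillecollet.com/.../php-mysqlnd-5.4.9-1.el6.remi.x86_64.rpm
        rpm -Uvh php-pdo-5.4.9-1.el6.remi.x86_64.rpm php-mysqlnd-5.4.9-1.el6.remi.x86_64.rpm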

    Read the article

  • both ssl and non-ssl on single port

    - by Zulakis
    I would like to make my apache2 webserver serve both http and https on the same port. With the different methods I tried, it was either not working on http or not on https. How can I do this?

    Update: If I enable SSL and then visit the site with http, I get a page like this:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://server/"><b>https://server/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny16 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g Server at server Port 443</address>
        </body></html>

    Because of this, it seems very much possible to have both http and https on the same port. A first step would be to change this default page so it would present a 301 Moved header.

    Update 2: According to this, it is possible. Now, the question is just how to configure apache to do it.
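
    Following the post's own idea of replacing that default error page with a redirect, one workaround that is often suggested for this mod_ssl 400 case (a sketch, not verified on this particular Apache 2.2.9 setup) is to point ErrorDocument 400 at the https URL inside the SSL virtual host; pointing ErrorDocument at a full URL makes Apache answer with a redirect instead of the error page:

        <VirtualHost *:443>
            SSLEngine on
            # plain-HTTP requests hitting this port trigger the 400; send the client to https instead
            ErrorDocument 400 https://server/
            ...
        </VirtualHost>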

    Read the article

  • dig lookup different from system lookup

    - by simao
    Hello, I am running dd-wrt and I configured its DNS server to resolve a few hosts inside my network. When I use dig to look up these hosts, they are resolved OK, but when I try to ping those hosts I always receive an unknown host error message. For example:

        obe:~ simao$ dig dd-wrt

        ; <<>> DiG 9.6.0-APPLE-P2 <<>> dd-wrt
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44026
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;dd-wrt.                IN      A

        ;; ANSWER SECTION:
        dd-wrt.         0       IN      A       192.168.1.1

    But then:

        obe:~ simao$ ping dd-wrt
        ping: cannot resolve dd-wrt: Unknown host

    Any ideas? Thanks.
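
    A sketch of where to look (host and domain names below are illustrative): dig queries the DNS server directly, while ping goes through the operating system's resolver, which on this machine (an OS X prompt) may not send single-label names like "dd-wrt" to DNS at all. Giving the LAN hosts a domain and a matching search suffix usually reconciles the two:

        # on the client, check which resolver and search domains the system uses
        cat /etc/resolv.conf
        scutil --dns          # OS X view of the resolver configuration

        # then ping a fully qualified name instead of a bare label
        ping dd-wrt.lan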

    Read the article

  • Apache Reverse proxy Http to https

    - by Coppes
    I have a website which is fully running on https. I have been given the task of finding a way to convert a url, for example http://www.domain.com/a/e-nc/youless, to the https version of it, without losing the HTTP POST data (the POST values that are in it). So I thought (not even sure it's the right approach) let's try to make a reverse proxy in apache and see how that works. Anyway, after a lot of struggling I came to the point of asking it here.

    So to be specific, my goal is: convert http://www.domain.com/a/e-nc/youless to https://www.domain.com/a/e-nc/youless without losing the POST conditions.

    What I have tried until now is the following: created a file called proxiedhosts in my apache2/sites-enabled folder with the following contents:

        SSLProxyEngine On
        SSLProxyCACertificateFile /etc/apache2/ssl/certificate****.pem
        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order allow,deny
            Allow from all
        </Proxy>
        ProxyPass /a/e-nc/youless/ https://www.domain.com/a/e-nc/youless/
        ProxyPassReverse /a/e-nc/youless/ https://www.domain.com/a/e-nc/youless/

    Thanks in advance!
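
    For comparison, a mod_rewrite-based sketch of the same idea (the path and hostname are taken from the post, everything else is an assumption): the [P] flag proxies the request internally, which preserves the method and the POST body, unlike an external 301/302 redirect, which most clients turn into a GET:

        # requires mod_rewrite, mod_proxy and mod_proxy_http
        RewriteEngine On
        SSLProxyEngine On
        RewriteRule ^/a/e-nc/youless(.*)$ https://www.domain.com/a/e-nc/youless$1 [P,L]

    One thing to watch with the ProxyPass configuration above: the file has no VirtualHost wrapper, so if it is also loaded for the https virtual host, that host ends up proxying the path to itself.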

    Read the article

  • PhpMyAdmin import/export - strange character encoding issues.

    - by John Hunt
    Hello, I'm migrating a site to a new host, and there are a couple of databases on there. There's no SSH access so I'm stuck with phpmyadmin. The issue is that certain characters (namely just whitespace) seem to be corrupted on the new site (same html, and apache doesn't seem to be messing with any encodings - you can see the strange characters have changed when I use less on my linux machine after downloading a table dump from both servers). The issue isn't as bad if I import into the new database as utf-8 - whitespace characters only have one funny A type symbol instead of two. I've been trying various combinations of character encoding etc. to no avail.

    Exporting from:

        phpMyAdmin 2.6.2
        MySQL 4.1.20
        MySQL connection collation: utf8_general_ci
        MySQL charset: UTF-8 Unicode (utf8)
        Collation on tables and their fields: latin1_swedish_ci

    Importing to:

        phpMyAdmin 2.11.9.2
        MySQL client version: 5.0.45
        MySQL charset: UTF-8 Unicode (utf8)
        MySQL connection collation: utf8_general_ci

    The import sql has this kind of thing in it:

        ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=192 ;

    I get the impression this is actually a bug or something with mysqldump as nothing seems to work.. does anyone have any insight into this? Cheers, John.
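
    The symptom described (a whitespace character turning into one or two "funny A" symbols) is typical of latin1 tables being dumped over a utf8 connection and then re-encoded on import. A sketch of pinning the character set explicitly on both sides - shown as command-line flags for illustration even though this migration has to go through phpMyAdmin's export/import character-set dropdowns, which control the same setting:

        mysqldump --default-character-set=latin1 mydb > dump.sql
        mysql --default-character-set=latin1 mydb < dump.sql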

    Read the article

  • Why Wireshark does not recognize this HTTP response?

    - by Alois Mahdal
    I have a trivial CGI script that outputs simple text content. It's written in Perl using the CGI module, and it specifies only the most basic headers:

        print $q->header(
            -type => 'text/plain',
            -Content_length => $length,
        );
        print $stuff;

    There's no apparent issue with functionality, but I'm confused about the fact that Wireshark does not recognize the HTTP response as HTTP--it's marked as TCP. Here is the request and response:

        GET /cgi-bin/memfile/memfile.pl?mbytes=1 HTTP/1.1
        Host: 10.6.130.38
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: cs,en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Thu, 05 Apr 2012 18:52:23 GMT
        Server: Apache/2.2.15 (Win32) mod_ssl/2.2.15 OpenSSL/0.9.8m
        Content-length: 1048616
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/plain; charset=ISO-8859-1

        XXXXXXXX...

    And here is the packet overview (the full packet is here on pastebin):

        No. Time     Source      srcp Destination dstp  Protocol Info                               tcp.stream abstime
        5   0.112749 10.6.130.38 80   10.6.130.53 48072 TCP      [TCP segment of a reassembled PDU] 0          20:52:23.228063

        Frame 5: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits)
        Ethernet II, Src: Dell_97:29:ac (00:1e:4f:97:29:ac), Dst: Dell_3b:fe:70 (00:24:e8:3b:fe:70)
        Internet Protocol Version 4, Src: 10.6.130.38 (10.6.130.38), Dst: 10.6.130.53 (10.6.130.53)
        Transmission Control Protocol, Src Port: http (80), Dst Port: 48072 (48072), Seq: 1, Ack: 330, Len: 1460

    Now when I see this in Wireshark: there's the usual TCP handshake, then the GET request shown as HTTP with preview, then the next packet contains the response, but it is not marked as an HTTP response--just a generic "[TCP segment of a reassembled PDU]", and it is not caught by the "http.response" filter. Can somebody explain why Wireshark does not recognize it? Is there something wrong with the response?
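
    One likely reason, consistent with the 1 MB Content-length shown above: when TCP reassembly is enabled, Wireshark attaches the HTTP label (and the http.response match) to the last segment of the reassembled response, so the first data packet of a large response shows up only as a "TCP segment of a reassembled PDU". A quick way to check this from the capture (a sketch; the file name is a placeholder):

        # with reassembly on, the response should match once the whole body has been stitched together
        tshark -r capture.pcap -o tcp.desegment_tcp_streams:TRUE -Y "http.response"

        # with reassembly off, the dissector instead labels the packet that carries the headers
        tshark -r capture.pcap -o tcp.desegment_tcp_streams:FALSE -Y "http.response"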

    Read the article

  • Configure IIS to pass-through CGI output without any conditioning

    - by Daniel Watrous
    I'm building a web service on Windows 2008 R2 with IIS 7.5 and Python 2.5. Right now I have the Handler Mappings and everything else set up just fine, except that IIS is modifying what it gets back from the CGI script before sending it along to the client. Here's an example: I wrote the following CGI script:

        # hello.py
        print "Status: 400 Bad Request"
        print "Content-Type: text/html"
        print
        print "Error Message"

    According to the HTTP spec this should be fine, and a Status of 400 should allow for a description of the error message in the body of the response. When the server response actually comes back to me I get the following:

        Status: 400 Bad Request
        Date: Fri, 11 Feb 2011 17:58:30 GMT
        X-Powered-By: ASP.NET
        Connection: close
        Content-Length: 11
        Server: Microsoft-IIS/7.5
        Content-Type: text/html

        Bad Request

    I've seen on this forum and others where I can change or eliminate the X-Powered-By header element, but I would like IIS to leave it alone altogether. I'm not sure why it takes my response, deletes "Error Message" from the body, replaces it with "Bad Request" and then adds all that other junk in. Is there some way to tell IIS to just send the response along without making any changes at all?
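
    The body replacement described here (the CGI's 400 body swapped for IIS's own "Bad Request" text) is what IIS's error-page handling does by default for error status codes. A sketch of the setting usually involved, placed in the site's web.config (assumption: the existing handler mapping stays as configured):

        <configuration>
          <system.webServer>
            <!-- pass an existing error response through instead of substituting IIS error pages -->
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>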

    Read the article

  • Exim: send every emails with a predefined sender

    - by Gregory MOUSSAT
    We use Exim on our servers to send emails only from local automated users, such as root, cron, etc. We have to specify every possible user in /etc/email-addresses. For example:

        root: [email protected]
        cron: [email protected]
        backup: [email protected]

    This allows us to receive every email generated. The problem is that when we add a user for whatever reason (for example, when we add a package, some add a user), we can forget to add this user to /etc/email-addresses. Most of the time it's not a problem, but this is not clean, and the overall method is not clean. We'd like to configure Exim to send every email with the same source address, i.e. every sent email comes from [email protected]. One way could be to use a wildcard or a regular expression in /etc/email-addresses, but this is not supported. I don't currently understand Exim enough to figure out how to modify this one way or another. Ideally, Exim should look into /etc/email-addresses first, and if there is no match it should use the predefined address. But this is very secondary. There are two places where this address is used: 1. when Exim sends the FROM: command to the smtp server, 2. inside the header.

    Edit: the rewrite section is the original one from Debian (comments removed):

        begin rewrite

        .ifndef NO_EAA_REWRITE_REWRITE
        *@+local_domains "${lookup{${local_part}}lsearch{/etc/email-addresses} \
                           {$value}fail}" Ffrs
        *@ETC_MAILNAME "${lookup{${local_part}}lsearch{/etc/email-addresses} \
                           {$value}fail}" Ffrs
        .endif
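
    A sketch of the "look up first, fall back to a fixed address" behaviour built on the Debian rewrite rules quoted above (the fallback address is a placeholder): because the lookup rules end in fail, an address with no entry in /etc/email-addresses is left untouched by them, so a catch-all rule placed after them only fires for those remaining local addresses:

        begin rewrite

        .ifndef NO_EAA_REWRITE_REWRITE
        *@+local_domains "${lookup{${local_part}}lsearch{/etc/email-addresses} \
                           {$value}fail}" Ffrs
        *@ETC_MAILNAME "${lookup{${local_part}}lsearch{/etc/email-addresses} \
                           {$value}fail}" Ffrs
        .endif

        # assumed fallback: anything still addressed from a local domain gets the predefined sender
        *@+local_domains [email protected] Ffrs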

    Read the article

  • 553-Message filtered - HELO Name issue?

    - by g18c
    I am having major issues sending from my SBS2011 machine to Message Labs (server-13.tower-134.messagelabs.com):

        553-Message filtered. Refer to the Troubleshooting page at
        553-http://www.symanteccloud.com/troubleshooting for more
        553 information. (#5.7.1) ##

    I have changed the IP and hostnames in the text below. I am not on any IP or domain blacklists. I have set up SPF (which includes the mailchimp servers):

        v=spf1 mx a ip4:95.74.157.22/32 a:remote.mydomain.com include:servers.mcsv.net ~all

    I am sure I have set up my HELO names correctly under the Exchange Management Console. Sending a test email from the SBS server and looking at the header shows the following:

        X-Orig-To: [email protected]
        X-Originating-Ip: [95.74.157.22]
        Received: from [95.74.157.22] ([95.74.157.22:52194] helo=remote.mydomain.com)
            by smtp50.gate.ord1a.rsapps.net (envelope-from <[email protected]>)
            (ecelerity 2.2.3.49 r(42060/42061)) with ESMTP
            id 11/90-10010-E529C835; Mon, 02 Jun 2014 11:04:09 -0400
        Received: from MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef]) by
            MYSBSSVR.mydomain.local ([fe80::3159:95a6:23f:1bef%10]) with mapi id
            14.01.0438.000; Mon, 2 Jun 2014 19:03:56 +0400

    Is the main HELO name there OK, and do I need to worry about the second Received block where MYSBSSVR.mydomain.local is mentioned? I have asked the ISP to set the reverse DNS for my IP to remote.mydomain.com but they have instead put remote.MYDOMAIN.com - would this cause HELO lookups to classify this as not matching? Anything else I can do to find out why I am being filtered?
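
    Two checks that follow directly from the question (a sketch; the IP is the one quoted in the post): DNS names are case-insensitive, so remote.MYDOMAIN.com and remote.mydomain.com are the same record and the capitalisation alone should not cause a mismatch. What matters is that the forward and reverse lookups agree with the HELO name and the SPF record:

        nslookup 95.74.157.22              # reverse (PTR) record, should point at remote.mydomain.com
        nslookup remote.mydomain.com       # forward record, should resolve back to 95.74.157.22
        nslookup -type=TXT mydomain.com    # the SPF record as actually published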

    Read the article

  • Nginx + php-fpm - recv() error

    - by Ilya Biryukov
    I get the following error in the nginx log:

        [error] 17734#0: *6643 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: [cut], server: [cut], request: "GET /venues HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "[cut]"

    I have a dedicated box with 8 GB RAM and a quad core chip. Good server. Nginx, php-fpm and mysql, all latest versions, running under Ubuntu 10.04. I only get this when I stress test the server with siege. If I increase the number of concurrent connections to 100, I can get up to 20% of all requests to fail. Furthermore, I don't get this on pages that have no mysql queries, and only a few failures on pages with a moderate number of queries. But I'm not sure if that has anything to do with it. I have a feeling this is something to do with php, but I can't figure it out. Any suggestions of where to even start looking?

    Update: and the php error log is silent. No record of anything going wrong.
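
    A "Connection reset by peer" from the FastCGI upstream usually means the PHP-FPM worker died or was recycled mid-request, which would not necessarily show up in PHP's own error log. A sketch of pool settings and logs worth checking (directive names are standard php-fpm ones, values and paths are illustrative):

        ; php-fpm pool / global config
        pm.max_children = 50           ; too low a limit starves requests under siege
        pm.max_requests = 500          ; workers are recycled after this many requests
        log_level = notice             ; php-fpm's own log, separate from PHP's error_log

        # then watch the php-fpm log while running siege (path varies by package)
        tail -f /var/log/php5-fpm.log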

    Read the article

  • How to test a HTTPS URL with a given IP address

    - by GreatFire
    Let's say a website is load-balanced between several servers. I want to run a command to test whether it's working, such as curl DOMAIN.TLD. So, to isolate each IP address, I specify the IP manually. But many websites may be hosted on the server, so I still provide a Host header, like this: curl IP_ADDRESS -H 'Host: DOMAIN.TLD'. In my understanding, these two commands create the exact same HTTP request. The only difference is that in the latter one I take out the DNS lookup part from cURL and do this manually (please correct me if I'm wrong). All well so far.

    But now I want to do the same for an HTTPS url. Again, I could test it like this: curl https://DOMAIN.TLD. But I want to specify the IP manually, so I run curl https://IP_ADDRESS -H 'Host: DOMAIN.TLD'. Now I get a cURL error:

        curl: (51) SSL: certificate subject name 'DOMAIN.TLD' does not match target host name 'IP_ADDRESS'

    I can of course get around this by telling cURL not to care about the certificate (the "-k" option) but it's not ideal. Is there a way to isolate the IP address being connected to from the host being certified by SSL?
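
    One way to do exactly this separation (a sketch, assuming a curl new enough to have the option; --resolve was added in curl 7.21.3) is to pin the name-to-IP mapping instead of replacing the host in the URL, so both SNI and certificate verification still see DOMAIN.TLD:

        curl --resolve DOMAIN.TLD:443:IP_ADDRESS https://DOMAIN.TLD/

    No Host header override is needed in this form, since the URL keeps the real hostname.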

    Read the article

  • Why won't IIS serve my website? - 404 Page Not Found

    - by Giffyguy
    Built a brand new server, with a fresh copy of Windows Server 2003 Enterprise x86 Edition.

        - Installed the .NET Framework 1.1, 2.0, 3.5, and 4.0
        - Added the "Domain Controller" and "Application Server" roles
        - Created a new website, pointed it to a local directory: C:\Inetpub\angryoctopus.net\
        - Added the appropriate headers: angryoctopus.net, www.angryoctopus.net, TCP port 80, all IPs
        - Moved the website content into the local directory
        - Configured the default document in IIS: Default.aspx
        - Enabled ASP.NET for this website, and set it to the correct version: 2.0.50727
        - Configured the zone angryoctopus.net in DNS, and tested DNS lookup here to ensure DNS was functional
        - Opened the website in VS 2008 and re-built (and debugged) it to ensure the content was functional

    I can clearly see that IIS is responding normally by browsing directly to my server's IP address. Since this does not use the angryoctopus host header, the default website is displayed instead: the "Under Construction" page. And yet, after all of this, angryoctopus.net still returns 404. Does anybody know what could be wrong? What troubleshooting steps have I forgotten? Is there a command-line diagnostic that might provide more information?
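
    Since DNS already resolves, one quick command-line way to tell whether the 404 comes from host-header routing or from the site itself is to send the host header explicitly at the server's IP and compare it with a plain request (a sketch, run from any machine with curl; SERVER_IP is a placeholder):

        curl -I http://SERVER_IP/                                    # should show the Under Construction default site
        curl -I -H "Host: angryoctopus.net" http://SERVER_IP/       # should be routed to the new site
        curl -I -H "Host: www.angryoctopus.net" http://SERVER_IP/

    If the host-header requests still return 404 while Default.aspx exists in C:\Inetpub\angryoctopus.net\, the host-header binding or the default-document configuration is a likelier culprit than DNS.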

    Read the article

  • How should I configure nginx caching headers for a "baked" static file blog? (Octopress)

    - by Doug Stephen
    I recently deployed an Octopress blog (Octopress is a blogging platform built around Jekyll). It's a static-site blog generator, with no dynamic content or databases to muck about with, and it's being served up by nginx. My question is: what is the appropriate expires directive or Cache-Control header that I should set to make sure that visitors get the most up-to-date version of the site when they visit, without having to manually refresh? Since the site is just .html files, it seems to get cached pretty aggressively. I've tried a million different combinations of expires modified + xxxx and even straight up expires off, but I can't seem to wrap my head around it. I'm very new to dealing with caching like this, specifically on static files that change frequently, and obviously if the site hasn't been changed then I'd like for it to be served up out of the cache.

    Update (still not solved though): I found open_file_cache and tweaked that. Still no dice. It seems like what I might want to do is use nginx as a proxy cache and use Apache with ETags? Is there really no convenient way to make nginx play nicer with conditional requests from the client?

    TL;DR: I'm running a static-file blog and I'd like to set up nginx to only serve from the cache if the blog hasn't been updated recently, but I'm too stupid to figure it out myself because I'm relatively new to web servers.
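
    A sketch of one common split for a baked site like this (directive values are illustrative): give the HTML a no-cache lifetime so browsers revalidate it on every visit, and give assets a long lifetime. nginx already answers conditional If-Modified-Since requests for static files with 304 on its own, and open_file_cache is unrelated to client-side caching - it only caches file descriptors inside nginx.

        location ~* \.(css|js|png|jpg|gif|svg)$ {
            expires 30d;
        }

        location / {
            # HTML: the browser may keep a copy but must revalidate before using it
            add_header Cache-Control "no-cache";
        }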

    Read the article

  • NTLM, Kerberos and F5 switch issues

    - by G33kKahuna
    I'm supporting an IIS-based application that is scaled out into web and application servers; both web and application tiers run behind IIS. The application is NTLM-capable when IIS is configured to authenticate via Kerberos. It's been working so far without a glitch. Now, I'm trying to bring in 2 F5 switches, 1 in front of the web servers and another in front of the application servers. The 2 F5 instances (say IPs 185 and 186) are sitting on a Linux host. F5 to F5 looks for a NAT IP (say IPs 194, 195 and 196). I created a DNS entry for all IPs including the NAT ones and ran a SETSPN command to register the IIS service account to be trusted at HTTP, HOST and domain level. With the web F5 turned on, and with each web server connecting to a cardinal app server, when the user connects to the web F5 domain name, trust works and the user authenticates without a problem. However, when the app load balancer is turned on and the web servers are pointed to the new F5 app domain name, the user gets a 401. The IIS log shows no authenticated username and a 401 status. Wireshark does show a negotiate ticket header passed into the system. Any ideas or suggestions are much appreciated. Please advise.
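
    With Kerberos behind a load balancer, the service principal name the clients (here, the web servers) request is built from the host name they connect to, so the new F5 app DNS name usually needs its own SPN on the account the application pools run as. A sketch with placeholder names (the real account and DNS name come from the environment; setspn -A is the older equivalent of -S):

        setspn -S HTTP/f5-app.mydomain.local MYDOMAIN\svc_iispool
        setspn -S HTTP/f5-app MYDOMAIN\svc_iispool
        setspn -L MYDOMAIN\svc_iispool          # list what is already registered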

    Read the article

  • compile ntp without ssl

    - by Zulakis
    I need to deploy ntp to a very space-critical pxe-imaging system. (Yes, each KB matters.) The footprint needs to be as small as possible, so I want to compile ntp without linking openssl. According to the manual this should be possible:

        If available, the OpenSSL library from http://www.openssl.org is used to support public key cryptography. The library must be built and installed prior to building NTP. The procedures for doing that are included in the OpenSSL documentation. The library is found during the normal NTP configure phase and the interface routines compiled automatically. Only the libcrypto.a library file and openssl header files are needed. If the library is not available or disabled, this step is not required.

    I already tried out ./configure --without-openssl, however, this didn't help. This is my ldd output:

        ldd ntpd/ntpd
            linux-gate.so.1 =>  (0xb7706000)
            libm.so.6 => /lib/i686/cmov/libm.so.6 (0xb76d5000)
            libcrypto.so.0.9.8 => /usr/lib/i686/cmov/libcrypto.so.0.9.8 (0xb7582000)
            librt.so.1 => /lib/i686/cmov/librt.so.1 (0xb7578000)
            libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb741d000)
            /lib/ld-linux.so.2 (0xb7707000)
            libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb7419000)
            libz.so.1 => /usr/lib/libz.so.1 (0xb7404000)
            libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb73eb000)

    The system I am compiling on is 32-bit Debian Lenny using openssl 0.9.8g-15+lenny16. What is the correct configure option to compile ntp without openssl?
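
    A way to see which switch the configure script of this particular ntp release actually honours, and whether the OpenSSL probe still succeeded, without guessing at option names (a sketch; <chosen-option> is a placeholder for whatever the help output lists):

        ./configure --help | grep -i -E 'ssl|crypto'     # list the crypto-related options this release supports
        grep -i openssl config.log | tail                # shows whether the OpenSSL detection succeeded last time
        make clean && ./configure <chosen-option> && make && ldd ntpd/ntpd | grep crypto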

    Read the article

  • WordPress permalinks not working, everything seems fine

    - by javipas
    I have a WordPress blog I've migrated from another CMS, and I've been having a lot of problems with my permalink structure: lots of articles give a 404, although they are there, somewhere, published. The site is www.muycomputerpro.com (MCP for short), and for example an article that should be found is:

        http://muycomputerpro.com/Actualidad/Especiales/2009-las-grandes-crecen-en-la-bolsa

    If I do a search on the search tool at MCP, the result is there (see EnlacesMCP-1.jpg). But when I click on the link, our 404 error page appears (see EnlacesMCP-2.jpg). The weird thing is, the article is published, and the permalink is the right one, as you can see on the screenshot of the WordPress CMS: the permalink (below the title) is correct (http://www.muycomputerpro.com/Actualidad/Especiales/2009-las-grandes-crecen-en-la-bolsa/) but it does not work. In fact, if I try to use the short link (http://www.muycomputerpro.com/?p=5023) the article does not show either. I've accessed my WordPress DB and I've searched for the article to see if there is something wrong there, but from what I can tell all the fields are OK (see screenshot). I really don't know what is causing this. The permalink structure should work (I'm using the "Custom permalink" plugin to preserve the old URLs that had an alphanumeric code at the end of the postname) and the permalink config in wordpress is "/%postname%/". I really need help :(
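
    Two routine checks for this class of 404 (a sketch; paths are the WordPress defaults and may differ on this install): re-saving the permalink settings under Settings > Permalinks regenerates the rewrite rules, and the .htaccess in the WordPress root should contain the standard block that routes pretty URLs to index.php:

        # default WordPress rewrite block expected in .htaccess
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

    Since the ?p=5023 short link fails as well, the rewrite rules alone may not be the whole story, but confirming them rules out the most common cause.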

    Read the article

  • Add single sign-on into existing web app

    - by EvilDr
    Apologies if this isn't the best site; I've searched for an answer but can't find anything quite right. I don't actually know the correct terminology I should be using here, so any pointers will be appreciated. I have a web application that is accessed by many different users across different organisations. Access is provided by each user having a unique username/password which is stored in SQL (the database fields are customerID, userID, username). Some organisations are now asking if we can change this to allow "Active Directory single sign-on" so that users don't need to remember yet another set of login details. From research I can see how this is achieved using OpenAuth and Google (etc), but I know hardly anything about AD and can't find much information on this (again, I'm sure it helps when you know the terminology). Is this request even possible to achieve, given that most users will be from different (and unrelated) organisations? I saw on a Microsoft Build video not long ago that there is some kind of replication service for AD to allow cloud authentication. Is this what I should be aiming for?

    Read the article
