Search Results

Search found 21298 results on 852 pages for 'www mechanize'.


  • HTTP Content-type header for cached files

    - by Brian
    Hello. Using Apache with mod_rewrite, when I load a .css or .js file and view the HTTP headers, the Content-Type is only set correctly the first time I load it; subsequent refreshes are missing Content-Type altogether, and it's creating some problems for me. Specifically, gzip is not compressing these files. I can get around this by appending a random query string value to the end of each filename, e.g. http://www.site.com/script.js?12345. However, I don't want to have to do that, since caching is good; all I want is for the Content-Type to be present. I've tried using a RewriteRule to force the type, but that still didn't solve the problem. Any ideas? Thanks, Brian

    More details. HTTP headers WITHOUT the random query string value (http://localhost/script.js):

      GET /script.js HTTP/1.1
      Host: localhost
      User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
      Accept: */*
      Accept-Language: en-us,en;q=0.5
      Accept-Encoding: gzip,deflate
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Keep-Alive: 115
      Connection: keep-alive
      Referer: http://localhost/
      Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5;
      If-Modified-Since: Thu, 29 Apr 2010 15:49:56 GMT
      If-None-Match: "3440e9-119ed-485621404f100"
      Cache-Control: max-age=0

      HTTP/1.1 304 Not Modified
      Date: Thu, 29 Apr 2010 20:19:44 GMT
      Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1
      Connection: Keep-Alive
      Keep-Alive: timeout=5, max=100
      Etag: "3440e9-119ed-485621404f100"
      Vary: Accept-Encoding
      X-Pad: avoid browser bug

    HTTP headers WITH the random query string value (http://localhost/script.js?c947344de8278053f6edbb4365550b25):

      GET /script.js?c947344de8278053f6edbb4365550b25 HTTP/1.1
      Host: localhost
      User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
      Accept: */*
      Accept-Language: en-us,en;q=0.5
      Accept-Encoding: gzip,deflate
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Keep-Alive: 115
      Connection: keep-alive
      Referer: http://localhost/
      Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5;

      HTTP/1.1 200 OK
      Date: Thu, 29 Apr 2010 20:14:40 GMT
      Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1
      Last-Modified: Thu, 29 Apr 2010 15:49:56 GMT
      Etag: "3440e9-119ed-485621404f100"
      Accept-Ranges: bytes
      Vary: Accept-Encoding
      Content-Encoding: gzip
      Content-Length: 24605
      Keep-Alive: timeout=5, max=100
      Connection: Keep-Alive
      Content-Type: application/javascript
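    Since the browser cache makes this hard to reproduce cleanly, both cases can be replayed by hand; a minimal curl sketch (the ETag is copied from the capture above, so substitute your own):

      # Unconditional request: expect 200 with Content-Type set
      curl -sI -H 'Accept-Encoding: gzip,deflate' http://localhost/script.js

      # Conditional request, which is what a refresh sends: a 304 reply
      # carries no entity, so Content-Type is legitimately absent there
      curl -sI -H 'Accept-Encoding: gzip,deflate' \
           -H 'If-None-Match: "3440e9-119ed-485621404f100"' \
           http://localhost/script.js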


  • Starting php-cgi at boot on Mac OS X 10.6.8

    - by nikhil
    I'm new to Mac OS. I have installed and configured nginx with php-fastcgi. To access PHP files from my browser I currently have to run this command in a terminal and keep that terminal open:

      php-cgi -b 127.0.0.1:9000 -q

    Here's the plist that I wrote by looking up sources on the internet:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
        <key>Debug</key>
        <false/>
        <key>EnvironmentVariables</key>
        <dict>
          <key>PHP_FCGI_CHILDREN</key>
          <string>2</string>
          <key>PHP_FCGI_MAX_REQUESTS</key>
          <string>1000</string>
        </dict>
        <key>RunAtLoad</key>
        <true/>
        <key>KeepAlive</key>
        <true/>
        <key>UserName</key>
        <string>nikhil</string>
        <key>Label</key>
        <string>php-fastcgi</string>
        <key>ProgramArguments</key>
        <array>
          <string>/usr/bin/php-cgi</string>
          <string>-b 127.0.0.1:9000</string>
          <string>-q</string>
        </array>
      </dict>
      </plist>

    I'm loading it with "launchctl load -w ~/Library/LaunchAgents/php-fastcgi.plist", without any success. Can anyone tell me how this can be done?
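    A debugging sketch for the next attempt (the argument-splitting point is an assumption worth testing: launchd passes each <string> under ProgramArguments as one literal argv entry, so "-b 127.0.0.1:9000" may need to become two strings):

      # Did launchd accept the job, and what was its last exit status?
      launchctl list | grep php-fastcgi

      # Watch the system log while loading the agent for launchd errors
      tail -f /var/log/system.log &
      launchctl load -w ~/Library/LaunchAgents/php-fastcgi.plist

      # Candidate fix inside the plist (two argv entries instead of one):
      #   <string>-b</string>
      #   <string>127.0.0.1:9000</string>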


  • nginx probably delivering wrong filetype for .css file with php tags

    - by Katai
    And again, nginx is giving me many questions today :) Like always, I already tried around for a while, but can't seem to fix this issue: I just configured nginx to handle my .css files the same as my .php files (to parse PHP tags inside the CSS file). This works perfectly, and the file is found and delivered. I could debug it with Firebug, and everything is OK (it displays the contents of the .css inside the opened <link> tag). So, everything working, right? Wrong. The page has the CSS, but it does not interpret it. What I mean by this: apparently, the file type (or application type, whatever) of the CSS is wrong. The page can access the CSS, but doesn't bother at all to actually use it.

    What I checked / tried:

      - There are no PHP errors inside of the .css, so that one is out.
      - The .css is accessible. I can call the URI manually, or check if the included URL finds it: both work.
      - The .css has no syntax errors (I switched to a css that just has body { background-color: #000; }).
      - It works without nginx.
      - I deleted the browser cache / restarted nginx after config rewrites.

    Here is the configuration:

      server {
        listen 80;
        server_name localhost;
        access_log /var/log/nginx/board.access_log;
        error_log /var/log/nginx/board.error_log warn;
        root /var/www/board/public;
        index index.php;
        fastcgi_index index.php;

        location / {
          try_files $uri $uri /index.php;
        }

        location ~ (\.php|\.css)$ {
          try_files $uri =404;
          include /etc/nginx/fastcgi_params;
          #keepalive_timeout 0;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          fastcgi_pass 127.0.0.1:7777;
        }
      }

    Firebug 'Network' response headers:

      Connection         keep-alive
      Content-Encoding   gzip
      Content-Type       text/html
      Date               Sat, 16 Jun 2012 10:08:40 GMT
      Server             nginx/1.0.5
      Transfer-Encoding  chunked
      X-Powered-By       PHP/5.3.6-13ubuntu3.7

    I think I just answered my own question. Is the Content-Type text/html the problem? How can I remove that? My personal guess is that I have to use this in some way:

      include /etc/nginx/mime.types;
      default_type application/octet-stream;

    But I'm not sure... anyone have an idea how to solve this? TL;DR: the CSS file is delivered correctly, but the browser doesn't seem to actually use it as CSS. (Tested; it works on Apache.)
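    One more check on my list (the reasoning being that PHP's default Content-Type is text/html, which matches the response header above, so a stylesheet routed through FastCGI has to set its own header; the file name below is a stand-in):

      # What Content-Type does the stylesheet actually come back with?
      curl -sI http://localhost/style.css | grep -i '^Content-Type'

      # Candidate fix at the very top of the PHP-parsed .css file:
      #   <?php header('Content-Type: text/css'); ?>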


  • Why does NX Client for Windows silently close after connection?

    - by pavel
    Hey! I connect remotely to my Ubuntu server from a Vista machine. Now I need to run a GUI application on the server (Wireshark), so I decided to use the FreeNX server/client to view the Ubuntu GUI on Vista. I have successfully installed FreeNX on Ubuntu and NX Client on Vista; I was following this guide. Unfortunately, now I find myself stuck with the following problem: at the client, the !M logo window appears, but after a few seconds that window just closes, without even showing an error message. Guys, I'm really stuck, please help! Maybe I should have installed some graphical environment on the server? These are the details from NX Client; it seems there are no errors.

      Info: Display running with pid '7768' and handler '0x670d24'.
      NXPROXY - Version 3.4.0
      Copyright (C) 2001, 2007 NoMachine.
      See http://www.nomachine.com/ for more information.
      Info: Proxy running in client mode with pid '2168'.
      Session: Starting session at 'Sat Dec 19 10:58:35 2009'.
      Warning: Connected to remote version 3.3.0 with local version 3.4.0.
      Info: Connection with remote proxy completed.
      Info: Using WAN link parameters 768/24/1/0.
      Info: Using cache parameters 4/4096KB/16384KB/16384KB.
      Info: Using pack method 'adaptive-9' with session 'kde'.
      Info: Using ZLIB data compression 1/1/32.
      Info: Using ZLIB stream compression 1/1.
      Info: No suitable cache file found.
      Info: Forwarding X11 connections to display ':0'.
      Info: Listening to font server connections on port '11000'.
      Session: Session started at 'Sat Dec 19 10:58:35 2009'.
      Info: Established X server connection.
      Info: Using shared memory parameters 0/0K.
      Session: Terminating session at 'Sat Dec 19 10:58:37 2009'.
      Session: Session terminated at 'Sat Dec 19 10:58:37 2009'.


  • Core i7 c1e and speedstepping - BSOD on shutdown

    - by DeaconDesperado
    I'm having an interesting problem with my recent Core i7 digital audio workstation build that I am curious to see if others have encountered. First, here are the specs on the machine:

      ASUS P6TD Deluxe Intel X58 Socket LGA1366 motherboard
      Intel Core i7-950 3.06GHz 8M LGA1366 CPU
      CORSAIR DOMINATOR 6GB (3 x 2GB) 240-pin DDR3 SDRAM DDR3 1600
      Western Digital Caviar Black WD5001AALS 500GB

    Plus a couple of ASUS optical drives and a 750W Corsair PSU, running Windows 7 x64. All this is connected to the nefarious Digi 002 FireWire audio interface for use with Pro Tools. I mostly followed the specs posted by many other i7 users in the Digidesign community, who pooled their collective knowledge in this thread. After completing my build, I fell victim to the "UD5 squeal" described in that forum thread. So, taking the advice posted there, I disabled C1E advanced halt state and Intel SpeedStep (I would likely have done this anyway to maintain a stable clock; power consumption isn't really a relevant concern on this machine). I enabled XMP to set the RAM timings properly as well.

    What I am experiencing is a BSOD upon shutdown, but only immediately after Windows fully exits and ends all processes. The error is a MACHINE_CHECK_EXCEPTION 0x000000. The funny thing is that it is extremely intermittent, and only occurs if the shutdown immediately followed a period of relative idleness. It does not generate a minidump; I suspect because Windows monitoring has terminated by the time this error occurs. No damage is evident, and one can simply turn the machine off manually and the system will act as though a proper shutdown had occurred. If anything it is an annoyance; I just want to be certain it is not affecting my long-term stability.

    I have read that the i7 950 does not like DRAM voltages past 1.65, but that they are acceptable if they are within .5 of the BCLK setting. I have tried disabling XMP and setting all timings to auto, and the problem still manifests in an identical way. I suspect that the CPU idleness preceding shutdown is the determining factor, as both C1E and SpeedStep are settings intended to modify handling of this state. Any suggestions or prior experiences would be greatly appreciated.

    EDIT: The behavior very closely resembles what's described in this thread: http://www.tomshardware.com/forum/12003-63-shut-problem-windows. The benign nature of it is identical. I can't seem to download the hotfix cited there, however.


  • How do I configure a site in IIS 7 for SSL with a wildcard certificate?

    - by michielvoo
    We have a Windows 2008 server with IIS 7 to test sites we develop for our clients. Each site has a binding on a subdomain:

      clienta.example.com
      clientb.example.com
      clientc.example.com

    (Using example.com to protect the innocent.) For one of these sites we now have to test whether it works over https. So I created a wildcard certificate request with *.example.com as the common name. I have received the certificate (issued by PositiveSSL SA) and completed the request. The certificate is now installed in IIS. Now I have added an https binding to the second site with the following settings:

      type: https
      IP address: All Unassigned
      Port: 443
      Host name: clientb.example.com
      SSL certificate: *.example.com

    Browsing the site over regular http works fine. When I try to browse the site over https, I get the following errors (depending on the browser used):

      Chrome: This webpage is not available. Error 102 (net::ERR_CONNECTION_REFUSED): Unknown error.
      Firefox: Unable to connect. Firefox can't establish a connection to the server at clientb.example.com. Firebug says: Status: Aborted.
      Internet Explorer: Internet Explorer cannot display the webpage.

    I have checked Failed Request Tracing, and according to the log the request was completed with status 200. I have run the SSL Diagnostics Tool with the following result:

      System time: Fri, 04 Mar 2011 14:04:35 GMT
      Connecting to 192.168.2.95:443
      Connected
      Handshake: 115 bytes sent
      Handshake: 3877 bytes received
      Handshake: 326 bytes sent
      Handshake: 59 bytes received
      Handshake succeeded
      Verifying server certificate, it might take a while...
      Server certificate name: *.example.com
      Server certificate subject: OU=Domain Control Validated, OU=PositiveSSL Wildcard, CN=*.example.com
      Server certificate issuer: C=GB, S=Greater Manchester, L=Salford, O=Comodo CA Limited, CN=PositiveSSL CA
      Server certificate validity: From 2-3-2011 1:00:00 To 2-3-2012 0:59:59

      HTTPS request:
      GET / HTTP/1.0
      User-Agent: SSLDiag
      Accept:*/*

      HTTPS: 85 bytes of encrypted data sent
      HTTPS: 533 bytes of encrypted data received

      Status: HTTP/1.1 404 Not Found
      HTTP/1.1 404 Not Found
      Content-Type: text/html; charset=us-ascii
      Server: Microsoft-HTTPAPI/2.0
      Date: Fri, 04 Mar 2011 14:04:35 GMT
      Connection: close
      Content-Length: 315

      <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
      <HTML><HEAD><TITLE>Not Found</TITLE>
      <META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
      <BODY><h2>Not Found</h2>
      <hr><p>HTTP Error 404. The requested resource is not found.</p>
      </BODY></HTML>

      HTTPS: server disconnected
      Final handshake: 37 bytes sent successfully

    Q: What can I do to make this work?
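    Two checks I still want to run on the server (a sketch; whether the certificate binding at the HTTP.SYS level is actually the cause is only a guess at this point):

      rem Which certificate, if any, is bound to 0.0.0.0:443? (elevated prompt)
      netsh http show sslcert

      rem What bindings does IIS itself think the sites have?
      %windir%\system32\inetsrv\appcmd list sites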


  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well

    - by Thomas
    OK, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me too. But it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine.

    That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats. Then, after wandering through the site, I noticed a few other things: both IE and Chrome will frequently time out on any page that is PHP-based. They never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner.

    My setup is:

      Server: Windows Server 2003, IIS 6, PHP 5.2.9-1.
      Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages.
      Firefox 2, Firefox 3: no timeouts. Firebug shows no errors or even lengthy periods serving the pages.

    I've spent two days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?
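    The next diagnostics on my list, to separate the TLS layer from PHP (the URL is the one above; the POST body is a stand-in):

      # Does the handshake itself complete promptly?
      openssl s_client -connect www.mysite.com:443

      # Time a scripted POST outside any browser
      curl -k -s -o /dev/null -w '%{time_total}\n' \
           -d 'field=value' https://www.mysite.com/changes.php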


  • Command-line connect to wired network for Ubuntu

    - by Tim
    I'd like to know how to use the command line to connect to a wired network in general on Ubuntu 8.10. In my case, I connect a cable to my laptop but it doesn't work with my WICD, so I'd like to try the command-line method. Here is the ifconfig of my network adapters:

      $ ifconfig
      eth0      Link encap:Ethernet  HWaddr 00:c0:9f:8d:23:74
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                Interrupt:19 Base address:0x1800

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:16436  Metric:1
                RX packets:4457 errors:0 dropped:0 overruns:0 frame:0
                TX packets:4457 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:493002 (493.0 KB)  TX bytes:493002 (493.0 KB)

      wlan0     Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1
                RX packets:1508929 errors:0 dropped:0 overruns:0 frame:0
                TX packets:768144 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:806027375 (806.0 MB)  TX bytes:78834873 (78.8 MB)

      wlan0:avahi Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                inet addr:169.254.5.92  Bcast:169.254.255.255  Mask:255.255.0.0
                UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1

      wmaster0  Link encap:UNSPEC  HWaddr 00-0E-9B-AB-56-19-00-00-00-00-00-00-00-00-00-00
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Thanks and regards!

    UPDATE: Tried what oyvindio suggested. Here is the failing output:

      $ sudo dhclient3 eth0
      There is already a pid file /var/run/dhclient.pid with pid 18279
      killed old client process, removed PID file
      Internet Systems Consortium DHCP Client V3.1.1
      Copyright 2004-2008 Internet Systems Consortium.
      All rights reserved.
      For info, please visit http://www.isc.org/sw/dhcp/
      mon0: unknown hardware address type 803
      wmaster0: unknown hardware address type 801
      mon0: unknown hardware address type 803
      wmaster0: unknown hardware address type 801
      Listening on LPF/eth0/00:c0:9f:8d:23:74
      Sending on   LPF/eth0/00:c0:9f:8d:23:74
      Sending on   Socket/fallback
      DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
      DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10
      DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12
      DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 15
      DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 11
      DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8
      No DHCPOFFERS received.
      No working leases in persistent database - sleeping.
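    Given that eth0 shows zero RX/TX packets, here are the command-line steps worth trying first (a sketch; the addresses are examples, not values from my network):

      # Does the kernel see a live cable on eth0?
      sudo ethtool eth0 | grep 'Link detected'

      # Static fallback if no DHCP server ever answers
      sudo ifconfig eth0 192.168.1.100 netmask 255.255.255.0 up
      sudo route add default gw 192.168.1.1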


  • SSL certificate for Oracle Application Server 11g

    - by Easter Sunshine
    I was asked to get an SSL certificate for an "Oracle Application Server 11g" which has a soon-to-expire certificate. Brushing aside the fact that 10g seems to be the newest version, I got a certificate from InCommon, as I usually do without problem (except this is the first time I supplied Oracle Application Server 11g as the software type on the CSR form). The email containing links to download the certificate mentioned:

      Certificate Details:
      SSL Type : InCommon SSL
      Server   : OTHER

    I forwarded the email over to the person responsible for installing it and got a reply that the server type must be Oracle Application Server for the certificate to work (the CN is the same as before). They were unable to install this certificate (no details provided to me) and mentioned they had this issue previously with Thawte when they didn't supply Oracle Application Server as the server type. I don't see any significant difference between the currently installed certificate (working) and the new one I just got signed by InCommon (not working).

    $ openssl x509 -in sso-current.cer -text shows, with irrelevant information omitted:

      Data:
          Version: 3 (0x2)
          Signature Algorithm: sha1WithRSAEncryption
          Issuer: C=ZA, ST=Western Cape, L=Cape Town, O=Thawte Consulting cc, OU=Certification Services Division, CN=Thawte Premium Server CA/[email protected]
          Validity
              Not Before: Oct  1 00:00:00 2009 GMT
              Not After : Nov 28 23:59:59 2012 GMT
          Subject Public Key Info:
              Public Key Algorithm: rsaEncryption
                  Public-Key: (2048 bit)
                  Modulus:
                  Exponent: 65537 (0x10001)
          X509v3 extensions:
              X509v3 Basic Constraints: critical
                  CA:FALSE
              X509v3 CRL Distribution Points:
                  Full Name:
                    URI:http://crl.thawte.com/ThawteServerPremiumCA.crl
              X509v3 Extended Key Usage:
                  TLS Web Server Authentication, TLS Web Client Authentication
              Authority Information Access:
                  OCSP - URI:http://ocsp.thawte.com
      Signature Algorithm: sha1WithRSAEncryption

    and $ openssl x509 -in sso-new.cer -text shows:

      Data:
          Version: 3 (0x2)
          Signature Algorithm: sha1WithRSAEncryption
          Issuer: C=US, O=Internet2, OU=InCommon, CN=InCommon Server CA
          Validity
              Not Before: Nov  8 00:00:00 2012 GMT
              Not After : Nov  8 23:59:59 2014 GMT
          Subject Public Key Info:
              Public Key Algorithm: rsaEncryption
                  Public-Key: (2048 bit)
                  Modulus:
                  Exponent: 65537 (0x10001)
          X509v3 extensions:
              X509v3 Authority Key Identifier:
                  keyid:48:4F:5A:FA:2F:4A:9A:5E:E0:50:F3:6B:7B:55:A5:DE:F5:BE:34:5D
              X509v3 Subject Key Identifier:
                  18:8D:F6:F5:87:4D:C4:08:7B:2B:3F:02:A1:C7:AC:6D:A7:90:93:02
              X509v3 Key Usage: critical
                  Digital Signature, Key Encipherment
              X509v3 Basic Constraints: critical
                  CA:FALSE
              X509v3 Extended Key Usage:
                  TLS Web Server Authentication, TLS Web Client Authentication
              X509v3 Certificate Policies:
                  Policy: 1.3.6.1.4.1.5923.1.4.3.1.1
                    CPS: https://www.incommon.org/cert/repository/cps_ssl.pdf
              X509v3 CRL Distribution Points:
                  Full Name:
                    URI:http://crl.incommon.org/InCommonServerCA.crl
              Authority Information Access:
                  CA Issuers - URI:http://cert.incommon.org/InCommonServerCA.crt
                  OCSP - URI:http://ocsp.incommon.org

    Nothing jumps out at me as the reason one would not work, so I don't have a specific request for the signer for what to do differently when re-signing.
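    Checks that can still be run on the two files before going back to the CA (file names are placeholders; a missing intermediate certificate is one hypothesis this would confirm or rule out):

      # Does the new cert chain up to a root the server trusts?
      openssl verify -CAfile incommon-intermediates.pem sso-new.cer

      # Rule out a key mismatch: both digests must be identical
      openssl x509 -in sso-new.cer -noout -modulus | openssl md5
      openssl rsa  -in sso.key     -noout -modulus | openssl md5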


  • nginx 502 bad gateway - fastcgi not listening? (Debian 5)

    - by Sean
    I have experience with nginx, but it's always been pre-installed for me (via a VPS.net pre-configured image). I really like what it does for me, and now I'm trying to install it on my own server with apt-get. This is a fairly fresh Debian 5 install. I have a few extra packages installed, but they're all .debs; no manual compiling or anything crazy going on. Apache is already installed, but I disabled it. I did apt-get install nginx and that worked fine. I changed the config around a bit for my needs, although the same problem I'm about to describe happens even with the default config.

    It took me a while to figure out that the default Debian package for nginx doesn't spawn FastCGI processes automatically. That's pretty lame, but I figured out how to do that with this script, which I found posted on many different web sites:

      #!/bin/bash
      ## ABSOLUTE path to the PHP binary
      PHPFCGI="/usr/bin/php5-cgi"

      ## tcp-port to bind on
      FCGIPORT="9000"

      ## IP to bind on
      FCGIADDR="127.0.0.1"

      ## number of PHP children to spawn
      PHP_FCGI_CHILDREN=10

      ## number of requests before php-process will be restarted
      PHP_FCGI_MAX_REQUESTS=1000

      # allowed environment variables separated by spaces
      ALLOWED_ENV="ORACLE_HOME PATH USER"

      ## if this script is run as root switch to the following user
      USERID=www-data

      ################## no config below this line

      if test x$PHP_FCGI_CHILDREN = x; then
        PHP_FCGI_CHILDREN=5
      fi

      ALLOWED_ENV="$ALLOWED_ENV PHP_FCGI_CHILDREN"
      ALLOWED_ENV="$ALLOWED_ENV PHP_FCGI_MAX_REQUESTS"
      ALLOWED_ENV="$ALLOWED_ENV FCGI_WEB_SERVER_ADDRS"

      if test x$UID = x0; then
        EX="/bin/su -m -c \"$PHPFCGI -q -b $FCGIADDR:$FCGIPORT\" $USERID"
      else
        EX="$PHPFCGI -b $FCGIADDR:$FCGIPORT"
      fi

      echo $EX

      # copy the allowed environment variables
      E=
      for i in $ALLOWED_ENV; do
        E="$E $i=${!i}"
      done

      # clean environment and set up a new one
      nohup env - $E sh -c "$EX" &> /dev/null &

    When I do a "ps -A | grep php5-cgi", I see the 10 processes running, which should be ready to listen. But when I try to view a web page via nginx, I just get a 502 Bad Gateway error. After futzing around a bit, I tried telnetting to 127.0.0.1 9000 (FastCGI is listening on port 9000, and nginx is configured to talk to that port), but it just immediately closes the connection. This makes me think the problem is with FastCGI, but I'm not sure what I can do to test it. It may just be closing the connection because it's not getting fed any data to process, but it closes immediately, so that makes me think otherwise. So... any advice? I can't figure it out. It doesn't help that it's 1 AM, but I'm going crazy here!
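    Two more tests that might pin down which side is broken (the SCRIPT_FILENAME path is a guess at a real file in my docroot; cgi-fcgi comes with the libfcgi tools):

      # Is anything at all listening on 9000, and as which process?
      netstat -tlnp | grep :9000

      # Speak the FastCGI protocol to the socket directly
      SCRIPT_NAME=/index.php \
      SCRIPT_FILENAME=/var/www/index.php \
      REQUEST_METHOD=GET \
      cgi-fcgi -bind -connect 127.0.0.1:9000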


  • HAProxy roundrobin balancing does not appear to be distributing evenly

    - by andrew
    Hello. I know that with loaded servers, roundrobin in HAProxy (1.4.4) does not distribute evenly, but my servers are currently getting NO traffic (test setup), and roundrobin balancing gives www1,www1,www1,www1,www1,...www2,www2,www2,...,www1... I'm verifying this by having the script being run on each server cat /etc/HOSTNAME (Slackware). I need to have it switch back and forth each time to test some session stuff (stored in shared memcached), but I am having trouble getting it to switch between my two web servers on each request.

      global
        log 127.0.0.1 local0 warning
        maxconn 4096
        chroot /usr/share/haproxy
        pidfile /var/run/haproxy.pid
        uid 99
        gid 99
        daemon

      defaults
        balance roundrobin
        fullconn 100
        maxconn 4096
        mode http
        option dontlognull
        option http-server-close
        option forwardfor
        option redispatch
        retries 3
        timeout connect 5000
        timeout client 20000
        timeout server 60000
        timeout queue 60000
        stats enable
        stats uri /haproxy
        stats auth ***:***

      frontend www *:80
        log global
        acl is_upload hdr_dom(host) -i uploads.site.com
        acl is_api hdr_dom(host) -i api.site.com
        acl is_dev hdr_dom(host) -i dev.site.com
        acl is_apidev hdr_dom(host) -i apidev.site.com
        use_backend uploads.site.com if is_upload
        use_backend api.site.com if is_api
        use_backend dev.site.com if is_dev !is_apidev
        default_backend site.com

      backend site.com
        option httpchk HEAD /alive.php HTTP/1.1\r\nHost:site.com
        server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
        server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

      backend api.site.com
        option httpchk HEAD /alive.php HTTP/1.1\r\nHost:api.site.com
        server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
        server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

      backend dev.site.com
        option httpchk HEAD /alive.php HTTP/1.1\r\nHost:dev.site.com
        server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
        server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

      backend uploads.site.com
        option httpchk HEAD /alive.php HTTP/1.1\r\nHost:uploads.site.com
        server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2
        server www2 1.1.1.2:8080 backup weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2

    So basically, I have some different back-ends (I've verified the ACLs are working), with the default option "roundrobin" selected. I've tried removing weights, removing the minconn/maxconn/fullconn attributes for all servers (not just the backend I'm testing), tried removing the ACLs, etc. I've been testing on dev.site.com, BTW. Anyone see a reason why I can't get something like www1,www2,www1,www2,...? Also, this is one of my first questions on here, so please let me know if I left anything needed out of my post. Thanks!
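    For completeness, here's roughly the loop I'm testing with (whoami.php stands in for my little script that prints the backend's hostname, and the balancer IP is a placeholder):

      # Ten requests over fresh connections; ideally the output alternates
      for i in $(seq 1 10); do
        curl -s -H 'Host: dev.site.com' http://my-haproxy-ip/whoami.php
      done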


  • iptables & allowed port refusing connection

    - by marfarma
    Can you see what I'm doing wrong? On Ubuntu Server 9.1, I'm attempting to allow traffic on port 1143 for a non-privileged IMAP host. The connection is refused when testing with telnet example.com 1143, but the connection is allowed when testing with telnet example.com 80, from my PC to the remote internet-hosted server. Both rules appear identical and are located near each other in the rules file, with no intervening rules rejecting connections. I can't figure it out.

    iptables -L returns this:

      Chain INPUT (policy ACCEPT)
      target     prot opt source     destination
      ACCEPT     all  --  anywhere   anywhere
      REJECT     all  --  anywhere   127.0.0.0/8   reject-with icmp-port-unreachable
      ACCEPT     all  --  anywhere   anywhere      state RELATED,ESTABLISHED
      ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:www
      ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:https
      ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:http-alt
      ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:7070
      ACCEPT     tcp  --  anywhere   anywhere      tcp dpt:1143
      ACCEPT     tcp  --  anywhere   anywhere      state NEW tcp dpt:ssh
      ACCEPT     icmp --  anywhere   anywhere      icmp echo-request
      LOG        all  --  anywhere   anywhere      limit: avg 5/min burst 5 LOG level debug prefix `iptables denied: '
      REJECT     all  --  anywhere   anywhere      reject-with icmp-port-unreachable

      Chain FORWARD (policy ACCEPT)
      target     prot opt source     destination
      REJECT     all  --  anywhere   anywhere      reject-with icmp-port-unreachable

      Chain OUTPUT (policy ACCEPT)
      target     prot opt source     destination
      ACCEPT     all  --  anywhere   anywhere

    and my rules file contains this:

      # Generated by iptables-save v1.4.4 on Wed May 26 19:08:34 2010
      *nat
      :PREROUTING ACCEPT [3556:217296]
      :POSTROUTING ACCEPT [6909:414847]
      :OUTPUT ACCEPT [6909:414847]
      -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
      -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8080
      COMMIT
      # Completed on Wed May 26 19:08:34 2010
      # Generated by iptables-save v1.4.4 on Wed May 26 19:08:34 2010
      *filter
      :INPUT ACCEPT [1:52]
      :FORWARD ACCEPT [0:0]
      :OUTPUT ACCEPT [1:212]
      -A INPUT -i lo -j ACCEPT
      -A INPUT -d 127.0.0.0/8 ! -i lo -j REJECT --reject-with icmp-port-unreachable
      -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 7070 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 1143 -j ACCEPT
      -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
      -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
      -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
      -A INPUT -j REJECT --reject-with icmp-port-unreachable
      -A FORWARD -j REJECT --reject-with icmp-port-unreachable
      -A OUTPUT -j ACCEPT
      COMMIT
      # Completed on Wed May 26 19:08:34 2010
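    Since "connection refused" usually means no process is listening rather than a filtered packet (a filter drop would normally time out instead), these two checks seem worth running:

      # Is the IMAP service actually bound to 1143, and on which address?
      sudo netstat -tlnp | grep 1143

      # Are packets reaching the ACCEPT rule at all? Watch the counters.
      sudo iptables -L INPUT -v -n --line-numbers | grep 1143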


  • PPTP VPN Not Working - Peer failed CHAP authentication, PTY read or GRE write failed

    - by armani
    Brand-new install of CentOS 6.3. I followed this guide: http://www.members.optushome.com.au/~wskwok/poptop_ads_howto_1.htm and got pptpd running [v1.3.4]. I got the VPN to authenticate users against our Active Directory using winbind, smb, etc. All my tests to see if I'm still authenticated to the AD server pass ["kinit -V [email protected]", "smbclient", "wbinfo -t"]. VPN users were able to connect for like... an hour. I tried connecting from my Android phone using domain credentials and saw that I got an IP allocated for internal VPN users (I've since changed the range, but even setting it back to the initial one doesn't work). Ever since then, no matter what settings I try, I pretty much consistently get this in my /var/log/messages (and the VPN client fails):

      [root@vpn2 ~]# tail /var/log/messages
      Aug 31 15:57:22 vpn2 pppd[18386]: pppd 2.4.5 started by root, uid 0
      Aug 31 15:57:22 vpn2 pppd[18386]: Using interface ppp0
      Aug 31 15:57:22 vpn2 pppd[18386]: Connect: ppp0 <--> /dev/pts/1
      Aug 31 15:57:22 vpn2 pptpd[18385]: GRE: Bad checksum from pppd.
      Aug 31 15:57:24 vpn2 pppd[18386]: Peer armaniadm failed CHAP authentication
      Aug 31 15:57:24 vpn2 pppd[18386]: Connection terminated.
      Aug 31 15:57:24 vpn2 pppd[18386]: Exit.
      Aug 31 15:57:24 vpn2 pptpd[18385]: GRE: read(fd=6,buffer=8059660,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
      Aug 31 15:57:24 vpn2 pptpd[18385]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
      Aug 31 15:57:24 vpn2 pptpd[18385]: CTRL: Client 208.54.86.242 control connection finished

    Now, before you go blaming the firewall (all other forum posts I find seem to go there), this VPN server is on our DMZ network. We're using a Juniper SSG-5 gateway, and I've assigned a WAN IP to the VPN box itself, zoned into the DMZ zone. Then, I have full "Any IP / Any Protocol" open traffic rules between DMZ<--Untrust Zone and DMZ<--Trust Zone. I'll limit this later to just the authenticating traffic it needs, but for now I think we can rule out the firewall blocking anything.

    Here's my /etc/pptpd.conf (omitting comments):

      option /etc/ppp/options.pptpd
      logwtmp
      localip [EXTERNAL_IP_ADDRESS]
      remoteip [ANOTHER_EXTERNAL_IP_ADDRESS, AND HAVE TRIED AN ARBITRARY GROUP LIKE 5.5.0.0-100]

    Here's my /etc/ppp/options.pptpd.conf (omitting comments):

      name pptpd
      refuse-pap
      refuse-chap
      refuse-mschap
      require-mschap-v2
      require-mppe-128
      ms-dns 192.168.200.42   # This is our internal domain controller
      ms-wins 192.168.200.42
      proxyarp
      lock
      nobsdcomp
      novj
      novjccomp
      nologfd
      auth
      nodefaultroute
      plugin winbind.so
      ntlm_auth-helper "/usr/bin/ntlm_auth --helper-protocol=ntlm-server-1"

    Any help is GREATLY appreciated. I can give you any more info you need, and it's a new test server, so I can perform any tests/reboots required to get it up and going. Thanks a ton.
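    To exercise the same winbind helper pppd calls, but outside of pptpd (domain and password below are placeholders):

      # Same helper pppd's winbind plugin drives; prompts for the password
      /usr/bin/ntlm_auth --request-nt-key --domain=MYDOMAIN --username=armaniadm

      # Winbind sanity checks against AD
      wbinfo -t
      wbinfo -a 'MYDOMAIN\armaniadm%password'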


  • How to start nginx on a different port (other than 80)

    - by Zhao Peng
    Hi, I am a newbie with nginx. I tried to set it up on my server (running Ubuntu 4), which already has Apache running. After I installed it with apt-get and tried to start nginx, I got this:

      Starting nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
      configuration file /etc/nginx/nginx.conf test is successful
      [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
      [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
      [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
      [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)
      [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use)

    That makes sense, as Apache is using port 80. Then I tried to modify nginx.conf. Referencing some articles, I changed it like so:

      server {
        listen 8080;
        location / {
          proxy_pass http://94.143.9.34:9500;
          proxy_set_header Host $host:8080;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Via "nginx";
        }
      }

    After saving this and trying to start nginx again, I still get the same error as before. I cannot really find a related post about this; could anyone shed some light? Thanks in advance :)

    I should post all the content of the conf here:

      user www-data;
      worker_processes 1;
      error_log /var/log/nginx/error.log;
      pid /var/run/nginx.pid;

      events {
        worker_connections 1024;
        # multi_accept on;
      }

      http {
        include /etc/nginx/mime.types;
        access_log /var/log/nginx/access.log;
        sendfile on;
        #tcp_nopush on;
        #keepalive_timeout 0;
        keepalive_timeout 65;
        tcp_nodelay on;
        gzip on;
        gzip_disable "MSIE [1-6]\.(?!.*SV1)";
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;

        server {
          listen 81;
          location / {
            proxy_pass http://94.143.9.34:9500;
            proxy_set_header Host $host:81;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Via "nginx";
          }
        }
      }

      mail {
        # See sample authentication script at:
        # http://wiki.nginx.org/NginxImapAuthenticateWithApachePhpScript
        auth_http localhost/auth.php;
        pop3_capabilities "TOP" "USER";
        imap_capabilities "IMAP4rev1" "UIDPLUS";

        server {
          listen localhost:110;
          protocol pop3;
          proxy on;
        }
        server {
          listen localhost:143;
          protocol imap;
          proxy on;
        }
      }

    Basically, I changed nothing except adding the server part.
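    Two checks that might explain why the bind error persists even after the edit (a sketch):

      # Which processes own 80/81/8080 right now?
      sudo netstat -tlnp | grep -E ':(80|81|8080) '

      # The packaged default vhost in sites-enabled also listens on 80;
      # every listen directive nginx will load shows up here
      grep -rn 'listen' /etc/nginx/sites-enabled/ /etc/nginx/nginx.conf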


  • Mercurial browser on Windows 2003 takes several refreshes before displaying repositories

    - by Tim Murphy
    When I attempt to browse my Mercurial repositories, it usually takes several refreshes before the repository list is displayed. The configuration is as follows:

      Windows Server 2003 (dedicated machine hosted by http://www.server4you.com/; site has anonymous password protection with self-signed SSL)
      Mercurial 1.5.3
      Python 2.6.5
      Python for Windows 32 extensions 214 py2.6
      isapi-wsgi 0.4.2

    The repositories are being served via ISAPI using the standard hgwebdir_wsgi.py file (copy to follow). Other problems with the repository server: before doing a clone/push/etc. I have to browse the repositories first, otherwise hg on my local machine cannot locate the site. I also have one repository with a large changeset that, after a minute or so, throws the error "abort: error: An existing connection was forcibly closed by the remote host" (I will be asking another question for that problem). What can I do to start tracking down this problem?

    hgwebdir_wsgi.py:

      # Configuration file location
      hgweb_config = r'C:\Public\Mercurial\WebSite\hgweb.config'

      # Global settings for IIS path translation
      path_strip = 0   # Strip this many path elements off (when using url rewrite)
      path_prefix = 0  # This many path elements are prefixes (depends on the
                       # virtual path of the IIS application).

      import sys

      # Adjust python path if this is not a system-wide install
      #sys.path.insert(0, r'c:\path\to\python\lib')

      # Enable tracing. Run 'python -m win32traceutil' to debug
      if hasattr(sys, 'isapidllhandle'):
          import win32traceutil

      # To serve pages in local charset instead of UTF-8, remove the two lines below
      import os
      os.environ['HGENCODING'] = 'UTF-8'

      import isapi_wsgi
      from mercurial import demandimport; demandimport.enable()
      from mercurial.hgweb.hgwebdir_mod import hgwebdir

      # Example tweak: Replace isapi_wsgi's handler to provide better error message
      # Other stuff could also be done here, like logging errors etc.
      class WsgiHandler(isapi_wsgi.IsapiWsgiHandler):
          error_status = '500 Internal Server Error'  # less silly error message

      isapi_wsgi.IsapiWsgiHandler = WsgiHandler

      # Only create the hgwebdir instance once
      application = hgwebdir(hgweb_config)

      def handler(environ, start_response):
          # Translate IIS's weird URLs
          url = environ['SCRIPT_NAME'] + environ['PATH_INFO']
          paths = url[1:].split('/')[path_strip:]
          script_name = '/' + '/'.join(paths[:path_prefix])
          path_info = '/'.join(paths[path_prefix:])
          if path_info:
              path_info = '/' + path_info
          environ['SCRIPT_NAME'] = script_name
          environ['PATH_INFO'] = path_info
          return application(environ, start_response)

      def __ExtensionFactory__():
          return isapi_wsgi.ISAPISimpleHandler(handler)

      if __name__ == '__main__':
          from isapi.install import *
          params = ISAPIParameters()
          HandleCommandLine(params)

    hgweb.config:

      [paths]
      / = C:\Public\Mercurial\Repositories\*

      [web]
      allow_archive = bz2 gz zip  ; Allows archive downloads.
      allow_push = ########       ; Users that are allowed to push.


  • UPS vs Solar Power in case of power failure for a server [on hold]

    - by Zen 8000k
    I am looking for a low-power, low-end PC able to run 24/7 without overheating, and a way to support it in case of power failure. Power failures can last up to 72 hours. The PC doesn't need a monitor or keyboard. A modem must also be protected in case of power failure. When I say low end, I don't mean crap: the CPU needs to be x86 and score at least 1k in this chart: http://www.cpubenchmark.net/index.php. What's the best way to do this?

    EDIT: more info. I need to run a home server that will mainly perform light tasks. An x86 CPU sadly is the only route for my use. I want to be able to run the server and the router/modem in case of power failure. Regarding how long the power might fail:

      1) 1 hour is OK for most situations (say 90%).
      2) 3 hours is OK (say 98%).
      3) 6 hours is more than OK (say 99.5%).
      4) In extreme cases the power might fail for days. I believe this is very unlikely to happen.

    More is great, but really, how often will power fail for more than 3 hours? I believe once every year at best, and that's too rare to care about. Given the above, I am looking for a cost-effective way to achieve 1-3 hours of power, or 6 hours if possible.

    Solutions: you guys gave me great ideas.

      1) Power generator: no good, as power will fail for 10 seconds before returning. Also, I read online that "clean" power generators cost $1.5k+, so that's out of budget. A non-clean generator might damage electronics, right?
      2) Solar power: I don't know for sure about this. Sounds like a great idea, too good to be true, honestly. For only $200 I get 100+ W? What are the drawbacks here?
      3) UPS: this seems to be the best. The only problem is the cost. Cost < $200 = great; $400 = budget limit.
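    Rough sizing arithmetic for the UPS option, in case it helps (every number here is an assumption: ~30 W for the PC, ~10 W for the modem, 12 V battery, ~80% inverter efficiency):

      Load:            30 W + 10 W   = 40 W
      Energy for 3 h:  40 W x 3 h    = 120 Wh
      With 80% eff.:   120 Wh / 0.8  = 150 Wh drawn from the battery
      Battery size:    150 Wh / 12 V = ~12.5 Ah, so a common 12 V 18 Ah
                       battery leaves headroom for roughly 3 h of runtime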


  • Windows startup Powershell script not closing after Start-Process

    - by Matthew Phipps
    I've got a PowerShell v2.0 startup script for my work computer (XP Professional 64-bit), as follows:

      start "C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE" -ArgumentList "/recycle"
      sleep -S 2
      start "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -ArgumentList "https://mail.google.com"
      sleep -S 2
      start "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" -ArgumentList "-new-window https://www.google.com/calendar"
      sleep -S 2
      start "C:\Program Files (x86)\Skype\Phone\Skype.exe"

    The sleeps are to ensure that the windows appear on the taskbar in the correct order. I run this from a shortcut on my Quick Launch with the following target:

      C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe C:\scripts\initialize.ps1

    (Yes, this is 2.0: powershell -Version 2.0 works, as does -Version 1.0, but not -Version 3.0.) The problem is, the command window stays open until the Firefox windows are closed, which is not what I want. Looking at Process Explorer when I run the script, here's what happens:

      1. powershell.exe appears under explorer.exe and the PowerShell window appears (with a black background, oddly. But it's not cmd.exe, since when I was debugging the script, error messages appeared in red).
      2. outlook.exe appears under powershell.exe and the Outlook window appears.
      3. firefox.exe appears under powershell.exe and a Firefox window appears.
      4. A second firefox.exe appears under powershell.exe and another Firefox window appears. The second Firefox process then exits, as expected, since Firefox only uses one process.
      5. skype.exe appears under powershell.exe and the Skype window appears.
      6. The powershell.exe process inexplicably sticks around, as does the PowerShell window.

    If I close both Firefox windows, the powershell.exe process exits and the PowerShell window closes, and the outlook.exe and skype.exe processes appear under explorer.exe as expected. I suspect this has something to do with Firefox's standard input, output and error: I wouldn't expect Outlook or Skype to ever output anything to the console, but Firefox has command-line options that allow it to do so. I've looked over my about:config's user-set values and didn't find anything suspicious. Finally, if I have a firefox.exe instance already running (started from the desktop shortcut), the problem doesn't occur (the powershell.exe process exits as it ought to). So what's going on here? I'm going to try adding -WindowStyle Hidden to the shortcut next (gotta close this Firefox to test it), but I want to get to the bottom of this, if only to improve my understanding of how Windows consoles work.


  • High load on X3220 Quad Core Linux Apache server

    - by John Templar
    I'm seriously in need of help. My sites are now nearly impossible to use because of massive load on my server. I'm already a month late on my mortgage, and this really isn't helping my situation. I've been working on fixing this intermittent load problem for months (it's never been this bad). I suspect some kind of attack, since I'm under DDoS attack a lot! I've been trying to figure out what is causing the load, but I'm afraid I just don't have the experience or knowledge to understand all the data I've been looking at. I don't even know where to begin or how to test for the large array of attacks out there. Here's some data you might find useful:

      Server: Xeon X3220 Quad Core 2.4 GHz - Linux, FreeBSD; 500 GB HD and 8 GB of RAM. Runs CentOS release 5.7.
      Server version: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_qos/9.74
      Warning: all sites are softcore adult sites, mostly fantasy art like elves and amazons.

    1) Sites may run fine for weeks or just days at less than 10 load, then start jumping to 40-80 load; no idea why. Same sites, same mods, same amount of traffic - just WHAM!
    2) I get an email almost every day that says: "Large Number of Failed Login Attempts from IP (different each time)". My web host (who almost never helps me) told me it was a UDP flood or something.
    3) I've changed the port for MySQL from the default. If I ever put it back to the default, I get loads of over 100 from what must be a constant MySQL port flood.
    4) I've reconfigured MySQL. Link: http://www.deadlyamazons.com/logs/mycnf.txt
    5) I have 3 Joomla Jomsocial networks. I've spent a couple of weeks turning all the mods/plugins off, waiting a day, and then turning them back on the next day or later if there isn't any change (there hasn't been). For example, on Thursday I'll turn off videos, on Friday I'll turn off chat, etc., and nothing changes the load appreciably.
    6) Joomla info: all SEF turned off; sh404sef completely disabled and removed. Components: Joomla 1.5.22, Jomsocial 2.0.5, Kunena 1/31/2011, HWDMediashare 11/22/2010 and JBolo Chat 2.7.3, Comet Chat or Envolve Chat. Page compression is on; cache is on, 15 mins.

    Please click on this forum link to see all my reports: http://forum.joomla.org/viewtopic.php?f=433&t=706035&p=2777500#p2777500. Any help would be highly appreciated.
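    If it is an attack, one quick check is to count concurrent connections per client IP while the load spikes (a standard sketch, not specific to my setup):

      # Connections per remote IP, busiest first; a few IPs with hundreds
      # of connections points to a flood rather than organic traffic
      netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head

      # Correlate with what Apache is actually doing
      top -c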


  • Can't connect to svnserve on localhost - connection actively refused

    - by RMorrisey
    When I try to connect using Tortoise to my SVN server with svn://localhost/, Tortoise tells me: "Can't connect to host 'localhost'. No connection could be made because the target machine actively refused it." How can I fix this?

    I am trying to set up a Subversion server on my local PC for personal use. I am running Windows Vista, with SlikSVN and TortoiseSVN installed. I previously had everything working correctly, but I found that I couldn't merge(!), apparently due to a version mismatch between the SVN client and server. Anyway... I now have the following setup.

    I created a repository using svnadmin create; it resides at C:\svnGrove.

    C:\svnGrove\conf\svnserve.conf (comments omitted):

      [general]
      anon-access=read
      auth-access=write
      password-db=passwd
      #authz-db=authz
      realm=svnGrove

    C:\svnGrove\conf\passwd:

      [users]
      myname=mypass

    My Subversion Server service is pointed to:

      C:\Program Files\SlikSvn\bin\svnserve.exe --service -r C:\svnGrove

    It shows the TCP/IP service as a dependency. I have also tried running svnserve from the command line, with similar results. The below is provided by the 'about' option in TortoiseSVN:

      TortoiseSVN 1.6.10, Build 19898 - 32 Bit , 2010/07/16 15:46:08
      Subversion 1.6.12,
      apr 1.3.8
      apr-utils 1.3.9
      neon 0.29.3
      OpenSSL 0.9.8o 01 Jun 2010
      zlib 1.2.3

    The following is from svn --version on the command line (not sure why it says CollabNet; CollabNet was the previous SVN binary I had set up, and the uninstaller failed to remove everything gracefully):

      svn, version 1.6.12 (SlikSvn/1.6.12) WIN32
      compiled Jun 22 2010, 20:45:29

      Copyright (C) 2000-2009 CollabNet.
      Subversion is open source software, see http://subversion.tigris.org/
      This product includes software developed by CollabNet (http://www.Collab.Net/).

      The following repository access (RA) modules are available:

      * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
        - handles 'http' scheme
        - handles 'https' scheme
      * ra_svn : Module for accessing a repository using the svn network protocol.
        - with Cyrus SASL authentication
        - handles 'svn' scheme
      * ra_local : Module for accessing a repository on local disk.
        - handles 'file' scheme
      * ra_serf : Module for accessing a repository via WebDAV protocol using serf.
        - handles 'http' scheme
        - handles 'https' scheme

    I disabled my Windows Firewall and CA Internet Security, without success in resolving the issue.

    Edit: The old version of svnserve was still set up as a service after the uninstall, pointed to this path:

      C:\Program Files\Subversion\svn-win32-1.4.6\bin

    I edited the registry key for the service to point to the new path (shown above). Whether I run svnserve as a service or using -d, I do not see an entry for that port number in the listing generated by netstat -anp tcp.
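    Since "actively refused" means nothing is accepting on the port, a foreground run that prints startup errors to the console seems like the next step (a sketch; 3690 is svnserve's default port):

      rem Is anything listening on svnserve's default port?
      netstat -an | findstr 3690

      rem Run svnserve in the foreground so startup errors are visible
      "C:\Program Files\SlikSvn\bin\svnserve.exe" -d -r C:\svnGrove --listen-port 3690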


  • Port forwarding using a BT Home Hub 2.0 (Supplied to new BT Infinity Customers in the UK)

    - by Jasarien
    I don't usually have trouble with port forwarding; I've been able to do it successfully on a number of different routers, including Linksys, Belkin, Netgear and Apple (Time Capsule / AirPort Extreme). So I'm quite confused here. I had been using my Apple Time Capsule as my router for a few years, with several port mappings all working fine. But it died recently, so I've had to resort to using the BT Home Hub 2.0 that was supplied with my BT Infinity broadband subscription.

    The forwarding interface for the Home Hub is simplified for the most part, allowing you to select an application or game and assign it to a particular computer on the network, which you choose from a list that the Home Hub has 'discovered'. My Mac Pro has a manually assigned static IP 192.168.1.4 and my router is static at 192.168.1. I have chosen SSH from the list of applications and assigned it to my Mac Pro (the only computer in the list currently). The Home Hub also has a feature to keep a DNS service updated, and I have set it to keep my external IP address updated on my hostname. This is how I had it set up in the past with other routers, and I've not had trouble before.

    I am able to ping my hostname (and external IP) from outside the network and get a response. But when I try to connect using SSH, the connection times out. The Home Hub also has "Firewall settings"; the currently selected setting is:

      Default: Allow all outgoing connections and block all incoming traffic. Games and application sharing is allowed.

    But I've tried changing it to:

      Disabled: All traffic is allowed to pass through your BT Home Hub to your devices. Note: you'll still need to use the games and application sharing feature to make sure that certain applications work properly.

    And the connection still times out... So frustrating. The OS X firewall on my Mac is disabled, so I don't think that's in the way. I have tried setting the port forwarding manually, instead of relying on the preset "SSH" option (in case it's not using the port I expect). So I set up my own "application" (as the Home Hub calls it) and forwarded external port 22 TCP to internal port 22 TCP to 192.168.1.4, but that just gives the same result: unable to connect.

    Next, with the router's firewall disabled and OS X's firewall disabled, I ran the ShieldsUP test (https://www.grc.com/x/ne.dll?bh0bkyd2) and the result was that all my service ports (0 - 1055) are in 'Stealth' mode, i.e. nothing even exists at my IP as far as any outsider is concerned... Strange. The only thing that seems to work is setting my Mac Pro as the DMZ, which I don't want to do for obvious reasons. Any help with this would be extremely appreciated; thanks.


  • Windows 2008 IIS 7.0 HTTP to HTTPS Redirect -- Versus IIS 6.0 Mechanism

    - by Dan7el
    This topic, creating a mechanism for redirection from HTTP to HTTPS on a Windows 2008 server running IIS 7.0, is a much written-about topic on the Internet. How this is done is not really my issue. My issue is more one of explaining why this can't be done with the standard HTTP Redirect module that ships with Windows 2008 IIS 7.0, and instead requires other, more arduous methods.

    First, the IIS 6.0 method requires no externally available modules, nor does it require any additional modifications to the web.config or any other development effort. It's outlined here: http://blogs.microsoft.co.il/blogs/dorr/archive/2009/01/13/how-to-force-redirection-from-http-to-https-on-iis-6-0.aspx. As you can see, the basic steps are to run the snap-in, get the properties on the site, and make some modifications. Presto, you have the HTTP to HTTPS redirect set up.

    Now, on the IIS 7.0 platform, it doesn't seem this simple. An initial search found the following page: http://www.sslshopper.com/iis7-redirect-http-to-https.html, which has two separate approaches:

      1. Install a separately available Microsoft module, the URL Rewrite Module, and then add XML to the web.config.
      2. A custom error page.

    There might be other methods, but these are the basic ones, and the first is listed as the primary method. But wait... there exists on IIS 7.0 an HTTP Redirect module. So... why can't I use the HTTP Redirect module to do this very thing? This is really my big question. I need to know this because my management is going to insist I use the HTTP Redirect module and set up the HTTP to HTTPS redirect in a similar fashion to how we do it in IIS 6.0.

    Can someone please explain to me, in clean, simple, easy-to-understand terms that both I and my management can understand, why I need to go get the URL Rewrite Module, install it on the server, and make the web.config changes suggested by the article, instead of simply using the HTTP Redirect module that's already installed on the site? Thanks a bunch.
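    For reference, the web.config rule the URL Rewrite approach boils down to looks roughly like this (a sketch adapted from that article, not yet tested on our server):

      <system.webServer>
        <rewrite>
          <rules>
            <rule name="HTTP to HTTPS" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTPS}" pattern="off" />
              </conditions>
              <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>

    The rule can test {HTTPS} and redirect only the non-secure requests; as far as I can tell, the built-in HTTP Redirect module applies one unconditional redirect to every request on the site, whichever binding it arrived on, which is presumably why the guides skip it.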


  • Postfix / Dovecot and Email Retrieval

    - by Eric J.
    I have set up Postfix and Dovecot on an Ubuntu box following the instructions at http://www.exratione.com/2012/05/a-mailserver-on-ubuntu-1204-postfix-dovecot-mysql/. I can see that email is being delivered to and accepted by the server, but the email is not available for retrieval via POP3. What could be missing in my configuration? It seems that email is not being properly handed off to Dovecot. Here are what I believe are the relevant /var/log/mail.log entries for an attempt to send email from another domain (hosted by Gmail) to the domain I have set up.

    Logged during the SMTP connection:

      postfix/smtpd[14689]: connect from mail-vb0-f50.google.com[209.85.212.50]
      postfix/smtpd[14689]: Anonymous TLS connection established from mail-vb0-f50.google.com[209.85.212.50]: TLSv1 with cipher ECDHE-RSA-RC4-SHA (128/128 bits)
      postfix/smtpd[14689]: 5782740ACF: client=mail-vb0-f50.google.com[209.85.212.50]
      postfix/cleanup[14696]: 5782740ACF: message-id=<CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com>
      postfix/qmgr[14687]: 5782740ACF: from=<[email protected]>, size=1947, nrcpt=1 (queue active)
      postfix/smtpd[14702]: connect from mail.destinationdomain.com[127.0.0.1]
      postfix/smtpd[14702]: 2940A41AA9: client=mail.destinationdomain.com[127.0.0.1]
      postfix/cleanup[14696]: 2940A41AA9: message-id=<CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com>
      postfix/qmgr[14687]: 2940A41AA9: from=<[email protected]>, size=2450, nrcpt=1 (queue active)
      amavis[21309]: (21309-02) Passed CLEAN, [209.85.212.50] <[email protected]> -> <[email protected]>, Message-ID: <CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com>, mail_id: W52ZB8FAAA+8, Hits: -0.101, size: 1946, queued_as: 2940A41AA9, [email protected], 784 ms
      postfix/smtpd[14702]: disconnect from mail.destinationdomain.com[127.0.0.1]
      postfix/smtp[14698]: 5782740ACF: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=1.1, delays=0.29/0.01/0/0.79, dsn=2.0.0, status=sent (250 2.0.0 from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 2940A41AA9)
      postfix/qmgr[14687]: 5782740ACF: removed
      dovecot: lda([email protected]): msgid=<CAEjmKcjHnTY4yk=3QXoNrD76=04g-s9utPguTFB02Fx53GMPmw@mail.gmail.com>: saved mail to INBOX
      postfix/pipe[14703]: 2940A41AA9: to=<[email protected]>, relay=dovecot, delay=0.08, delays=0.02/0.02/0/0.04, dsn=2.0.0, status=sent (delivered via dovecot service)
      postfix/qmgr[14687]: 2940A41AA9: removed

    Logged during POP3 retrieval attempts:

      dovecot: pop3-login: Login: user=<[email protected]>, method=PLAIN, rip=209.85.220.135, lip=10.195.83.10, mpid=14706
      dovecot: pop3([email protected]): Disconnected: Logged out top=0/0, retr=1/2557, del=1/1, size=2540
      postfix/smtpd[14689]: disconnect from mail-vb0-f50.google.com[209.85.212.50]
      dovecot: pop3-login: Login: user=<[email protected]>, method=PLAIN, rip=209.85.212.31, lip=10.195.83.10, mpid=14708
      dovecot: pop3([email protected]): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0
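    One thing worth noting from the first POP3 session above: retr=1/2557, del=1/1 looks like one message was actually retrieved and then deleted, which would explain the empty second session. A manual session should confirm what the client sees (credentials below are placeholders):

      # Talk POP3 to Dovecot directly, then type the commands interactively
      telnet localhost 110
      #   USER [email protected]
      #   PASS ********
      #   LIST
      #   RETR 1
      #   QUIT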


  • rsync -c -i flags identical files as different

    - by Scott
    My goal: given a list of files on the local server, show any differences relative to the files with the same absolute path on the remote server; e.g. compare the local /etc/init.d/apache to the same file on the remote server. "Difference" for me means a different checksum. I don't care about file modification times, and I also do not want to sync the files (yet); only show the diffs.

    I have rsync 3.0.6 on both local and remote servers, which should be able to do what I want. However, it is claiming that local and remote files, even with identical checksums, are still different. Here's the command line:

      $ rsync --dry-run -avi --checksum --files-from=/home/me/test.txt --rsync-path="cd / && rsync" / me@remote:/

    where:

      "me" = my username; "remote" = remote server hostname
      the current working directory is '/'
      test.txt contains one line reading "/etc/init.d/apache"
      OS: Linux 2.6.9

    Running cksum on /etc/init.d/apache on both servers yields the same result. The files are the same. However, the rsync output is:

      me@remote's password:
      building file list ... done
      .d..t...... etc/
      cd+++++++++ etc/init.d/
      <f+++++++++ etc/init.d/apache

      sent 93 bytes  received 21 bytes  20.73 bytes/sec
      total size is 2374  speedup is 20.82 (DRY RUN)

    The output codes (see http://www.samba.org/ftp/rsync/rsync.html) mean that rsync thinks:

      /etc is identical except for mod time
      /etc/init.d needs to be changed
      /etc/init.d/apache will be sent to the remote server

    I don't understand how, with the --checksum option, and the files having identical checksums, rsync can still think they're different. (I've tried with other files having identical mod times, and those files are not flagged as different.) I did run this in /, and made sure (AFAIK) that it's run remotely in /, so even relative pathnames will still be correct. I ran rsync with -avvvi for more debug info but saw nothing remarkable. I'm wondering:

      is rsync still looking at file mod times, even with --checksum?
      am I somehow not setting up the path(s) right?
      what am I doing wrong?
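    To rule rsync's heuristics out entirely, the checksums can be compared by hand over the same SSH path (a sketch using the names from above):

      # Identical output on both sides means the file contents really match
      md5sum /etc/init.d/apache
      ssh me@remote md5sum /etc/init.d/apache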

  • UCARP: prevent the original master from taking over the VIP when it comes back after failure?

    - by quanta
    Keepalived can do this by combining the nopreempt option with the BACKUP state on both nodes:

        Prevent VRRP Master from becoming Master once it has failed
        Prevent master to fall back to master after failure

    What about UCARP?

        Name        : ucarp
        Arch        : x86_64
        Version     : 1.5.2
        Release     : 1.el5.rf
        Size        : 81 k
        Repo        : installed
        Summary     : Common Address Redundancy Protocol (CARP) for Unix
        URL         : http://www.ucarp.org/
        License     : BSD
        Description : UCARP allows a couple of hosts to share common virtual IP addresses in order
                    : to provide automatic failover. It is a portable userland implementation of the
                    : secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's
                    : alternative to the patents-bloated VRRP).
                    : Strong points of the CARP protocol are: very low overhead, cryptographically
                    : signed messages, interoperability between different operating systems and no
                    : need for any dedicated extra network link between redundant hosts.

    If I don't use the --preempt option and set --advskew to the same value, both nodes become master.

    /etc/sysconfig/carp/vip-010.conf:

        # Virtual IP configuration file for UCARP
        # The number (from 001 to 255) in the name of the file is the identifier
        # $Id: vip-001.conf.example 1527 2004-07-09 15:23:54Z dude $

        # Set the same password on all machines sharing the same virtual IP
        PASSWORD="pa$$w0rd"

        # You are required to have an IPADDR= line in the configuration file for
        # this interface (so no DHCP allowed)
        BIND_INTERFACE="eth0"

        # Do *NOT* use a main interface for the virtual IP, use an ethX:Y alias
        # with the corresponding /etc/sysconfig/network-scripts/ifcfg-ethX:Y file
        # already configured and with ONBOOT=no
        VIP_INTERFACE="eth0:0"

        # If you have extra options to add, see "ucarp --help" output
        # (the lower the "-k <val>", the higher the priority; use "-P" to become master ASAP)
        OPTIONS="-z -k 255"

    /etc/sysconfig/network-scripts/ifcfg-eth0:0:

        DEVICE=eth0:0
        ONBOOT=no
        BOOTPROTO=
        IPADDR=192.168.6.8
        NETMASK=255.255.255.0
        USERCTL=yes
        IPV6INIT=no

    node 1:

        eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether c6:9b:8e:af:a7:69 brd ff:ff:ff:ff:ff:ff
            inet 192.168.6.192/24 brd 192.168.6.255 scope global eth0
            inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth0:0
            inet6 fe80::c49b:8eff:feaf:a769/64 scope link
               valid_lft forever preferred_lft forever

    node 2:

        eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 00:30:48:f7:0f:81 brd ff:ff:ff:ff:ff:ff
            inet 192.168.6.38/24 brd 192.168.6.255 scope global eth1
            inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth1:0
            inet6 fe80::230:48ff:fef7:f81/64 scope link
               valid_lft forever preferred_lft forever
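
    For comparison, the Keepalived pattern referenced at the top — both nodes configured as BACKUP with nopreempt — looks roughly like the sketch below; the interface, virtual router ID, priority, and VIP are placeholders taken from the setup above:

        vrrp_instance VI_1 {
            state BACKUP            # both nodes start as BACKUP
            nopreempt               # a recovered node does not reclaim the VIP
            interface eth0
            virtual_router_id 51
            priority 100            # give the other node a lower value, e.g. 90
            advert_int 1
            virtual_ipaddress {
                192.168.6.8/24
            }
        }

    Keepalived only honors nopreempt when the state is BACKUP, which is why both nodes must be configured that way.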

  • Debian package version conventions

    - by JackWu
    I'm using Debian/Ubuntu and am confused about the versions of packages. When I run the dpkg -l command, I get:

        ii  vim                2:7.3.429-2ubuntu2.1       Vi IMproved - enhanced vi editor
        ii  vim-common         2:7.3.429-2ubuntu2.1       Vi IMproved - Common files
        ii  vim-runtime        2:7.3.429-2ubuntu2.1       Vi IMproved - Runtime files
        ii  vim-tiny           2:7.3.429-2ubuntu2.1       Vi IMproved - enhanced vi editor - compact version
        ii  virt-what          1.11-1                     detect if we are running in a virtual machine
        ii  w3m                0.5.3-5ubuntu1             WWW browsable pager with excellent tables/frames support
        ii  watershed          6                          reduce superfluous executions of idempotent command
        ii  wget               1.13.4-2ubuntu1            retrieves files from the web
        ii  whiptail           0.52.11-2ubuntu10          Displays user-friendly dialog boxes from shell scripts
        ii  whoopsie           0.1.33                     Ubuntu crash database submission daemon
        ii  wimlib9            1.5.0-1~webupd8~precise    Library to extract, create, modify, and mount WIM files
        ii  wimtools           1.5.0-1~webupd8~precise    Tools to extract, create, modify, and mount WIM files
        ii  wireless-tools     30~pre9-5ubuntu2           Tools for manipulating Linux Wireless Extensions
        ii  wpasupplicant      0.7.3-6ubuntu2.1           client support for WPA and WPA2 (IEEE 802.11i)
        ii  x11-common         1:7.6+12ubuntu2            X Window System (X.Org) infrastructure
        ii  x11-utils          7.6+4ubuntu0.1             X11 utilities
        ii  xauth              1:1.0.6-1                  X authentication utility
        ii  xbitmaps           1.1.1-1                    Base X bitmaps
        ii  xclip              0.12-1                     command line interface to X selections
        ii  xfonts-encodings   1:1.0.4-1ubuntu1           Encodings for X.Org fonts
        ii  xfonts-utils       1:7.6+1                    X Window System font utility programs
        ii  xkb-data           2.5-1ubuntu1.3             X Keyboard Extension (XKB) configuration data
        ii  xml-core           0.13                       XML infrastructure and XML catalog file support
        rc  xpdf               3.02-21build1              Portable Document Format (PDF) reader
        ii  xterm              271-1ubuntu2.1             X terminal emulator
        ii  xz-lzma            5.1.1alpha+20110809-3      XZ-format compression utilities - compatibility commands
        ii  xz-utils           5.1.1alpha+20110809-3      XZ-format compression utilities
        ii  zabbix-agent       1:1.8.11-1                 network monitoring solution - agent
        ii  zlib1g             1:1.2.3.4.dfsg-3ubuntu4    compression library - runtime
        ii  zlib1g-dev         1:1.2.3.4.dfsg-3ubuntu4    compression library - development
        ii  zsh                4.3.17-1ubuntu1            shell with lots of features

    The third column is the version, but the formats are so mixed that I can't make sense of them; different packages seem to follow entirely different naming conventions. Here are my main questions:

        Why do some versions contain "ubuntu" while others don't?
        What do the special characters -, ~, and + mean?
        What are "alpha", "build", and "dfsg"? Can I use such suffixes casually?
        vim and some other packages start with "2:" — what does that mean?
        How does version comparison work, given that the formats differ so much?

    Can anyone please explain this to me? Or point me to an official document? Thanks in advance.
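
    As an aside, Debian versions follow the pattern [epoch:]upstream_version[-debian_revision], and dpkg implements the comparison rules itself: dpkg --compare-versions exits 0 when the stated relation holds. A quick sketch of the orderings in play above:

        dpkg --compare-versions "2:1.0-1" gt "1:9.9-1"           && echo "a higher epoch dominates everything else"
        dpkg --compare-versions "1.0~rc1-1" lt "1.0-1"           && echo "~ sorts before even the empty string (pre-releases)"
        dpkg --compare-versions "1.0+b1-1" gt "1.0-1"            && echo "+ sorts after (rebuilds, added content)"
        dpkg --compare-versions "1.13.4-2ubuntu1" gt "1.13.4-2"  && echo "an ubuntuN suffix sorts after the plain Debian revision"

    The authoritative references are the "Version" section of Debian Policy and the deb-version man page.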
