Search Results


  • iptables question

    - by RubyFreak
    I have a small network with one valid IP and a firewall with 3 network interfaces (LAN, WAN, DMZ). I want to enable PAT on this valid IP to redirect http traffic to a server in my DMZ (done). I want to enable MASQ on this IP for traffic that comes from my LAN (done). I also want my LAN to be able to access my http server in the DMZ (partially working). Question: in the above scenario, I cannot access my http server in the DMZ from my LAN, since its public address is the IP used by the MASQ (the only valid IP that I have). What would be the best option to solve this problem? Network interfaces: eth0 (WAN), eth1 (DMZ), eth2 (LAN).
      /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      /sbin/iptables -A FORWARD -o eth1 -d 2.2.2.2 -p tcp --dport 80 -j ACCEPT
      /sbin/iptables -t nat -A PREROUTING -i eth0 -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to 2.2.2.2
      /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
      /sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
      /sbin/iptables -A FORWARD -i eth2 -o eth0 -j ACCEPT
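    One common fix for this is hairpin (loopback) NAT: DNAT the LAN's requests for the public IP to the DMZ host, then masquerade them so replies return via the firewall. A minimal sketch, reusing the question's addresses and assuming a LAN subnet of 192.168.2.0/24 (not given in the post):
      # DNAT LAN clients that hit the public IP, sending them to the DMZ server
      /sbin/iptables -t nat -A PREROUTING -i eth2 -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to 2.2.2.2
      # Masquerade the hairpinned traffic so the DMZ server replies via the firewall
      /sbin/iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -d 2.2.2.2 -p tcp --dport 80 -j MASQUERADE
      # Permit the forwarded LAN-to-DMZ traffic
      /sbin/iptables -A FORWARD -i eth2 -o eth1 -d 2.2.2.2 -p tcp --dport 80 -j ACCEPT
    Split-horizon DNS (resolving the server's name to 2.2.2.2 inside the LAN) avoids the NAT round-trip entirely.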

    Read the article

  • OpenBSD has open ports in default installation

    - by celil
    I have been considering replacing Ubuntu with OpenBSD to improve the security of my local server. I need ssh access to it, and I also need it to serve static web content, so the only ports I need open are 22 and 80. I enabled ssh and http in /etc/rc.conf:
      httpd_flags=""
      sshd_flags=""
    However, when I scanned my server for open ports after installing OpenBSD 4.8, I discovered that it had several other open ports:
      Port Scan has started...
      Port Scanning host: 192.168.56.102
      Open TCP Port: 13  daytime
      Open TCP Port: 22  ssh
      Open TCP Port: 37  time
      Open TCP Port: 80  http
      Open TCP Port: 113 ident
    ssh (22) and http (80) should be open since I enabled httpd and sshd, but why are the other ports open, and should I worry about them creating additional security vulnerabilities? Should they be open in a default installation?
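    The extra ports (daytime, time, ident) are the classic inetd built-in services, so inetd's configuration is the likely first place to look. A sketch, assuming the defaults are the source:
      # Confirm which of the extra services inetd has enabled
      grep -E '^(daytime|time|ident)' /etc/inetd.conf
      # Comment those lines out (prefix them with '#'), then reload inetd
      pkill -HUP inetd
    Alternatively, inetd can be disabled entirely via rc.conf.local if none of its services are needed.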

    Read the article

  • How to pipe differently the body of the curl answer and the printed output?

    - by Antoine Lizée
    I would like to print on the command line some output of curl, like the http headers, followed by the body of the answer processed by a stdin/stdout program. For instance, print the status code:
      curl -s -w "%{http_code}\n" -o "/dev/null" http://myURL.com
    And then process the output with a json parsing tool:
      curl -s http://myURL.com | python -mjson.tool
    I would like to do both with one command, and I have the feeling that it may be possible thanks to the -o option, which separates the output of curl from the actual answer to the query. The problem is that -o writes directly to a file. Does somebody have a hack?
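    One possible hack (a bash/zsh sketch, URL from the question): point -o at a process substitution, so the body flows into the parser while the -w output still lands on curl's stdout:
      curl -s -w "%{http_code}\n" -o >(python -mjson.tool) http://myURL.com
    The two streams are written concurrently, so on a fast response the status code and the pretty-printed JSON may interleave; redirecting the -w output to stderr, or adding a short delay inside the substitution, tidies that up if needed.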

    Read the article

  • IIS 6 on x64 and long URLs

    - by mausch
    I have a very long URL on a site hosted on Windows 2003 x64 that looks like this:
      http://myhost/a_very_very_long_url_around_300_chars_long
    (i.e. a single, very long segment around 300 chars long). Problem is, I'm getting a 400 Bad Request response from HTTP.SYS (it doesn't even reach IIS). I can tell because these requests show up in system32\LogFiles\HTTPERR, e.g.:
      2009-09-17 19:51:29 200.123.179.9 3636 192.168.129.50 80 HTTP/1.1 GET /a_very_very_long_url_around_300_chars_long 400 - URL -
    I tried setting UrlSegmentMaxLength in the registry, and this fixes the issue on my Windows 2003 x86 box but not on the x64 production server. I tried this on another Win2k3 x64 server and it also failed. Any hints?
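    For reference, the HTTP.SYS tweak the poster describes lives under the HTTP service's Parameters key; a sketch with an example value of 512 (the documented default segment limit is 260, and HTTP.SYS must be restarted, or the box rebooted, before the change takes effect):
      reg add HKLM\System\CurrentControlSet\Services\HTTP\Parameters /v UrlSegmentMaxLength /t REG_DWORD /d 512
      net stop http /y
      net start w3svc
    If the x64 box still rejects the URL, comparing MaxFieldLength and MaxRequestBytes under the same key between the working and failing machines is a reasonable next step.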

    Read the article

  • Is defragging right for me?

    - by blade
    Hi, I am using Hyper-V on my Windows Server 2008 R2 DC x64 machine, with standard SATA drives. I read some threads on here about defragging but could not reach a conclusion about whether or not I should defragment. Can anyone shed some light on whether this would be right for me? Furthermore, what tool is best? There seem to be three:
      http://www.perfectdisk.com/products/business-perfectdisk11-server/key-features
      http://www.diskeeper.com/diskeeper/home/server-edition.aspx?id=40279&wid=7
      http://www.perfectdisk.com/products/business-perfectdisk11-hyper-v/learn-more
    Anyone have experience with this?

    Read the article

  • Apply SetEnvIf after Apache RewriteRule

    - by coneybeare
    I have a working apache rewrite rule:
      RewriteCond %{HTTP_HOST} ^.*foo.com
      RewriteRule (.*) http://bar.com$1 [R=301,QSA,L]
    and some working dontlog SetEnvIfs:
      SetEnvIf Request_URI "^/server-status$" dontlog
      SetEnvIf Request_URI "^/home/ping$" dontlog
      SetEnvIf Request_URI "^/haproxy-status$" dontlog
      SetEnvIf User-Agent ".*internal dummy connection.*" dontlog
      CustomLog /var/log/apache2/access.log combined env=!dontlog
    but I can't figure out how to stop the RewriteRule from logging a duplicate line. foo.com and bar.com are both on the same machine. I would expect this rule to work, but it did not:
      SetEnvIf Host "foo.com" dontlog
    I still get duplicates in the Apache log:
      10.250.18.97 - - [06/Apr/2012:16:57:12 +0000] "GET / HTTP/1.1" 200 732 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.5 Safari/534.55.3"
      68.194.30.42 - - [06/Apr/2012:16:57:12 +0000] "GET / HTTP/1.1" 200 732 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/534.55.3 (KHTML, like Gecko) Version/5.1.5 Safari/534.55.3"
    where 10.250.18.97 is the server's IP. How can I prevent that RewriteRule from logging?
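    One approach worth trying (a sketch, not verified against this exact setup): set the environment variable from the rewrite itself with mod_rewrite's E= flag, so the redirecting request gets tagged regardless of what SetEnvIf sees:
      RewriteCond %{HTTP_HOST} ^.*foo\.com
      RewriteRule (.*) http://bar.com$1 [R=301,QSA,L,E=dontlog:1]
    The CustomLog env=!dontlog test then skips the request that produced the 301, leaving only the follow-up request to bar.com in the log.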

    Read the article

  • Squid - Logging to MySQL without empty rows/skipped records?

    - by Lee Ward
    I'm trying to figure out how to make Squid proxy log to MySQL. I know ACL order is pretty important, but I'm not sure I understand exactly what ACLs are or do; it's difficult to explain, but hopefully you'll see where I'm going with this as you read! I have created the lines to make Squid interact with a helper in squid.conf as follows:
      external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
      acl ex_log external mysql_log
      http_access allow ex_log
    The external ACL helper (mysql_lg.php) is a PHP script and is as follows:
      error_reporting(0);
      if (!defined('STDIN')) {
          define('STDIN', fopen('php://stdin', 'r'));
      }
      $res = mysql_connect('localhost', 'squid', 'testsquidpw');
      $dbres = mysql_select_db('squid', $res);
      while (!feof(STDIN)) {
          $line = trim(fgets(STDIN));
          $fields = explode(' ', $line);
          $user = rawurldecode($fields[0]);
          $cli_ip = rawurldecode($fields[1]);
          $protocol = rawurldecode($fields[2]);
          $uri = rawurldecode($fields[3]);
          $q = "INSERT INTO logs (id, user, cli_ip, protocol, url) VALUES ('', '$user', '$cli_ip', '$protocol', '$uri');";
          // Answer ERR on failure instead of die()ing, so the helper keeps running
          if (!mysql_query($q)) {
              fwrite(STDOUT, "ERR\n");
              continue;
          }
          fwrite(STDOUT, "OK\n");
      }
    The configuration I have right now looks like this:
      ## Authentication Handler
      auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
      auth_param ntlm children 30
      auth_param negotiate program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
      auth_param negotiate children 5
      # Allow squid to update log
      external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
      acl ex_log external mysql_log
      http_access allow ex_log
      acl localnet src 172.16.45.0/24
      acl AuthorizedUsers proxy_auth REQUIRED
      acl SSL_ports port 443
      acl Safe_ports port 80   # http
      acl Safe_ports port 21   # ftp
      acl Safe_ports port 443  # https
      acl CONNECT method CONNECT
      acl blockeddomain url_regex "/etc/squid3/bl.acl"
      http_access deny blockeddomain
      deny_info ERR_BAD_GENERAL blockeddomain
      # Deny requests to certain unsafe ports
      http_access deny !Safe_ports
      # Deny CONNECT to other than secure SSL ports
      http_access deny CONNECT !SSL_ports
      # Allow the internal network access to this proxy
      http_access allow localnet
      # Allow authorized users access to this proxy
      http_access allow AuthorizedUsers
      # FINAL RULE - Deny all other access to this proxy
      http_access deny all
    From testing, the closer to the bottom I place the logging lines, the less it logs. Oftentimes it even inserts empty rows into the MySQL table. The file-based logs in /var/log/squid3/access.log are correct, but many of the rows in the access logs are missing from the MySQL logs. I can't help but think it's down to the order I'm putting lines in, because I want to log everything to MySQL: unauthenticated requests, blocked requests, and which category blocked a specific request. The reason I want this in MySQL is that I'm trying to have everything managed via a custom web-based frontend, and I want to avoid using any shell commands and access to system log files if I can help it. The end result is to make it as easy as possible to maintain without keeping staff waiting on the phone while I add a new rule and reload the server! Hopefully someone can help me out here, because this is very much a learning experience for me and I'm pretty stumped. Many thanks in advance for any help!
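    Two properties of external ACLs likely explain the gaps (a sketch of the relevant knobs; option names as in squid 3.x, so check your version): helper results are cached, so repeat lookups never reach the helper, and http_access stops at the first matching rule, so any request already decided by an earlier allow/deny skips the logging ACL entirely:
      # Disable result caching so every request actually reaches the helper
      external_acl_type mysql_log ttl=0 negative_ttl=0 children=5 %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
    For logging every request, including denied ones, squid's access_log directive with a custom logformat, or a database log helper such as the log_db_daemon shipped with newer squid releases, is generally a better fit than using http_access for side effects.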

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered in existing questions. Modern browsers typically open a large number of simultaneous connections, to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a squid transparent HTTP proxy for central management of HTTP downloads. How can the number of simultaneous connections from squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them sequentially to the remote server, only N at a time, delaying (but not dropping) the others.

    Read the article

  • Need to hookup HP dv7-3085dx with Nvidia GeForce GT 230M to my Dell 30 inch LCD 3007WFP at max resol

    - by user14660
    I recently bought an HP laptop (dv7-3085dx; http://reviews.cnet.com/laptops/hp-pavilion-dv7-3085dx/4505-3121_7-33776108.html), which is supposed to have a pretty good video card (NVIDIA GeForce GT 230M). The card is supposed to output a max resolution of 2560x1600, which is also the max resolution of my monitor (http://www.ubergizmo.com/15/archives/2006/02/dell_3007wfp_on_dell_2001fp_action_8_megapixel_desktop.html). I bought an HDMI to dual-link DVI cable (http://www.amazon.com/gp/product/B002KKLYDK/ref=oss_product) after Best Buy's 70 dollar HDMI to DVI cable (perhaps it was 'single' link?) didn't give me the best resolution. In Windows 7, when I try to set the max resolution for my 30 in monitor, I only get 1280x800, which is absurd. The monitor is great, I love the laptop, and the video card supposedly supports such resolutions. So I can't figure out why I'm not getting better resolution (by the way, when I "detect" my monitor in Windows 7, it is shown correctly as DELL 3007WFP!).

    Read the article

  • mod_rewrite RewriteRule is not working

    - by buggy1985
    Hi, this is a follow-up to this question: Rewrite URL - how to get the hostname and the path? And a copy of this: mod_rewrite RewriteRule is not working. I have this rewrite rule:
      RewriteEngine On
      RewriteRule ^(http://[-A-Za-z0-9+&@#/%=~_|!:,.;]*)/([-A-Za-z0-9+&@#/%=~_|!:,.;]*)\?([A-Za-z0-9+&@#/%=~_|!:,.;]*)$ http://http://www.xmldomain.com/bla/$2?$3&rtype=xslt&xsl=$1/$2.xsl
    It seems to be correct, and exactly what I need, but it doesn't work on my server: I get a 404 page not found error. mod_rewrite is enabled, as the following simple rule works fine:
      RewriteEngine On
      RewriteRule ^page/([^/\.]+)/?$ index.php?page=$1 [L]
    Can you help? Thanks
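    A likely reason the first pattern never matches: RewriteRule is only applied to the URL path (e.g. /some/page), never to the scheme-plus-host, and the query string is not part of the match either. Those live in %{HTTP_HOST} and %{QUERY_STRING}, which can be folded into a single RewriteCond (an untested sketch reusing the question's target; the "##" is just an arbitrary separator):
      RewriteCond %{HTTP_HOST}##%{QUERY_STRING} ^([^#]+)##(.+)$
      RewriteRule ^/?(.*)$ http://www.xmldomain.com/bla/$1?%2&rtype=xslt&xsl=http://%1/$1.xsl [L]
    Here %1/%2 are the host and query string captured by the condition, and $1 is the path captured by the rule.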

    Read the article

  • How to tell httpd to preserve the proxied error message?

    - by ZNK - M
    I have an httpd server proxying requests to 2 different tomcat servers. One of my servers handles authentication and returns a specific http error code, 521, when the user already has a running session. My issue is that httpd automatically maps this 521 error code to a 500 (internal server error), and then my client cannot handle it properly. I have tried to disable ProxyErrorOverride and to remove /error/HTTP_INTERNAL_SERVER_ERROR.html.var, but it does not change anything. How can I ask httpd not to change anything in the proxied message?
      <IfModule proxy_module>
          ProxyPass /context1 http://127.0.0.1:8001/context1
          ProxyPass /context2 http://127.0.0.1:8002/context2
          ProxyPreserveHost Off
          ProxyErrorOverride Off
      </IfModule>
    Thanks in advance. httpd 2.2.22 (Win32) with mod_ssl, tomcat 7.25, windows 7 64-bit.

    Read the article

  • Can't register this user

    - by holgero
    I wanted to ask this question on meta, but it said I have to log in first (which is where I have the problem!). I answered a question with this user, but when I tried to register and clicked on the Stack Exchange icon, it only displayed three animated dots and never came back. I suspected a Firefox problem, so I tried Firefox on Windows: and yes, I was able to create an account linked with my other accounts (http://unix.stackexchange.com/users/27867/holgero and http://stackoverflow.com/users/1779245/holgero) when I ran the latest Firefox version under Windows 7 in a VirtualBox. Then I upgraded my Linux Firefox to the newest version and deleted the other account under Windows again, but I still cannot register this account (http://superuser.com/users/177338/holgero). On unix.stackexchange or stackoverflow I never had any problems with registration; superuser seems to be different. So, how do I register this user and have it linked with my other Stack Exchange accounts?

    Read the article

  • RewriteRule applying pattern even though 1 of the RewriteCond's failed

    - by BHare
      #www. domain . tld
      RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
      RewriteCond /home/%1/ -d
      RewriteRule ^(.+) %{HTTP_HOST}$1
      RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/media/(.*)$ /home/$1/client/media/$2 [L]
      RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/(.*)$ /home/$1/www/$2 [L]
    Here is the rewritelog output:
      #(4) RewriteCond: input='tfnoo.mydomain.org' pattern='(?:.*\.)?([^.]+)\.(?:[^.]+)$' [NC] => matched
      #(4) RewriteCond: input='/home/mydomain/' pattern='-d' => not-matched
      #(3) applying pattern '(?:.*\.)?([^.]+)\.(?:[^.]+)/media/(.*)$' to uri 'http://www.mydomain.org/files/images/logo.png'
      #(3) applying pattern '(?:.*\.)?([^.]+)\.(?:[^.]+)/(.*)$' to uri 'http://www.mydomain.org/files/images/logo.png'
      #(2) rewrite 'http://www.mydomain.org/files/images/logo.png' -> '/home/mydomain/www/logo.png'
    Note that on the second line it failed the -d (directory exists) test, which is correct: mydomain does not have a /home/ directory. Therefore it should never rewrite, at least according to my understanding that all RewriteRules are subject to the preceding RewriteConds as logical ANDs.
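    That understanding is the catch: a block of RewriteCond directives binds only to the single RewriteRule that immediately follows it, so the /media/ and catch-all rules above run unconditionally. A sketch of the guarded form, repeating the conditions before each rule:
      RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
      RewriteCond /home/%1/ -d
      RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/media/(.*)$ /home/$1/client/media/$2 [L]
      RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
      RewriteCond /home/%1/ -d
      RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/(.*)$ /home/$1/www/$2 [L]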

    Read the article

  • Meta refresh tag not working in (my) firefox?

    - by mplungjan
    Code like on this page does not work in (my) Firefox 3.6, and also not in Fx4 (WinXP SP3). It works in IE8, Safari 5, Opera 11, Mozilla 1.7, and Chrome 9.
      <meta http-equiv=refresh content="12; URL=meta2.htm">
      <meta http-equiv="refresh" content="1; URL=http://fully_qualified_url.com/page2.html">
    are completely ignored. Not that I use such back-button-killing things, but a LOT of sites do, possibly including my Linux Apache, it seems, when it wants to show a 503 error page... If I Firebug it or look at the generated content, I do not see the refresh tag changed in any way, so I am really curious what kind of plugin/addon could block me, which is why I googled (in vain) for a known bug. In about:config I have accessibility.blockautorefresh = false, so that is not it. I ran in safe mode and OH MY GOD, STACKEXCHANGE IS FULL OF ADS, but no redirect.

    Read the article

  • SVN Error when connecting from MacBook

    - by user66850
    This has been driving me nuts for the last 5 days!!! Out of the blue 5 days ago, SVN access from my MacBook Pro failed: I cannot access any SVN repository (i.e. not in our University, nor open source projects, etc). The error obtained when performing 'svn co', or any other svn command, is shown below. The same message appears irrespective of the svn repository (i.e. it is something on my MacBook):
      svn co http://anonsvn.internet2.edu/svn/i2mi/branches/GROUPER_1_6_BRANCH/
      svn: OPTIONS of 'http://anonsvn.internet2.edu/svn/i2mi/branches/GROUPER_1_6_BRANCH': Could not read status line: connection was closed by server (http://anonsvn.internet2.edu)
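    Since every http:// repository fails the same way, a couple of client-side checks are a reasonable start (a sketch): test the server with a plain HTTP client, and look for a stale proxy entry in Subversion's own configuration:
      # Does raw HTTP to the repository work from this machine?
      curl -v http://anonsvn.internet2.edu/svn/i2mi/ -o /dev/null
      # Any http-proxy-* settings left over in the svn client config?
      grep -n 'http-proxy' ~/.subversion/servers
    If curl works but svn does not, a leftover http-proxy-host/http-proxy-port pair in ~/.subversion/servers is a common culprit.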

    Read the article

  • Remove trailing slash using redirect directive in vhost

    - by Choy
    I have an issue where urls that end in a "/" after a file name cause css/js to break, i.e.:
      http://www.mysite.com/index.php/  <-- breaks
      http://www.mysite.com/           <-- OK, it only breaks for file names
    To fix it, I tried adding a Redirect 301 directive in the vhost file as such, where I'm checking to see if there's an extension with a slash after it:
      <VirtualHost *:80>
          ServerName mysite.com
          Redirect 301 ^(.*?\..+)/$ http://mysite.com/$1
      </VirtualHost>
    The redirect appears to do nothing. Is this an issue with my implementation, or is what I'm trying to accomplish not possible with a Redirect 301 in the vhost file?
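    It is the implementation: mod_alias's Redirect takes a plain URL-path prefix, not a regular expression, so the pattern above never matches anything. Its regex-aware sibling is RedirectMatch; a sketch with the question's pattern (note $1 already begins with a slash, so the target gets no extra "/"):
      <VirtualHost *:80>
          ServerName mysite.com
          RedirectMatch 301 ^(.*?\..+)/$ http://mysite.com$1
      </VirtualHost>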

    Read the article

  • AWS autoscaling. Launch Config/Auto Scaling Group and VPC instance with two ifaces

    - by icalvete
    I want to create a Launch Config/Auto Scaling Group to build instances inside a VPC with two subnets ("frontend" and "backend"). I need these instances to have two ifaces: one in the "frontend" subnet and one in the "backend" subnet. I can't see how to do that. It's not possible from the AWS console, and neither with the aws cli:
      http://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-launch-configuration.html
      http://docs.aws.amazon.com/cli/latest/reference/autoscaling/create-auto-scaling-group.html
    The Launch Config docs say nothing about this:
      http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/create-lc-with-instanceID.html
    Ideas? Thanks!!!
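    A launch configuration indeed describes only a single interface, so a common workaround (sketch with hypothetical IDs) is to have each instance attach its own backend ENI at boot, e.g. from a user-data script running under an instance role that permits these calls:
      # Create an ENI in the backend subnet, then attach it as eth1
      ENI_ID=$(aws ec2 create-network-interface --subnet-id subnet-backend123 \
                 --query 'NetworkInterface.NetworkInterfaceId' --output text)
      INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
      aws ec2 attach-network-interface --network-interface-id "$ENI_ID" \
          --instance-id "$INSTANCE_ID" --device-index 1
    A matching cleanup (detach/delete on scale-in) is needed so orphaned ENIs don't accumulate.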

    Read the article

  • IIS not listening over external network, all other traffic working

    - by Beuy
    Hello there, I have a very odd situation. I have a server (let's call it X) running 2008 R2 with two NICs in it; one is connected to the work domain and has a subnet of 192.168.10.0/24, the other is connected to an ADSL connection and has a subnet of 192.168.1.0/24. The server has IIS installed. On the ADSL connection I have set up a dynamic dns and port forwarding to allow external HTTP, HTTPS, FTP and RDP connections. FTP and RDP are working fine, however neither HTTP nor HTTPS are working at all. I can browse the websites by going to localhost on the machine, but the HTTP and HTTPS ports appear as "Filtered" when I try to scan them using PortQueryUI, and browsers respond with a "Server took too long to load or was not responding" error. This was working fine just a few days ago. Windows Firewall is disabled and I don't have any software firewall on it. I'm really lost; any help would be great.
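    Two quick things worth checking (a sketch): whether HTTP.SYS has been restricted to listening on specific addresses, and what is actually bound to the web ports:
      netsh http show iplisten
      netstat -ano | findstr ":80 :443"
    An empty iplisten list means "listen on all addresses"; if an entry for only the domain-side NIC (192.168.10.x) has crept in, requests arriving on the ADSL interface would time out exactly like this.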

    Read the article

  • I am trying to write an htaccess file that performs authentication and redirects authenticated users to a

    - by racl101
    This is what I have so far, but I can't get the RewriteCond and RewriteRule right:
      RewriteEngine On
      RewriteCond %{LA-U:REMOTE_USER} (\d{3})$
      RewriteRule !^%1 http://subdomain.mydomain.com/%1 [R,L]
      AuthName "My Domain Protected Area"
      AuthType Basic
      AuthUserFile /path/to/my/.htpasswd
      Require valid-user
    This is what I mean the RewriteCond and RewriteRule to say: "If the REMOTE_USER has a username ending in 3 digits, then capture the three digits that match, and for whatever url they are trying to access, if it does not start with the 3 captured digits, then redirect them to the subdirectory with the name equal to those captured three digits." In other words, if a user named 'johnny202' is authenticated, then if he's requesting any directory other than http://subdomain.mydomain.com/202/, he should be redirected to http://subdomain.mydomain.com/202/. The only thing I can think of that is wrong is the first instance of '%1'.
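    That %1 is indeed the problem: backreferences like %1 can appear in a RewriteCond's test string and in a rule's substitution, but never inside a match pattern. One workaround (an untested sketch) folds the comparison into a second condition by gluing the URI and the captured digits together with a separator:
      RewriteEngine On
      RewriteCond %{LA-U:REMOTE_USER} (\d{3})$
      # Negated: true when the URI is NOT already under the user's own /NNN/ directory
      RewriteCond %{REQUEST_URI}::%1 !^/(\d{3})/.*::\1$
      RewriteRule .* http://subdomain.mydomain.com/%1 [R,L]
    Negated conditions capture nothing themselves, so %1 in the rule still refers to the digits captured by the first condition; behavior around this varies by mod_rewrite version, hence the heavy hedging.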

    Read the article

  • How can I diagnose cache misses when using Apache as a reverse proxy?

    - by johnstok
    I have set up Apache 2.2 as a reverse proxy with the following configuration:
      # jBoss proxying
      ProxyRequests Off
      <Proxy *>
          Order deny,allow
          Allow from all
      </Proxy>
      ProxyPass /foo http://localhost:9080/foo
      ProxyPassReverse /foo http://localhost:9080/foo
      ProxyPassReverseCookiePath /foo /foo
      # Reverse proxy caching
      CacheEnable disk /foo
      # Compression
      SetOutputFilter DEFLATE
      BrowserMatch ^Mozilla/4 gzip-only-text/html
      BrowserMatch ^Mozilla/4\.0[678] no-gzip
      BrowserMatch \bMSIE\s(7|8) !no-gzip !gzip-only-text/html
      DeflateCompressionLevel 9
      Header append Vary User-Agent env=!dont-vary
    However, in a number of cases where I expect a cached response to be returned, the request is sent through to the origin server at localhost:9080. Responses have an HTTP Vary header of 'Accept-Encoding,User-Agent', which is to be expected given the mod_deflate configuration. How can I determine why Apache is unable to serve a response from the cache?
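    Two angles worth noting (a sketch, not specific to this exact setup): mod_cache in 2.2 logs its hit/miss reasoning at debug level, and that Vary header is itself a prime suspect, since 'Vary: User-Agent' forces a separate cache entry per distinct User-Agent string, which looks like a stream of misses across real browsers:
      # httpd 2.2: mod_cache explains its decisions in the error log at debug level
      LogLevel debug
    On Apache 2.4 the CacheHeader and CacheDetailHeader directives add X-Cache / X-Cache-Detail response headers, which make hits and misses directly visible per request.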

    Read the article

  • Tunneling HTTPS traffic via a PUTTY/SSL tunnel with SOCKS

    - by ripper234
    I have configured a SOCKS ssh tunnel to a remote proxy, and set my Firefox to use localhost:<port> as a SOCKS proxy. My intention is to tunnel outgoing HTTP/S connections from my machine via a specific 3rd party server I own (on AWS). In my testing, HTTP URLs are forwarded properly (e.g. when I access http://jsonip.com/ from my computer I do get the server's IP). However, whenever I try to reach an HTTPS address, I get this error: "The proxy server is refusing connections". How do I debug/fix it? My PuTTY tunnel config is simply a random source port number with "Dynamic" checked. P.S. I'm aware I might need to manually accept SSL certificates. The reason I'm doing this is to resolve problems using gmail as an outbound SMTP service.

    Read the article

  • how to set auto redirection in tomcat

    - by Registered User
    I have a site, http://social.openitup.in; right now what you are seeing is a default Tomcat6 page. I am using mod_ajp as a front end, and the Apache vhost configuration for it is:
      <VirtualHost *:80>
          ServerName social.openitup.in
          ServerAdmin webmaster@localhost
          ProxyRequests off
          <Proxy *>
              Order deny,allow
              Allow from all
          </Proxy>
          ProxyPreserveHost On
          ProxyPass / ajp://192.168.1.19:8009/
          ProxyPassReverse / ajp://192.168.1.19:8009/
      </VirtualHost>
    However, I have an application running on it, http://social.openitup.in/olat. What I want is that when someone opens http://social.openitup.in, then rather than seeing the Tomcat6 home page from /var/lib/tomcat6/webapps/ROOT/index.html, the person is redirected to the olat application, which is in /var/lib/tomcat6/webapps/olat. How can this be achieved? The above vhost configuration is on a machine separate from where OLAT is running.
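    One simple option (a sketch, untested against this setup): issue the redirect from the Apache vhost itself, so only the bare root URL is bounced to the webapp while the proxy rules pass everything else through; it may need to sit before the ProxyPass lines to take effect:
      RedirectMatch ^/$ http://social.openitup.in/olat/
    Alternatively, the redirect can live on the Tomcat side, as an index.jsp in the ROOT webapp that calls response.sendRedirect("/olat/").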

    Read the article

  • How to enable Jetty to support cometd/reverse ajax while let it listen to port 80?

    - by janetsmith
    I would like to use the cometd / reverse ajax capability of Jetty 7. I tried to configure it to listen on port 80 instead of 8080. However, according to http://jetty.mortbay.org/jetty5/faq/faq%5Fs%5F200-General%5Ft%5Fapache.html, Apache can be configured as an HTTP/1.1 proxy to pass selected requests to Jetty using the HTTP/1.1 protocol. This is simple to configure and use, but current versions of the Apache mod_proxy do not support persistent connections. As far as I know, reverse ajax in Jetty depends on continuations (I guess these need persistent connections). So how can I let Jetty support reverse ajax while coexisting with an Apache server? Thanks.
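    If the only reason for fronting with Apache is the port, one way around the proxy entirely (a Linux sketch, assuming Jetty runs on the system JVM) is to let the Jetty process bind port 80 itself:
      # Grant the JVM binary the capability to bind privileged ports (<1024)
      sudo setcap 'cap_net_bind_service=+ep' "$(readlink -f "$(which java)")"
    then set the connector port to 80 in jetty.xml. Jetty also ships a setuid option for starting as root and dropping privileges after binding; either approach avoids holding long-lived comet connections open through mod_proxy.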

    Read the article

  • Can't pipe echo to netcat?

    - by user1641300
    I have the following command:
      echo 'HTTP/1.1 200 OK\r\n' | nc -l -p 8000 -c
    and when I curl localhost:8000 I am not seeing HTTP/1.1 200 .. being printed. I am on mac os x with netcat 0.7.1. Any ideas? The full script:
      #!/bin/bash
      trap 'my_exit; exit' SIGINT SIGQUIT
      my_exit() {
          echo "you hit Ctrl-C/Ctrl-\, now exiting.."
          # cleanup commands here if any
      }
      if test $# -eq 0 ; then
          echo "Usage: $0 PORT"
          echo ""
          exit 1
      fi
      while true
      do
          echo "HTTP/1.1 200 OK\r\n" | nc -l -p ${1} -c
      done
    and testing with:
      curl localhost:8000
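    Two likely culprits (a sketch): bash's echo does not interpret \r\n without -e, and an HTTP response needs a blank line after the status line/headers before curl will treat it as complete. printf handles both:
      printf 'HTTP/1.1 200 OK\r\n\r\n' | nc -l -p 8000
    (With the BSD netcat that ships with OS X, the listener syntax is nc -l 8000, with no -p or -c; the GNU netcat 0.7.1 syntax above only applies if that build is actually first in PATH.)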

    Read the article

  • Cannot SSH into Amazon EC2 instance

    - by edelwater
    I read:
      Cannot connect to ec2 instance
      http://stackoverflow.com/questions/5635640/cannot-ssh-into-amazon-ec2-instance
      Amazon EC2 instance ssh problems
    etc., but could not fix it: suddenly (after a year of service, with no changes to my winscp settings) it gives me "network error connection timed out" (I'm using ec2-user), also from the Amazon console. Log file: http://pastebin.com/vNq6YQvN. All sites that run on it run fine; port 22 is allowed in the security group (I never changed it); I'm using the correct ec2-user and domain; and via my winscp / putty I can connect to other hosting (via ssh). Update: SOLVED. I spent 2 days without looking at my own IP address (since it had not changed in the past 3 years). Your comments made the spark in my brain, thank you so much. (And only now do I find dozens of discussions from angry users complaining that the static addresses from my provider were changed to dynamic ones: http://gathering.tweakers.net/forum/list_messages/1501005/12.)

    Read the article
