Search Results

Search found 127626 results on 5106 pages for 'http status code 408'.

Page 672/5106 | < Previous Page | 668 669 670 671 672 673 674 675 676 677 678 679  | Next Page >

  • uWSGI and python virtual env

    - by user27512
    I'm trying to use uWSGI with a virtual env in order to run the Trac bug tracker on it. I've installed uwsgi system-wide via pip. Next, I've installed Trac in a virtualenv:

      $ virtualenv venv
      $ . venv/bin/activate
      $ pip install trac

    I've then written a simple uWSGI configuration file:

      [uwsgi]
      master = true
      processes = 1
      socket = localhost:3032
      home = /srv/http/trac/venv/
      no-site = true
      gid = www-data
      uid = www-data
      env = TRAC_ENV=/srv/http/trac/projects/my_project
      module = trac.web.main:dispatch_request

    But when I try to launch it, it fails:

      $ uwsgi --http :8000 --ini /etc/uwsgi/vassals-available/my_project.ini --gid www-data --uid www-data
      ...
      Set PythonHome to /srv/http/trac/venv/
      ...
      *** Operational MODE: single process ***
      ImportError: No module named trac.web.main
      unable to load app 0 (mountpoint='') (callable not found or import error)

    I think uWSGI isn't using the virtual env. When inside the virtual env, I can import trac.web.main without getting an ImportError. How can I do that? Thanks
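    A minimal sketch of an ini that pins uWSGI to the virtualenv. It assumes a plugin-based uWSGI build and that Trac really is installed inside /srv/http/trac/venv; neither is confirmed by the question, so treat it as a starting point rather than a known fix:

      # hypothetical my_project.ini - a sketch, not a confirmed fix
      [uwsgi]
      # only needed for plugin builds of uWSGI
      plugins = python
      # "virtualenv" is the same option as "home"; point it at the venv
      virtualenv = /srv/http/trac/venv
      chdir = /srv/http/trac
      env = TRAC_ENV=/srv/http/trac/projects/my_project
      module = trac.web.main:dispatch_request
      master = true
      processes = 1
      socket = localhost:3032
      uid = www-data
      gid = www-data

    Running uwsgi in the foreground and reading the startup banner (the PythonHome and "Python main interpreter initialized" lines) is one way to confirm which interpreter and site-packages it actually picked up.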

    Read the article

  • mod_rewrite adds .html when redirecting

    - by user12093810293812031
    I have a redirect situation where the site is part dynamic and part generated .html files. For example, mysite.com/homepage and mysite.com/products/42 are actually static html files, whereas other URLs, like mysite.com/cart, are dynamically generated. Both mysite.com and www.mysite.com point to the same place, but I want to redirect all of the traffic from mysite.com to www.mysite.com. I'm so close, but I'm running into an issue where Apache adds .html to the end of my URLs for anything where a static .html file exists - which I don't want.

    I want to redirect this:

      http://mysite.com/products/42

    to this:

      http://www.mysite.com/products/42

    But Apache is redirecting to this instead (because 42.html is an actual html file):

      http://www.mysite.com/products/42.html

    I don't want that - I want it to redirect to www.mysite.com/products/42. Here's what I started with:

      RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
      RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]

    I tried making the parameters and the .html optional, but the .html still gets added on the redirect:

      RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
      RewriteRule ^(.*)?(\.html)?$ http://www.mysite.com/$1 [R=301,L]

    What am I doing wrong? Really appreciate it :)
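    One commonly suggested workaround (a sketch, not verified against this particular vhost) is to build the redirect target from %{THE_REQUEST}, which still holds the URL the client originally asked for, before any other rule or handler has had a chance to append .html:

      RewriteCond %{HTTP_HOST} ^mysite\.com$ [NC]
      # %{THE_REQUEST} looks like "GET /products/42 HTTP/1.1"; capture the path part
      RewriteCond %{THE_REQUEST} ^[A-Z]+\s/?(\S*)\s
      RewriteRule ^ http://www.mysite.com/%1 [R=301,L]

    If the .html is instead being appended by content negotiation rather than by another rewrite rule, Options -MultiViews is the usual knob to check.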

    Read the article

  • SSH not working through Double NAT

    - by d_inevitable
    I am trying to set up port forwarding for SSH through two NATs. The first router translates my internet IP to my outer network (10.1.7.0). In the outer network there's a second router that does NAT to my inner network (192.168.1.0). The target server is connected to both the outer network and the inner network. I cannot change the port forwarding options for the outer router; it is currently configured to forward the SSH and HTTP ports to the router for the inner network.

    [ASCII diagram in the original: Internet -> Outer Router -> Inner Router; the SSH server hangs off both networks, an outer workstation reaches it directly over HTTP for testing, and an inner workstation reaches it through the inner router.]

    When connecting from an outer workstation to the address of the inner router, both SSH and HTTP work fine. When connecting from the internet to my public IP with HTTP, the connection works fine as well. However, SSH just times out - most likely because the reply is not routed back properly. I suspect it's either because of SSH itself, or because the server is connected to both the inner and outer network. Any ideas how I could resolve this issue? The routes on the server are currently:

      $ ip route show
      default via 10.1.7.254 dev eth0 metric 100
      10.1.7.0/24 dev eth0 proto kernel scope link src 10.1.7.1
      192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.2

    Do I have to change this? If so, how?
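    If the cause is asymmetric routing (SSH replies leaving via the eth0 default route instead of going back through the inner router), a policy-routing sketch like the following is one way to force traffic sourced from the inner address back out eth1. The inner router's LAN address (192.168.1.1 here) and the table number are assumptions, not taken from the question:

      # send anything sourced from 192.168.1.2 back via the inner router (assumed to be 192.168.1.1)
      ip route add default via 192.168.1.1 dev eth1 table 100
      ip rule add from 192.168.1.2 lookup 100
      # make these persistent via /etc/network/interfaces post-up lines or a boot script as appropriate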

    Read the article

  • ls hangs after NFS server reboot

    - by Apikot
    I've got server A and server B. B acts as an NFS server, and A mounts from B. Both are running on EC2. Sometimes I have to shut down B and start a new instance (an identical instance). After B is back up, trying to do anything inside the mounted directory on A (ls, for example) just hangs. I'm trying to set up a cron job that checks the status of the mount and remounts if anything is wrong. Is there any way to check the status of a mount?
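    A sketch of the kind of cron check that's often used for this. The mount point path is a placeholder, and a hard-mounted NFS share can still leave processes stuck in uninterruptible sleep, so the forced/lazy unmount is a best effort rather than a guarantee:

      #!/bin/bash
      # hypothetical /usr/local/bin/check-nfs.sh - run from cron every minute
      MOUNTPOINT=/mnt/data   # placeholder path

      # stat through the mount with a hard time limit so the check itself cannot hang;
      # SIGKILL can interrupt NFS waits on reasonably recent kernels
      if ! timeout -s KILL 10 stat -t "$MOUNTPOINT" > /dev/null 2>&1; then
          umount -f -l "$MOUNTPOINT"   # force + lazy unmount of the stale handle
          mount "$MOUNTPOINT"          # remount using the /etc/fstab entry
      fi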

    Read the article

  • Grizzly server - works with IP, but not with domain name

    - by Hitchhiker
    I'm hosting a grizzly web service on a Windows 7 Pro machine (embedded in a regular Java process), and it is binding to http://my-domain-name. When trying to hit the service from another machine, requests to http://my-domain-name fail (fiddler shows error code 502), but requests to http://my-ip work. When the service runs on a Windows Server 2008 machine, this doesn't happen (both requests succeed). What could be the issue?

    Read the article

  • Plesk SiteBuilder Install (UNIX)

    - by Clear.Cache
    I'm trying to install SiteBuilder from Plesk, for Unix, on CentOS 4.x. The install itself went fine using the tarball files from their site. I ran:

      rpm -Uhv updates/*.rpm
      rpm -Uhv sitebuilder/*.rpm

    Documentation: http://download1.parallels.com/SiteBuilder/4.5.0/doc/install/en_US/html/index.htm

    Now http://64.64.96.138/login doesn't work (see http://download1.parallels.com/SiteBuilder/4.5.0/doc/install/en_US/html/index.htm?fileName=performing_first_login_to_sitebuilder.htm). What am I doing wrong here?

    Read the article

  • Best way to use mod_rewrite to replace WordPress pages with static files

    - by David Moles
    Here's the situation: I've got an old WordPress installation that I'd like to archive as static files, but I'd also like to preserve the old URLs. I've already created the static archive with wget and sorted out the filenames and links. Now I'd like to configure Apache to intercept requests for the old dynamic URLs and replace them with the new static ones, e.g.:

      http://www.example.org/log/?p=1234
      http://www.example.org/log/index.php?p=1234

    should redirect to

      http://www.example.org/log/archives/1234.html

    I've tried adding the following to the VirtualHost config for example.org, but to no effect - I just get the PHP page:

      RewriteCond %{REQUEST_URI} /log/
      RewriteCond %{QUERY_STRING} p=([^&;]*)
      RewriteRule ^/$ http://%{SERVER_NAME}/log/archives/%1.html [R,L]

    I've enabled logging and I can see what look like other rules being applied, but not this one. None of my other guesses at match patterns for %{REQUEST_URI} seem to have any effect either (log, log/, log.*, even .*). I'm new to mod_rewrite and this is mostly cargo cult, so I'm pretty sure I've gotten it wrong. Anyone know what I should be doing here?
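    A sketch of a rule pair that matches the /log/ path as well as the query string, and drops the query string from the redirect target (the trailing ? does that). It assumes the rules live in server/vhost context, where the pattern sees a leading slash:

      RewriteCond %{QUERY_STRING} (?:^|&)p=([^&;]+)
      RewriteRule ^/log/(?:index\.php)?$ http://%{SERVER_NAME}/log/archives/%1.html? [R=301,L]

    The pattern ^/$ in the original rule only ever matches the site root, which would explain why the /log/ requests fall through to PHP. On Apache 2.4 the [QSD] flag is the tidier way to drop the query string.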

    Read the article

  • Using sed to Download ComboFix automatically

    - by user901398
    I'm trying to write a shell script to grab the dynamic URL which ComboFix is located at on BleepingComputer.com/download/combofix. However, for some reason I can't seem to get my regex to match the download link of the "click here" shown if the download doesn't start. I used a regex tester and it said I matched the link, but when I execute the script it turns up an empty result. Here's my entire script:

      #!/bin/bash
      # Download latest ComboFix from BleepingComputer
      wget -O Listing.html "http://www.bleepingcomputer.com/download/combofix/" -nv
      downloadpage=$(sed -ne 's@^.*<a href="\(http://www[.]bleepingcomputer[.]com/download/combofix/dl/[0-9]\+/\)" class="goodurl">.*$@\1@p' Listing.html)
      echo "DL Page: $downloadpage"
      secondpage="$downloadpage"
      wget -O Download.html $secondpage -nv
      file=$(sed -ne 's@^.*<a href="\(http://download[.]bleepingcomputer[.]com/dl/[0-9A-Fa-f]\+/[0-9A-Fa-f]\+/windows/security/anti[-]virus/c/combofix/ComboFix[.]exe\)">.*$@\1@p' Download.html)
      echo "File: $file"
      wget -O "ComboFix.exe" "$file" -nv
      rm Listing.html
      rm Download.html
      mkdir Tools
      mv "ComboFix.exe" "Tools/ComboFix.exe" -f

    The first two downloads work successfully, and I end up with:

      http://www.bleepingcomputer.com/download/combofix/dl/12/

    But the final sed fails to match and give me the download link. The code it's supposed to match is:

      <a href="http://download.bleepingcomputer.com/dl/6c497ccbaff8226ec84c97dcdfc3ce9a/5058d931/windows/security/anti-virus/c/combofix/ComboFix.exe">click here</a>
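    If the goal is just to pull the href out of that anchor, a more forgiving extraction with grep -o (a sketch; it assumes the download host and the ComboFix.exe filename stay stable) avoids anchoring on the whole line the way the sed expression does:

      file=$(grep -oE 'http://download\.bleepingcomputer\.com/dl/[^"]+/ComboFix\.exe' Download.html | head -n 1)

    The sed version can also fail for mundane reasons such as extra attributes or different whitespace around the href in the live page, which the strict ^.*<a href="...">.*$ pattern then refuses to match.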

    Read the article

  • Reverse Proxy to filter out js files from multiple hosts in nginx

    - by stwissel
    I have a website http://someplace.acme.com that I want my users to access via http://myplace.mycorp.com - a pretty standard reverse proxy setup. The special requirement: any js file - identified by the .js extension and/or the mime-type text/javascript (if that is possible) - needs to be served from a different location, a local tool that inspects the js for potential threats. So I have:

      location / {
          proxy_pass http://someplace.acme.com;
          proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
          proxy_redirect off;
          proxy_buffering off;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }

      location ~* \.(js)$ {
          proxy_pass http://127.0.0.1:8188/filter?source=$1;
          proxy_redirect off;
          proxy_buffering off;
      }

    The JS is still served from the remote host, and I have no idea how to check for the mime type. What am I missing?
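    One thing that stands out: $1 refers to a capture from the location's own regex, which here only ever captures the literal "js". A sketch that hands the full requested URI to the filter instead - assuming the local tool really does accept a source parameter shaped like this, which is an assumption, not something stated in the question:

      location ~* \.js$ {
          # hand the original upstream URL for the requested script to the local inspection tool
          proxy_pass http://127.0.0.1:8188/filter?source=http://someplace.acme.com$request_uri;
          proxy_redirect off;
          proxy_buffering off;
      }

    As far as routing by MIME type goes, nginx only learns the upstream Content-Type after the request has already been sent to a backend, so selecting the filter by response type at proxy_pass time isn't really possible; the extension match is the practical option.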

    Read the article

  • Repeated requests on our server?

    - by pitty.platsch
    I encountered something strange in the access log of our Apache server which I cannot explain. Requests for webpages that I or my colleagues make from the office's Windows network get repeated by another IP (that we don't know) a couple of seconds later. The user agent repeating our requests is:

      Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2)

    Does anyone have an idea?

    Update: I've got some more information now. The referrer of the replicate is set to the URL I requested before, and it's not the exact same request, as the protocol version is changed from 'HTTP/1.1' to 'HTTP/1.0'. The IP is not just one, it's one of a subnet (80.40.134.*). It's only the first request to a resource that gets repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky. I tried random URLs with different HTTP status codes and different file patterns: 301s and 200s are redone, 404s are not, and image extensions seem to be ignored. While doing my tests I discovered that this behavior seems to be common, as I found other clients visiting just after the first requests:

      66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google"
      50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)"

    I wasn't aware of this practice, so I don't see it as much of a threat anymore. I still want to find out who this is, so any further help is appreciated. I'll try later whether this also happens when I query some other server where I have access to the access logs, and will update here then.

    Read the article

  • wget-ing protected content with exported cookies

    - by XXL
    I have exported a pair of cookies from Firefox that are valid for the URL in question and tried accessing/downloading the protected content off that address, but the end result is a return to the login page. I have tried doing the same thing for 3 other websites with a similar outcome. Any clues as to what I might be doing wrong? The syntax I'm using:

      wget --load--cookies=FILE URL
      -----------------------------------------------
      DEBUG output created by Wget 1.12 on linux-gnu.

      Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_login lz8xZQ%3D%3D
      Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_pass 2fd4e1c67a2d28fced849ee1bb76e74a
      Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_uid GZX4TDA%3D
      --2011-01-14 13:57:02-- www.x.org/download.php?id=397003
      Resolving www.x.org... 1.1.1.1
      Caching www.x.org => 1.1.1.1
      Connecting to www.x.org|1.1.1.1|:80... connected.
      Created socket 5.
      Releasing 0x0943ef20 (new refcount 1).
      ---request begin---
      GET /download.php?id=397003 HTTP/1.0
      User-Agent: Wget/1.12 (linux-gnu)
      Accept: */*
      Host: www.x.org
      Connection: Keep-Alive
      ---request end---
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.1 302 Found
      Date: Fri, 14 Jan 2011 11:26:19 GMT
      Server: Apache
      X-Powered-By: PHP/5.2.6-1+lenny8
      Set-Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765; path=/
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003
      Vary: Accept-Encoding
      Content-Length: 0
      Keep-Alive: timeout=15, max=100
      Connection: Keep-Alive
      Content-Type: text/html
      ---response end---
      302 Found
      Stored cookie www.x.org -1 (ANY) / <session> <insecure> [expiry none] PHPSESSID 5f2fd97103f8988554394f23c5897765
      Registered socket 5 for persistent reuse.
      Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003 [following]
      Skipping 0 bytes of body: [] done.
      --2011-01-14 13:57:02-- www.x.org/login.php?returnto=download.php%3Fid%3D397003
      Reusing existing connection to www.x.org:80.
      Reusing fd 5.
      ---request begin---
      GET /login.php?returnto=download.php%3Fid%3D397003 HTTP/1.0
      User-Agent: Wget/1.12 (linux-gnu)
      Accept: */*
      Host: www.x.org
      Connection: Keep-Alive
      Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765
      ---request end---
      HTTP request sent, awaiting response...
      ---response begin---
      HTTP/1.1 200 OK
      Date: Fri, 14 Jan 2011 11:26:20 GMT
      Server: Apache
      X-Powered-By: PHP/5.2.6-1+lenny8
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Vary: Accept-Encoding
      Content-Length: 2171
      Keep-Alive: timeout=15, max=99
      Connection: Keep-Alive
      Content-Type: text/html
      ---response end---
      200 OK
      Length: 2171 (2.1K) [text/html]
      Saving to: `x.out'
      0K ..  100% 18.7M=0s
      2011-01-14 13:57:02 (18.7 MB/s) - `x.out' saved [2171/2171]
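    Two things stand out in that dump: the exported cookies all show an expiry in 1901 (so wget will treat them as expired and not send them), and the flag is written --load--cookies with a doubled dash. A hedged alternative is to skip the browser export entirely and let wget log in and capture its own session cookies - the form field names and login URL below are placeholders, not taken from the real site:

      # 1) log in and save the session cookies (field names are assumptions)
      wget --save-cookies=cookies.txt --keep-session-cookies \
           --post-data='username=USER&password=PASS' \
           'http://www.x.org/login.php'

      # 2) reuse them for the protected download
      wget --load-cookies=cookies.txt 'http://www.x.org/download.php?id=397003'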

    Read the article

  • How to access Jenkins remotely on Ubuntu 12.04 server?

    - by quincyglenn
    I have installed Jenkins and opened port 8080 on an Ubuntu 12.04 server, but I still can't access Jenkins remotely. Below are the steps I took.

      # Install Jenkins, enable UFW and open port 8080
      sudo apt-get install jenkins
      sudo ufw enable
      sudo ufw allow 8080
      sudo ufw reload

      # Check the status
      sudo ufw status
      8080    ALLOW    Anywhere
      8080    ALLOW    Anywhere (v6)

      # Locally
      curl -I localhost:8080
      HTTP/1.1 200 OK
      Server: Winstone Servlet Engine v0.9.10
      ...

      # On an external machine
      curl -I [ip]:8080
      couldn't connect to host
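    If the UFW rule is in place but remote connections still fail, it's worth checking which address Jenkins is actually listening on and whether anything upstream (for example a cloud provider security group) also filters 8080. A sketch, assuming the Debian/Ubuntu package layout with its /etc/default/jenkins file (path and variable names are the packaged defaults, worth verifying locally):

      # confirm Jenkins is bound to all interfaces, not just 127.0.0.1
      sudo netstat -tlnp | grep 8080

      # /etc/default/jenkins
      HTTP_HOST=0.0.0.0
      JENKINS_ARGS="--webroot=/var/cache/jenkins/war --httpPort=$HTTP_PORT --httpListenAddress=$HTTP_HOST"

      sudo service jenkins restart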

    Read the article

  • IIS6 Log time recording problems

    - by Hafthor
    On three separate occasions, on two separate servers, at nearly the same times, 6.9 hours seemingly went by without any data being written to the IIS logs; but on closer inspection it appears that it was all recorded at once. Here are the facts as I know them:

      - Windows Server 2003 R2 w/ IIS6
      - Logging using GMT, server local time GMT-7
      - Application was still operating, and I have SQL data to prove that
      - Time gaps appear within one log file, not across two
      - # headers appear at the gap
      - Load balancer pings every 30 seconds
      - No caching

    Here's info on a particular case: an entry appears for 2009-09-21 18:09:27, then #headers; the next entry is for 2009-09-22 01:21:54, and so are the next 1600 entries in this log file and 370 in the next log file. About half of the ~2000 entries on 2009-09-22 01:21:54 are load balancer pings (est. at 2/min for 6.9 hrs = 828 pings); after that, entries are recorded as normal. I believe these events may coincide with me deploying an ASP.NET application update onto those machines. Here's some relevant content from the logs in question:

      ex090921.log, line 3684:

      2009-09-21 17:54:40 GET /ping.aspx - 80 404 0 0 3733 122 0
      2009-09-21 17:55:11 GET /ping.aspx - 80 404 0 0 3733 122 0
      2009-09-21 17:55:42 GET /ping.aspx - 80 404 0 0 3733 122 0
      2009-09-21 17:56:13 GET /ping.aspx - 80 404 0 0 3733 122 0
      2009-09-21 17:56:45 GET /ping.aspx - 80 404 0 0 3733 122 0
      #Software: Microsoft Internet Information Services 6.0
      #Version: 1.0
      #Date: 2009-09-21 18:04:37
      #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
      2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 3078
      2009-09-22 01:04:06 GET /ping.aspx - 80 404 0 0 3733 122 109
      2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 3828
      2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
      2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 278 122 0
      ... continues until line 5449
      2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
      <eof>

      ex090922.log:

      #Software: Microsoft Internet Information Services 6.0
      #Version: 1.0
      #Date: 2009-09-22 00:00:16
      #Fields: date time cs-method cs-uri-stem cs-uri-query s-port sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
      2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
      2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
      ... continues until line 367
      2009-09-22 01:04:06 GET /ping.aspx - 80 200 0 0 277 122 0
      2009-09-22 01:04:30 GET /ping.aspx - 80 200 0 0 277 122 0
      ... back to normal behavior

    Note the seemingly correct date/time written to the #header of the new log file. Also note that /ping.aspx returned 404 and then switched to 200 just as the problem started. I rename the "I'm alive" page so the load balancer stops sending requests to the server while I'm working on it; what you see here is me renaming it back so the load balancer will use the server again. So this problem definitely coincides with me re-enabling the server. Any ideas?

    Read the article

  • How can I get rid of the long Google results URLs?

    - by Teifi
    google.com is always shielded by our firewall. When I search for something on google.com, a result list appears. Then when I click a link, the URL changes to a processed URL like:

      http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDcQFjAA&url=http%3A%2F%2Fwww.amazon.com%2F&ei=PE_AUMLmFKW9iAfrl4HoCQ&usg=AFQjCNGcA9BfTgNdpb6LfcoG0sjA7hNW6A&cad=rjt

    Then my browser is blocked, because of google.com I guess. The only useful information in that long processed URL is http%3A%2F%2Fwww.amazon.com (http://www.amazon.com). My questions: What's the meaning of that long processed URL? Is there a way to remove the google.com/url?sa... wrapper each time I click a search result?

    Read the article

  • SVN Connection Not Successful

    - by user66850
    I am getting the following message when attempting to connect to our company's SVN repository - the same error occurs whether I try from the OS X command line or from Eclipse. Any ideas on where to troubleshoot? I can access the repository from other similar computers, and others in my team do not have any problem; this issue started occurring on my MacBook Pro yesterday afternoon (no known changes were made to the OS prior to the problem starting).

      $ svn co http://example.ca/cwl/tags/app
      svn: OPTIONS of 'http://example.ca/cwl/tags/app': Could not read status line: connection was closed by server (http://example.ca)

    Read the article

  • What to have in sources.list on an Ubuntu LTS server (production)?

    - by nbr
    I have several Ubuntu 10.04 LTS servers in production and I'm using apticron to check that my software is up to date, security-wise. However, by default, Ubuntu has the lucid-updates repository enabled. This means lots of low-priority updates (such as this) that I don't need, and thus extra work for me. Is it okay to just remove the lucid-updates line(s) in sources.list? I would still get security updates via lucid-security, right? So this is what my sources.list would look like:

      deb http://se.archive.ubuntu.com/ubuntu/ lucid main restricted
      deb http://se.archive.ubuntu.com/ubuntu/ lucid universe
      deb http://security.ubuntu.com/ubuntu lucid-security main restricted
      deb http://security.ubuntu.com/ubuntu lucid-security universe

    Read the article

  • Apache: getting proxy, rewrite, and SSL to play nice

    - by Rich M
    Hi, I'm having loads of trouble trying to integrate proxy, rewrite, and SSL all together in Apache 2. A brief history: my application runs on port 8080 and, before adding SSL, I used the proxy to strip the 8080 from the URLs to and from the server. So instead of www.example.com:8080/myapp, the client app accessed everything via www.example.com/myapp. Here was the conf that accomplished this:

      ProxyRequests Off
      <Proxy */myapp>
          Order deny,allow
          Allow from all
      </Proxy>
      ProxyPass /myapp http://www.example.com:8080/myapp
      ProxyPassReverse /myapp http://www.example.com:8080/myapp

    What I'm trying to do now is force all requests to myapp to be HTTPS, and then have those SSL requests follow the same proxy rules that strip out the port number, as my application used to. Simply changing the ports 8080 to 8443 in the ProxyPass lines does not accomplish this. Unfortunately I'm not an expert in Apache, and my skills of trial and error are already reaching the end of the line.

      RewriteEngine On
      RewriteCond %{HTTPS} off
      RewriteRule myapp/* https://%{HTTP_HOST}%{REQUEST_URI}

      ProxyRequests Off
      <Proxy */myapp>
          Order deny,allow
          Allow from all
      </Proxy>
      SSLProxyEngine on
      ProxyPass /myapp https://www.example.com:8443/mloyalty
      ProxyPassReverse /myapp https://www.example.com:8433/mloyalty

    As this stands, a request to anything on the server other than /myapp loads fine over http. If I make a browser http request to /myapp it then redirects to https://www.example.com:8443/myapp, which is not the desired behavior. Links within the application then resolve to https://www.example.com/myapp/linkedPage, which is desirable. Browser requests (http and https) to anything one level beyond just /myapp, i.e. /myapp/mycontext, resolve to https://www.example.com/myapp/mycontext without the port. I'm not sure what other information there is for me to give, but I think my goals should be clear.
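    A sketch of one way to split this across two virtual hosts: the plain-HTTP vhost only redirects /myapp to HTTPS, and the HTTPS vhost proxies to the backend. Whether the backend should be reached over http://...:8080 or https://...:8443 depends on what the application actually listens on; the port choice and the certificate paths below are assumptions, not taken from the question:

      <VirtualHost *:80>
          ServerName www.example.com
          RewriteEngine On
          # send any /myapp request to the HTTPS site, leave everything else on HTTP
          RewriteRule ^/myapp(.*)$ https://www.example.com/myapp$1 [R=301,L]
      </VirtualHost>

      <VirtualHost *:443>
          ServerName www.example.com
          SSLEngine on
          SSLCertificateFile    /etc/pki/tls/certs/example.crt
          SSLCertificateKeyFile /etc/pki/tls/private/example.key

          ProxyRequests Off
          ProxyPass        /myapp http://www.example.com:8080/myapp
          ProxyPassReverse /myapp http://www.example.com:8080/myapp
      </VirtualHost>

    Terminating SSL at Apache and talking plain HTTP to the local backend keeps the original port-stripping behaviour; SSLProxyEngine is only needed if the backend itself must be reached over HTTPS.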

    Read the article

  • Nginx redirect all request that does not match a file to a php file

    - by cyrbil
    I'm trying to get all requests to

      http://mydomain.com/downloads/*

    redirected to

      http://mydomain.com/downloads/index.php

    except when the requested file exists in /downloads/. For example:

      http://mydomain.com/downloads              -> /downloads/index.php
      http://mydomain.com/downloads/unknownfile  -> /downloads/index.php
      http://mydomain.com/downloads/existingfile -> /downloads/existingfile

    My current problem is that I have either the redirection to PHP working but static files not served, or the opposite. Here is my current vhost conf (which redirects fine, but static files are sent to PHP and fail):

      server {
          listen 80; ## listen for ipv4; this line is default and implied
          server_name domain.com;
          root /data/www;
          index index.php index.html;

          location / {
              try_files $uri $uri/ /index.html;
          }

          error_page 404 /404.html;

          # redirect server error pages to the static page /50x.html
          error_page 500 502 503 504 /50x.html;
          location = /50x.html {
              root /usr/share/nginx/www;
          }

          location ^~ /downloads {
              fastcgi_pass unix:/var/run/php5-fpm.sock;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_index index.php;
              include fastcgi_params;
              try_files $uri @downloads;
          }

          location @downloads {
              rewrite ^ /downloads/index.php;
          }

          # pass the PHP scripts to FastCGI server
          location ~ \.php$ {
              try_files $uri =404;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_pass unix:/var/run/php5-fpm.sock;
              fastcgi_index index.php;
              include fastcgi_params;
          }
      }

    To clarify: the static files are symlinks created by /downloads/index.php. Thank you for your help.
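    A sketch of the usual shape for this: let the /downloads location only decide between "file exists" and "fall back to index.php", and keep all the FastCGI directives in the PHP location. The paths and the php-fpm socket are copied from the question; everything else is an assumption:

      location /downloads {
          # serve the file (or symlink target) if it exists, otherwise hand over to index.php
          try_files $uri $uri/ /downloads/index.php?$args;
      }

      location ~ \.php$ {
          try_files $uri =404;
          fastcgi_pass unix:/var/run/php5-fpm.sock;
          fastcgi_index index.php;
          include fastcgi_params;
      }

    With fastcgi_pass inside the ^~ /downloads block, every hit in that prefix goes to PHP regardless of try_files, which matches the symptom of static files "failing". Since the static files are symlinks, it may also be worth checking that nginx's disable_symlinks setting (off by default) allows following them.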

    Read the article

  • Can GNU sed (for Windows) handle Unicode? If so, is it a code-page/locale issue, or a switch?

    - by Peter.O
    I've been using GNU sed on and off for a couple of years now. It spins me out a bit sometimes, but it does a good job... for single-byte char sets! I now and then notice references to GNU sed being Unicode-aware, but the closest I've seen of this is its "binary" mode... and binary is not Unicode. Can GNU sed process a Unicode text file at code-point resolution, including and especially \r\n (Windows)? And if it can, does it expect UTF-8, UTF-16, or what? And how does sed detect the encoding?

    Read the article

  • (monit) What does failure "Changed" mean

    - by bresc
    Hi, I installed monit on my server and tried to monitor nginx:

      check process nginx with pidfile /var/run/nginx.pid
          start program = "/etc/init.d/nginx start"
          stop program = "/etc/init.d/nginx stop"
          group server

    And I get:

      Process 'nginx'
        status               Changed
        monitoring status    monitored
        data collected       Wed Mar 24 00:37:49 2010

    What does "Changed" mean? I couldn't find anything. Thx

    Read the article

  • Lighttpd domain redirection

    - by HTF
    I would like to redirect domains on HTTP/HTTPS:

      http://old.com  -> https://new.com
      https://old.com -> https://new.com

    I have to specify the SSL key/certificate for the old domain, but I'm not sure where I have to place these directives:

      $SERVER["socket"] == ":443" {
          ssl.engine = "enable"
          ssl.pemfile = "/etc/pki/tls/private/new.com.pem"
          ssl.ca-file = "/etc/pki/tls/certs/new.com.crt"
      }

      $SERVER["socket"] == ":80" {
          $HTTP["host"] =~ "old.com|new.com" {
              url.redirect = ( "^/(.*)" => "https://new.com:443/$1" )
          }
      }

    I was trying to add the code below, but Lighttpd reports configuration errors:

      $SERVER["socket"] == ":443" {
          $HTTP["host"] =~ "old.com" {
              url.redirect = ( "^/(.*)" => "https://new.com:443/$1" )
          }
          ssl.engine = "enable"
          ssl.pemfile = "/etc/pki/tls/private/old.com.pem"
          ssl.ca-file = "/etc/pki/tls/certs/old.com.crt"
      }
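    One commonly used layout for this (a sketch, assuming a lighttpd build new enough for SNI, roughly 1.4.24+): keep ssl.engine and a default certificate at socket level, and override ssl.pemfile plus the redirect inside the host conditional. The error in the attempt above may simply come from declaring the ":443" socket block twice, though that depends on the lighttpd version:

      $SERVER["socket"] == ":443" {
          ssl.engine  = "enable"
          # default certificate for the socket
          ssl.pemfile = "/etc/pki/tls/private/new.com.pem"
          ssl.ca-file = "/etc/pki/tls/certs/new.com.crt"

          # per-host certificate (served via SNI) and the redirect for the old domain
          $HTTP["host"] =~ "(^|\.)old\.com$" {
              ssl.pemfile = "/etc/pki/tls/private/old.com.pem"
              ssl.ca-file = "/etc/pki/tls/certs/old.com.crt"
              url.redirect = ( "^/(.*)" => "https://new.com/$1" )
          }
      }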

    Read the article

  • Nginx conditional not evaluating correctly

    - by cjc
    I'm running into a weird problem with nginx and how it evaluates conditionals. Here's the relevant configuration:

      set $cors FALSE;

      if ($http_origin ~* (http://example.com|http://dev.example.com:8000|http://dev2.example.com)) {
          set $cors TRUE;
      }

      if ($request_method = 'OPTIONS') {
          set $cors $cors$request_method;
      }

      if ($cors = 'TRUE') {
          add_header 'Access-Test' "$cors";
          add_header 'Access-Control-Allow-Origin' "$http_origin";
          add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
          add_header 'Access-Control-Max-Age' '1728000';
      }

      if ($cors = 'TRUEOPTIONS') {
          add_header 'Access-Test' "$cors";
          add_header 'Access-Control-Allow-Origin' "$http_origin";
          add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
          add_header 'Access-Control-Allow-Headers' 'X-Requested-With, X-Prototype-Version';
          add_header 'Access-Control-Max-Age' '1728000';
          add_header 'Content-Type' 'text/plain';
      }

    The conditional blocks never trigger. When I remove the conditions, I see that the "Access-Test" and "Access-Control-Allow-Origin" headers are set correctly, but, as noted, enabling the conditionals causes the headers not to be sent. I'm testing by running:

      curl -Iv -i --request "OPTIONS" -H "Origin: http://example.com" http://staging.example.com/

    Am I missing something obvious? I've tried the "if" with and without quotes, etc. This is nginx 1.2.9.
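    A sketch of the map-based alternative that is usually recommended instead of chained if/set blocks, since "if" in this context is only reliable for return/rewrite. The map goes in the http{} block; the origin list is copied from the question, everything else (server_name, status codes) is an assumption:

      map $http_origin $cors_origin {
          default  "";
          "~^http://(example\.com|dev\.example\.com:8000|dev2\.example\.com)$"  $http_origin;
      }

      server {
          listen 80;
          server_name staging.example.com;

          location / {
              # add_header skips the header entirely when the mapped value is empty
              add_header Access-Control-Allow-Origin  $cors_origin;
              add_header Access-Control-Allow-Methods 'POST, OPTIONS';
              add_header Access-Control-Max-Age       1728000;

              if ($request_method = OPTIONS) {
                  # preflight: answer directly; headers inside "if" replace the outer set
                  add_header Access-Control-Allow-Origin  $cors_origin;
                  add_header Access-Control-Allow-Methods 'POST, OPTIONS';
                  add_header Access-Control-Allow-Headers 'X-Requested-With, X-Prototype-Version';
                  add_header Access-Control-Max-Age       1728000;
                  return 204;
              }
          }
      }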

    Read the article

  • is there a man in the middle attacking to my server machine?

    - by GongT
    My server has worked well for about half a year, but a strange thing happened a few hours ago. The server has two IP addresses, 58.17.85.19 and 117.21.178.19. When I navigate to http://58.17.85.19, nothing is different from before. But http://117.21.178.19 returns a "302 Object moved" and becomes a "redirect loop".

    I did some tests ($cmd = "wget http://117.21.178.19/?xx=$RANDOM --max-redirect 0 -S --no-cache -O -"), step by step:

      1. Run $cmd on my PC and on my friend's (we live on two sides of China, far apart) - got 302.
      2. Run $cmd on this server - got 200 OK (content is the correct output of index.php).
      3. Run $cmd on another server in the same computer room - got 200 OK.
      4. telnet from my PC and build an HTTP request by hand - got 200 OK.
      5. Shut down php-fpm, run $cmd on my PC - got 302.
      6. Run $cmd on the server - 502 Bad Gateway.
      7. Shut down nginx, run $cmd on both the server and my PC - Connection refused.
      8. Create an iptables rule refusing any connection to 58.17.85.19:80, run "nc -l 80 -k -vvv" on the server, and run $cmd on my PC.

    In that last test, nc showed that the server accepted the connection (Connection from [my ip]) and then my connection was closed (Remove fd xx from list) - yet wget still dumped out a response and got a 302. I know that normally nc will accept the connection, dump the HTTP request from the client, and the client will then wait for a response; that connection should stay open (in fact the client will eventually close it because of a timeout), because nc can't give any response.

    So... where did my request go? Who sent a response to the client? Is there some virus on my server system? If so, why doesn't 58.17.85.19 have this error? Or was I attacked by a middleman?
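    One way to see whether the 302 is generated on the box at all is to capture the traffic on the server while repeating the test; if the redirect never appears on the server's own interface, it is being injected somewhere upstream. A sketch (tcpdump must be installed; the filter just follows port 80 on the affected address):

      tcpdump -i any -nn -s 0 -A 'tcp port 80 and host 117.21.178.19'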

    Read the article

  • Running localhost webapp projects under domain name using fiddler2

    - by user01
    I have a Tomcat server running on my local dev machine (running Windows 8), and I use Fiddler2 to assign an alias to localhost as my domain name (www.mydomainName.com), so my application's web pages open in the browser like this:

      http://www.mydomainName.com/myAppName/welcome.html

    instead of

      http://localhost:8080/myAppName/welcome.html

    But I want my webapp's page URLs to omit 'myAppName' and be something like:

      http://www.mydomainName.com/welcome.html

    How could I configure this?
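    Dropping the context path is normally done on the Tomcat side rather than in Fiddler, by deploying the application as the ROOT context. A sketch of the server.xml variant - the surrounding Host element is the stock Tomcat layout, docBase should point at wherever myAppName actually lives, and defining a Context in server.xml is only one of several ways to do this:

      <!-- inside the existing <Host name="localhost" appBase="webapps" ...> element -->
      <Context path="" docBase="myAppName" />

    Alternatively, renaming the deployed webapp to ROOT.war (or the webapps/ROOT directory) has the same effect without touching server.xml.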

    Read the article

  • Reversing a mod_rewrite rule

    - by KIRA
    I want to redirect accesses from

      http://www.domain.com/test.php?sub=subdomain&type=cars

    to

      http://subdomain.domain.com/cars

    I already have mod_rewrite rules to do the opposite:

      RewriteEngine on
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{HTTP_HOST} !^(www)\. [NC]
      RewriteCond %{HTTP_HOST} ^(.*)\.(.*)\.com [NC]
      RewriteRule (.*) http://www.%2.com/index.php?route=$1&name=%1 [R=301,L]

    What changes do I need to make to these rules to redirect requests from the script to the subdomain?
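    A sketch of the reverse direction, added alongside (not instead of) the existing rules. It reads sub and type out of the query string and drops the query string from the target with the trailing ?; the parameter order and the assumption that the request really arrives at /test.php are taken straight from the example URL:

      RewriteCond %{HTTP_HOST} ^www\.domain\.com$ [NC]
      RewriteCond %{QUERY_STRING} (?:^|&)sub=([^&]+)&type=([^&]+)
      RewriteRule ^test\.php$ http://%1.domain.com/%2? [R=301,L]

    The pattern would be ^/test\.php$ if the rule lives in server/vhost config rather than in a .htaccess file.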

    Read the article
