Search Results

Search found 50980 results on 2040 pages for 'http compression'.


  • Serving protected files using Nginx's X-Accel-Redirect header

    - by andybak
    I'm trying to serve protected files using this directive in my nginx.conf:

        location /secure/ {
            internal;
            alias /home/ldr/webapps/nginx/app/secure/;
        }

    I'm passing in paths in the form "/myfile.doc", and the file's path would be
    /home/ldr/webapps/nginx/app/secure/myfile.doc. I just get 404s when I access
    http://myserver/secure/myfile.doc. I've tried taking the trailing slash off
    the location directive and that makes no difference. Two questions:

    1. How do I fix it?
    2. How can I debug problems like this myself? How can I get nginx to report
       which path it's looking for? error.log shows nothing, and access.log just
       tells me which URL is being requested - that's the bit I already know!
       It's no fun trying things randomly without any feedback.

    Here's my entire nginx.conf:

        daemon off;
        worker_processes 2;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            server {
                listen 21534;
                server_name my.server.com;
                client_max_body_size 5m;

                location /media/ {
                    alias /home/ldr/webapps/nginx/app/media/;
                }

                location / {
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    fastcgi_pass unix:/home/ldr/webapps/nginx/app/myproject/django.sock;
                    fastcgi_pass_header Authorization;
                    fastcgi_hide_header X-Accel-Redirect;
                    fastcgi_hide_header X-Sendfile;
                    fastcgi_intercept_errors off;
                    include fastcgi_params;
                }

                location /secure {
                    internal;
                    alias /home/ldr/webapps/nginx/app/secure/;
                }
            }
        }

    EDIT: I'm trying some of the suggestions here. So I've tried:

        location /secure/ {
            internal;
            alias /home/ldr/webapps/nginx/app/;
        }

    both with and without the trailing slash on location. I've also tried moving
    this block before the "location /" directive. The page I linked to has ^~
    after 'location', giving:

        location ^~ /secure/ { ...etc...

    Not sure what that signifies, but it didn't work either!
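    For X-Accel-Redirect the upstream app has to send back a URI that matches
    the protected location, so with the config above the header value needs the
    /secure/ prefix. A minimal sketch of the Django side (view name and URL
    wiring are hypothetical, not from the question):

        from django.http import HttpResponse

        def secure_download(request, filename):
            # A real view would check the user's permissions here.
            response = HttpResponse()
            # Must carry the /secure/ prefix so it hits the internal location
            # block; sending the bare "/myfile.doc" form from the question
            # makes nginx look up a URI no location serves, hence the 404s.
            response['X-Accel-Redirect'] = '/secure/' + filename
            return response

    For the debugging half of the question: an nginx built with --with-debug
    plus "error_log logs/error.log debug;" will log the filesystem path each
    lookup tries to open.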

    Read the article

  • Some search keywords in the omnibox have stopped working

    - by pinouchon
    My problem is the same as described here:
    http://productforums.google.com/forum/#!topic/chrome/HE7ak4y92YU
    When I type a search in the Chrome omnibox, after typing enfr [space] Chrome
    replaces it with a search in the history, and I end up googling my search
    prefixed by the keyword: for example, I end up googling "enfr former"
    instead of translating "former" from English into French. (In my case the
    custom search engine is: name: enfr, keyword: enfr, URL:
    http://translate.google.fr/#en/fr/%s in the "Other search engines" options.)
    Chrome version: 21.0.1180.89. How do I restore the normal behavior
    (searching on the specified search engine instead of the default search)?
    Update: I have noticed that clearing browsing data while I did the search
    fixes the problem, but I am not sure whether it won't reappear.

    Read the article

  • Someone try to hack my site, want to understand the log

    - by garconcn
    I have a WordPress site hosted on CentOS 6. After seeing the following
    access log entry, I checked the server and it seems OK. Can anyone explain
    what this guy is trying to do? Did they get what they wanted? I have
    disabled allow_url_include and restricted open_basedir to the web dir and
    tmp (/etc is not in the path).

        190.26.208.130 - - [05/Sep/2012:21:24:42 -0700] "POST http://my_ip/?-d%20allow_url_include%3DOn+-d%20auto_prepend_file%3D../../../../../../../../../../../../etc/passwd%00%20-n/?-d%20allow_url_include%3DOn+-d%20auto_prepend_file%3D../../../../../../../../../../../../etc/passwd%00%20-n HTTP/1.1" 200 32656 "-" "Mozilla/5.0"
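    URL-decoding the query string makes the intent clearer; it has the shape of
    the php-cgi argument-injection attack (CVE-2012-1823), where query-string
    words get passed to the PHP binary as command-line switches:

        POST http://my_ip/?-d allow_url_include=On
                           -d auto_prepend_file=../../../../../../etc/passwd\0 -n

    -d sets a php.ini directive and -n skips the normal php.ini, so the goal is
    to make PHP prepend (i.e. dump) /etc/passwd into the response. It only
    works where PHP runs as plain CGI and the query string reaches php-cgi as
    argv, so a 200 response by itself does not mean the attacker got the file.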

    Read the article

  • Dynamic authentication realms in Apache

    - by Cogsy
    I have a front-end server acting as a gateway proxy for many (a dynamic
    'many') building monitors with embedded webservers. They are accessed with
    URLs like:

        http://www.example.com/monitor1/
        http://www.example.com/monitor2/
        ...

    I'm trying to restrict access to these monitors to only the users that own
    them. So what I need is a way of specifying rights to users or groups for
    specific directories. The standard auth mechanisms I see in Apache won't
    work, because I need to specify every location. I'd prefer some dynamic map
    or script. Any suggestions?
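    One workable pattern, sketched here under the assumption that the monitor
    list lives in a simple "monitor owner" map file, is to generate the
    per-monitor <Location> blocks from that map and Include the result, so
    Apache's standard auth directives stay static while a script supplies the
    dynamism:

        #!/bin/sh
        # Hypothetical generator: reads "monitor1 alice" style lines from
        # monitors.map and emits one <Location> block per monitor. The main
        # config then has: Include /etc/apache2/monitors.conf
        while read monitor owner; do
          cat <<EOF
        <Location /$monitor/>
            AuthType Basic
            AuthName "$monitor"
            AuthUserFile /etc/apache2/htpasswd
            Require user $owner
        </Location>
        EOF
        done < /etc/apache2/monitors.map > /etc/apache2/monitors.conf
        apache2ctl graceful

    Re-running the script and reloading picks up newly added monitors, which
    keeps the dynamic part out of Apache itself.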

    Read the article

  • Enable (or work around) Administrative shares in Windows 8

    - by Brado
    In using Windows 8, I've discovered that the administrative shares are
    disabled, and there seems to be no easy way to get them re-enabled. Is
    anyone aware of a workaround or solution? I did not have this issue with
    Windows 7 after disabling UAC; in Windows 8, however, that still doesn't
    work. This is all I could find, but I am not satisfied with the information
    provided:
    http://www.computerperformance.co.uk/win8/windows8-administrative-shares.htm
    http://www.tomsitpro.com/articles/windows_8-file_sharing-windows_administrative_shares,2-195.html
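    A commonly cited workaround for administrative shares on UAC-era Windows -
    offered as a sketch, not a verified Windows 8 fix - is the
    LocalAccountTokenFilterPolicy value, which stops UAC's remote restrictions
    from filtering the administrator token for local accounts:

        rem Run in an elevated command prompt; a reboot (or at least a restart
        rem of the Server service) is usually needed afterwards.
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" ^
            /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f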

    Read the article

  • How to sync (or at least view) public / team / shared calendar to Blackberry using BES?

    - by 3rdparty
    Trying to allow 3 people to view and ideally sync (create/edit) common
    (team) calendar events via BlackBerry and hosted Exchange 2007 BES. My
    understanding is that BES does not support wirelessly syncing anything
    other than a user's primary calendar. From what I've researched, the only
    supported workflow is for a user to create an event in the public calendar
    in Outlook and then invite team members individually as optional attendees,
    so the event displays in their calendars (and on their BlackBerrys). I've
    seen some 3rd-party utilities that claim to support syncing of public
    folders/calendars:

        Add2Outlook: http://www.diditbetter.com/add2outlook.aspx
        WICKSoft: http://www.wicksoft.com/contacts_calendars.htm (needs to be
        installed on the local Exchange server)

    I've also been told I can sync public/other calendars using Desktop
    Manager, but I need to avoid any tethered sync with this environment. Am I
    missing an easier workflow here? There must be tens of thousands of BES
    users who require the ability to view/share a public, shared or team
    calendar on their BlackBerry. How can I solve this?

    Read the article

  • Ways to improve completeness of files for data recovery and scanning?

    - by SteveO
    I am using R-Studio for data recovery on one of my NTFS partitions. There
    is a PDF file of about 16MB, but the software can only recover 15MB of it.
    So I am wondering what can be done to improve the quality of the scan and
    recovery. I am looking around its preferences, but I am not sure whether
    there are adjustable parameters for scanning and recovery that can be
    fine-tuned to improve the results. R-Studio has a free demo version, for
    which scanning is free but recovery isn't. It is downloadable from
    http://www.data-recovery-software.net/Data_Recovery_Download.shtml and its
    manual is here: http://www.r-tt.com/downloads/Recovery_Manual.pdf. I have
    tried my best to search for answers in the manual but failed to find any.
    Their technical support is not as good as their software and is usually
    unhelpful, in my opinion. Thanks!

    Read the article

  • nginx: location, try_files, rewrite: Find pattern match in subfolder, else move on?

    - by Nick
    I'd like for nginx to do the following: if the URI matches the pattern
    http://mysite.com/$string/ (where $string is any letters, numbers, or
    dashes, but not 'KB' and not 'images'), look for $string.html in a specific
    subfolder. If $string.html exists in the subfolder, return it; if it does
    not exist, move on to the next matching location. For example, if the user
    requests:

        http://mysite.com/test/

    it should look for a file called:

        /webroot/www/myfolder/test.html

    I've tried variations of:

        location ~ /[a-zA-Z0-9\-]+/ {
            try_files /myfolder/$uri.html @Nowhere;
        }

    But: (1) it doesn't seem to find the file even when it does exist, and (2)
    if it fails (which is always, right now), it jumps to the @Nowhere location
    rather than moving on and trying to find another location that matches. I'd
    like it to consider the current location "not a match" if the file doesn't
    exist.
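    Two details worth knowing: $uri already contains the surrounding slashes
    (so /myfolder/$uri.html expands to /myfolder//test/.html), and nginx never
    "falls through" to another location once a regex location has matched - the
    fallback has to be spelled out. A sketch using a named capture, assuming a
    @fallback block that reproduces whatever the next location would have done:

        location ~ "^/(?!(?:KB|images)/$)(?<page>[a-zA-Z0-9-]+)/$" {
            root /webroot/www;
            # Checks /webroot/www/myfolder/test.html for a request to /test/.
            try_files /myfolder/$page.html @fallback;
        }

        location @fallback {
            # nginx cannot resume location matching here; replicate the
            # default handler instead (e.g. proxy_pass http://backend;).
        }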

    Read the article

  • How does hadoop decide what its nodes hostnames are?

    - by Dan R
    Currently the URLs generated by the JobTracker and NameNode return either
    hostnames like bubbles.local or just bubbles. These end up not resolving
    unless the client machine has specified them in its /etc/hosts file. When I
    run the hostname command on these machines, it returns a hostname complete
    with the domain (e.g. bubbles.example.com). Running a small Java test on
    these machines:

        InetAddress addr = InetAddress.getLocalHost();
        byte[] ipAddr = addr.getAddress();
        String hostname = addr.getHostName();
        System.out.println(hostname);

    produces output just like the hostname command. Where else could Hadoop be
    grabbing a hostname to use in its JobTracker / NameNode UI? This is
    occurring in clusters with Hadoop 1.0.3 and 1.0.4-SNAPSHOT from early
    August. The machines are running CentOS release 5.8 (Final). The generated
    URLs I'm referring to are like this:

        http://example:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
        http://example.local:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=/
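    One place Hadoop of that era gets names that java.net.InetAddress does not
    is reverse DNS on the DataNode's interface address (via its DNS helper
    class and the dfs.datanode.dns.interface / dfs.datanode.dns.nameserver
    settings), so it is worth comparing what forward and reverse lookups say on
    each node; a quick sketch:

        hostname -f            # what the OS resolver reports
        host $(hostname -i)    # the PTR record - what reverse DNS reports
        # If hostname -i returns a 127.x address, /etc/hosts ordering is the
        # likely culprit. In Hadoop 1.x these keys influence the registered name:
        #   dfs.datanode.dns.interface   (default: "default")
        #   dfs.datanode.dns.nameserver  (default: "default")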

    Read the article

  • Chrome Residual Redirect to Login Page

    - by Shadow503
    My college redirects people in the dorms to a login page when using an
    ethernet (or wifi) connection. I am now at home, and certain domains keep
    redirecting to this login page. I've tried running ipconfig /flushdns, and
    I flushed Chrome's local DNS cache as described here: How to clear/flush
    the DNS cache in Google Chrome?. Interestingly enough, while
    http://www.reddit.com redirects to the login page,
    http://www.reddit.com/r/funny works. Firefox works fine for both URLs. Is
    there a way to fix this without deleting all of my cookies? Thanks!

    Read the article

  • Load balanced proxies to avoid an API request limit

    - by ClickClickClick
    There is a certain API out there which limits the number of requests per
    day per IP. My plan is to create a bunch of EC2 instances with elastic IPs
    to sidestep the limitation. I'm familiar with EC2 and am just interested in
    the configuration of the proxies and a software load balancer. I think I
    want to run a simple TCP proxy on each instance and a software load
    balancer on the machine I will be requesting from - something that allows
    the following to return a response from a different IP (round robin,
    availability, doesn't really matter):

        curl http://www.bbc.co.uk -x http://myproxyloadbalancer:port

    Could anyone recommend a combination of software, or even a link to an
    article that details a pleasing way to pull it off? (My client won't be
    curl but is proxy-aware; I'll be making the requests from a Ruby script.)
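    A minimal sketch of the balancer half, assuming HAProxy in front of an HTTP
    proxy (e.g. Squid or tinyproxy) listening on port 3128 on each EC2 instance
    - hostnames and ports here are placeholders:

        # haproxy.cfg - round-robins proxy connections across the EC2 proxies.
        # mode tcp just relays bytes, so the client treats HAProxy itself as
        # its HTTP proxy and each connection exits via a different instance.
        defaults
            mode tcp
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        listen proxypool
            bind *:8080
            balance roundrobin
            server ec2proxy1 ec2-host-1.example.com:3128 check
            server ec2proxy2 ec2-host-2.example.com:3128 check

    With that in place, curl http://www.bbc.co.uk -x http://balancerhost:8080
    returns a response from whichever instance HAProxy picked.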

    Read the article

  • Need help troubleshooting highly variable ping times

    - by Elliot.Bradshaw
    I'm at work using Citrix (think Remote Desktop) to connect to client sites.
    With my job I have to write a fair bit of code while I'm connected remotely
    via Citrix, so the latency of my internet connection is important. If I'm
    getting ping times above 250ms, it becomes almost impossible to scroll,
    click or type with accuracy.

    Recently my Comcast business internet has been exhibiting highly variable
    ping times. If I ping google.com, I'll get pings that range from 9ms all
    the way up to 1300ms. The problem seems to be at its worst during the hours
    of 1PM to 4:30PM; outside of those hours the variance settles down, mostly
    between 9ms and 50ms. The signal-to-noise ratio and upstream power are both
    fine on my modem - the values are here: http://pastebin.com/D4hWGPXf

    I ran a traceroute from my computer to google.com (the results of which are
    here: http://pastebin.com/GcdjYvMh) and did another test ping to the IP of
    the first hop outside of our local network (73.98.44.1) - the variance in
    ping times existed in exactly the same manner as if I were pinging Google.
    Connecting directly to the cable modem by CAT5 makes no difference. Here is
    a screenshot demonstrating the variance of the ping times:
    http://postimage.org/image/haocdeauv/full/ - as you can see, it can get
    pretty bad.

    Three Comcast techs have been out (two of them were here when the problem
    wasn't happening) and they, as well as the regional tier 2 Comcast support,
    were unable to diagnose the problem. I now have a ticket open with tier 3
    support, but have yet to hear back from them. Does anyone know what could
    cause these sorts of problems, or have any idea from the traceroute above
    where it could be originating? The regional tier 2 guy tried to tell me
    that what I'm seeing is normal - are highly variable ping times like that
    ever acceptable? Anything I should ask Comcast to do or look at to get this
    problem fixed? Any tips/advice much appreciated!

    Edit: This is Comcast cable internet at a small start-up. We've ruled out
    congestion in our private LAN as a cause (i.e., no one's watching YouTube
    when the pings become variable).

    Update: Tier 3 Comcast support advised swapping out the modem; a tech came
    here today and did that - same problem persists.
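    For pinning down where the variance enters, a long-running per-hop view is
    more persuasive to an ISP than one-off pings; a sketch using mtr (Linux or
    macOS; WinMTR is the usual Windows equivalent), run during the bad
    afternoon window:

        # Sample once a second for ~30 minutes and save a report; the first
        # hop whose worst/stddev columns blow up is where the queueing starts.
        mtr --report --report-cycles 1800 google.com > mtr-afternoon.txt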

    Read the article

  • Convert apache rewrite rules to nginx

    - by Shiyu Sekam
    I want to migrate an Apache setup to nginx, but I can't get the rewrite
    rules working in nginx. I had a look at the official nginx documentation,
    but I still have some trouble converting it:
    http://nginx.org/en/docs/http/converting_rewrite_rules.html
    I've used http://winginx.com/en/htaccess to convert my rules, but this only
    works partly: the / part looks okay, the /library part as well, but the
    /public part doesn't work at all.

    Apache part:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /srv/www/Web

            <Directory /srv/www/Web>
                Order allow,deny
                Allow from all

                RewriteEngine On
                RewriteRule ^$ public/ [L]
                RewriteRule (.*) public/$1 [L]
            </Directory>

            <Directory /srv/www/Web/library>
                Order Deny,Allow
                Deny from all
            </Directory>

            <Directory /srv/www/Web/public>
                RewriteEngine On

                RewriteCond %{QUERY_STRING} ^pid=([0-9]*)$
                RewriteRule ^places(.*)$ index.php?url=places/view/%1 [PT,L]

                # Extract search query in /search?q={query}&l={location}
                RewriteCond %{QUERY_STRING} ^q=(.*)&l=(.*)$
                RewriteRule ^(.*)$ index.php?url=search/index/%1/%2 [PT,L]

                # Extract search query in /search?q={query}
                RewriteCond %{QUERY_STRING} ^q=(.*)$
                RewriteRule ^(.*)$ index.php?url=search/index/%1 [PT,L]

                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                # Rewrite all other URLs to index.php/URL
                RewriteRule ^(.*)$ index.php?url=$1 [PT,L]
            </Directory>

            Order deny,allow
            deny from all

            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            AddHandler php5-fcgi .php
            Action php5-fcgi /php5-fcgi
            Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
            FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /var/run/php5-fpm.sock -pass-header Authorization

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Nginx config:

        server {
            #listen 80; ## listen for ipv4; this line is default and implied

            root /srv/www/Web;
            index index.html index.php;
            server_name localhost;

            location / {
                rewrite ^/$ /public/ break;
                rewrite ^(.*)$ /public/$1 break;
            }

            location /library {
                deny all;
            }

            location /public {
                if ($query_string ~ "^pid=([0-9]*)$"){
                    rewrite ^/places(.*)$ /index.php?url=places/view/%1 break;
                }
                if ($query_string ~ "^q=(.*)&l=(.*)$"){
                    rewrite ^(.*)$ /index.php?url=search/index/%1/%2 break;
                }
                if ($query_string ~ "^q=(.*)$"){
                    rewrite ^(.*)$ /index.php?url=search/index/%1 break;
                }
                if (!-e $request_filename){
                    rewrite ^(.*)$ /index.php?url=$1 break;
                }
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }
        }

    I haven't written the original ruleset, so I'm having a hard time
    converting it. Would you mind giving me a hint how to do it easily, or can
    you help me convert it, please? I really want to switch over to php5-fpm
    and nginx :) Thanks
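    A sketch of a more idiomatic translation, on the assumption that everything
    is meant to be served out of /public and funneled into index.php. nginx has
    no equivalent of %1 captures from the query string, but $arg_pid, $arg_q
    and $arg_l carry the same values:

        server {
            # Serve from the public folder directly instead of rewriting into it.
            root /srv/www/Web/public;
            index index.php;

            location /library { deny all; }

            location / {
                # Existing files/dirs win; all else goes to the front controller.
                try_files $uri $uri/ @router;
            }

            location = /places {
                if ($arg_pid) {
                    rewrite ^ /index.php?url=places/view/$arg_pid last;
                }
                try_files $uri @router;
            }

            location = /search {
                if ($arg_l) {
                    rewrite ^ /index.php?url=search/index/$arg_q/$arg_l last;
                }
                if ($arg_q) {
                    rewrite ^ /index.php?url=search/index/$arg_q last;
                }
                try_files $uri @router;
            }

            location @router {
                rewrite ^/(.*)$ /index.php?url=$1 last;
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                include fastcgi_params;
            }
        }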

    Read the article

  • Nginx return 444 depending on upstream response code

    - by Mark
    I have nginx set up to pass to an upstream using proxy_pass. The upstream
    is written to return a 502 HTTP response on certain requests; rather than
    returning the 502 with all the headers, I would like nginx to recognise
    this and return 444 so nothing is returned. Is this possible? I also tried
    to return 444 on any 50x error, but it doesn't work either.

        location / {
            return 444;
        }

        location ^~ /service/v1/ {
            proxy_pass http://127.0.0.1:3333;
            proxy_next_upstream error timeout http_502;
            error_page 500 502 503 504 /50x.html;
        }

        location = /50x.html {
            return 444;
        }

        error_page 404 /404.html;

        location = /404.html {
            return 444;
        }
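    One thing the config above is missing - offered as a sketch, not a verified
    fix - is proxy_intercept_errors, without which nginx relays an upstream's
    502 straight to the client and never consults error_page:

        location ^~ /service/v1/ {
            proxy_pass http://127.0.0.1:3333;
            # Let nginx act on upstream responses >= 300 instead of relaying them.
            proxy_intercept_errors on;
            error_page 500 502 503 504 = /drop;
        }

        location = /drop {
            internal;
            return 444;
        }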

    Read the article

  • Serving a default image with nginx

    - by ustun
    I have the following configuration in nginx:

        location /static/ {
            root /srv/kose/;
            expires 2w;
            access_log off;
        }

        location / {
            proxy_pass http://127.0.0.1:8089;
        }

    If a file is not found in /static/, I want to serve a default image and not
    proxy_pass to 8089. Currently it looks for the file in the root for static,
    and if it cannot find it, it tries the proxy. I have tried the following,
    but it doesn't work. How can I tell nginx to serve the default image? I
    have also tried try_files, to no avail.

        location /static/ {
            root /srv/kose/;
            expires 2w;
            access_log off;
            error_page 404 /srv/static/defaultimage.jpg;
        }

        location / {
            proxy_pass http://127.0.0.1:8089;
        }
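    error_page takes a URI, not a filesystem path, which may be why the attempt
    above falls through. A sketch that keeps the fallback inside the static
    location, assuming the image exists at /srv/kose/static/defaultimage.jpg:

        location /static/ {
            root /srv/kose/;
            expires 2w;
            access_log off;
            # The last parameter is a URI served via internal redirect when
            # $uri is missing, so the request never reaches the proxy.
            try_files $uri /static/defaultimage.jpg;
        }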

    Read the article

  • Lighttpd based server issues crop up when port forwarding

    - by michael
    I have four host computers running lighttpd webservers. They are sitting
    behind an HSPA modem, each occupying an HTTP port between 81 and 84 (80 is
    taken by the modem itself). The port forwarding is set up correctly;
    however, only a portion of any webpage I request from any of the hosts
    comes through (they all fail after about 20% of the page). If I put the
    host on port 81 into the DMZ, it serves pages fine. The others do not
    respond to the DMZ treatment. Is it possible the web content on the hosts
    somehow requires ports aside from their respective HTTP port? Or is it
    possible that even though server.port is set in the lighttpd_ssl.conf file,
    the individual hosts are still expecting to serve on port 80? I am not
    familiar with lighttpd, nor did I set them up; they are running on video
    encoders I purchased. I can grab any files from them required for further
    information on the problem.

    Read the article

  • Access server by hostname without domain

    - by projectshave
    I want to access services on other machines on my home network with just their hostname. In every browser, "http://machine" fails, but adding a period in "http://machine./" works. Is there a way to avoid adding that extra period? My setup is a router with DD-WRT w/ DNSmasq turned on, Win7 machines and several Ubuntu VMs. nslookup works fine with just hostname. Remote desktop works, but TightVNC needs the extra period. ssh needs the period. As I said, all my browsers need the extra period. I'd prefer a solution that doesn't require manually maintaining the hosts file. Thanks.
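    The trailing dot works because it makes the name fully qualified and skips
    the resolver's suffix search; the usual cure is to give the LAN a domain so
    clients have a suffix to search with. A sketch of the dnsmasq side (the
    "lan" domain is an arbitrary choice; on DD-WRT these typically go in the
    Additional DNSMasq Options box):

        # /etc/dnsmasq.conf
        domain=lan        # hand out "lan" as the network domain (DHCP option 15)
        expand-hosts      # make plain hostnames also resolve as host.lan

    Clients that get their DNS settings via DHCP pick up the suffix
    automatically; statically configured machines need a connection-specific
    DNS suffix set by hand. After that, http://machine resolves as machine.lan
    without the trailing period.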

    Read the article

  • Why can't we reach some (but not all) external web service via VPN connection?

    - by Paul Haldane
    At work (UK university) we use a set of Windows servers running WS2008R2
    and RRAS which offer VPN service to students in our accommodation. We do
    this to associate the network connections with individuals. Before they've
    connected to the VPN, all they can talk to is the stuff that's needed to
    set up the VPN and a local web site with documentation on how to connect.
    Medium term we'll probably replace this, but it's what we're using at the
    moment.

    VPN on the 2008 servers allocates clients a private (10.x) address. Access
    to external sites is through NAT on the campus routers (same as any other
    directly connected client on a private address). Non-VPN connections aren't
    seeing this problem. Older servers run WS2003 and ISA 2004; that setup
    works but has become unreliable under load. The big difference there was
    that we were allocating non-RFC1918 addresses to the clients (so no NAT
    required).

    The behaviour we're seeing is that once connected to the VPN, clients can
    reach local web sites (that is, sites on the campus network) but only some
    external sites. It seems (but this may be chance) that the sites we can
    reach are Google ones (including YouTube). We certainly have trouble
    reaching Microsoft's Office 365 service (which is a pain, because that's
    where mail for most of our students is). One odd bit of behaviour is that
    clients can fetch (using wget on a Windows 7 client) http://www.oracle.com/
    (which gets a 301 redirect) but hang when asked to fetch
    http://www.oracle.com/index.html (which is what the first URL redirects
    to). Access works reliably if we configure clients to use our local web
    proxies (Squid).

    My gut tells me that this is likely to be something in the chain dropping
    replies, either based on HTTP inspection or on the IP address in the reply.
    However, I'm puzzled about why we're seeing this with the VPN clients. The
    plan for tomorrow (when I'm back in the office) is to set up a web server
    on an external connection so that we can monitor behaviour at both ends of
    the conversation (hoping that the problem manifests itself with our test
    server). Any suggestions for things we should be looking at?
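    The "small redirect arrives, full page hangs" pattern is the classic
    signature of a path-MTU problem: the VPN encapsulation shaves bytes off the
    effective MTU, and something in the chain drops the ICMP
    fragmentation-needed replies. A quick probe from a VPN-connected Windows 7
    client, sketched with its stock ping syntax:

        rem -f sets Don't Fragment, -l sets the payload size; walk the size
        rem down until replies appear. 1472 bytes = a full 1500-byte IP packet.
        ping -f -l 1472 www.oracle.com
        ping -f -l 1400 www.oracle.com
        ping -f -l 1300 www.oracle.com

    If large DF packets die while small ones pass, clamping the tunnel MTU (or
    the TCP MSS on the RRAS servers) would be the thing to test next.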

    Read the article

  • Different color prompts for different machines when using terminal/ssh?

    - by bcrawl
    Hi, I have 5 machines I constantly ssh into to do work. It's getting
    increasingly frustrating when I issue the wrong commands on the wrong
    boxes; luckily I haven't done anything bad yet. I wanted to know if there
    is any hack I can hardcode that will display my prompt in different colors
    based on the machine I have ssh'd into - such as blue for desktop1, purple
    for the laptop, red for the server, etc. Is this possible? Currently I am
    using this command, taken from
    http://www.cyberciti.biz/faq/bash-shell-change-the-color-of-my-shell-prompt-under-linux-or-unix/:

        export PS1="\e[0;31m[\u@\h \W]\$ \e[m "

    but it obviously doesn't work across ssh. Also, if you have any other cool
    bash tips for easing my eyes, that would be wonderful. I got this tip,
    which colors the man pages:
    http://linuxtidbits.wordpress.com/2009/03/23/less-colors-for-man-pages/
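    Since PS1 is set by the shell that starts on the remote machine, the usual
    trick is to put one case statement in each host's ~/.bashrc and let the
    hostname pick the color; a sketch (the color assignments are arbitrary):

        # ~/.bashrc on every host: color the prompt by hostname.
        case "$(hostname -s)" in
            desktop1) color='0;34' ;;   # blue
            laptop)   color='0;35' ;;   # purple
            server*)  color='0;31' ;;   # red
            *)        color='0;32' ;;   # green fallback
        esac
        # \[ \] around the escapes keeps bash's line-length accounting correct,
        # so long command lines don't wrap strangely.
        export PS1="\[\e[${color}m\][\u@\h \W]\$ \[\e[m\] "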

    Read the article

  • RedirectPermanent vs RewriteRule [R]

    - by notbrain
    I currently have a perm_redirects.conf file that gets included into my
    Apache config stack, where I have lines in the format:

        RedirectPermanent /old/url/path /new/url/path

    It looks like I'm required to use an absolute URL for the new path, e.g.
    http://example.com/new/url/path. In the logs I'm getting "incomplete
    redirect target /new/url/path was corrected to
    http://example.com/new/url/path" (paraphrased). In the 2.2 docs for
    RewriteRule, at the bottom, they show the following as being a valid
    redirect, with only the URL paths instead of an absolute URL on the
    right-hand side of the redirect:

        RewriteRule ^/old/url/path(.*) /new/url/path$1 [R]

    But I can't seem to get that format to work to replicate the functionality
    of the RedirectPermanent version. Is this possible?
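    A sketch of the mod_rewrite equivalent, with the caveat that the leading
    slash in the pattern is only present in server/virtual-host context - in
    per-directory or .htaccess context mod_rewrite strips the directory prefix
    before matching, which is a common reason the documented form appears not
    to work:

        RewriteEngine On

        # Server config / <VirtualHost> context: URL-path starts with a slash.
        RewriteRule ^/old/url/path(.*)$ /new/url/path$1 [R=301,L]

        # .htaccess / <Directory> context: the prefix (incl. the leading
        # slash) has been removed before the pattern is applied.
        RewriteRule ^old/url/path(.*)$ /new/url/path$1 [R=301,L]

    Either way the client still receives an absolute URL: mod_rewrite expands
    the path against ServerName when issuing the 3xx, which is the same
    correction RedirectPermanent is logging.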

    Read the article

  • How to configure Apache to act as an SSL proxy to an application server?

    - by ripper234
    I have one physical server that runs:

        an Apache (httpd) server
        another web server (let's say Tomcat for the sake of argument) on port 1234

    Can I configure the Apache server to act as a proxy for SSL traffic, while
    keeping the application server blissfully unaware of SSL? What I imagine
    is:

        Traffic to http://myserver.com/app is redirected to https://myserver.com/app
        Traffic to https://myserver.com/app is proxied to the application server
        My SSL certificate is only installed on the Apache server, not on the
        application server
        Other traffic to the Apache server (http://myserver.com/anotherapp) is
        served directly from the Apache server

    What's the best setup to achieve this? (On Ubuntu, if that matters.)
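    A minimal sketch of that layout with mod_ssl and mod_proxy (certificate
    paths and the DocumentRoot are placeholders):

        # Plain-HTTP vhost: push /app to HTTPS, serve everything else locally.
        <VirtualHost *:80>
            ServerName myserver.com
            RedirectPermanent /app https://myserver.com/app
            DocumentRoot /var/www
        </VirtualHost>

        # HTTPS vhost: terminate SSL here, hand /app to the app server in the clear.
        <VirtualHost *:443>
            ServerName myserver.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/myserver.crt
            SSLCertificateKeyFile /etc/ssl/private/myserver.key

            ProxyPass        /app http://localhost:1234/app
            ProxyPassReverse /app http://localhost:1234/app
        </VirtualHost>

    On Ubuntu, "a2enmod ssl proxy proxy_http" enables the needed modules; the
    application server only ever sees plain HTTP on localhost:1234.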

    Read the article

  • IIS asks for login/pass when accessed using hostname but not when ‘localhost’ is used. Why?

    - by sb
    Hi all, I have set up IIS on my XP machine and have set up a default
    homepage (the one that comes with IIS installed; it is a help page, I
    think). When I access the page with http://localhost it works fine
    (IE/Chrome or FF), but when I access it using http://hostname it prompts
    for a login/password, and works when I enter my domain ID and password on
    the intranet. I have ensured that "anonymous access" is enabled in the
    properties window of the default site and the "Web Sites" node. I searched
    Stack Overflow for similar queries; some indicate I need to change the
    IE/FF settings to allow "integrated security" and some suggest looking at
    the log file. I don't want to change the IE settings, and there is nothing
    unusual in the log file of the IIS server. Can anybody help me figure out
    why this is happening? Thank you, sb

    Read the article
