Search Results

Search found 30326 results on 1214 pages for 'html encode'.


  • Performance-wise .htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to improve website performance.

      # Defaults
      AddDefaultCharset UTF-8
      DefaultLanguage en-US
      ServerSignature Off
      FileETag None
      Header unset ETag
      Options -MultiViews
      #Options All -Indexes

      # Force the latest IE version or ChromeFrame
      <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
          BrowserMatch MSIE ie
          Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
        </IfModule>
      </IfModule>

      # Proxy X-UA Setup
      <IfModule mod_headers.c>
        Header append Vary User-Agent
      </IfModule>

      #Rewrites
      Options +FollowSymlinks
      RewriteEngine On
      RewriteBase /

      # Redirect to non-WWW
      RewriteCond %{HTTPS} !=on
      RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
      RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

      # Redirect to WWW
      RewriteCond %{HTTP_HOST} ^domain.com
      RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

      # Redirect index to root
      RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

      # Caching
      ExpiresActive On
      ExpiresDefault A0
      Header set Cache-Control "public"

      # 1 Year Long Cache
      <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
        ExpiresDefault A31622400
      </FilesMatch>

      # Proxy Caching
      <FilesMatch "\.(css|js|png)$">
        ExpiresDefault A31622400
        Header set Cache-Control "private"
      </FilesMatch>

      # Protect against DOS attacks by limiting file upload size
      LimitRequestBody 10240000

      # Proper SVG serving
      AddType image/svg+xml svg svgz
      AddEncoding gzip svgz

      # GZip Compression
      <IfModule mod_deflate.c>
        <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
          SetOutputFilter DEFLATE
        </FilesMatch>
      </IfModule>

      # Error page
      ErrorDocument 404 /404.html

      # Deny access to sensitive files
      <FilesMatch "\.(htaccess|ini|log|psd)$">
        Order Allow,Deny
        Deny from all
      </FilesMatch>
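
    One quick way to check whether the caching and compression rules actually take effect is to inspect the response headers of a static asset; the host and asset path below are placeholders:

      # GET an asset, discard the body, print the response headers; Accept-Encoding lets mod_deflate kick in
      curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://www.domain.com/css/style.css \
        | egrep -i 'cache-control|expires|etag|vary|content-encoding'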

    Read the article

  • Repackage m4v file

    - by Daniel Huckstep
    I have some m4v files I made with HandBrake where the AC3 audio track is the first one and the stereo track is the second. This causes problems with some players (like QuickTime), so I want to repackage the files so that the stereo track is the first audio track. I don't want to re-encode anything. Can I do this? Can I do it for free?
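
    For what it's worth, a remux can usually reorder tracks without re-encoding. A rough sketch with a reasonably recent ffmpeg, placeholder filenames, and the assumption that the stereo track is the second audio stream (0:a:1):

      # copy all streams as-is, but list the stereo track (0:a:1) before the AC3 track (0:a:0)
      ffmpeg -i input.m4v \
        -map 0:v:0 -map 0:a:1 -map 0:a:0 \
        -c copy \
        output.m4v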

    Read the article

  • Ripping software for Mac [closed]

    - by CT
    Possible Duplicate: Best Mac OS X DVD Ripper

    So HandBrake will rip DVDs and encode them, but what if they are protected? Is there any application out there that will remove any digital rights protection? What are others' preferred methods for backing up a DVD library to a Mac? Please include what software packages you use for reading, ripping, encoding, or playback. Thanks

    Read the article

  • NGINX: remove index.php, redirecting /index.php/something/more/ to /something/more

    - by Gaston
    I'm trying to clean up URLs in NGINX for an app built on the DooPHP framework, turning this:

      http://example.com/index.php/something/more/

    into this:

      http://example.com/something/more/

    I want to strip the "index.php" from the URL, as a permanent redirect, whenever someone requests the first form. How do I configure this in NGINX? Thanks. [Update: actual nginx config]

      server {
        listen 80;
        server_name vip.example.com;
        rewrite ^/(.*) https://vip.example.com/$1 permanent;
      }

      server {
        listen 443;
        server_name vip.example.com;

        error_page 404 /vip.example.com/404.html;
        error_page 403 /vip.example.com/403.html;
        error_page 401 /vip.example.com/401.html;

        location /vip.example.com {
          root /sites/errors;
        }

        ssl on;
        ssl_certificate /etc/nginx/config/server.csr;
        ssl_certificate_key /etc/nginx/config/server.sky;

        if (!-e $request_filename) {
          rewrite /.* /index.php;
        }

        location / {
          auth_basic "example Team Access";
          auth_basic_user_file config/htpasswd;
          root /sites/vip.example.com;
          index index.php;
        }

        location ~ \.php$ {
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /sites/vip.example.com$fastcgi_script_name;
          include fastcgi_params;
          fastcgi_param PATH_INFO $fastcgi_script_name;
        }
      }
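
    A minimal sketch of one possible answer, assuming the redirect should fire inside the listen 443 server block before the existing catch-all rewrite: strip the index.php prefix and issue a 301.

      # place above the "if (!-e $request_filename)" block
      rewrite ^/index\.php/(.+)$ /$1 permanent;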

    Read the article

  • Apache + PHP via FastCGI

    - by Wilco
    I'm running into some problems while attempting to run PHP via FastCGI in Apache. I have the FastCGI module loaded, but get the following error when attempting to load a page: The requested URL /fastcgi/php54.fcgi/index.php was not found on this server. Somewhere, it seems that the script to be executed is appended to the executable without any spaces. Is this where the problem likely is? Below I've included snippets from my Apache configuration (hopefully this is enough):

      LoadModule fastcgi_module libexec/apache2/mod_fastcgi.so

      FastCgiIpcDir /var/run/fastcgi
      AddHandler fastcgi-script .fcgi
      FastCgiConfig -autoUpdate -singleThreshold 100 -killInterval 300
      AddType application/x-httpd-php .php
      ScriptAlias /fastcgi/ /Library/WebServer/FCGI-Executables/

      <Directory "/Library/WebServer/FCGI-Executables">
        Options +ExecCGI
        SetHandler fastcgi-script
        Order allow,deny
        Allow from all
      </Directory>

      <VirtualHost *:80>
        ServerName www.somedomain.com
        ServerAdmin [email protected]
        DocumentRoot "/Web/www.somedomain.com"
        DirectoryIndex index.html index.php default.html
        CustomLog /var/log/apache2/access_log combinedvhost
        ErrorLog /var/log/apache2/error_log
        Action application/x-httpd-php /fastcgi/php54.fcgi

        <IfModule mod_ssl.c>
          SSLEngine Off
          SSLCipherSuite "ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM"
          SSLProtocol -ALL +SSLv3 +TLSv1
          SSLProxyEngine On
          SSLProxyProtocol -ALL +SSLv3 +TLSv1
        </IfModule>

        <Directory "/Web/www.somedomain.com">
          Options All -Indexes +ExecCGI +Includes +MultiViews
          AllowOverride All
          <IfModule mod_dav.c>
            DAV Off
          </IfModule>
          <IfDefine !WEBSERVICE_ON>
            Deny from all
            ErrorDocument 403 /customerror/websitesoff403.html
          </IfDefine>
        </Directory>
      </VirtualHost>

    ... and this is the executable:

      #!/bin/sh
      PHP_FCGI_CHILDREN=1
      PHP_FCGI_MAX_REQUESTS=5000
      export PHP_FCGI_CHILDREN
      export PHP_FCGI_MAX_REQUESTS
      exec /opt/local/bin/php-cgi54
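
    Two quick sanity checks suggested by that error, using the paths from the snippets above: confirm the wrapper script the Action directive points at actually exists and is executable, and confirm mod_fastcgi is really loaded.

      ls -l /Library/WebServer/FCGI-Executables/php54.fcgi   # must exist and be executable
      apachectl -M 2>/dev/null | grep -i fastcgi             # should list fastcgi_module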

    Read the article

  • Unable to watch a high resolution and high frame rate video

    - by Abhijith Madhav
    I have a video of a tennis match with the following properties (copied from the info shown by the Totem media player):

      Resolution: 1280 x 720
      Codec: H264
      Frame rate: 50 fps

    My laptop is not powerful enough to play this video smoothly. How can I reduce the frame rate of the video so that my laptop can play it? I have observed that my laptop plays 25 fps videos without an issue. I use Ubuntu, but I wouldn't mind using Windows to edit/re-encode this video.
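
    A minimal sketch with ffmpeg (avconv accepts the same options on older Ubuntu releases); the filenames are placeholders and the preset/CRF values are just reasonable defaults to tune:

      # drop the output to 25 fps, re-encode the video with x264, copy the audio untouched
      ffmpeg -i match-50fps.mp4 -r 25 -c:v libx264 -preset fast -crf 23 -c:a copy match-25fps.mp4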

    Read the article

  • Best video codec for a filmed PowerPoint presentation

    - by rslite
    I have some presentations that were filmed. The audio is the presenter and the video is just the PowerPoint slides (size 1024x768, video codec H264, audio codec AAC). I would like to reduce the final file size, since a one-hour presentation is about 800 MB. Most of that is the video part, which, as I said, is mostly PowerPoint slides that don't change much over a matter of several seconds. Which codec would be better suited to encoding these images and reducing the size of the end file?
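
    H.264 itself can handle near-static content well if it is told what the content is; a rough ffmpeg sketch, with the frame rate, CRF value and filenames as assumptions to experiment with rather than recommendations:

      # low output frame rate plus x264's still-image tuning; the audio is copied as-is
      ffmpeg -i talk.mp4 -r 5 -c:v libx264 -tune stillimage -crf 28 -c:a copy talk-small.mp4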

    Read the article

  • wget hangs at "HTTP request sent, awaiting response..." on some sites

    - by gkr
    Using Ubuntu 12.04. wget hangs at "HTTP request sent, awaiting response..." on some sites. Browsers also fail to open the sites that fail in wget, but on WinXP everything works.

    This works:

      gkr@gkr-desktop:~/Documents/curl$ wget google.com
      --2012-06-12 21:29:37--  http://google.com/
      Resolving google.com (google.com)... 74.125.236.174, 74.125.236.160, 74.125.236.161, ...
      Connecting to google.com (google.com)|74.125.236.174|:80... connected.
      HTTP request sent, awaiting response... 301 Moved Permanently
      Location: http://www.google.com/ [following]
      --2012-06-12 21:29:38--  http://www.google.com/
      Resolving www.google.com (www.google.com)... 74.125.236.179, 74.125.236.180, 74.125.236.176, ...
      Connecting to www.google.com (www.google.com)|74.125.236.179|:80... connected.
      HTTP request sent, awaiting response... 302 Found
      Location: http://www.google.co.in/ [following]
      --2012-06-12 21:29:38--  http://www.google.co.in/
      Resolving www.google.co.in (www.google.co.in)... 74.125.236.184, 74.125.236.191, 74.125.236.183, ...
      Connecting to www.google.co.in (www.google.co.in)|74.125.236.184|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: unspecified [text/html]
      Saving to: `index.html.3'

      [                  ] 13,383      --.-K/s   in 0.04s

      2012-06-12 21:29:39 (308 KB/s) - `index.html.3' saved [13383]

      gkr@gkr-desktop:~/Documents/curl$

    This site just stops/hangs at "awaiting response":

      gkr@gkr-desktop:~/Documents/curl$ wget grooveshark.com
      --2012-06-12 21:27:29--  http://grooveshark.com/
      Resolving grooveshark.com (grooveshark.com)... 8.20.213.76
      Connecting to grooveshark.com (grooveshark.com)|8.20.213.76|:80... connected.
      HTTP request sent, awaiting response... ^C
      gkr@gkr-desktop:~/Documents/curl$

    Thanks
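
    A few hedged diagnostics that often narrow down a "connects but never gets a response" symptom; the user-agent string, interface name and MTU value below are only examples:

      # rule out user-agent filtering and IPv6 oddities
      wget -4 --user-agent="Mozilla/5.0" http://grooveshark.com/
      # curl's verbose output shows exactly where the exchange stalls
      curl -v -o /dev/null http://grooveshark.com/
      # if large packets are being dropped along the path, a smaller MTU sometimes unsticks it
      sudo ip link set dev eth0 mtu 1400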

    Read the article

  • Creative CMV files converter

    - by viraptor
    Is there any application in Linux-land that can encode files into Creative's own .cmv format? I tried their C.Centrale, but it doesn't seem to work well through Wine. I need one that will work with the Zen MX, which doesn't accept the original XviDs/WMVs.

    Read the article

  • IIS ASP Redirect Removal

    - by Kim L
    We have a website that is set up on IIS 7 and are trying to replace it with a new site, but we need a redirect that is in place removed. The old site used a custom file as the homepage (WN-main.asp). We removed all the old site files, including web.config, and placed them in a subdirectory for safekeeping. The new site no longer uses ASP, and we'd like to use a regular index.html as the default. However, when we go to the website, it keeps trying to redirect our .com to .com/WN-main.asp, and that gives us a 404 error in the application for "Default Web Site" because we removed that page. In the IIS "Default Document" settings we have index.html at the top, and WN-main.asp is nowhere to be found in the list (it never was there). We've also removed the web.config file from the root directory, put the entire old website in a subdirectory, and restarted IIS. We're assuming the redirect is set up somewhere in IIS, because if I navigate to .com/index.html, which is our new site, it works. Our problem is that oursite.com redirects to oursite.com/WN-main.asp. Grr. If you go to www.worzalla.com you can see how it redirects to the WN-main.asp page right now as the homepage. Any ideas where this redirect could have been set up, so we can remove it? Thanks!
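
    If it helps, appcmd can dump the two IIS sections that usually carry this kind of behaviour; "Default Web Site" is assumed from the error message and may need to be replaced with the actual site name:

      %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:httpRedirect
      %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:defaultDocument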

    Read the article

  • ddclient to update namecheap subdomain?

    - by LF4
    I have a subdomain that I want to update with ddclient. I configured ddclient to get the IP from DynDNS, but it's not updating the subdomain on Namecheap. They said to use yourdomain.com as the login instead of my actual domain. Has anyone been able to get Namecheap DNS updated with ddclient? I'm running CentOS 6.2 with ddclient 3.7.3. When I run ddclient I get the following:

      CONNECT:  checkip.dyndns.org
      CONNECTED:  using HTTP
      SENDING:  GET / HTTP/1.0
      SENDING:   Host: checkip.dyndns.org
      SENDING:   User-Agent: ddclient/3.7.3
      SENDING:   Connection: close
      SENDING:
      RECEIVE:  HTTP/1.1 200 OK
      RECEIVE:  Content-Type: text/html
      RECEIVE:  Server: DynDNS-CheckIP/1.0
      RECEIVE:  Connection: close
      RECEIVE:  Cache-Control: no-cache
      RECEIVE:  Pragma: no-cache
      RECEIVE:  Content-Length: 106
      RECEIVE:
      RECEIVE:  <html><head><title>Current IP Check</title></head><body>Current IP Address: IPADD</body></html>
      Use of uninitialized value in string ne at /usr/sbin/ddclient line 1998.
      WARNING:  skipping update of lf4bot from <nothing> to IPADD
      WARNING:   last updated <never> but last attempt on Fri Jun 15 22:46:21 2012 failed.
      WARNING:   Wait at least 5 minutes between update attempts.

    ddclient.conf file:

      daemon=300                    # check every 300 seconds
      syslog=yes                    # log update msgs to syslog
      mail=root                     # mail all msgs to root
      mail-failure=root             # mail failed update msgs to root
      pid=/var/run/ddclient.pid     # record PID in file.
      ssl=yes                       # use ssl-support.  Works with
      use=web, web=checkip.dyndns.org/, web-skip='IP Address'  # found after IP Address

      protocol=namecheap \
      server=dynamicdns.park-your-domain.com \
      login=yourdomain.com \
      password=PASSWORD \
      lf4bot

    Read the article

  • How to configure IIS 7.5 to allow special chars in Url for ASP.NET 3.5?

    - by Sebastian P.R. Gingter
    I'm trying to configure my IIS 7.5 to allow special characters in the URL for ASP.NET. This is important to support widespread legacy URLs on a new system. Sample URL:

      http://mydomain.com/FileWith%inTheName.html

    This would be encoded in the URL and requested as:

      http://mydomain.com/FileWith%25inTheName.html

    This simply works when creating a new web in IIS 7.5, placing a file with the percentage sign in its name in the web root, and pointing the browser to it. It does not work, however, when the web site is an ASP.NET application. ASP.NET always returns a 400.0 - Bad Request error in the WindowsAuthentication module from the StaticFile handler when pointing to that URL. It does, however, display the requested URL correctly and also resolves it to the correct physical file (the information in the 'Physical Path' field of the server error page points to the physically available file). There are hints on how to enable this, so I followed the instructions on these websites step by step:

      http://dirk.net/2008/06/09/ampersand-the-request-url-in-iis7/
      http://adorr.net/2010/01/configure-iis-to-accept-url-with-special-characters.html

    The second one actually sums up the information from the first post and adds some more information about x64 systems (we're running x64) and an additional web.config change for this. I tried all that and still can't get this running from an ASP.NET web application. And yes: I rebooted after applying the registry changes. So, what do I have to do, in addition to the settings described in the above posts, to support legacy URLs which contain percentage characters? Additional info: the application pool mode is integrated. Bump after some days: no ideas, anyone?

    Read the article

  • Encoding multiple video streams with a single avconv invocation

    - by automatthias
    I played with avconv on Ubuntu and I'm now able to, for example, record the desktop with sound from the sound card. One thing I wanted to do was record two video inputs at the same time, for instance the desktop and the webcam. I thought about doing something like this:

      avconv \
        -f alsa \
        -i default \
        -acodec flac \
        -f video4linux2 \
        -r 6 \
        -i /dev/video0 \
        -f x11grab \
        -i :0.0 \
        out.mkv

    My thinking was that if you define multiple video inputs, and the .mkv format can handle multiple video streams, avconv will encode 2 video streams and 1 audio stream into one file. But this isn't what happens:

      avconv version 0.8.4-6:0.8.4-0ubuntu0.12.10.1, Copyright (c) 2000-2012 the Libav developers
        built on Nov  6 2012 16:51:11 with gcc 4.7.2
      [alsa @ 0x1091bc0] capture with some ALSA plugins, especially dsnoop, may hang.
      [alsa @ 0x1091bc0] Estimating duration from bitrate, this may be inaccurate
      Input #0, alsa, from 'default':
        Duration: N/A, start: 1354364317.020350, bitrate: N/A
          Stream #0.0: Audio: pcm_s16le, 48000 Hz, 2 channels, s16, 1536 kb/s
      [video4linux2 @ 0x10923e0] Estimating duration from bitrate, this may be inaccurate
      Input #1, video4linux2, from '/dev/video0':
        Duration: N/A, start: 100607.724745, bitrate: 29491 kb/s
          Stream #1.0: Video: rawvideo, yuyv422, 640x480, 29491 kb/s, 6 tbr, 1000k tbn, 6 tbc
      [x11grab @ 0x107b2a0] device: :0.0+83,87 -> display: :0.0 x: 83 y: 87 width: 854 height: 480
      [x11grab @ 0x107b2a0] shared memory extension found
      [x11grab @ 0x107b2a0] Estimating duration from bitrate, this may be inaccurate
      Input #2, x11grab, from ':0.0+83,87':
        Duration: N/A, start: 1354364318.488382, bitrate: 196761 kb/s
          Stream #2.0: Video: rawvideo, bgra, 854x480, 196761 kb/s, 15 tbr, 1000k tbn, 15 tbc
      Incompatible pixel format 'bgra' for codec 'mpeg4', auto-selecting format 'yuv420p'
      [buffer @ 0x107fcc0] w:854 h:480 pixfmt:bgra
      [avsink @ 0x10bdf00] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
      [scale @ 0x10dc680] w:854 h:480 fmt:bgra -> w:854 h:480 fmt:yuv420p flags:0x4
      Output #0, matroska, to '.../out.mkv':
        Metadata:
          encoder         : Lavf53.21.0
          Stream #0.0: Video: mpeg4, yuv420p, 854x480, q=2-31, 4000 kb/s, 1k tbn, 15 tbc
          Stream #0.1: Audio: libvorbis, 48000 Hz, 2 channels, s16
      Stream mapping:
        Stream #2:0 -> #0:0 (rawvideo -> mpeg4)
        Stream #0:0 -> #0:1 (pcm_s16le -> libvorbis)
      Press ctrl-c to stop encoding
      [mpeg4 @ 0x10bd800] rc buffer underflow
      ^Cframe=  160 fps= 15 q=2.0 Lsize=    3414kB time=10.66 bitrate=2623.0kbits/s
      video:3273kB audio:131kB global headers:4kB muxing overhead 0.165600%
      Received signal 2: terminating.

    I'm not sure whether it's a question of mapping (some -map options to add?) or whether avconv just can't encode more than one video stream at a time. So is it an actual avconv limitation, a limitation of the available containers, or me simply not finding the right combination of command line options?
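
    For what it's worth, without -map avconv keeps only one "best" video stream, so a sketch of the explicitly mapped variant would look like this (whether this particular 0.8 build then encodes both video streams cleanly is exactly the open question):

      avconv \
        -f alsa -i default \
        -f video4linux2 -r 6 -i /dev/video0 \
        -f x11grab -i :0.0 \
        -map 0 -map 1 -map 2 \
        -acodec flac -vcodec mpeg4 \
        out.mkv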

    Read the article

  • MySQL InnoDB: "is in the future! Current system log sequence number" errors

    - by Ward Loockx
    I have this error in my syslog. After some reading, I restored an older dump to solve the problem. The errors in syslog are now far fewer than before, but a few times an hour I still get:

      Sep 1 14:23:29 homer mysqld: 120901 14:23:29  InnoDB: Error: page 96637 log sequence number 7 1223357717
      Sep 1 14:23:29 homer mysqld: InnoDB: is in the future! Current system log sequence number 6 647303887.
      Sep 1 14:23:29 homer mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
      Sep 1 14:23:29 homer mysqld: InnoDB: tablespace but not the InnoDB log files. See
      Sep 1 14:23:29 homer mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
      Sep 1 14:23:29 homer mysqld: InnoDB: for more information.
      Sep 1 14:23:29 homer mysqld: 120901 14:23:29  InnoDB: Error: page 96638 log sequence number 8 150027924
      Sep 1 14:23:29 homer mysqld: InnoDB: is in the future! Current system log sequence number 6 647303887.
      Sep 1 14:23:29 homer mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
      Sep 1 14:23:29 homer mysqld: InnoDB: tablespace but not the InnoDB log files. See
      Sep 1 14:23:29 homer mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
      Sep 1 14:23:29 homer mysqld: InnoDB: for more information.
      Sep 1 14:23:29 homer mysqld: 120901 14:23:29  InnoDB: Error: page 96639 log sequence number 7 4208567151
      Sep 1 14:23:29 homer mysqld: InnoDB: is in the future! Current system log sequence number 6 647303887.
      Sep 1 14:23:29 homer mysqld: InnoDB: Your database may be corrupt or you may have copied the InnoDB
      Sep 1 14:23:29 homer mysqld: InnoDB: tablespace but not the InnoDB log files. See
      Sep 1 14:23:29 homer mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
      Sep 1 14:23:29 homer mysqld: InnoDB: for more information.

    Does anybody know how I can track down which database is causing this issue, and how to fix it?
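
    In case it is useful, the usual way out of "log sequence number is in the future" is a full logical dump followed by re-initialising the InnoDB files. A rough and destructive sketch; the paths assume a stock Debian/Ubuntu layout and everything should be backed up first:

      # 1. dump everything while the server still starts (innodb_force_recovery in my.cnf may be needed)
      mysqldump --all-databases --single-transaction --routines > all-databases.sql
      # 2. stop MySQL and move the system tablespace and redo logs out of the way
      service mysql stop
      mkdir -p /root/innodb-backup
      mv /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /root/innodb-backup/
      # 3. start MySQL (fresh files are created) and re-import the dump
      service mysql start
      mysql < all-databases.sql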

    Read the article

  • Forward real IP through Haproxy => Nginx => Unicorn

    - by Hendrik
    How do I forward the real visitor's IP address to Unicorn? The current setup is: HAProxy => Nginx => Unicorn. How can I forward the real IP address from HAProxy, to Nginx, to Unicorn? Currently it is always just 127.0.0.1. I also read that the X- headers are going to be deprecated (http://tools.ietf.org/html/rfc6648); how will this impact us?

    HAProxy config:

      # haproxy config
      defaults
        log        global
        mode       http
        option     httplog
        option     dontlognull
        option     httpclose
        retries    3
        option     redispatch
        maxconn    2000
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000

      # Rails Backend
      backend deployer-production
        reqrep ^([^\ ]*)\ /api/(.*) \1\ /\2
        balance roundrobin
        server deployer-production localhost:9000 check

    Nginx config:

      upstream unicorn-production {
        server unix:/tmp/unicorn.ordify-backend-production.sock fail_timeout=0;
      }

      server {
        listen 9000 default;
        server_name manager.ordify.localhost;
        root /home/deployer/apps/ordify-backend-production/current/public;
        access_log /var/log/nginx/ordify-backend-production_access.log;
        rewrite_log on;

        try_files $uri/index.html $uri @unicorn;

        location @unicorn {
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_redirect off;
          proxy_pass http://unicorn-production;
          proxy_connect_timeout 90;
          proxy_send_timeout 90;
          proxy_read_timeout 90;
        }

        error_page 500 502 503 504 /500.html;
        client_max_body_size 4G;
        keepalive_timeout 10;
      }
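
    A sketch of one common arrangement, assuming HAProxy and nginx sit on the same host and nginx was built with the realip module: HAProxy records the client address in X-Forwarded-For, and nginx trusts that header when it arrives from 127.0.0.1, so $remote_addr (and therefore the X-Real-IP header already set above) becomes the real client IP.

      # haproxy (mode http): append the client address to X-Forwarded-For
      defaults
        option forwardfor

      # nginx, http or server context: trust the loopback proxy and
      # recover the client address from the header it sets
      set_real_ip_from 127.0.0.1;
      real_ip_header   X-Forwarded-For;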

    Read the article

  • Relation between server_name in nginx sites-available, /etc/hosts file and A-records

    - by user2818584
    I have the following two server blocks in my config file in sites-available:

      server {
        listen 80;
        server_name www.mydomain.be;

        root /usr/share/nginx/html;
        index index.html index.htm;

        location / {
          try_files $uri $uri/ =404;
        }
      }

      server {
        listen 80;
        server_name sub.mydomain.be;

        root /usr/share/nginx/sub;
        index index.html index.htm;

        location / {
          try_files $uri $uri/ =404;
        }
      }

    I also created an A-record for both www.mydomain.be and sub.mydomain.be with the IP of my server as the value. Yet when I try to reload my nginx configuration with service nginx reload, it fails. When I remove the second server block, it reloads as expected.

    I know this topic is popular, and that there are loads of such [nginx][subdomain] questions here, but none of them seems to discuss explicitly how the following three things hang together:

      - virtual hosts or server blocks in nginx (esp. server_name matching)
      - the effect of A-records on how nginx processes requests
      - the need to add hosts to /etc/hosts

    Right now I have the impression that a lack of knowledge of this bigger picture, rather than of specific nginx configuration details, is what prevents me from making this work.
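
    Two hedged checks that separate the layers: nginx -t prints the actual parse error behind a failed reload (DNS records and /etc/hosts have no influence on whether a reload succeeds), while dig and getent show what the outside world and the box itself resolve the names to.

      nginx -t                        # the real reason the reload fails
      dig +short www.mydomain.be      # public DNS: should print the server's IP
      dig +short sub.mydomain.be
      getent hosts sub.mydomain.be    # local resolution, /etc/hosts included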

    Read the article

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website over dial-up and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page? Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:

      mkdir candy-canes-on-the-flannel-board-in-preschool
      cd candy-canes-on-the-flannel-board-in-preschool
      wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
      wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
      rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
      cd ../

    Now I realize the script is not as savvy as it could be, but it is doing what I need at the moment, except that, as you can see from the rm command, I would just like to prevent wget from downloading those files in the first place if possible. I almost forgot to mention: there are two wget commands because the first one downloads the page as index.html and for some reason it does not open in my browser; however, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page (really the same file, with an alternate name) opens up fine. Fixing that would also help streamline the process.
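
    A hedged sketch of the first wget call with an explicit reject list; -R/--reject takes a comma-separated list of file name suffixes or patterns to skip, and the list here is just a subset of the files named in the rm line:

      wget -p -nd -A jpg,html -k \
        -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg,robots.txt" \
        http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/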

    Read the article

  • Nginx configuration question

    - by Pockata
    Hey guys, I'm trying to make the autoindex feature run only for my IP address with this code:

      server {
        ...
        autoindex off;
        ...
        if ($remote_addr ~ ..*.*) {
          autoindex on;
        }
        ...
      }

    But it doesn't work; it gives me a 403 :/ Can someone help me? :) Btw, I'm using Debian Lenny and nginx 0.6 :)

    EDIT: Here's my full configuration:

      server {
        listen 80;
        server_name site.com;
        server_name_in_redirect off;
        client_max_body_size 4M;
        server_tokens off;
        # log_subrequest on;
        autoindex off;
        # expires max;

        error_page 500 502 503 504 /var/www/nginx-default/50x.html;
        # error_page 404 /404.html;

        set $myhome /bla/bla;
        set $myroot $myhome/public;
        set $mysubd $myhome/subdomains;

        log_format new_log '$remote_addr - $remote_user [$time_local] $request '
                           '"$status" "$http_referer" '
                           '"$http_user_agent" "$http_x_forwarded_for"';

        # Star nginx :@
        access_log /bla/bla/logs/access.log new_log;
        error_log /bla/bla/logs/error.log;

        if ($remote_addr ~ 94.156.58.138) {
          autoindex on;
        }

        # Subdomains
        if ($host ~* (.*)\.site\.org$) {
          set $myroot $mysubd/$1;
        }

        # Static files
        # location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
        #   access_log off;
        #   expires 30d;
        # }

        location / {
          root $myroot;
          index index.php index.html index.htm;
        }

        # PHP
        location ~ \.php$ {
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME $myroot$fastcgi_script_name;
          include fastcgi_params;
        }

        # .Htaccess
        location ~ /\.ht {
          deny all;
        }
      }

    I forgot to mention that when I add the code to remove static files from my access log, the static files can no longer be accessed. I don't know if that's relevant :)
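
    One alternative sketch that avoids toggling autoindex inside an if block altogether: keep listings confined to a dedicated location and let allow/deny decide who may see them (the /files/ path is an assumption, the address is yours):

      location /files/ {
        root $myroot;
        autoindex on;          # listings exist only under /files/
        allow 94.156.58.138;   # your address
        deny all;              # everyone else gets 403
      }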

    Read the article

  • NGINX rewrite for vanity URLs when file doesn't exist (try_files and rewrite together)

    - by user1721724
    I'm trying to get vanity URLs on my server. If the file path from the URL doesn't exist, I want to rewrite the URL to profile.php, but if my users have periods in their usernames, their vanity URL doesn't work. Here is my conf block:

      server {
        listen 80;
        server_name www.example.com;

        rewrite ^/([a-zA-Z0-9-_]+)$ /profile.php?url=$1 last;

        root /var/www/html/example.com;
        error_page 404 = /404.php;

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
          expires 1y;
          log_not_found off;
        }

        location ~ \.php$ {
          fastcgi_pass example_fast_cgi;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /var/www/html/example.com$fastcgi_script_name;
          include fastcgi_params;
        }

        location / {
          index index.php index.html index.htm;
        }

        location ~ /\.ht {
          deny all;
        }

        location /404.php {
          internal;
          return 404;
        }
      }

    Any help would be appreciated. Thanks!
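
    Two hedged options: the immediate fix is to add the dot to the character class in the rewrite; the pattern sketched below goes further and only falls back to profile.php when no real file or directory matches, so usernames with periods don't collide with actual files.

      # option 1: allow periods in the vanity pattern
      rewrite ^/([a-zA-Z0-9._-]+)$ /profile.php?url=$1 last;

      # option 2: let existing files win, then fall back to the profile handler
      location / {
        index index.php index.html index.htm;
        try_files $uri $uri/ @profile;
      }

      location @profile {
        rewrite ^/(.+)$ /profile.php?url=$1 last;
      }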

    Read the article

  • Facing application redirection issue on nginx+tomcat

    - by Sunny Thakur
    I am facing a strange issue with an application deployed on Tomcat, with nginx in front of Tomcat to access the application from the browser. I deployed the application on Tomcat and set up a virtual host in nginx under the conf.d directory (the file I created is virtual.conf); below is the content I am using for it:

      server {
        listen 81;
        server_name domain.com;
        error_log /var/log/nginx/domain-admin-error.log;

        location / {
          proxy_pass http://localhost:100;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Host $http_host;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
          root html;
        }
      }

    Now the issue: when I add rewrite ^(.*) http://$server_name$1 permanent; to the server section and access the URL, it redirects to https://domain.com and I am able to log in to the app and access the links (I am not using SSL redirection in this host file and I don't know why this happens). When I remove that line from the server section, I can access the application on port 81 and log in, but when I click on any link in the app it redirects me back to the login page. I am not getting anything in the application logs or the Tomcat logs. Please help me with this if it is a redirection issue in nginx. Thanks, Sunny

    Read the article

  • DNS issue on Fedora 12? wget wordpress.org fails where wget www.google.com works

    - by Tom Auger
    I'm administering a Fedora 12 box, but am quite new to networking specifics. Recently one of our WordPress apps hosted on the server has stopped being able to perform its auto-update or auto-download of plugins. Investigating further, I have tried the following:

      $ wget wordpress.org
      --2010-12-17 11:26:50--  http://wordpress.org/
      Resolving wordpress.org... failed: Temporary failure in name resolution.
      wget: unable to resolve host address 'wordpress.org'

    Whereas:

      $ wget www.google.com
      --2010-12-17 11:27:26--  http://www.google.com/
      Resolving www.google.com... 74.125.226.82, 74.125.226.84, 74.125.226.80, ...
      Connecting to www.google.com|74.125.226.82|:80... connected.
      HTTP request sent, awaiting response... 302 Found
      Location: http://www.google.ca/ [following]
      --2010-12-17 11:27:26--  http://www.google.ca/
      Resolving www.google.ca... 173.194.32.104
      Connecting to www.google.ca|173.194.32.104|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: unspecified [text/html]
      Saving to: 'index.html.4'

      [ <=>              ] 9,079       --.-K/s   in 0.02s

      2010-12-17 11:27:26 (462 KB/s) - 'index.html.4' saved

    Interestingly:

      $ ping wordpress.org
      PING wordpress.org (72.233.56.138) 56(84) bytes of data.
      64 bytes from wordpress.org (72.233.56.138): icmp_seq=1 ttl=50 time=81.5 ms
      64 bytes from wordpress.org (72.233.56.138): icmp_seq=2 ttl=50 time=67.3 ms
      ^C
      --- wordpress.org ping statistics ---
      2 packets transmitted, 2 received, 0% packet loss, time 1783ms
      rtt min/avg/max/mdev = 67.361/74.448/81.536/7.092 ms

    and:

      $ nslookup wordpress.org
      Server:    192.168.2.1
      Address:   192.168.2.1#53

      Non-authoritative answer:
      Name:   wordpress.org
      Address: 72.233.56.138
      Name:   wordpress.org
      Address: 72.233.56.139

    nscd has been stopped and flushed. iptables appear to be clean. At this point I have exhausted my limited abilities to diagnose the issue. Can anyone suggest a resolution path?
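
    A few hedged checks that usually separate "the DNS server is the problem" from "the local resolver configuration is the problem", given that ping and nslookup resolve the name while wget reports a temporary failure:

      cat /etc/resolv.conf               # the resolvers the libc resolver actually uses
      grep hosts /etc/nsswitch.conf      # the lookup order (files, dns, ...)
      getent hosts wordpress.org         # what the libc resolver (the thing wget uses) returns
      dig wordpress.org @192.168.2.1     # the resolver nslookup reported
      dig wordpress.org @8.8.8.8         # a known-good public resolver for comparison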

    Read the article
