Search Results

Search found 58094 results on 2324 pages for 'http status codes'.


  • Fedora 17 not saving iptables

    - by Louis W
    For some reason Fedora is not saving the changes I make to my iptables rules:

        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        iptables -I INPUT -p tcp --dport 443 -j ACCEPT
        service iptables status
        service iptables restart
        Redirecting to /bin/systemctl status iptables.service

    After a restart, my changes are gone. I also tried saving them:

        [root@VTM01 ~]# service iptables save
        Redirecting to /bin/systemctl save iptables.service
        Unknown operation save
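    On Fedora 17 the old SysV "save" verb is gone: service(8) just redirects to
    systemctl, which has no such operation, hence the "Unknown operation save"
    message. A minimal sketch of the usual workaround, assuming the stock rules
    file location:

        # write the live ruleset to the file iptables.service loads at boot
        iptables-save > /etc/sysconfig/iptables
        # make sure the service is enabled so the rules come back after a reboot
        systemctl enable iptables.service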


  • wget-ing protected content with exported cookies

    - by XXL
    I have exported a pair of cookies from Firefox that are valid for the URL in
    question and tried accessing/downloading the protected content at that
    address, but the end result is a return to the login page. I have tried
    roughly the same thing on three other websites with a similar outcome. Any
    clues as to what I might be doing wrong? The syntax I'm using:

        wget --load-cookies=FILE URL

        DEBUG output created by Wget 1.12 on linux-gnu.
        Stored cookie www.x.org -1 (ANY) / [expiry 1901-12-13 22:25:44] c_secure_login lz8xZQ%3D%3D
        Stored cookie www.x.org -1 (ANY) / [expiry 1901-12-13 22:25:44] c_secure_pass 2fd4e1c67a2d28fced849ee1bb76e74a
        Stored cookie www.x.org -1 (ANY) / [expiry 1901-12-13 22:25:44] c_secure_uid GZX4TDA%3D
        --2011-01-14 13:57:02--  www.x.org/download.php?id=397003
        Resolving www.x.org... 1.1.1.1
        Caching www.x.org => 1.1.1.1
        Connecting to www.x.org|1.1.1.1|:80... connected.
        Created socket 5.
        Releasing 0x0943ef20 (new refcount 1).
        ---request begin---
        GET /download.php?id=397003 HTTP/1.0
        User-Agent: Wget/1.12 (linux-gnu)
        Accept: */*
        Host: www.x.org
        Connection: Keep-Alive
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 302 Found
        Date: Fri, 14 Jan 2011 11:26:19 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.6-1+lenny8
        Set-Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003
        Vary: Accept-Encoding
        Content-Length: 0
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html
        ---response end---
        302 Found
        Stored cookie www.x.org -1 (ANY) / [expiry none] PHPSESSID 5f2fd97103f8988554394f23c5897765
        Registered socket 5 for persistent reuse.
        Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003 [following]
        Skipping 0 bytes of body: [] done.
        --2011-01-14 13:57:02--  www.x.org/login.php?returnto=download.php%3Fid%3D397003
        Reusing existing connection to www.x.org:80.
        Reusing fd 5.
        ---request begin---
        GET /login.php?returnto=download.php%3Fid%3D397003 HTTP/1.0
        User-Agent: Wget/1.12 (linux-gnu)
        Accept: */*
        Host: www.x.org
        Connection: Keep-Alive
        Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Date: Fri, 14 Jan 2011 11:26:20 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.6-1+lenny8
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Vary: Accept-Encoding
        Content-Length: 2171
        Keep-Alive: timeout=15, max=99
        Connection: Keep-Alive
        Content-Type: text/html
        ---response end---
        200 OK
        Length: 2171 (2.1K) [text/html]
        Saving to: `x.out'
        0K ..  100% 18.7M=0s
        2011-01-14 13:57:02 (18.7 MB/s) - `x.out' saved [2171/2171]
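    One detail in that log stands out: all three exported cookies carry an expiry
    of 1901-12-13 22:25:44, which is what a 32-bit Unix timestamp looks like after
    it has gone negative. Wget stores them but treats them as long expired, which
    is why the first request goes out with no Cookie header at all and you land on
    the login page. A hedged sketch of a fix: rewrite the expiry column (field 5
    of the Netscape cookies.txt format) to a far-future epoch and retry:

        # push every cookie's expiry (5th tab-separated field) to 2038-01-19
        awk 'BEGIN{FS=OFS="\t"} /^[^#]/ && NF>=7 {$5=2147483647} {print}' \
            cookies.txt > cookies.fixed.txt
        wget --load-cookies=cookies.fixed.txt 'http://www.x.org/download.php?id=397003'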


  • What is the best way to determine what's locking up SQL Server using a query?

    - by Aplato
    Recently our SQL Server has been getting bogged down by something, and I was
    wondering about the best way to check what could be causing the problem by
    querying the database. This is the best I've found so far:

        SELECT SPID         = s.spid
             , BlockingSPID = s.blocked
             , DatabaseName = DB_NAME(s.dbid)
             , ProgramName  = s.program_name
             , [Status]     = s.[status]
             , LoginName    = s.loginame
             , ObjectName   = OBJECT_NAME(objectid, s.dbid)
             , [Definition] = CAST([text] AS VARCHAR(MAX))
        FROM sys.sysprocesses s
        CROSS APPLY sys.dm_exec_sql_text(sql_handle)
        WHERE s.spid > 50
        ORDER BY DatabaseName, loginName
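    A hedged alternative that asks about blocking directly, using the
    sys.dm_exec_requests DMV instead of the older sysprocesses view (wrapped in
    sqlcmd here; the server name and auth flags are placeholders):

        sqlcmd -S . -E -Q "
        SELECT r.session_id,
               r.blocking_session_id,
               r.wait_type,
               r.wait_time,
               DB_NAME(r.database_id) AS database_name,
               t.text                 AS running_sql
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        WHERE r.blocking_session_id <> 0;"

    Rows only come back while something is actually blocked, so this is most
    useful run at the moment the server feels bogged down.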


  • .htaccess, mod_rewrite Issue

    - by Shoaibi
    What I want:

    - Force www [works]
    - Restrict access to *.inc.php [works]
    - Force redirection of abc.php to /abc/
    - Removal of the extension from the URL
    - Addition of a trailing slash if needed

    Old .htaccess:

        Options +FollowSymLinks
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        ### Force www
        RewriteCond %{HTTP_HOST} ^example\.net$
        RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301]
        ### Restrict access
        RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC]
        RewriteRule .* - [F,L]
        #### Remove extension:
        RewriteRule ^(.*)/$ /$1.php [L,R=301]
        ######### Trailing slash:
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !(.*)/$
        RewriteRule ^(.*)$ http://www.example.net/$1/ [R=301,L]
        </IfModule>

    New .htaccess:

        Options +FollowSymLinks
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        ### Force www
        RewriteCond %{HTTP_HOST} ^example\.net$
        RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301]
        ### Restrict access
        RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC]
        RewriteRule .* - [F,L]
        #### Remove extension:
        RewriteCond %{REQUEST_FILENAME} \.php$
        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule (.*)\.php$ /$1/ [L,R=301]
        #### Map pseudo-directory to PHP file
        RewriteCond %{REQUEST_FILENAME}\.php -f
        RewriteRule (.*) /$1.php [L]
        ######### Trailing slash:
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteCond %{REQUEST_FILENAME} !/$
        RewriteRule (.*) $1/ [L,R=301]
        </IfModule>

    Error log:

        Request exceeded the limit of 10 internal redirects due to probable
        configuration error. Use 'LimitInternalRecursion' to increase the limit
        if necessary. Use 'LogLevel debug' to get a backtrace., referer:
        http://www.example.net/

    Rewrite log: http://pastebin.com/x5PKeJHB
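    The recursion comes from the two new blocks feeding each other: the
    pseudo-directory rule rewrites /abc/ to /abc.php internally, and the
    extension-removal rule then sees a .php file and redirects it straight back
    to /abc/. The usual way out is to key the external redirect on
    %{THE_REQUEST}, the literal request line the browser sent, so it never fires
    on internal rewrites. A hedged sketch replacing those two blocks:

        #### Redirect only *.php requests the client literally made
        RewriteCond %{THE_REQUEST} \s/+([^.\s]+)\.php[\s?] [NC]
        RewriteRule ^ /%1/ [R=301,L]
        #### Map the pretty URL back onto the file, internally only
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.+?)/?$ /$1.php [L]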


  • htaccess for subdomain help

    - by Patrick
    Usually I just use the online tools for mod_rewrite rules, but this one just
    wouldn't work. Dynamic URL:

        http://sub.domain.com/index.php?page=index&name=test

    Rewritten URL:

        http://sub.domain.com/test  OR  http://sub.domain.com/test/

    My .htaccess:

        RewriteRule ^([^/]+)/?$ index.php?page=index&name=$1 [L]

    Instead of passing "test" for the variable name, I always get the value
    "index.php". Do any of you gurus have an idea?
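    "index.php" shows up because after the internal rewrite, index.php itself
    matches ^([^/]+)/?$ and is fed through the rule again. Guarding the rule so
    it skips requests for real files and directories is the standard fix; a
    minimal sketch:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([^/]+)/?$ index.php?page=index&name=$1 [L,QSA]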


  • xampp - can access control panel, cannot access projects/sites on local network

    - by Peter O.
    I've configured XAMPP and the firewall so I can access the desktop PC's
    localhost over my local network through the desktop PC's IP. But I'm not able
    to access the actual projects. I can access:

        http://192.168.x.x/xampp or http://192.168.x.x/phpMyAdmin

    But I cannot access:

        http://192.168.x.x/myWebsite/

    I get an error:

        Server error
        We're sorry! The server encountered an internal error and was unable to
        complete your request. Please try again later.
        error 500
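    A 500 on the project directory while /xampp and /phpMyAdmin work usually
    points at something inside the project itself, most often a .htaccess
    directive the server rejects. The Apache error log names the exact cause; a
    hedged sketch, assuming a Linux XAMPP layout (on Windows the file typically
    lives at C:\xampp\apache\logs\error.log):

        # show the most recent entries of the XAMPP Apache error log
        tail -n 50 /opt/lampp/logs/error_log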


  • Unable to install vlc and mplayer after update on fedora 18

    - by mahesh
    I just updated Fedora 18 using:

        # yum update

    Then if I try:

        # rpm -ivh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm

    I get:

        Retrieving http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm
        warning: /var/tmp/rpm-tmp.0K5pWw: Header V3 RSA/SHA256 Signature, key ID 172ff33d: NOKEY
        error: Failed dependencies:
            system-release >= 19 is needed by rpmfusion-free-release-19-1.noarch

    So I tried installing VLC from the development version:

        # rpm -ivh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-rawhide.noarch.rpm

    I get:

        Retrieving http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-rawhide.noarch.rpm
        warning: /var/tmp/rpm-tmp.WZC0gw: Header V3 RSA/SHA256 Signature, key ID 6446d859: NOKEY
        error: Failed dependencies:
            system-release >= 21 is needed by rpmfusion-free-release-21-0.1.noarch

    There's no system release after 20. What does this mean?
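    The dependency errors mean the "stable" URL currently serves the package
    built for Fedora 19 and the "rawhide" URL the one for Fedora 21, while the
    installed system-release is still 18. The usual fix is to fetch the release
    package that matches the running Fedora; a hedged sketch (the versioned URL
    is an assumption -- the F18 rpm can also be dug out of the releases/18 tree
    on the RPM Fusion mirrors):

        # install the RPM Fusion release package matching this Fedora version
        yum localinstall --nogpgcheck \
            http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-18.noarch.rpm
        yum install vlc mplayer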


  • I can't get my Macbook Pro to print to an IP addressable printer

    - by Pieter
    Running a MacBook Pro with OS X 10.6.3, accessing an HP OfficeJet 5610
    plugged into the USB port of a US Robotics router. I tried several
    combinations of:

        Protocol: Internet Printing Protocol (IPP)
                  Line Printer Daemon (LPD)
                  HP JetDirect (socket)
        Address:  http://192.168.1.10:1631/printers/HP5610
                  192.168.1.10:1631/printers/HP5610
                  http://192.168.1.10:1631
                  192.168.1.10:1631
                  192.168.1.10
                  ...
        Driver:   HP OfficeJet 5600 Series

    Whenever I try to print, it fails while saying "connected to printer" or
    "Printer is busy...will try again in X seconds". Both Windows 7 and Windows
    XP computers on the network can successfully access this printer, identifying
    it as "HP5610 on http://192.168.1.10:1631/". I tried clearing all tasks and
    printers (ctrl-click in the menu) and resetting it to (socket,
    http://192.168.1.10:1631/printers/HP5610, HP5600 series), but with no
    success.
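    Before fighting protocols and drivers, it may be worth confirming the Mac can
    reach that port at all -- 1631 comes from the share URL Windows reports and
    is the router's print-server port, not one of the standard IPP (631), LPD
    (515), or JetDirect (9100) ports. A quick hedged check from Terminal on the
    Mac:

        # probe the router's print-server port (address/port taken from the Windows share)
        nc -vz 192.168.1.10 1631

    If that fails to connect, the Mac-side queue settings are not the problem.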


  • Using VLC to Unicast High Definition Webcam over local gigabit LAN with low/zero delay

    - by Robin Day
    We're setting up a webcam "window" between two offices in the same building.
    The two PCs are connected to the same gigabit switch. We're using VLC to
    stream the webcam over HTTP using the following commands:

        vlc dshow:// :dshow-caching="0" :dshow-size="640x480" :sout=#transcode{vcodec=h264,vb=0,scale=0}:http{mux=ffmpeg{mux=flv},dst=:8080/} :no-sout-rtp-sap :no-sout-standard-sap :ttl=1 :sout-keep

        vlc http://192.168.0.1:8080 :http-caching="0"

    Even with the caching set to zero, the delay in the image is a good 2-3
    seconds, and the CPU usage of each PC is maxed. I'm guessing it's the
    transcoding that's causing much of the delay. Can anyone suggest changes to
    these command lines that will reduce the transcoding load, send the webcam
    over a different protocol, or otherwise reduce the delay? Bandwidth is not an
    issue at all, as the PCs can be connected to a dedicated switch/VLAN if
    required.
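    Two things in that pipeline add latency: FLV-over-HTTP output buffers
    heavily, and x264 without a latency tuning buffers frames for lookahead. A
    hedged sketch (receiver address and bitrate are placeholders) that keeps
    H.264 but switches to zero-latency encoding over RTP:

        # sender: ultrafast/zerolatency x264, RTP/TS instead of HTTP/FLV
        vlc dshow:// :dshow-caching=0 :dshow-size="640x480" \
          :sout='#transcode{vcodec=h264,venc=x264{preset=ultrafast,tune=zerolatency},vb=2000}:rtp{mux=ts,dst=192.168.0.2,port=5004}' \
          :no-sout-rtp-sap :no-sout-standard-sap :sout-keep

        # receiver: keep the network cache small (value in milliseconds)
        vlc rtp://@:5004 :network-caching=100

    Since bandwidth is no object on a gigabit LAN, another option is to skip
    transcoding entirely and stream the camera's raw or MJPEG feed, trading
    bandwidth for CPU and latency.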


  • Connection timed out on Node.js app running under CentOS

    - by ss1271
    I followed this tutorial to create a simple Node.js app on my CentOS box. The
    Node.js version is:

        $ node -v
        v0.10.28

    Here's my app.js:

        // Include http module,
        var http = require("http"),
        // And url module, which is very helpful in parsing request parameters.
            url = require("url");

        // show message at console
        console.log('Node.js app is running.');

        // Create the server.
        http.createServer(function (request, response) {
            request.resume();
            // Attach listener on end event.
            request.on("end", function () {
                // Parse the request for arguments and store them in _get variable.
                // This function parses the url from request and returns object representation.
                var _get = url.parse(request.url, true).query;
                // Write headers to the response.
                response.writeHead(200, {
                    'Content-Type': 'text/plain'
                });
                // Send data and end response.
                response.end('Here is your data: ' + _get['data']);
            });
        // Listen on the 8080 port.
        }).listen(8080);

    However, when I uploaded this app onto my remote server (assume the address
    is 123.456.78.9), I couldn't get access to it in my browser:

        http://123.456.78.9:8080/?data=123

    The browser returned ERR_CONNECTION_TIMED_OUT. The same app.js code runs fine
    on my local machine. Is there anything I am missing? I tried to ping the
    server and its address was reachable. Thanks.
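    createServer().listen(8080) binds to all interfaces by default, so if the app
    works locally the usual culprit on CentOS is the host firewall silently
    dropping port 8080. A hedged sketch to confirm and fix:

        # confirm the app is actually listening on 8080
        netstat -tlnp | grep :8080
        # open the port in iptables and persist the rule (CentOS defaults)
        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        service iptables save

    If the machine is a cloud instance, the provider-side firewall or security
    group has to allow inbound 8080 as well.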


  • How to determine if my AWS/EC2 server has been compromised / resolution?

    - by ElHaix
    I have recently seen an increase in network in/out activity on my server and
    am trying to determine if my AWS/EC2 instance has been compromised, and if
    so, how to resolve it. In my security group I have:

        Inbound:  80 (HTTP) 0.0.0.0/0
        Outbound: 80 (HTTP) 0.0.0.0/0
                  443 (HTTPS) 0.0.0.0/0

    Using TCP-UDP Endpoint Viewer, I see a lot of w3wp.exe TCP processes with
    varying local ports (http and numbered) as well as varying remote ports. Some
    processes go red/yellow/green on updates. The remote address for most w3wp
    processes is my EC2 instance, but I am seeing several to
    *.deploy.akamaitechnologies.com and *.deploy.static.akamaitechnologies.com
    with received bytes varying between 4 and 11 megs. I also see:

        Ec2Config.exe, remote address: 169.254.169.254
        System process, remote address: fetcher4-4.p.mail.ru (how can I get rid of this one?!)
            local port: http, remote port: 33432
        System processes from 114.216-244-93-rdns.wowrack.com:
            protocol: TCP, local port: http, remote port: varying

    As well as some baiduspider "System Process" entries. I'm afraid my system
    may have been compromised, and I'm wondering if these results are any
    indication of that. If so, how can I eliminate these possible threats? I have
    MS Security Essentials installed.


  • Is it possible to use Google Docs Viewer to view files already in Google Docs?

    - by john2x
    The title is a little confusing, so I'll elaborate. As far as I can tell, the
    Google Docs Viewer tool accepts a link to a raw document file (e.g. .doc,
    .pdf, etc.) and renders its contents in the browser. For example, this URL to
    a PDF:

        http://research.google.com/archive/bigtable-osdi06.pdf

    when passed to the Viewer, returns this link:

        http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf

    What I'm trying to achieve is to use the Viewer to view a document already
    hosted in Google Docs (i.e. no longer a raw document file). When passing a
    link to a Google Docs document to the Viewer, the result is not as expected:
    it renders the link's HTML source instead of the document's contents. The
    reason I want to do this is that I want to use the "embed" feature of the
    Viewer to view Google Docs documents. Does Google Docs have a "link to
    embeddable view" feature? P.S. Here is a sample snippet embedding a document.
    This is what I want, but pointing to an existing Google Docs document:

        <iframe src="http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf&embedded=true"
                width="600" height="780" style="border: none;"></iframe>


  • Subversion error: Repository moved permanently to please relocate

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to a repository
    through my web browser it works fine (http://svn.host.com/reposname).
    However, if I do a checkout on my machine I get the following error:

        Command: Checkout from http://svn.host.com/reposname, revision HEAD,
                 fully recursive, externals included
        Error:   Repository moved permanently to 'http://svn.host.com/reposname/';
                 please relocate

    I checked Apache's error log, but it didn't say anything (it does now -- see
    the edit below). My repositories are stored under /var/www/svn/repos/ and my
    website is stored under /var/www/vhosts/x/... Here's the conf file for the
    subdomain:

        <Location />
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    Authentication works fine. Does anyone know what might be causing this?

    Edit: So I restarted Apache (again) and tried it again, and now it gives me
    an error message, but it doesn't really help. Does anyone know what it means?

        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information. [403, #0]
        [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository. [403, #190001]

    Edit 2: svn info doesn't give anything useful either:

        [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/
        Username: username
        Password for 'username':
        svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate

    I also tried a local checkout (svn checkout file:///var/www/svn/repos/reposname)
    and that works fine (adding and committing work as well). So it seems to have
    something to do with Apache. Some other information: I'm running CentOS 5.3,
    Plesk 9.3, and Subversion 1.6.9 (r901367).

    Edit 3: I tried moving the repositories, but it didn't make any difference.
    SELinux is disabled, so that isn't it either.

    Edit 4: Really? Nobody :(?
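    Two things are worth separating here. The 403 "URI does not contain the name
    of a repository" appears when the URL points at the SVNParentPath root itself
    (as in the svn info http://svn.domain.com/repos/ test) rather than at a
    repository one level down. The "Repository moved permanently" error is the
    classic symptom of the request being answered by ordinary Apache content
    handling (mod_dir's trailing-slash redirect) instead of mod_dav_svn, which on
    Plesk boxes typically means another vhost or location is shadowing the DAV
    <Location>. A hedged sketch of a layout that avoids the overlap:

        # serve repositories under a dedicated prefix so vhost content
        # never shadows the DAV location
        <Location /svn>
            DAV svn
            SVNParentPath /var/www/svn/repos
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    and then checking out with svn checkout http://svn.host.com/svn/reposname.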


  • Redirecting pages from the root folder to a subfolder

    - by MarcoPRT
    I have a Joomla site in the root directory of my domain, and a forum in the
    /forum subdirectory. How can I redirect visitors from the main site to the
    forum, while keeping the possibility of reaching the site from a link on the
    forum? Example: http://example.com redirects to http://example.com/forum, but
    I can still reach the main site via http://example.com/index.php
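    A hedged sketch for the root .htaccess: match only the empty path, so the
    explicit http://example.com/index.php link keeps working while the bare
    domain goes to the forum (switch to R=301 once the behaviour is confirmed):

        RewriteEngine On
        RewriteRule ^$ /forum/ [R=302,L]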


  • BSOD in nltdi.sys Vista x64

    - by W1N9Zr0
    BSOD caused by NetLimiter's nltdi.sys. Error codes vary, but this part is
    pretty constant:

        SYMBOL_NAME: tdx!TdxSendConnection+2a0
        MODULE_NAME: tdx
        IMAGE_NAME:  tdx.sys

    The stack trace includes nltdi+0x1144.


  • How can I use an own asp page as 404 error page on a Windows 7 pro IIS 7.5 local installation

    - by Edelcom
    Running Windows 7 Pro with a local IIS 7.5 (for web development purposes). I
    edited the hosts file to be able to use http://www.sitestepper.dev. The site
    is built in a subfolder of inetpub/wwwroot/staplijst and can be viewed using
    http://www.sitestepper.dev/staplijst/whatever-page.htm. I have an
    http://www.sitestepper.dev/p.asp page I would like to call when an unknown
    page is requested. This technique works fine on the deployed version of this
    web site (deployed on a Windows 2003 server running the IIS version that
    ships with Windows 2003 Server). In the Edit Custom Error dialog I tried:

    - Execute a URL on this site: with value /staplijst/p.asp
    - Respond with a 302 redirect: with value http://www.sitestepper.dev/staplijst/p.asp

    I tried this in the properties of the 'Default Web Site', and I tried it at
    the staplijst level. I even tried values without the /staplijst prefix. I
    restarted the default web site after each change, and even stopped/started
    the web service. But nothing seems to work; I keep getting the 'Server Error
    In Application "DEFAULT WEB SITE"', HTTP Error 404.0 error. What am I missing
    here? Probably something obvious, but I don't see it. Thanks in advance.
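    On IIS 7.x custom errors live in the httpErrors configuration section, and
    putting it in a web.config in the site (or the staplijst folder) is often
    more reliable than the dialog, since the file shows exactly what is in
    effect. A hedged sketch, assuming the handler page at /staplijst/p.asp:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <httpErrors errorMode="Custom" existingResponse="Replace">
              <remove statusCode="404" subStatusCode="-1" />
              <error statusCode="404" path="/staplijst/p.asp" responseMode="ExecuteURL" />
            </httpErrors>
          </system.webServer>
        </configuration>

    One thing to verify first: classic ASP is an optional feature on Windows 7
    and is off by default, in which case the p.asp handler itself returns the
    404.0 you are seeing.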


  • Websocket handshake response not forwarded from TCP to client

    - by Saharsh
    I am trying to create a WebSocket server. I can see the WebSocket client's
    opening handshake, and my response to it is received by the client laptop (I
    can see this in Wireshark), so the TCP connection has been established. But
    the client (a Chrome WebSocket client extension) does not receive the
    handshake packet. What could be a possible reason for TCP to not forward the
    handshake to the client, or for the client to not be able to read the TCP
    message?

    Client handshake:

        GET HTTP/1.1
        Upgrade: websocket
        Connection: Upgrade
        Cache-Control: no-cache
        Host: 192.168.0.101
        Origin: http://www.websocket.org
        Pragma: no-cache
        Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits, x-webkit-deflate-frame
        Sec-WebSocket-Key: qrmw/m+BoZije6h9HYKmVw==
        Sec-WebSocket-Version: 13

    Server response:

        HTTP/1.1 101 Switching Protocols
        Upgrade: websocket
        Connection: Upgrade
        Sec-WebSocket-Accept: jj1g5Io57m9ks8cme3jkbyo2asc=
        Access-Control-Allow-Origin: http://www.websocket.org
        Server: xyz
        Sec-WebSocket-Extensions:

    Thanks!
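    If Wireshark shows the bytes arriving, the client is almost certainly parsing
    the response and rejecting it silently. Three common causes, in rough order:
    a Sec-WebSocket-Accept value that doesn't match what RFC 6455 requires for
    that key, a response not terminated by an empty line (CRLF CRLF), and an
    empty Sec-WebSocket-Extensions header (if no extension is negotiated, the
    header should be omitted entirely). The accept value can be recomputed by
    hand; a hedged one-liner:

        # RFC 6455: accept = base64( SHA1( key + fixed GUID ) )
        printf '%s258EAFA5-E914-47DA-95CA-C5AB0DC85B11' 'qrmw/m+BoZije6h9HYKmVw==' \
          | openssl dgst -sha1 -binary | openssl base64

    If the output differs from the Sec-WebSocket-Accept you send, Chrome drops
    the connection without surfacing an error.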


  • uWSGI and python virtual env

    - by user27512
    I'm trying to use uWSGI with a virtualenv in order to run the Trac bug
    tracker on it. I've installed uWSGI system-wide via pip, and installed Trac
    in a virtualenv:

        $ virtualenv venv
        $ . venv/bin/activate
        $ pip install trac

    I've then written a simple uWSGI configuration:

        [uwsgi]
        master = true
        processes = 1
        socket = localhost:3032
        home = /srv/http/trac/venv/
        no-site = true
        gid = www-data
        uid = www-data
        env = TRAC_ENV=/srv/http/trac/projects/my_project
        module = trac.web.main:dispatch_request

    But when I try to launch it, it fails:

        $ uwsgi --http :8000 --ini /etc/uwsgi/vassals-available/my_project.ini --gid www-data --uid www-data
        ...
        Set PythonHome to /srv/http/trac/venv/
        ...
        *** Operational MODE: single process ***
        ImportError: No module named trac.web.main
        unable to load app 0 (mountpoint='') (callable not found or import error)

    I think uWSGI isn't using the virtualenv. When inside the virtualenv, I can
    import trac.web.main without an ImportError. How can I fix this? Thanks.
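    The log line "Set PythonHome to /srv/http/trac/venv/" shows the virtualenv is
    being picked up; the likely problem is no-site = true, which stops Python
    from importing the site module, and with it the venv's site-packages where
    Trac lives. A hedged sketch of the same config with that line dropped:

        [uwsgi]
        master = true
        processes = 1
        socket = localhost:3032
        virtualenv = /srv/http/trac/venv
        env = TRAC_ENV=/srv/http/trac/projects/my_project
        module = trac.web.main:dispatch_request
        uid = www-data
        gid = www-data

    (virtualenv, home, and venv are aliases for the same uWSGI option.)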


  • SSH not working through Double NAT

    - by d_inevitable
    I am trying to set up port forwarding for SSH through two NATs. The first
    router translates my internet IP to my outer network (10.1.7.0). In the outer
    network there's a second router that does NAT to my inner network
    (192.168.1.0). The target server is connected to both the outer network and
    the inner network. I cannot change the port-forwarding options on the outer
    router; it is currently configured to forward the SSH and HTTP ports to the
    router of the inner network.

    [ASCII diagram in the original post: Internet -> Outer Router, which forwards
    SSH and HTTP to the Inner Router; the Server sits on both networks, with a
    workstation on each; the outer workstation also reaches the server directly
    over HTTP for testing.]

    When connecting from an outer workstation to the address of the inner router,
    both SSH and HTTP work fine. When connecting from the internet to my public
    IP with HTTP, the connection works fine as well. However, SSH just times out
    -- most likely because the reply is not routed back properly. I suspect it's
    either SSH itself, or the fact that the server is connected to both the inner
    and outer networks. Any ideas how I could resolve this issue? The routes on
    the server are currently:

        ip route show
        default via 10.1.7.254 dev eth0 metric 100
        10.1.7.0/24 dev eth0 proto kernel scope link src 10.1.7.1
        192.168.1.0/24 dev eth1 proto kernel scope link src 192.168.1.2

    Do I have to change this? If so, how?
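    The asymmetry fits the symptom: forwarded SSH arrives on eth1 via the inner
    router, but replies leave through the default route on eth0, so the NAT state
    on the inner router never sees them. Source-based policy routing makes
    replies leave the way the traffic came in; a hedged sketch (192.168.1.1 is an
    assumed address for the inner router's LAN interface):

        # replies from the inner-network address go back via the inner router
        ip rule add from 192.168.1.2 table 100
        ip route add default via 192.168.1.1 dev eth1 table 100

    To survive reboots, the same two lines belong in the distribution's
    interface-up hooks or rc.local.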


  • ls hangs after NFS server reboot

    - by Apikot
    I've got server A and server B. B acts as an NFS server; A mounts from B.
    Both run on EC2. Sometimes I have to shut down B and start a new (identical)
    instance. After B is back up, trying to do anything inside the mounted
    directory on A (ls, for example) just hangs. I'm trying to set up a cron job
    that checks the status of the mount and remounts it if anything is wrong. Is
    there any way to check the status of a mount?
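    The hang is characteristic of a hard NFS mount whose server has been
    replaced: the client keeps retrying a connection that will never recover. A
    probe has to carry its own timeout, since any direct stat of the mount point
    would hang too; a hedged cron sketch (mount point name is a placeholder):

        #!/bin/sh
        # probe the mount with a timeout; force a lazy unmount and remount if it hangs
        MNT=/mnt/b
        if ! timeout 5 stat "$MNT" >/dev/null 2>&1; then
            umount -f -l "$MNT"
            mount "$MNT"
        fi

    Mounting with -o soft,intr (or pointing the mount at an address that moves
    with the replacement instance, such as an Elastic IP) avoids the hang in the
    first place.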


  • Amazon EC2 Elastic Load Balancing - strategy for zero downtime server restart

    - by Yoga
    I have 5 web servers (Apache/mod_perl) behind Amazon EC2 Elastic Load
    Balancing. When I deploy code to the web servers, I do this for each machine:

    1. Shut down Apache
    2. Update the code
    3. Start the server again and proceed to the next server

    I think that while a server is shut down, ELB will not distribute requests to
    it -- but what about the requests it is still serving? A better approach
    would be:

    1. Stop accepting new requests from ELB
    2. Sleep for some time, and shut the web server down only once all requests
       have been answered
    3. Update the code
    4. Start the server again

    But how do I perform steps 1 and 2 from my local server? Do I need to use the
    AWS API, or is there an easier way to do it? Thanks.
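    Step 1 maps directly onto deregistering the instance from the load balancer
    and re-registering it afterwards; ELB stops sending new connections to a
    deregistered instance. A hedged sketch with the AWS CLI (load balancer and
    instance names are placeholders):

        # take the instance out of rotation
        aws elb deregister-instances-from-load-balancer \
            --load-balancer-name my-elb --instances i-12345678
        # give in-flight requests time to drain
        sleep 30
        # ... stop Apache, deploy, start Apache ...
        # put the instance back in rotation
        aws elb register-instances-with-load-balancer \
            --load-balancer-name my-elb --instances i-12345678

    The drain time is a guess; newer ELBs have connection draining built in,
    which replaces the fixed sleep.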


  • Grizzly server - works with IP, but not with domain name

    - by Hitchhiker
    I'm hosting a Grizzly web service on a Windows 7 Pro machine (embedded in a
    regular Java process), and it is binding to http://my-domain-name. When
    trying to hit the service from another machine, requests to
    http://my-domain-name fail (Fiddler shows error code 502), but requests to
    http://my-ip work. When the service runs on a Windows Server 2008 machine
    this doesn't happen; both kinds of request succeed. What could be the issue?
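    A 502 from Fiddler means the proxy could not connect, which makes name
    resolution the first suspect: if the listener was bound by resolving
    my-domain-name locally, it may be bound to a different address (an IPv6 one,
    for instance) than the address other machines resolve the name to. Hedged
    checks from a client machine and on the Windows 7 host:

        # client: what does the name resolve to? compare with the working IP
        nslookup my-domain-name
        # Windows 7 host: which addresses is the service actually listening on?
        netstat -an | findstr LISTENING

    Binding Grizzly to 0.0.0.0 instead of the host name sidesteps any mismatch
    entirely.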


  • Plesk SiteBuilder Install (UNIX)

    - by Clear.Cache
    Trying to install SiteBuilder from Plesk, for Unix, on CentOS 4.x. I
    installed it using the tarball files from their site and ran:

        rpm -Uhv updates/*.rpm
        rpm -Uhv sitebuilder/*.rpm

    Documentation:
    http://download1.parallels.com/SiteBuilder/4.5.0/doc/install/en_US/html/index.htm

    Now http://64.64.96.138/login doesn't work, per the first-login instructions:
    http://download1.parallels.com/SiteBuilder/4.5.0/doc/install/en_US/html/index.htm?fileName=performing_first_login_to_sitebuilder.htm

    What am I doing wrong here?


  • Best way to use mod_rewrite to replace WordPress pages with static files

    - by David Moles
    Here's the situation: I've got an old WordPress installation that I'd like to
    archive as static files, but I'd also like to preserve the old URLs. I've
    already created the static archive with wget and sorted out the filenames and
    links. Now I'd like to configure Apache to intercept requests for the old
    dynamic URLs and redirect them to the new static ones, e.g.:

        http://www.example.org/log/?p=1234
        or
        http://www.example.org/log/index.php?p=1234

    should redirect to:

        http://www.example.org/log/archives/1234.html

    I've tried adding the following to the VirtualHost config for example.org,
    but to no effect -- I just get the PHP page:

        RewriteCond %{REQUEST_URI} /log/
        RewriteCond %{QUERY_STRING} p=([^&;]*)
        RewriteRule ^/$ http://%{SERVER_NAME}/log/archives/%1.html [R,L]

    I've enabled logging and I can see what look like other rules being applied,
    but not this one. None of my other guesses at match patterns for
    %{REQUEST_URI} seem to have any effect either (log, log/, log.*, even .*).
    I'm new to mod_rewrite and this is mostly cargo cult, so I'm pretty sure I've
    gotten it wrong. Anyone know what I should be doing here?
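    The rule never fires because the pattern ^/$ only matches a request for the
    bare site root; for /log/?p=1234 the path is /log/, so the conditions are
    never even consulted. Matching the path in the rule itself (and appending a
    bare ? to strip the old query string) does the job; a hedged sketch for the
    VirtualHost context:

        RewriteEngine On
        RewriteCond %{QUERY_STRING} (?:^|&)p=([^&;]+)
        RewriteRule ^/log/(index\.php)?$ /log/archives/%1.html? [R=301,L]

    The trailing ? on the substitution is what drops ?p=1234 from the redirect
    target (the QSD flag does the same on Apache 2.4).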

