Search Results

Search found 55652 results on 2227 pages for 'http response'.

  • How can I use mod_rewrite to redirect multiple specific URLs containing multiple query strings?

    - by Derek
    Hi there folks, we recently migrated a site from a custom CMS to Drupal. In an effort to preserve some links that our users bookmarked (we have about 120 redirects), we would like to forward the original URLs to new URLs. I have been searching for a couple of days, but can't seem to find anything similar to what I need. We have existing URLs that contain one or more query strings, for example:

        /article.php?issue_id=12&article_id=275

    and we would like to forward to the new location:

        http://foobar.edu/content/super-happy-fun-article

    I started using:

        RewriteEngine On
        RewriteRule ^/article\.php?issue_id=12&article_id=275$ http://foobar.edu/content/super-happy-fun-article [R=301,L]

    This, however, does not work. A simple RewriteRule works:

        RewriteRule ^test\.php$ index.php

    It is unclear to me how I need to use %{QUERY_STRING} with multiple parameters. Basically it's 120 simple redirects that go from one existing URL to a new one. I don't need ranges [0-9], because there is no sequential order to the existing URLs. Perhaps I can do what I need with RewriteMap and a simple text file that contains lines like this:

        index.php?issue_id=12&articleType_section=0&articleType_id=65 http://foobar.edu/category/fall-2008

    If anyone has any idea on using mod_rewrite to accomplish this, or if there is a better or simpler module, I am open to that as well. Thanks!
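
    One likely reason the rule above never matches: a RewriteRule pattern is tested against the URL path only, never the query string, so the query has to be matched with a RewriteCond. A minimal sketch for the first redirect, using only the URLs given above (the trailing ? on the target drops the old query string; the leading /? covers both server-config and .htaccess contexts):

        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^issue_id=12&article_id=275$
        RewriteRule ^/?article\.php$ http://foobar.edu/content/super-happy-fun-article? [R=301,L]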

    Read the article

  • Apache /server-status/ gives a 404 not found

    - by user57069
    I am trying to solve a problem where Apache stats aren't displaying correctly in Munin. I've run through quite a few checks and tests regarding the Munin setup, but I think my issue is related to Apache, and my skill set there is lacking.

    First, system info for the monitored server: CentOS 5.3, kernel 2.6.18-128.1.1.el5, Apache/2.2.3.

    The "server-status" directive in httpd.conf (I've cross-compared this with another system on which I did a successful parallel install of Munin, correctly showing Apache stats, and the directive below is the same for both):

        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    I ran lynx http://localhost/server-status and got HTTP/1.1 404. Taking a look at the Apache access_log:

        127.0.0.1 - - [13/Oct/2010:07:00:47 -0700] "GET /server-status HTTP/1.0" 404 11237 "-" "Lynx/2.8.5rel.1 libwww-FM/2.14 SSL-MM/1.4.1 OpenSSL/0.9.8e-fips-rhel5"

    mod_status is also loaded:

        % grep "mod_status" /etc/httpd/conf/httpd.conf
        LoadModule status_module modules/mod_status.so

    iptables is turned off. I also noticed that the ownership on httpd.conf on this system is root.root, whereas on the system that displays correctly it is apache.www -- not certain that this matters? It's got to be a permissions issue, but I'm not certain where the permissions are messed up. Any thoughts on why the test of server-status gives me a 404?
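
    A quick way to confirm the 404 comes from the configuration Apache actually loaded, rather than the file being edited (a stray Include redefining the handler is a common culprit) -- a sketch, assuming a stock CentOS layout and a build that supports -M:

        httpd -M 2>&1 | grep status                # is status_module really loaded?
        grep -rn "server-status" /etc/httpd/       # any second definition shadowing this one?
        curl -I http://127.0.0.1/server-status     # expect "HTTP/1.1 200 OK"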

    Read the article

  • glassfish timeout

    - by Stefano
    Environment: Windows 2008 Server Edition, NetBeans 6.7.1, GlassFish 2.1, Apache 2.2.15 for win32.

    Original problem (almost fixed): an HTTP/1.1 GET request to send data fails if I wait for more than 30 seconds.

    What I did: I added these lines to Apache's httpd.conf:

        #
        # Timeout: The number of seconds before receives and sends time out.
        #
        Timeout 9000
        #
        # KeepAlive: Whether or not to allow persistent connections (more than
        # one request per connection). Set to "Off" to deactivate.
        #
        KeepAlive On

    I went to the GlassFish admin panel (localhost:4848), and under Configuration > HTTP Service I set:

        Timeout request: 9000 seconds (it was 30)
        Standby time: -1 (it was 30 seconds)

    Problem: I am not able to give GlassFish a timeout bigger than 2 minutes for a GET request. I found an article about GlassFish settings, but I'm not able to find WHERE I should put those parameters, or whether they would work. Can anybody help me set this timeout to a higher limit? Maybe it's even a different setting?

    New attempted solution: I went to the GlassFish admin panel, Configuration > Thread Pools > "thread-pool-name", and changed the idle timeout from 120 seconds to 1200 seconds. Then I restarted the GlassFish service (both from Administrative Tools and from asadmin), but it still goes idle after 120 seconds. I even tried restarting the whole server; still no results. Maybe it's some setting in Postgres? Or the connection of NetBeans to Postgres through GlassFish?
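
    When the admin console's labels don't obviously map to the attribute that is stuck at 120 seconds, listing the http-service tree from the CLI can at least show where the timeout values live. A hedged sketch for GlassFish v2's asadmin (attribute names vary between releases, so treat the grep as exploratory):

        asadmin get "server.http-service.*" | grep -i timeout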

    Read the article

  • Separate Certificate by Subdomain (With multiple IPs)

    - by Brian
    Note: Yes, I realize this problem is easier to solve by just using 1 multi-domain or wildcard certificate.

    I wish to have an ASP.NET site running on IIS with 2 SSL domains sharing 1 web application but using separate certificates. Assuming I have 2 certificates, this can be solved on IIS7 as follows:

        Web Application1:
            Binding 1: http, 80, IP Address *, Host Name *
            Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)
            Binding 3: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say, 2 certificates and 2 IP addresses, but both mapped to the same web application. In IIS6, the closest I have been able to come to this configuration is:

        Web Application1:
            Binding 1: http, 80, IPADDRESS1
            Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1)
        Web Application2:
            Binding 1: http, 80, IPADDRESS2
            Binding 2: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2)

    That is to say, 2 certificates and 2 IP addresses, 2 web applications, both mapped to the same file location. The IIS6 solution is not optimal. Even if sharing an application pool, there are still costs associated with running the same site as two applications. Is upgrading from IIS6 to IIS7 a legitimate way to resolve this problem? Is there an IIS6 way to map 2 IP addresses within the same web application to different certificates?
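
    For reference, the IIS7 bindings above can also be scripted. A hedged sketch with appcmd (the site name "Application1" and the IP placeholders come from the question; the certificate itself attaches to each ip:port pair separately via netsh http add sslcert, which is what makes one certificate per IP possible):

        appcmd set site "Application1" /+bindings.[protocol='https',bindingInformation='IPADDRESS1:443:']
        appcmd set site "Application1" /+bindings.[protocol='https',bindingInformation='IPADDRESS2:443:']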

    Read the article

  • Why is only one wget command working in my crontab?

    - by afEkenholm
    I wish to fetch content from a PHP script on my server two times a day, altering a query variable lang to set what language we want, and save this content in two language-specific files. This is my crontab:

        */15 * * * * ~root/apache.sh > /var/log/checkapache.log
        10 0 * * * wget -O /path/to/file-sv.sql "http://mydomain.com/path/?lang=sv"
        11 0 * * * wget -O /path/to/file-en.sql "http://mydomain.com/path/?lang=en"

    The problem is that only the first wget command line is being executed (or to be precise: the only file that is being written is /path/to/file-sv.sql). If I switch the second and the third row, /path/to/file-en.sql gets written instead. The first line always runs as expected, no matter where it is. I then tried using

        lynx -dump "http://mydomain.com/path/?lang=xx" > /path/to/file-xx.sql

    to no avail; still only the first lynx line executed successfully. Even mixing wget and lynx did not change this! Getting kinda desperate! Am I missing something? There are thousands of articles on crontab (combined with) wget or lynx, but all seem to cover basic setups and syntax. Does anyone have a clue what I am doing wrong? Thanks, Alexander
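
    One step that usually pins this down: cron mails or discards job output by default, so the second job may be failing invisibly. Redirecting each job's stdout and stderr to a log makes the error text visible -- a sketch with an illustrative log path:

        10 0 * * * wget -O /path/to/file-sv.sql "http://mydomain.com/path/?lang=sv" >>/var/log/cron-wget.log 2>&1
        11 0 * * * wget -O /path/to/file-en.sql "http://mydomain.com/path/?lang=en" >>/var/log/cron-wget.log 2>&1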

    Read the article

  • NGINX returning 404 error on a valid url

    - by Harrison
    We have a site that runs PHP-FPM and NGINX. The application sends invitations to site members that are keyed with 40-character random strings (alphanumerics only -- example below). Today for the first time we ran into an issue with this approach. The following URL:

        http://oursite.com/notices/response/approve/1960/OzH0pedV3rJhefFlMezDuoOQSomlUVdhJUliAhjS

    is returning a 404 error. This URL format has been working for 6 months now without an issue, and other URLs following this exact format continue to resolve properly. We have a very basic config with a simple redirect to a front controller, and everything else has been running fine for a while now. Also, if we change the last character from an "S" to anything other than a lower-case "s", there is no 404 error and the site handles the request properly, so I'm wondering if there's some security module that might see something wrong with this specific string... Not sure if that makes any sense. We are not sure where to look to find out what specifically is causing the issue, so any direction would be greatly appreciated. Thanks!

    Update: Adding a slash to the end of the URL allowed it to be handled properly... Would still like to get to the bottom of the issue though.

    Solved: The problem was caused by part of my configuration... Realized I should have posted, but was headed out of town and didn't have a chance. Any URL that ended in say "css" or "js", and not necessarily preceded by a dot (so, for example, http://site.com/response/somerandomestringcss), was interpreted as a request for a file, and the request was not routed through the front controller. The problem was my regex for disabling logging and setting expiration headers on jpgs, gifs, icos, etc. I replaced this:

        location ~* ^.+(jpg|jpeg|gif|css|png|js|ico)$ {

    with this:

        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {

    And now URLs ending in css, js, png, etc., are properly routed through the front controller. Hopefully that helps someone else out.
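
    For anyone applying the same fix, here is the corrected pattern in a fuller, illustrative block -- the leading \. is what stops .../somerandomestringcss from matching, and the case-insensitive ~* is why a URL ending in "jS" tripped the "js" alternative in the first place:

        location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
            access_log off;
            expires 30d;   # illustrative; the original only mentions expiration headers
        }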

    Read the article

  • S3sync not working

    - by user57833
    Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the AWS Management Console. Downloading the data back to a test folder results in the following error message:

        root@mybucketname:/var/s3sync# ./week_download.sh
        s3Prefix backups/weekly
        localPrefix /var/s3sync/testdown/weekly
        s3TreeRecurse mybucketname backups/weekly
        Creating new connection
        Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries left
        Response code: 200
        prefix found: /
        s3TreeRecurse mybucketname backups/weekly /
        Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries left
        Response code: 200
        S3 item backups/weekly/
        s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri Oct 29 14:21:53 UTC 2010
        local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date:
        source:
        dest:
        Update node
        s3sync.rb:638:in `initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (Errno::ENOENT)
                from s3sync.rb:638:in `open'
                from s3sync.rb:638:in `updateFrom'
                from s3sync.rb:393:in `main'
                from s3sync.rb:735

    I am using the following download script:

        #!/bin/bash
        # script to download local directory from s3
        cd /var/s3sync/
        export AWS_ACCESS_KEY_ID=nothing to see here
        export AWS_SECRET_ACCESS_KEY=nothing to see here
        export SSL_CERT_DIR=/var/s3sync/certs
        ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown
        # copy and modify line above for each additional folder to be synced

    Any ideas? Does the download script need to download to the source of Amazon S3, i.e. the testup folder? I was hoping that, in the event of a complete failure where the original folders no longer exist, it would just download everything for me. Note: I changed my bucket names to "mybucketname" so that they are not public!
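
    The traceback fails on the local side (Errno::ENOENT while creating .s3syncTemp under the destination), which suggests the local target directory does not exist before the sync starts. A hedged first thing to try, using the path from the output:

        mkdir -p /var/s3sync/testdown/weekly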

    Read the article

  • Bizarre client IP switch-up on VPN

    - by B. VB.
    Let A.B.C.D be the public IP of my VPN server. Let W.X.Y.Z be the IP of the client before it connects to the VPN. My VPN server's IP address on the LAN is 10.8.0.1, and the client is 10.8.0.6. I also run a webserver on the same machine hosting the VPN. On it is a simple webpage that performs the exact same thing as whatismyip.org (i.e., it simply prints the IP of the requester).

    Let me illustrate the scenario for you. In a Chrome window I have three tabs; what I have in parentheses is the URL:

        Tab 1 (http://whatismyip.org): A.B.C.D
            This is what I expect to see. It's the public IP of the VPN server.
        Tab 2 (http://10.8.0.1): 10.8.0.6
            OK, looks expected. They are behind the same LAN now.
        Tab 3 (http://A.B.C.D): W.X.Y.Z
            WTF??

    Basically, if I access the webserver while tunneled, it shows the IP address of my machine PRIOR to tunnelling! Remember, tab 2 and tab 3 are the same webpage. Why does tab 3 not show the client IP as its own IP (i.e., show A.B.C.D)??? I hope this question is clear, thanks in advance!
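
    A check that usually explains this: traffic to the server's public IP may not be routed into the tunnel at all (many VPN setups only push routes for the private range), so the request in Tab 3 leaves via the normal uplink and arrives with the pre-VPN source address. On a Linux client, a sketch:

        ip route get A.B.C.D   # a reply naming the physical uplink instead of
                               # tun0 means the request bypasses the VPN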

    Read the article

  • How do I configure IIS to allow access to network resources for PHP scripts?

    - by Dereleased
    I am currently working on a PHP front-end that joins together a series of applications running on separate servers; many of these applications generate files that I need access to, but these files (for various reasons) reside on their parent servers. If I, from the command line, issue a bit of script such as:

        <?php var_dump(glob("\\\\machine-name\\some\\share\\*"));

    I will get the full contents of that directory, proving that there's no problem programmatically with PHP reading the contents of a UNC share. However, if I try to execute the same script from the web server, I get an empty array -- more specifically, if I use functions designed to "open" a directory like it was a file, I get access errors. I believe this to be a permissions issue, but I am not a server/network administrator type, so I'm not sure what I need to do to correct this and get my script running, and the links I've checked out have not been a terrible amount of help, perhaps due to my background, or lack thereof, as far as IIS is concerned, coupled with the fact that we are not actually using .NET for this.

    Relevant stats:

        Windows Server 2008 Standard SP2
        IIS 7.0
        PHP 5.2.9

    I will be connecting to two types of servers: a few other nearly-identical Server 2008 machines, and a machine running embedded XP. Links that have not been particularly helpful, but maybe I am just misreading:

        http://support.microsoft.com/?id=306158
        http://support.microsoft.com/kb/207671/EN-US/
        http://support.microsoft.com/kb/280383/
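
    Since the command line runs as the logged-in user while IIS runs PHP under the application pool identity, confirming which account actually serves the web request is a reasonable first step. A minimal sketch (assumes shell_exec is not disabled):

        <?php
        // Prints the identity of the worker process serving this request;
        // run from the CLI it prints the logged-in user instead.
        var_dump(shell_exec('whoami'));

    If the web request reports an identity with no rights on the UNC share, the usual direction is to run the application pool as an account the file servers recognize.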

    Read the article

  • empty file from tomcat https redirect?

    - by amertune
    I am using Tomcat 6.0.20, with JDK 1.6.0_18, on 64-bit Linux (Tomcat downloaded from tomcat.apache.org, not installed from repositories). I have iptables redirects from port 80 to 8080 and 443 to 8443. In server.xml, the connector for port 8080 redirects to 443, and the 8443 connector has proxyPort="443". In conf/web.xml, I have added this bit at the end of the file (but still inside the <web-app> ... </web-app> tags):

        <security-constraint>
            <web-resource-collection>
                <web-resource-name>Protected Context</web-resource-name>
                <url-pattern>/*</url-pattern>
            </web-resource-collection>
            <user-data-constraint>
                <transport-guarantee>CONFIDENTIAL</transport-guarantee>
            </user-data-constraint>
        </security-constraint>

    I also have two contexts, ROOT and mywebapp. If I go to http://myurl.com, I get redirected to https://myurl.com. If I go to http://myurl.com/mywebapp/, I get redirected to https://myurl.com/mywebapp/. The problem I'm having is when I go to http://myurl.com/mywebapp (no trailing slash). When I do this, I get a prompt to download an empty file that has an empty name. Going to https://myurl.com/mywebapp works. I would think that a user typing myurl.com/mywebapp is far from rare. Is there something I'm missing?
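
    For reference, the server.xml arrangement described above would look roughly like this (a sketch: only the ports, redirectPort, and proxyPort come from the question; the keystore attributes are illustrative):

        <Connector port="8080" protocol="HTTP/1.1" redirectPort="443" />
        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   scheme="https" secure="true" proxyPort="443"
                   keystoreFile="conf/keystore" keystorePass="changeit" />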

    Read the article

  • What is the name of this DOS font? Where and how to add it? Why is there a 1 pixel gap?

    - by JBeurer
    So basically I somehow stepped into this webpage: www.braindamage.vg. And the first thing that hit me hard was the lovely DOS fonts, so naturally I wanted to get them into my IDE badly. I opened the HTML source file and CSS file to find the font name:

        @font-face {
            font-family: 'Perfect DOS VGA 437';
            src: url('http://www.braindamage.vg/wp-content/themes/braindamage/dosfont.eot');
        }
        @font-face {
            font-family: 'Perfect DOS VGA 437';
            src: url('http://www.braindamage.vg/wp-content/themes/braindamage/dosfont.svg#dos') format("svg"),
                 url('http://www.braindamage.vg/wp-content/themes/braindamage/dosfont.ttf') format ('truetype');
        }

    So I download the font and add it using Control Panel > Fonts. But once I start using it (Notepad, MSVS 2008 & MSVS 2010) I notice that it looks slightly off: there seems to be 1 extra pixel between each character, compared to how it renders on the site. What is causing it, and how do I fix this? Is it Windows XP? (I have disabled font smoothing.) Or is there something wrong with the font file?

    Read the article

  • Secure NAT setup with iptables

    - by TheBigB
    I have a Debian-running device that needs to act as an internet gateway. On top of that, I want to provide a firewall that not only blocks inbound traffic, but also outbound traffic, and I figured iptables should be able to do the job.

    The problem: I've configured NAT properly (I think?), but once I set the default policy to DROP and add rules to, for instance, allow HTTP traffic from inside the LAN, HTTP is not going through. So basically my rules don't seem to work.

    Below is the initialization script that I use for iptables. The device has two NICs, respectively eth0 (the WAN interface) and eth1 (the LAN interface).

        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Flush tables
        iptables -F
        iptables -t nat -F

        # Set policies
        iptables -P INPUT DROP
        iptables -P OUTPUT DROP

        # NAT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT

        # Allow outbound HTTP from LAN?
        iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 80 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 80 -j ACCEPT

    Can anyone shed some light on this?
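
    Two things worth checking in the script above: it never sets a policy for the FORWARD chain, so forwarding behavior depends on whatever policy that chain already carries; and if the intent is default-deny, there is no rule letting forwarded DNS through, so LAN clients can reach raw IPs but never resolve a hostname -- which looks exactly like "HTTP is not going through". A hedged addition:

        # Make the FORWARD policy explicit, then let LAN clients resolve names
        iptables -P FORWARD DROP
        iptables -A FORWARD -i eth1 -o eth0 -p udp --dport 53 -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 53 -j ACCEPT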

    Read the article

  • Cannot install SQL Server CE 4

    - by Manos Dilaverakis
    I'm trying to install SQL Server CE 4 on a WinXP Pro SP3 machine. I double-click on the file and absolutely nothing happens. There is nothing in the Event Viewer, and the only effect I can see is the addition of an empty, randomly named folder in C:\ which looks something like C:\7c59aaeb5e43f6bdcb2430e923. I've tried this with both SQL Server CE 4 and the SP1 version. I've tried disabling the AV (NOD32) file protection, but it didn't make a difference. I've checked the installed program list in case it's already installed, but I don't see it anywhere. I checked in C:\Program Files\Microsoft SQL Server Compact Edition\ and there's only the \3.5 folder in there from the already installed 3.5 version. Does anyone know what's going on, or how I can further diagnose the problem?

    Edit, in response to Ramhound: I have .NET 4 installed. Why, does it need a particular version?

    Edit, in response to leinad13: I tried Process Explorer and filtered by the name of the temporary folder created. I see the following, but can't make much sense of it.

    Read the article

  • Beginner issue with socat [closed]

    - by ams
    The main question is: how do I send a request to a server with some data (a request carrying a number) and get data back from the server? And second: how can I solve this simple exercise (the author's words)?

    In this part, you should write a simple shell script which receives the URL of a website by sending your student number to a server and, after creating and sending an HTTP request for this URL, receives the desired content. Finally the content should be saved in an HTML file.

    Steps:
        1. Connect to port 4000 of the server and send the message which includes your student number (e.g. 89207704) to the server.
        2. Receive the URL in the form of http://www.example.com.
        3. Create the HTTP request and send it to the website's server.
        4. Receive the content of the URL from the website.
        5. Save the content in the HTML file.

    What can I do? How do I begin? Thank you all. The topology that the exercise is speaking about was linked here. Is there any easy way to do this?
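
    A minimal sketch of the whole exercise with socat ("server" and the student number are the exercise's placeholders; the real address comes from the course, and the saved file will still contain the response headers):

        #!/bin/bash
        # Steps 1-2: send the student number, read back a URL like http://www.example.com
        URL=$(echo "89207704" | socat - TCP:server:4000)

        # Steps 3-5: send a plain HTTP GET for that URL and save the content
        HOST=${URL#http://}; HOST=${HOST%%/*}
        printf 'GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "$HOST" \
            | socat - TCP:"$HOST":80 > page.html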

    Read the article

  • Computer Won't Boot Properly, unless in safe mode?

    - by Mr_CryptoPrime
    I bought a computer today and booted it up, but when I did I only got a blank screen. I checked to make sure it wasn't the monitor by connecting it to my old computer... it worked. I then tried connecting my monitor to both DVI ports and found that the bottom one did work. However, now it just boots up and says "loading windows", and then when the login screen is supposed to come up the screen just goes blank and the monitor says "no input, check cord" (or something like that).

    I tried reinstalling Windows, and then I was able to log on normally. I used the CDs and reinstalled all the drivers, then rebooted... now I am stuck right back where I started. I tried taking out the RAM and inserting it into different slots; that didn't fix anything. I was able to boot up into Windows using safe mode. I suspected that my ATI Radeon 6950 was the issue and downloaded the drivers, but I can't install them in safe mode. Someone said to install the C++ redistributable, and I tried doing that to fix the driver installation problem of "failed to load detection driver", but it wouldn't let me do that either.

    Please someone help me, I don't want to have to deal with the evil red tape of sending it back to get a replacement!

    My computer: http://www.newegg.com/Product/Product.aspx?Item=N82E16883229236

    Driver detection problem: http://www.hardwareheaven.com/hardwareheaven-tools-discussion/174912-failed-load-detection-driver-installation-error.html

    Driver download page: http://sites.amd.com/us/game/downloads/Pages/radeon_win7-64.aspx#1

    I am using Windows 7. Thanks again.

    Read the article

  • How to make my Ubuntu an internet gateway for my Android phone

    - by yacine
    I want to use my school's internet on my Android. The problem is they have a Squid proxy, and many applications on my phone don't use the proxy at all. The obvious solution is to install a transparent proxy on my Android to force all applications to connect through it. The problem is that I would need to root the phone to make it work, and I don't want to do that because it's not really my phone and rooting is a little risky.

    Another solution, which is safer, is to make my computer run as a gateway, so I put my Ubuntu IP in the gateway parameter of the phone. I'm running a small proxy on my Ubuntu machine (cntlm), so I redirect the Android traffic to it. I did it with iptables as follows:

        iptables -t nat -A PREROUTING -s 10.0.1.118 -p tcp -j REDIRECT --to-ports 8888
        iptables -t nat -A PREROUTING -s 10.0.1.118 -p udp -j REDIRECT --to-ports 8888

    10.0.1.118 is the IP of the phone; 8888 is the port of cntlm (the proxy on my PC). Now, on the phone: when I enter www.google.com in the browser I get nothing (website not found, Firefox's error message). But when I enter http://74.125.143.101 (an IP of Google) I get an error message from the school proxy (so it worked in some way -- my PC redirected the traffic of the phone to the Squid proxy). The error message is:

        The requested URL could not be retrieved while trying to process the request:
        GET / HTTP/1.1
        Host: 74.125.143.101
        User-Agent: ...

    I think the problem is in the "GET" line; it should name the full host, not just "/". But I don't understand what's happening, and I'm a certified CCNA.
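
    That hunch is close: a plain (non-transparent) proxy such as cntlm expects the absolute-form request line, while iptables REDIRECT delivers the browser's untouched origin-form request, which Squid then rejects. Sketched side by side:

        GET http://74.125.143.101/ HTTP/1.1    <- what a proxy expects on its port
        GET / HTTP/1.1                         <- what REDIRECTed traffic carries
        Host: 74.125.143.101

    Bridging the two takes something that rewrites origin-form into proxy-form (a transparent-capable proxy, or a redirector such as redsocks) rather than a raw port redirect.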

    Read the article

  • server_name seems to be ignored in nginx

    - by user46171
    I have two domains set up in nginx.conf. Both are using SSL with their own certificates, and proxy to Apache. However, the second domain is completely ignored, and nginx always resolves to the first domain. I can't see what the issue is with this configuration, having set the server_name in each case correctly (as far as I can see):

        http {
            include mime.types;
            default_type application/octet-stream;
            keepalive_timeout 65;

            upstream site {
                # real IP addresses masked
                server xx.xxx.x.xxx;
                server xx.xxx.x.xxx;
            }

            server {
                # this domain always works
                listen 443;
                server_name *.first-site.com;
                ssl on;
                ssl_certificate /var/ssl/first-site.crt;
                ssl_certificate_key /var/ssl/first-site.key;

                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }

            server {
                # this domain is ignored, always resolves to first-site.com
                listen 443;
                server_name *.second-site.com;
                ssl on;
                ssl_certificate /var/ssl/second-site.crt;
                ssl_certificate_key /var/ssl/second-site.key;

                location / {
                    access_log off;
                    proxy_connect_timeout 15;
                    proxy_next_upstream error;
                    proxy_pass http://site;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-Protocol https;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_redirect off;
                }
            }
        }
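
    When two SSL server blocks share one IP and port, nginx can only pick the second certificate if both the nginx build and the client support SNI; without it, every handshake lands on the first (default) server block, which matches the symptom exactly. A quick check sketch:

        nginx -V 2>&1 | grep -o 'TLS SNI support enabled'
        # no output => this build cannot do name-based SSL vhosts on one IP;
        # the usual alternatives are one IP per certificate or a multi-domain cert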

    Read the article

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks if a successful HTTP response (code 200) is returned on port 80. The test fails due to a timeout (no response after 30 seconds). Here's what we've already checked -- without success:

        - Since we run our webserver behind a load balancer, I've set the Pingdom test on the load balancer's public DNS and the webserver's public DNS in order to find out if there's a problem with the AWS load balancer -- both tests return the same result.
        - We set up Munin on our webserver. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only checks every 5 minutes).
        - I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
        - I have checked /etc/cron.weekly and /etc/crontab for suspicious entries.
        - I have searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):

            touch -t 201209020000 start
            touch -t 201209020015 end
            find / -newer start -and ! -newer end

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!
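
    One more data point worth collecting: a local probe with finer resolution than the external one-minute check, run only during the suspect window, shows whether Apache itself stops answering or only the path from outside does. A hedged crontab sketch (log path illustrative; the % must stay escaped inside crontab):

        * 0 * * 0 for i in 1 2 3 4 5 6; do curl -s -o /dev/null -w "\%{http_code} " http://localhost/ >> /var/log/selfcheck.log; sleep 9; done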

    Read the article

  • Notepad++ incorrect syntax highlighting?

    - by user360919
    So I want to build an XHTML 1.0 Strict based website. Using Notepad++ for syntax highlighting came as an idea to me. But when I tried to put in the XML declaration (as stated in the spec, proper XHTML pages should use an XML declaration and be served as application/xhtml+xml), I couldn't get the entire document highlighted properly. Here is the code I used for a basic page:

        <?xml version="1.0" encoding="UTF-8" standalone="no" ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-us" lang="en-us">
            <head>
                <meta http-equiv="Content-Type" content="application/xhtml+xml; charset=UTF-8" />
                <title>Page</title>
                <script type="application/javascript">
                    alert("A perfectly valid xHTML page...");
                </script>
                <style type="text/css">
                    #test { text-align: center; }
                </style>
            </head>
            <body>
                <h1 id="test">TEST</h1>
            </body>
        </html>

    Paste this in Notepad++ and you'll see that it won't highlight the code between <script type="application/javascript"> and </script> (it renders its background white) if the language is set to XML. If I set the language to HTML, then the script gets correctly highlighted, but the XML declaration is not. What to do? How do I make a hybrid language -- a combination of XML and HTML?

    Read the article

  • How to divert traffic based on hostname using HAProxy?

    - by Bosky
    I've had some initial success with HAProxy, setting up a bunch of app servers listening on various other ports. I now have another webserver listening on one port, and I'd like to know what changes to make to my config to route traffic by hostname as well. The following is the current setup, assuming:

        - my Apache webserver is running at example<dot>com:8001
        - my bunch of app servers: 0.0.0.0:8081, 0.0.0.0:8082, 0.0.0.0:8083

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            debug
            #quiet
            #user haproxy
            #group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen appservers 0.0.0.0:80
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
            server inst2 0.0.0.0:8082 cookie server02 check inter 2000 fall 3
            server inst3 0.0.0.0:8083 cookie server01 check inter 2000 fall 3
            server inst4 0.0.0.0:8084 cookie server02 check inter 2000 fall 3
            capture cookie vgnvisitor= len 32

    (Any other comments on the setup above are welcome.) Now I'd like to continue the same as above, but in addition: if the hostname is myspecialtopleveldomain<dot>com, then I would like to route traffic to example<dot>com:8001. ~B
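
    A hedged sketch of the host-based split, moving the listen block to a frontend/backend layout (HAProxy 1.3-era syntax; the backend names are illustrative and the server lines carry over from the config above):

        frontend public
            bind 0.0.0.0:80
            mode http
            acl is_special hdr(host) -i myspecialtopleveldomain.com
            use_backend apache if is_special
            default_backend appservers

        backend apache
            mode http
            server web1 example.com:8001

        backend appservers
            mode http
            balance roundrobin
            server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
            # ... inst2-inst4 as in the original listen block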

    Read the article

  • "No such file or directory"?

    - by user1509541
    OK, so I have a VDS lying around, and I thought I would turn it into a TF2 game server. I connect to my server through PuTTY and use wget to download the package "hldsupdatetool.bin" from steampowered.com. I go to run it and it says "No such file or directory". When I use "ls" to see what files are in the directory, it lists "hldsupdatetool.bin" as being there. So why is it saying it's not there? This has been a headache for the past 2 days. It's returning:

        root@10004:~# wget http://www.steampowered.com/download/hldsupdatetool.bin
        --2012-07-08 06:04:49--  http://www.steampowered.com/download/hldsupdatetool.bin
        Resolving www.steampowered.com... 208.64.202.68
        Connecting to www.steampowered.com|208.64.202.68|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 3513408 (3.4M) [application/octet-stream]
        Saving to: “hldsupdatetool.bin.3”

        100%[======================================>] 3,513,408   2.45M/s   in 1.4s

        2012-07-08 06:04:51 (2.45 MB/s) - “hldsupdatetool.bin.3” saved [3513408/3513408]

        root@10004:~# chmod +x hldsupdatetool.bin.3
        root@10004:~# ./hldsupdatetool.bin.3
        -bash: ./hldsupdatetool.bin.3: No such file or directory

    More:

        root@10004:~# ls
        ffmpeg-packages      hldsupdatetool.bin.1  hldsupdatetool.bin.3
        hldsupdatetool.bin   hldsupdatetool.bin.2  setup.sh
        root@10004:~# ls -la
        total 13828
        drwx------  4 root root    4096 Jul  8 06:04 .
        drwxr-xr-x 21 root root    4096 Jul  8 05:57 ..
        -rw-------  1 root root    8799 Jul  8 06:26 .bash_history
        -rw-r--r--  1 root root     570 Jan 31  2010 .bashrc
        -rw-r--r--  1 root root       4 Jul  2 19:39 .custombuild
        drwxr-xr-x  2 root root    4096 Jul  4 18:49 ffmpeg-packages
        ---x--xrwx  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin
        -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.1
        -rw-r--r--  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.2
        -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.3
        -rw-r--r--  1 root root     140 Nov 19  2007 .profile
        -rw-------  1 root root    1024 Jul  2 19:49 .rnd
        -rwxr-xr-x  1 root root   38866 May 23 22:02 setup.sh
        drwxr-xr-x  2 root root    4096 Jul  2 19:44 .ssh
        root@10004:~#
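
    A classic cause of this exact symptom on 64-bit systems: the file exists, but it is a 32-bit executable, and the 32-bit loader it names does not exist, so the shell reports "No such file or directory" about the loader rather than the binary. A diagnostic sketch (the compat package name varies by distribution and release):

        file hldsupdatetool.bin.3
        # "ELF 32-bit LSB executable ..." on a 64-bit box points at missing
        # 32-bit libraries; on Debian/Ubuntu of that era, for example:
        apt-get install ia32-libs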

    Read the article

  • Determining how all memory is used in Windows Server 2008

    - by Mojah
    Hi, I have a Windows Server 2008 system which has 12GB of RAM. If I list all processes in the Task Manager and SUM() the memory of each process (Working Set, Memory (Private Working Set), Commit Size, ...), I never reach more than the 4-5GB that should be "in use". However, Task Manager reports this server has 11GB in use via the "Performance" tab. I'm failing to determine where all that used RAM is going. It doesn't seem to be system cache, but I cannot be sure. It might be a memory leak in one of the appliances, but I'm struggling to find out which one. The server's memory keeps filling up, and eventually forces us to reboot the device to clear it. I've been reading up on how RAM assignments work on Windows Server:

        RAM, Virtual Memory, Pagefile and all that stuff: http://support.microsoft.com/kb/2267427
        What's the best way to measure? http://www.zdnet.com/blog/bott/windows-7-memory-usage-whats-the-best-way-to-measure/1786
        Configure the file system cache in Windows: http://smallvoid.com/article/winnt-system-cache.html

    But I fear I'm stuck without ideas at the moment.

    Read the article

  • Port 80 not accessible Amazon ec2

    - by Jasper
    I have started an Amazon EC2 instance (Linux Red Hat), and Apache as well. But when I try http://MyPublicHostName I get no response. I have ensured that my Security Group allows access to port 80. I can reach port 22 for sure, as I am logged into the instance via ssh. Within the Amazon EC2 Linux instance, when I do:

        $ wget http://localhost

    I do get a response. This confirms Apache and port 80 are indeed running fine. Since Amazon starts instances in a VPC, do I have to do anything there? In fact, I cannot even ping the instance, although I can ssh to it! Any advice?

    EDIT: Note that I had edited the /etc/hosts file earlier to make the 389-ds (LDAP) installation work. My /etc/hosts file looks like this (IP addresses shown as w.x.y.z):

        127.0.0.1   localhost.localdomain localhost
        w.x.y.z     ip-w-x-y-z.us-west-1.compute.internal
        w.x.y.z     ip-w-x-y-z.localdomain
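
    Since localhost answers but the public name does not (and ping fails too, though Security Groups block ICMP unless it is explicitly allowed), two quick on-instance checks separate a binding problem from a firewall problem -- a sketch:

        netstat -tlnp | grep :80   # 0.0.0.0:80 means Apache accepts remote
                                   # connections; 127.0.0.1:80 means it doesn't
        iptables -L -n             # any instance-level rules dropping port 80?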

    Read the article

  • apache front-end rewriting URL to different https ports?

    - by khedron
    Hi all, one of my users is having some trouble with forwarding to an internal web app from a public address. Everything worked fine for him when the situation was like this:

        front page:                  http://www.myexample.com/
        public ref to internal app:  http://www.example.com/app-8903/app.html
        secretly goes to:            http://secret.example.com:8903/app-8903/app.html

    This is to say, my user is providing the very last URL, with the port information duplicated in the URL base, and they were using that to give a public face that hid both the port and the internal machine name. You could still read the port in the URL base if you looked, but the obvious reference and machine name were hidden. Doing it this way, he could have several different instances of the application running on secret.example.com with different ports, and on the front end it just looked like it was changing the URL directory/base.

    Now the user wants to do the same thing over https:, and the people helping him with the Apache config say it can't be done. Is that so? Without being there to tinker with the configuration myself, I'm not sure what his IT people have tried, but reading through the apache2 SSL FAQ and other docs, it seems like it should be possible to rewrite URLs to different ports and still use https:.
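
    For what it's worth, the usual way to keep this URL scheme under https is a reverse proxy rather than a rewrite: the front end terminates SSL on 443 and proxies each path prefix to its internal port. A hedged sketch (the hostnames and port come from the question, the rest is illustrative; SSLProxyEngine is required because the backend is itself https):

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            SSLProxyEngine on
            ProxyPass        /app-8903/ https://secret.example.com:8903/app-8903/
            ProxyPassReverse /app-8903/ https://secret.example.com:8903/app-8903/
        </VirtualHost>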

    Read the article
