Search Results

Search found 57810 results on 2313 pages for 'http delete'.


  • Import Firefox passwords into KeePassX or KeePass2

    - by rubo77
    I have an XML export of my Firefox passwords in the form (I replaced real passwords with *):

        <xml>
        <entries ext="Password Exporter" extxmlversion="1.1" type="saved" encrypt="false">
          <entry host="chrome://weave" user="****" password="****" formSubmitURL="" httpRealm="Mozilla Services Password" userFieldName="" passFieldName=""/>
          <entry host="chrome://weave" user="****" password="****" formSubmitURL="" httpRealm="Mozilla Services Encryption Passphrase" userFieldName="" passFieldName=""/>
          <entry host="http://www.example.de" user="rubo77" password="****" formSubmitURL="http://www.example.de" httpRealm="" userFieldName="benutzername" passFieldName="passwort"/>
          <entry host="http://example2.de" user="qqq" password="pppp" formSubmitURL="http://example2.de" httpRealm="" userFieldName="username" passFieldName="pass"/>
          ...

    Can I somehow convert this into a form KeePassX understands?
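
    One possible approach (a minimal sketch, not a tested converter) is to parse the Password Exporter XML with Python's standard library and write a generic CSV, which KeePass 2.x's CSV importer can ingest; whether KeePassX accepts CSV directly depends on the version, so treat this only as the conversion step. The file name and column order below are assumptions.

        # sketch: convert a Password Exporter XML dump to a simple CSV
        # assumes the export is saved as firefox-passwords.xml (adjust as needed)
        import csv
        import xml.etree.ElementTree as ET

        tree = ET.parse("firefox-passwords.xml")
        with open("passwords.csv", "w", newline="") as out:
            writer = csv.writer(out)
            # column order is an assumption; map columns in the importer as needed
            writer.writerow(["Title", "Username", "Password", "URL"])
            for entry in tree.getroot().iter("entry"):
                writer.writerow([
                    entry.get("host", ""),      # use the host as the entry title
                    entry.get("user", ""),
                    entry.get("password", ""),
                    entry.get("host", ""),
                ])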

    Read the article

  • Apache Probes -- what are they after?

    - by Chris_K
    The past few weeks I've been seeing more and more of these probes each day. I'd like to figure out what vulnerability they're looking for but haven't been able to turn anything up with a web search. Here's a sample of what I get in my morning Logwatch emails:

        A total of XX possible successful probes were detected (the following URLs contain strings that match one or more of a listing of strings that indicate a possible exploit):
        /MyBlog/?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        /?option=com_myblog&Itemid=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 301
        /index2.php?option=com_myblog&item=12&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200
        //index2.php?option=com_myblog&Itemid=1&task=../../../../../../../../../../../../../../../proc/self/environ%00 HTTP Response 200

    This is coming from a current CentOS 5.4 / Apache 2 box with all updates. I've manually tried entering a few in to see what they get, but those all appear to just return the site's home page. This server is just hosting a few Joomla! sites... but this doesn't seem to be targeting Joomla (as far as I can tell). Anyone know what they're probing for? I just want to make sure whatever it is I've got it covered (or not installed). The escalation of these entries has me a bit concerned.
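
    These look like local-file-inclusion probes against the com_myblog Joomla component (the task parameter tries to traverse to /proc/self/environ). As a defensive stopgap, and only as a sketch assuming mod_rewrite is enabled and .htaccess overrides are allowed, requests carrying that telltale string can simply be refused:

        RewriteEngine On
        # reject any query string that tries to reach /proc/self/environ or traverse upward
        RewriteCond %{QUERY_STRING} proc/self/environ [NC,OR]
        RewriteCond %{QUERY_STRING} \.\./\.\./ [NC]
        RewriteRule .* - [F,L]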

    Read the article

  • Bad request - Invalid Hostname Error when using ARR IP address

    - by syloc
    I'm trying to set up a simple ARR system. I have one ARR machine load balancing between two app servers. I can reach the app sites if I use the server name of the ARR machine (http://arrserver/app), but I can't when I use its IP address (http://10.7.10.25/app); that gives "Bad Request - Invalid Hostname". On the ARR machine I configured the default site's bindings to "All Unassigned", port 80 (the default values). Do I need to change the binding rule or add additional URL rewrite rules? Also, on the ARR server http://127.0.0.1/app doesn't work, but http://localhost/app works fine. Thanks in advance.

    Read the article

  • WordPress - Open Link in New Windows Default

    - by ninjaboi21
    Hey SuperUsers, is it possible to make some sort of change, or add a plugin, that makes every external link open in a new window? Just an example: if my blog were called http://timmy.com/ and I linked to http://tom.com/, it should open in a new window instead of the same window, but a link from http://timmy.com/ to http://timmy.com/somewhere/ should open in the same window. So I want every external link from my website to another site to open in a new window/tab, but every internal link, from my website to somewhere else on my website, to stay in the same window/tab. Is this possible?

    Read the article

  • Django HttpResponseRedirect acting as proxy rather than 302

    - by Trevor Burnham
    I have a Django method that returns

        return HttpResponseRedirect("/redirect-target")

    When running the server locally, if I visit the page that returns that redirect, I get the log output

        [17/Oct/2013 15:26:02] "GET /redirecter HTTP/1.1" 302 0
        [17/Oct/2013 15:26:02] "GET /redirect-target HTTP/1.1" 404 0

    as expected. But when I visit that page in Chrome, the Network tab shows the request to /redirecter with the response from /redirect-target, rather than showing the 302. cURL does the same:

        $ curl -I -X GET http://localhost/redirecter
        HTTP/1.1 404 Not Found
        date: Thu, 17 Oct 2013 19:32:30 GMT
        connection: keep-alive
        transfer-encoding: chunked

    In production, the same Django code does show a 302 redirect in Chrome and cURL. What could be going on here? Is there some kind of Django setting that might be causing it to proxy the target rather than send a redirect when HttpResponseRedirect is used (but lie about it in the log)? Or is there a quirk on my system (OS X) that might cause localhost redirects to behave this way?
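
    One way to narrow this down (a sketch; the port and hostnames are assumptions) is to note that the curl above hits port 80, not the Django dev server's usual :8000, so something else may be answering, and to ask curl to show each hop explicitly:

        # hit the dev server directly on its own port and report the final hop
        curl -sL -o /dev/null -w '%{http_code} %{url_effective}\n' http://127.0.0.1:8000/redirecter
        # compare with the hostname/port the browser actually uses
        curl -v http://localhost/redirecter 2>&1 | grep -E '^(> GET|< HTTP|< [Ll]ocation)'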

    Read the article

  • Apache on Mac Mavericks issue

    - by Michael
    Trying to run Apache so that I can create a testing server on my Mac. When I start Apache it starts, but it doesn't run (no connection to localhost). I'll paste the terminal output; you'll see that after starting there are no httpd processes, and I did a check to show you what was running on my port 80, though I don't entirely know what that means.

        Michaels-MacBook-Pro-3:~ michaelramos$ sudo apachectl start
        Michaels-MacBook-Pro-3:~ michaelramos$ ps aux | grep httpd
        michaelramos 348 0.0 0.0 2442000 624 s000 S+ 8:51AM 0:00.00 grep httpd
        Michaels-MacBook-Pro-3:~ michaelramos$ sudo apachectl start
        org.apache.httpd: Already loaded
        Michaels-MacBook-Pro-3:~ michaelramos$ sudo lsof -i ':80'
        COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
        ocspd 96 root 18u IPv4 0x8402f926599c58df 0t0 TCP dhcp-92-67.radford.edu:49267->108.162.232.196:http (ESTABLISHED)
        ocspd 96 root 20u IPv4 0x8402f926599c58df 0t0 TCP dhcp-92-67.radford.edu:49267->108.162.232.196:http (ESTABLISHED)
        ocspd 96 root 21u IPv4 0x8402f926599c50f7 0t0 TCP dhcp-92-67.radford.edu:49268->108.162.232.206:http (ESTABLISHED)
        ocspd 96 root 23u IPv4 0x8402f926599c50f7 0t0 TCP dhcp-92-67.radford.edu:49268->108.162.232.206:http (ESTABLISHED)
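
    When launchd reports httpd as loaded but no process sticks around, httpd usually exited during startup. A few checks worth running (a sketch; the log path is the OS X default and may differ):

        # syntax-check the config apachectl would load
        sudo apachectl configtest
        # see why the daemon exited right after launchd started it
        tail -n 30 /var/log/apache2/error_log
        # run httpd in the foreground so startup errors print to the terminal
        sudo httpd -X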

    Read the article

  • Nginx + Wordpress Multisite 3.4.2 + subdirectories + static pages and permalinks

    - by UrkoM
    I am trying to set up WordPress Multisite, using subdirectories, with Nginx, php5-fpm, APC, and Batcache. Like many other people, I am getting stuck on the rewrite rules for permalinks. I have followed these two guides, which seem to be as official as you can get:

        http://evansolomon.me/notes/faster-wordpress-multisite-nginx-batcache/
        http://codex.wordpress.org/Nginx#WordPress_Multisite_Subdirectory_rules

    It is partially working: http://blog.ssis.edu.vn works, and http://blog.ssis.edu.vn/umasse/ works. But other permalinks, like these two to a post or to a static page, don't work:

        http://blog.ssis.edu.vn/umasse/2008/12/12/hello-world-2/
        http://blog.ssis.edu.vn/umasse/sample-page/

    They either take you to a 404 error, or to some other blog! Here is my configuration:

        server {
            listen 80 default_server;
            server_name blog.ssis.edu.vn;
            root /var/www;
            access_log /var/log/nginx/blog-access.log;
            error_log /var/log/nginx/blog-error.log;

            location / {
                index index.php;
                try_files $uri $uri/ /index.php?$args;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # Add trailing slash to */username requests
            rewrite ^/[_0-9a-zA-Z-]+$ $scheme://$host$uri/ permanent;

            # Directives to send expires headers and turn off 404 error logging.
            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 24h;
                log_not_found off;
            }

            # this prevents hidden files (beginning with a period) from being served
            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }

            # Pass uploaded files to wp-includes/ms-files.php.
            rewrite /files/$ /index.php last;
            if ($uri !~ wp-content/plugins) {
                rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last;
            }

            # Rewrite multisite '.../wp-.*' and '.../*.php'.
            if (!-e $request_filename) {
                rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last;
                rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last;
                rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last;
            }

            location ~ \.php$ {
                # Forbid PHP on upload dirs
                if ($uri ~ "uploads") {
                    return 403;
                }
                client_max_body_size 25M;
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    Any ideas are welcome! Have I done something wrong? I have disabled Batcache to see if it makes any difference, but still no go.

    Read the article

  • Store profile image of all users into single directory or per subdirectory id?

    - by Luccas
    I'm using Amazon S3 as storage for user profile pictures. I see that many websites generate large random filenames and put them all into the same root directory, like:

        http://xxx.us-east-1.amazonaws.com/aHR0cHM6Ly9mYmNkbi1wcm9maWxlLWEuYWthbWFpaGQubmV0L2hwcm9maWxlLWFrLWFzaDIvMjczMzkxXzEwMDAwMDMxMjAxMzg5OV81NTk3MjM4Mzdfbi5qcGc.jpg

    And my question is: what are the pros and cons of that approach? If I place them into different directories instead, what problems will I have in the future?

        http://xxx.us-east-1.amazonaws.com/users/id/username.jpg
        or
        http://xxx.us-east-1.amazonaws.com/users/id/random_number.jpg

    Thanks!
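
    Worth noting: S3 keys live in a flat namespace, so the "/" in a key is just part of the name; the choice is mostly about how guessable and how easy to administer the URLs are. A small sketch of one middle ground, prefixing by user id but keeping the filename unguessable (the function and key layout are hypothetical, not an S3 requirement):

        import secrets

        def profile_pic_key(user_id: int) -> str:
            # prefix by user id for easy housekeeping, random suffix so URLs can't be guessed
            return "users/%d/%s.jpg" % (user_id, secrets.token_urlsafe(16))

        print(profile_pic_key(42))   # e.g. users/42/h1XkQ9v3...jpg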

    Read the article

  • setting up a sub domain on windows hosting

    - by jason
    I'm trying to set up a subdomain for development on a Windows server and am having problems setting the correct details in the httpd.ini file; I hoped someone could help. I have set up the subdomain http://dev.website.com. The files that I want to use for this subdomain are on the server in a folder called development (http://www.website.com/development); in the directory structure they are in /htdocs/development. What do I need to add to the httpd.ini file to point http://dev.website.com to the files located in the /htdocs/development folder on the server?

    Read the article

  • 403 forbidden error from cron

    - by user112570
    I have some PHP code that runs fine in a browser, but now I want to execute the same code from a cron script and I'm getting issues. I tried this command in cron:

        wget -O /dev/null http://www.mydomain.com/test.php

    but if I try that in the terminal I get the error below. What is the correct command to run a PHP file from cron? And do I need to add an extra line of code to the top of my PHP file? The problem I'm getting is:

        -bash-3.2$ wget -O /dev/null http://www.mydomain.com/test.php
        --2012-04-08 15:59:41-- http://www.mydomain.com/test.php
        Resolving www.mydomain.com... 46.***.***.1
        Connecting to www.mydomain.com|46.***.***.1|:80... connected.
        HTTP request sent, awaiting response... 403 Forbidden
        2012-04-08 15:59:41 ERROR 403: Forbidden.

    I gave the file 755 permissions and even 777 permissions, but can't see what I'm doing wrong.
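
    Note that the 403 is returned by the web server, so filesystem permissions on test.php are probably not the culprit. If the script doesn't strictly need to run through the web server, the usual alternative is to call the PHP CLI directly from the crontab; a sketch (paths are assumptions, check `which php` on the host):

        # run the script every night at 02:30 via the PHP command-line interpreter
        30 2 * * * /usr/bin/php /var/www/html/test.php >> /var/log/test-cron.log 2>&1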

    Read the article

  • IP tables blocking access to most hosts but some accesses being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only and I set up iptables using the following:

        IPS="list of IPs"
        iptables --new-chain webtest
        # Accept all established connections
        iptables -A INPUT --protocol tcp --dport 80 --jump webtest
        iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
        iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT
        for ip in $IPS; do
            iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
        done
        iptables -A webtest --jump DROP

    However, looking at my Apache logs I notice various entries in access_log, e.g.

        221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
        207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"

    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through). I suppose they must be getting through using the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match state rule is there. So, two questions: is there a better way to set up iptables to restrict access to specified hosts? How exactly are these three examples slipping through?
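
    For comparison, a more conventional way to express "port 80 only from these hosts" is a default-deny INPUT policy with explicit accepts. This is only a sketch, not a drop-in replacement: the IPs are hypothetical, it assumes nothing else needs inbound access (SSH included), and you should have console access in case you lock yourself out.

        #!/bin/sh
        IPS="192.0.2.10 192.0.2.11"      # hypothetical allowed hosts

        iptables -F INPUT
        iptables -P INPUT DROP                                      # default deny
        iptables -A INPUT -i lo -j ACCEPT                           # loopback
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        for ip in $IPS; do
            iptables -A INPUT -p tcp --dport 80 -s "$ip" -m state --state NEW -j ACCEPT
        done
        # everything else falls through to the DROP policy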

    Read the article

  • What is the best way to recover when your RAID H/W incorrectly thinks a disk is missing

    - by Software Monkey
    I have a Windows 7 system with an MSI motherboard (running the latest AMD BIOS) and two of my four disks (not the system boot disk) configured via the mobo as RAID-1. After a normal system restart today, the RAID BIOS reports that one of the two drives has been disconnected or has failed. It hasn't really failed; via recovery tools I can verify that if I take the BIOS out of RAID mode. But I can find no way to re-add the second hard disk to the array and rebuild via the BIOS; the only option seems to be to delete the array and recreate it, but I've done that once before and it blows away the disk. It has done this once before, but on a subsequent reboot, after double-checking the drive cabling (but not changing anything), it booted up fine. So I think the mobo RAID is a little bit flaky.

    At this point I would like to remove the RAID drivers, change to AHCI mode and switch over to using a Windows 7 dynamic mirror disk. But the RAID drivers seem somehow deeply bound into the Windows startup; I can't find anything like the good ol' safe mode in Windows 7. If I boot from the Win 7 install disk in AHCI mode I can use recovery tools to log in to the Windows 7 installation, so the boot drive seems fine with AHCI mode. Additionally, I can see all my other disks, run chkdsk on them and they seem to be fine. If I try to boot from the HDD in AHCI mode, it just reboots part way through, presumably because the RAID drivers load and conflict with the BIOS being set to AHCI. So:

    1. How do I strip the RAID drivers from my Win 7 installation?
    2. If I delete the RAID logical disk, will it really delete partitioning information, or is that just a poorly worded message when it says the data on the disk will be deleted?
    3. If I disconnect the two disks in the RAID array, then delete the logical disk array, and then reconnect and reboot still in RAID mode, will the disks simply revert to RAID single-disks like my other two? Then maybe I can leave Windows with the RAID drivers and operate the disks as singles, with two of them in a Windows dynamic disk mirrored setup.
    4. Does Windows 7 have anything like the Windows XP Repair Install, where it will reinstall the O/S binaries from CD, but leave apps and setup alone?

    I am really hoping I don't have to do a complete reinstall of Windows 7; the last one, when I upgraded from XP, took me two days to get everything set up and installed.

    Read the article

  • Get Jira to run on shared Windows server on port 80

    - by codeulike
    I know this can be done on Linux with JIRA, using mod_proxy, but I'm not sure if it's possible on Windows. Say we have a Windows server running IIS 7.0 and serving up pages on port 80, via an address like http://twiddle.something.com. We then install JIRA on the server; it uses its bundled Apache web server to serve stuff up on port 8080, like this: http://twiddle.something.com:8080. Is there a way to configure IIS and Apache so that JIRA runs off a port 80 folder, as in:

        http://twiddle.something.com       still hits IIS
        http://twiddle.something.com/Jira  hits JIRA on Apache?

    Thanks. Edit: I guess we might also want to throw SSL into the mix for JIRA too...
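
    For reference, the Linux-style mod_proxy setup the question alludes to looks roughly like this (a sketch; the /jira context path and localhost port are assumptions, JIRA must be configured with the same context path, and mod_proxy/mod_proxy_http must be loaded):

        # inside the port-80 VirtualHost that currently serves the main site
        ProxyRequests Off
        ProxyPass        /jira http://localhost:8080/jira
        ProxyPassReverse /jira http://localhost:8080/jira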

    Read the article

  • How to download a url as a file?

    - by Michelle
    A website has "hidden" some MP3 files by embedding them as Shockwave files, as follows:

        <span class="caption"><!-- Odeo player --><embed src="http://odeo.com/flash/audio_player_tiny_gray.swf" quality="high" name="audio_player_tiny_gray" align="middle" allowScriptAccess="always" wmode="transparent" type="application/x-shockwave-flash" flashvars="valid_sample_rate=true external_url=http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3" pluginspage="http://www.macromedia.com/go/getflashplayer"></embed></span>

    How can I download the files for off-line listening? I've found two methods:

    1. The Stack Overflow method: create a new local HTML file with just the links, e.g.

        <a href="http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3">Sunday Edition 25Nov2008</a>

       then open the file in the browser, right-click the link and choose Save Link As.

    2. The Super User method: install the Firefox add-in Iget (be sure to use the right version for your Firefox version), then go to Tools > Downloads and enter the URL in the field.

    Are there any other ways?
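
    Since the real MP3 address sits in plain sight in the flashvars external_url parameter, any command-line downloader can fetch it directly; for example, using the URL from the snippet above:

        wget http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3
        # or, with curl, keeping the original filename
        curl -O http://podcast.cbc.ca/mp3/sundayeditionstream_20081125_9524.mp3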

    Read the article

  • Reverse proxy for a subdirectory in nginx

    - by Maple
    I want to set up a reverse proxy on my VPS for my Heroku app (http://lovemaple.heroku.com), so that if I visit mysite.com/blog I get the content of http://lovemaple.heroku.com. I followed the instructions on the Apache wiki:

        location /couchdb {
            rewrite /couchdb/(.*) /$1 break;
            proxy_pass http://localhost:5984;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    I changed it to fit my situation:

        location /blog {
            rewrite /blog/(.*) /$1 break;
            proxy_pass http://lovemaple.heroku.com;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    When I visit mysite.com/blog, the page shows up, but the JS/CSS files cannot be fetched (404). Their links become mysite.com/style.css rather than mysite.com/blog/style.css. What's wrong and how can I fix it?
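
    The 404s happen because the proxied app emits root-relative links (/style.css), which the browser then requests from mysite.com without the /blog prefix. One hedged workaround, assuming a recent nginx built with the sub module (not always compiled in), is to rewrite those links on the way through; the Host header line is also an assumption about how Heroku routes requests:

        location /blog {
            rewrite /blog/(.*) /$1 break;
            proxy_pass http://lovemaple.heroku.com;
            proxy_set_header Host lovemaple.heroku.com;   # Heroku routes by Host header
            proxy_set_header Accept-Encoding "";          # sub_filter can't rewrite gzipped bodies
            sub_filter 'href="/' 'href="/blog/';
            sub_filter 'src="/'  'src="/blog/';
            sub_filter_once off;
        }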

    Read the article

  • ssl_error_log apache issue

    - by lakshmipathi
    https://localhost works but https://ipaddress doesn't.

        [root@space httpd]# cat logs/ssl_error_log
        [Mon Aug 02 19:04:11 2010] [error] [client 192.168.1.158] (13)Permission denied: access to /ajaxterm denied
        [root@space httpd]# cat logs/ssl_access_log
        192.168.1.158 - - [02/Aug/2010:19:04:11 +0530] "GET /ajaxterm HTTP/1.1" 403 290
        [root@space httpd]# cat logs/ssl_request_log
        [02/Aug/2010:19:04:11 +0530] 192.168.1.158 SSLv3 DHE-RSA-CAMELLIA256-SHA "GET /ajaxterm HTTP/1.1" 290

    httpd.conf file:

        NameVirtualHost *:443
        <VirtualHost *:443>
            ServerName localhost
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/ca.crt
            SSLCertificateKeyFile /etc/pki/tls/private/ca.key
            <Directory /usr/share/ajaxterm>
                Options FollowSymLinks
                AllowOverride None
                Order deny,allow
                Allow from All
            </Directory>
            DocumentRoot /usr/share/ajaxterm
            DirectoryIndex ajaxterm.html
            ProxyRequests Off
            <Proxy *>
                # Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass /ajaxterm/ http://localhost:8022/
            ProxyPassReverse /ajaxterm/ http://localhost:8022/
            ErrorLog error_log.log
            TransferLog access_log.log
        </VirtualHost>

    How do I fix this?
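
    One thing worth checking (a sketch, not a confirmed diagnosis): the ProxyPass only matches URLs with the trailing slash, so "GET /ajaxterm" falls through to the filesystem instead of the backend. A redirect inside the vhost makes the two match up:

        # mod_alias: send /ajaxterm (no trailing slash) to /ajaxterm/ so it matches the ProxyPass
        RedirectMatch ^/ajaxterm$ /ajaxterm/

    Separately, on an SELinux-enforcing host the "(13)Permission denied" can also come from Apache being blocked from opening the backend connection; if so, `setsebool -P httpd_can_network_connect 1` is the usual fix (an assumption, not confirmed from the logs above).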

    Read the article

  • mod_rewrite not working for subdomain in Apache2

    - by Matt
    Hi, I'm having some trouble with mod_rewrite. I'm implementing it through .htaccess, and I can get it working on my main vhost, domain.com: what I want it to do is rewrite http:// domain.com to force it to https:// domain.com, which it does well. I want to have name-based vhosts for the one IP with the following redirects (I'm breaking up domain names with a space because otherwise Server Fault recognises them as links):

        http:// domain.com          ->  https:// domain.com
        http:// staging.domain.com  ->  https:// staging.domain.com
        http:// test.domain.com     ->  https:// test.domain.com
        http:// beta.domain.com     ->  https:// beta.domain.com

    domain.com redirects to https:// domain.com, but staging.domain.com doesn't, although I can access https:// staging.domain.com. The .htaccess is identical for both, just with the domain name different. It doesn't seem to do any rewriting at all for staging.domain.com; I've tested this by trying to get it to rewrite to www.google.com. I have a wildcard DNS record, *.domain.com, which points to the domain IP. Is there a particular way I should have the virtual hosts configured to allow this? I keep reading in the Apache documentation that it doesn't support multiple SSL name-based vhosts, but I can access both https:// domain.com and https:// staging.domain.com just fine. Any thoughts? Thanks to everyone for your help with this.
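
    For what it's worth, a host-agnostic force-HTTPS rule (a sketch; it still assumes each request reaches a vhost where the .htaccess applies) avoids keeping a per-subdomain copy of the rules:

        RewriteEngine On
        # redirect anything that arrived over plain HTTP to the same host and path over HTTPS
        RewriteCond %{HTTPS} !=on
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]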

    Read the article

  • Performance issues with new dedicated server [closed]

    - by Pierre Espenan
    I have just subscribed to a new dedicated server and am getting worse than expected PHP execution performance. Execution times are twice as high as on my old shared server! I'm definitely not an expert at server management, so I'm wondering what I missed. Here is some material that can help you understand what's wrong here:

        My server (in French but easy to understand): http://www.online.net/fr/serveur-dedie/dedibox-sc
        phpinfo() output: http://jsfiddle.net/E8b7W/embedded/result/
        PHP bench script (dedicated server): http://jsfiddle.net/EhXzK/embedded/result/
        PHP bench script (old shared server): http://jsfiddle.net/ANbWt/embedded/result/

    Is it normal to get such poor performance after a kernel update and a basic "apt-get install" for apache2 and php? Thanks!

    Read the article

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7 32-bit) to help me with this process: I keep my documents, music, video clips, movies and pictures on my hard disk. They are not scattered around the system; everything is inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk: files deleted from C:\Senthil\ should be deleted there, new files should be copied, and so on. At the end of the process, every bit inside the source folder on my internal disk should be on my external disk.

    A couple of important requirements and points:

    - I do NOT need multiple or historic versions of my files. I only want the latest copy to be present in my "backup".
    - Incremental backup makes sense: if files were not touched since the last backup, they need not be copied again.
    - The size of my folder will run into GBs and in a year or two into TBs, but I will make sure the external HDD is equal to or bigger than my source folder.
    - I do not want it to run automatically, because if I accidentally delete a file in my source, it will delete the one in the backup (I know this is why we have versioning facilities). I just want to run it manually so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in the above situation.

    Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D

    The main reason I am asking is that I am a software developer and could write this myself, but I am a little constrained by time at the moment and want to know if there is an existing program that can do it. Kindly don't worry about earthquakes, fires or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because: I will have bigger things to worry about than my holiday memories; I don't think I will digitally store any life-ruining documents; and this backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have, not to protect it against the end of the world. I am more worried about power surges frying my system, hard disk failure, children who merrily hit Delete, teens who hit Shift + Delete, or myself getting a little careless at times!

    In short: is there a file/folder syncing program that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)
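
    One built-in candidate on Windows 7 is robocopy's mirror mode, which copies new and changed files and deletes files that no longer exist in the source; run manually, it behaves like the weekly sweep described. A sketch (the E: drive letter and log path are assumptions):

        :: mirror C:\Senthil to the external disk; /MIR = copy subtree and delete extras at the destination
        robocopy C:\Senthil E:\Senthil /MIR /R:2 /W:5 /LOG:E:\backup-log.txt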

    Read the article

  • apt-mirror does not mirror the i18n directory

    - by Fred
    I need to set up a local Ubuntu mirror so the whole network doesn't need to hit remote servers in order to update and install new packages. Following a brief tutorial found here, I managed to get a server up and running that correctly mirrors packages from the main and restricted categories. However, when I call apt-get update on a client, I get a couple of errors such as:

        Ign http://192.168.1.18 karmic/main Translation-fr
        Ign http://192.168.1.18 karmic/restricted Translation-fr

    Checking back on the server, I see that apt-mirror only took the binary-amd64 directory of the mirror, and didn't take i18n, which would provide Translation-fr. The manpage for apt-mirror doesn't say anything about i18n, and Google is of no help either. How do I properly mirror i18n? My current mirror.list file is as follows:

        ############# config ##################
        #
        # set base_path /var/spool/apt-mirror
        #
        # if you change the base path you must create the directories below with write privileges
        #
        # set mirror_path $base_path/mirror
        # set skel_path $base_path/skel
        # set var_path $base_path/var
        # set cleanscript $var_path/clean.sh
        # set defaultarch <running host architecture>
        # set postmirror_script $var_path/postmirror.sh
        set run_postmirror 0
        set nthreads 20
        set _tilde 0
        #
        ############# end config ##############

        deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic main restricted
        deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic-updates main restricted

        clean http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive
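
    apt-mirror of that era simply doesn't fetch the dists/*/i18n indexes; a common workaround is to let its postmirror hook grab them (set run_postmirror 1 and uncomment postmirror_script above). A rough sketch of such a hook, assuming the default base_path, the mirror URL from the config above, and that only French translations are wanted; the exact Translation filenames should be checked against the upstream archive:

        #!/bin/sh
        # hypothetical postmirror.sh: pull the Translation indexes apt-mirror skips
        BASE=http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive
        DEST=/var/spool/apt-mirror/mirror/mirror.cc.columbia.edu/pub/linux/ubuntu/archive

        for dist in karmic karmic-updates; do
            for comp in main restricted; do
                mkdir -p "$DEST/dists/$dist/$comp/i18n"
                wget -q -P "$DEST/dists/$dist/$comp/i18n" \
                    "$BASE/dists/$dist/$comp/i18n/Translation-fr.bz2"
            done
        done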

    Read the article

  • Why is Google Chrome making these weird requests?

    - by mackenir
    I noticed that now and again, Chrome will make three requests to weird hostnames that aren't resolved. For example:

        HEAD http://gtblynlsos/ HTTP/1.1
        Host: gtblynlsos
        Proxy-Connection: keep-alive
        Content-Length: 0
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.107 Safari/534.13
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-GB,en-US;q=0.8,en;q=0.6
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Other example weird hostnames: http://nxibbjklov/ and http://moheksgryj/. Has anyone else seen this? Any ideas what's going on? I have all Chrome extensions disabled.

    Read the article

  • 404 when doing safe-upgrade on a Lucid 64 box?

    - by Millisami
    Why do I see 404s when doing sudo aptitude safe-upgrade on my Lucid 64-bit box?

        deploy@li167-251:~$ sudo aptitude safe-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Reading extended state information
        Initializing package states... Done
        The following packages will be upgraded:
          apache2 apache2-mpm-prefork apache2-threaded-dev apache2-utils apache2.2-bin apache2.2-common apt apt-utils base-files binutils bzip2 dpkg dpkg-dev gzip ifupdown krb5-multidev language-pack-en language-pack-en-base language-selector-common libatk1.0-0 libatk1.0-dev libavahi-client3 libavahi-common-data libavahi-common3 libbz2-1.0 libc-bin libc-dev-bin libc6 libc6-dev libc6-i686 libcups2 libfreetype6 libfreetype6-dev libglib2.0-0 libglib2.0-dev libgssapi-krb5-2 libgssrpc4 libgtk2.0-0 libgtk2.0-common libgtk2.0-dev libk5crypto3 libkadm5clnt-mit7 libkadm5srv-mit7 libkdb5-4 libkrb5-3 libkrb5-dev libkrb5support0 libldap-2.4-2 libldap2-dev libmysqlclient-dev libmysqlclient16 libnotify-dev libnotify1 libpam-modules libpam-runtime libpam0g libparted0debian1 libpng12-0 libpng12-dev libpq-dev libpq5 libssl-dev libssl0.9.8 libtiff4 libudev0 libusb-0.1-4 linux-libc-dev mountall mysql-client mysql-client-5.1 mysql-client-core-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1 openssh-client openssh-server openssl parted python-apt sudo tzdata udev upstart ureadahead wget xulrunner-1.9.2 xulrunner-1.9.2-dev
        The following packages are RECOMMENDED but will NOT be installed:
          colibri debhelper fakeroot hicolor-icon-theme libatk1.0-data libglib2.0-data libgtk2.0-bin libhtml-template-perl manpages-dev notification-daemon notify-osd ssl-cert xauth xfce4-notifyd
        88 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 85.8MB of archives. After unpacking 1712kB will be used.
        Do you want to continue? [Y/n/?] y
        Writing extended state information... Done
        Get:1 http://security.ubuntu.com/ubuntu/ lucid-updates/main libpam-modules 1.1.1-2ubuntu5 [358kB]
        Get:2 http://security.ubuntu.com/ubuntu/ lucid-updates/main base-files 5.0.0ubuntu20.10.04.2 [70.2kB]
        Get:3 http://security.ubuntu.com/ubuntu/ lucid-updates/main gzip 1.3.12-9ubuntu1.1 [102kB]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc-bin 2.11.1-0ubuntu7.2
          404 Not Found [IP: 91.189.88.37 80]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6 2.11.1-0ubuntu7.2
          404 Not Found [IP: 91.189.88.37 80]
        Err http://security.ubuntu.com/ubuntu/ lucid-updates/main libc6-i686 2.11.1-0ubuntu7.2
        .........
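
    A 404 while fetching packages commonly means the package index on the client is stale: the version it lists has been superseded on the archive. The usual first step (a sketch) is to refresh the lists and retry:

        # refresh the package indexes, then retry the upgrade
        sudo aptitude update && sudo aptitude safe-upgrade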

    Read the article

  • Apache mod_proxy to another server

    - by trobrock
    I am using proxy_balancer in Apache 2 to proxy requests for a Rails application to the Rails server on the port the application is running on. This is how it's set up.

    Rails server: Mongrel running on port 8000. When accessing the URL directly at http://rails_server:8000 the site loads fine.

    Apache server: conf file for the site:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName myserver.com
            ServerAlias application.myserver.com
            <Proxy balancer://application_cluster>
                Allow from localhost
                BalancerMember http://ip.to.server:8000 retry=10
            </Proxy>
            ProxyPass / balancer://application_cluster
        </VirtualHost>

    The problem I am having is that going to http://rails_server:8000 works fine, but going to http://application.myserver.com loads the right content yet displays all the HTML as text instead of rendering it as HTML.
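
    HTML showing up as plain text usually points at the Content-Type header being lost or changed along the way; comparing the direct and proxied responses makes that easy to confirm (a sketch using the hostnames from the question):

        # response straight from Mongrel
        curl -sI http://rails_server:8000/ | grep -i '^Content-Type'
        # response through the Apache balancer
        curl -sI http://application.myserver.com/ | grep -i '^Content-Type'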

    Read the article

  • Nginx redirect one path to another

    - by SteveEdson
    I'm sure this has been asked before, but I can't find a solution that works. A website has switched CMS services but has kept the same domain; how do I set up an nginx rewrite for a single page? E.g.

        Old page: http://sitedomain.co.uk/content/unique-page-name
        New page: http://sitedomain.co.uk/new-name/unique-page-name

    Please note, I don't want everything within the content path to be redirected, just the URL mentioned above. I have about nine redirects to set up, none of which fit a pattern. Thanks!

    Edit: I found this solution, which seems to be working, except that it redirects without a slash:

        if ( $request_filename ~ content/unique-page-name/ ) {
            rewrite ^ http://sitedomain.co.uk/new-name/unique-page-name/? permanent;
        }

    But this redirects to:

        http://sitedomain.co.uknew-name/unique-page-name/
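
    For a handful of one-off redirects, exact-match locations are usually simpler than if/rewrite blocks; a sketch using the URLs from the question (repeat one pair of blocks per page):

        location = /content/unique-page-name {
            return 301 http://sitedomain.co.uk/new-name/unique-page-name;
        }
        location = /content/unique-page-name/ {
            return 301 http://sitedomain.co.uk/new-name/unique-page-name/;
        }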

    Read the article

  • Static file download from browser breaking in varnish but works fine in Apache

    - by Ron
    I would first like to thank everyone at Server Fault for this great website; I often land here from Google while searching for various server-related issues and setups. I have an issue today, so I am posting here and hope the seniors will help me out.

    I set up a website on a dedicated server a few days ago, with Varnish 3 as the frontend to Apache 2 on a Debian Lenny server, as the traffic was a bit high. There are several static file downloads of around 10-20 MB on the website. The website looked fine over the last few days after I set it up; I was checking from a 5 Mbps+ broadband connection, and the file downloads completed in seconds and worked fine. But today I realized that on a slow internet connection the file downloads were breaking off. When I tried to download the files using a browser, the download broke off after a minute or so, and it kept happening again and again, so it had nothing to do with the internet connection. The connection was around 512 kbps, so not dial-up speed either, but a decent speed where files should download easily, though not that fast.

    Then I tried the Apache backend port directly to check whether the problem still occurs. On adding the Apache port to the static file download URL, the files downloaded easily and did not break even once. I tried it several times to make sure it was not a coincidence, but every time I used the Apache port in the download URL it downloaded fine, while it broke each time with the normal link, which is routed through Varnish I suppose. So it seems Varnish has somehow resulted in the broken file downloads. Could anyone give any idea as to why it is happening and how to fix the problem?

    For more clarification, take this example: Apache backend set on port 8008, Varnish frontend set on port 80. Now when I download, say,

        http://mywebsite.com/directory/filename.extension

    the download breaks off after a minute or so. I cannot be sure it is due to the time or the size; I am just assuming, and maybe there is some other reason. But when I download using

        http://mywebsite.com:8008/directory/filename.extension

    the file download does not break at all and completes fine. So it seems that Varnish is somehow causing the file download to break, and not Apache. Does anybody have any idea why this is happening and how it can be fixed? Any help would be highly appreciated. My varnish default.vcl is:

        backend apache {
            set backend.host = "127.0.0.1";
            set backend.port = "8008";
        }

        sub vcl_deliver {
            remove resp.http.X-Varnish;
            remove resp.http.Via;
            remove resp.http.Age;
            remove resp.http.Server;
            remove resp.http.X-Powered-By;
        }
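
    One plausible suspect (an assumption, not a confirmed diagnosis) is Varnish's send_timeout runtime parameter, which disconnects clients that take too long to receive a response; on a 512 kbps line a 10-20 MB file can easily exceed a low value, which would match the "breaks after about a minute" symptom. It can be inspected and raised with `varnishadm param.show send_timeout` and `varnishadm param.set send_timeout 7200` (or via -p in the startup options). Alternatively, large downloads can be handed straight to Apache without caching; a sketch for default.vcl, where the extension list is an assumption:

        sub vcl_recv {
            if (req.url ~ "\.(zip|mp3|iso|pdf|exe)$") {
                return (pipe);
            }
        }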

    Read the article
