Search Results

Search found 5380 results on 216 pages for 'webmasters'.


  • Configure PHP and Apache in Windows 7

    - by manxing
    I installed the Apache server successfully on Windows 7 (32-bit). It showed "It works" on the webpage. I also created an info.php file containing <?php phpinfo(); ?>. But when I try to open http://localhost/info.php in the browser, all I get is the literal text <?php phpinfo(); ?> in plain text. I restarted the Apache server every time I made changes. Can anyone help with this? Many thanks in advance!
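
    When PHP source comes back verbatim like this, Apache is usually serving .php files as plain text because no PHP handler is configured. A minimal httpd.conf sketch for a module-based Windows install; the DLL name and paths below are assumptions and depend on the PHP version and build actually installed:

        # Load the PHP module shipped with the Windows PHP build (hypothetical path/version)
        LoadModule php5_module "C:/php/php5apache2_2.dll"
        # Hand .php files to the PHP engine instead of serving them as text
        AddHandler application/x-httpd-php .php
        # Tell PHP where its php.ini lives (hypothetical path)
        PHPIniDir "C:/php"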

    Read the article

  • Trying to Host Server for External Access - Apache, VirtualBox & Portforwarding

    - by Tspoon
    Banging my head on the wall at this stage... trying to host my Apache site on Ubuntu 12.10 with VirtualBox, running on a Windows 8 host. Things I've done: ensured Apache is listening on ports 80, 443 and 8080 (for thoroughness):
        tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 3355/httpd
        tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 3355/httpd
        tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3355/httpd
        tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 681/sshd
    The VM is using a bridged network connection; I assigned a static IP to my Ubuntu VM, which can be accessed fine from within the network; I forwarded TCP ports 80, 8080 and 443 to the static IP of the VM on my router; I gave my VM a static NAT address; I turned off the Ubuntu firewall and the router firewall; and I read on forums that my ISP (Eircom) allows port 80 to be used. And I still can't access my site using the WAN/external IP (checked internally and using CanYouSeeMe.org). It says all the ports I mentioned are closed. I'm really at a loss for what to try next... Am I missing something silly here? Note: I haven't assigned a static IP address within the router, only within the VM, and the DHCP server is enabled. Is that bad?

    Read the article

  • setting up mod_proxy - plesk, apache, .htaccess?

    - by sam
    I want to set up mod_proxy so that my blog runs under a subdirectory of my site rather than a subdomain, so I get the SEO backlink benefit. What I'm looking to do is get my Tumblr blog, which is running at blog.mysite.com (which is in turn mapped from myblog.tumblr.com), to run at mysite.com/blog. How can I set up mod_proxy to do this? Is it something I can set up from inside my .htaccess file? I've got my site hosted on an Apache server, using Plesk as a control panel. I contacted my web host and they told me mod_rewrite could achieve it. They gave me the following, but said they won't provide further support for mod_rewrite as it's something they don't support:
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example.co.uk/blog$
        RewriteCond %{REQUEST_URI} !/standard
        RewriteRule ^(.*)$ http://example.tumblr.com$1 [R]
        </IfModule>
    Ideally I'd like to use the mod_proxy method, as it is recommended from an SEO point of view in this article: http://www.seomoz.org/blog/what-is-a-reverse-proxy-and-how-can-it-help-my-seo
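
    A minimal sketch of the mod_proxy approach, assuming mod_proxy and mod_proxy_http are enabled on the server; note that ProxyPass only works in the server or virtual-host configuration (in Plesk, typically via its additional Apache directives panel), not in a .htaccess file, and Tumblr's behaviour when it receives a foreign Host header is not guaranteed:

        # Reverse-proxy mysite.com/blog/ to the Tumblr-hosted blog
        ProxyRequests Off
        ProxyPass /blog/ http://myblog.tumblr.com/
        # Rewrite Location/redirect headers coming back from Tumblr so they stay under /blog/
        ProxyPassReverse /blog/ http://myblog.tumblr.com/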

    Read the article

  • Is traditional JavaScript image pre-loading taboo

    - by Evan Plaice
    I remember the good old days (not really), back when I was still sucking the teat of Dreamweaver to build websites, and the lure of playing copypasta with fancy built-in scripts (e.g. image swap) was like black magic. I'm pretty far removed from that nowadays, but I was adapting a small site from its original FrontPage (::cringe::) format to a standard HTML/CSS implementation and couldn't help wondering... should I re-implement the JavaScript image pre-loading in the current version? Or is there a better way? I don't want to block the page from loading by requiring the user to request all the assets within the page via the traditional JavaScript pre-loader method. I value giving the user something to look at ASAP, and there's some potential harm to my Google mojo by doing so. Is there a cleaner solution to prevent unnecessary page reflows during loading? Such as setting static width/height dimensions through a CSS style attribute on the image element.

    Read the article

  • Thousands of 404 errors in Google Webmaster Tools

    - by atticae
    Because of a former error in our ASP.NET application, created by my predecessor and undiscovered for a long time, thousands of wrong URLs were created dynamically. Normal users did not notice it, but Google followed these links and crawled its way through these incorrect URLs, creating more and more wrong links. To make it clearer: the URL example.com/folder should have created the link example.com/folder/subfolder, but it created example.com/subfolder instead. Because of bad URL rewriting, this was accepted and by default showed the index page for any unknown URL, creating more and more links like example.com/subfolder/subfolder/... The problem is resolved by now, but I now have thousands of 404 errors listed in Google Webmaster Tools, which were discovered 1 or 2 years ago, and more keep coming up. Unfortunately the links do not follow a common pattern that I could block from crawling in robots.txt. Is there anything I can do to stop Google from trying out those very old links and to remove the already listed 404s from Webmaster Tools?

    Read the article

  • cURL works but PHP cURL fails to reach the Internet [migrated]

    - by wrk2bike
    Trying to diagnose an issue using PHP to cURL to an Internet location on a Red Hat Linux server. cURL is installed and working, and <?php var_dump(curl_version()); ?> shows all the correct information in the output. The issue is that I can use PHP to cURL to localhost on the box itself, but not to the Internet (see below). Normally I'd suspect the firewall, but I can cURL from the command line to the Internet without a problem. The box can also update its own software packages, etc. What am I missing? My test is:
        <?php
        function http_head_curl($url, $timeout=30) {
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
            curl_setopt($ch, CURLOPT_HEADER, 1);
            curl_setopt($ch, CURLOPT_NOBODY, 1);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $res = curl_exec($ch);
            if ($res === false) {
                throw new RuntimeException("cURL exception: ".curl_errno($ch).": ".curl_error($ch));
            }
            return trim($res);
        }
        // Succeeds, displaying headers
        echo(http_head_curl('localhost'));
        // Fails:
        echo(http_head_curl('www.google.com'));
        ?>

    Read the article

  • Apache mod_rewrite for multiple domains to SSL

    - by Aaron Vegh
    Hi there, I'm running a web service that will allow people to create their own "instances" of my application, running under their own domain. These people will create an A record to forward a subdomain of their main domain to my server. The problem is that my server runs everything under SSL. So in my configuration for port 80, I have the following:
        <VirtualHost *:80>
            ServerName mydomain.com
            ServerAlias www.mydomain.com
            RewriteEngine On
            RewriteCond %{HTTPS} !=on
            RewriteRule /(.*) https://mydomain.com/$1 [R=301]
        </VirtualHost>
    This has worked well to forward all requests from the http: to the https: domain. But as I said, I now need to let any domain automatically forward to the secure version of itself. Is there a rewrite rule that will let me take the incoming domain and rewrite it to the https version of the same? So that the following matches would occur: http://some.otherdomain.com -> https://some.otherdomain.com and http://evenanotherdomain.com -> https://evenanotherdomain.com. Thanks for your help! Apache mod_rewrite makes my brain hurt. Aaron.
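
    One possible sketch that makes the redirect host-generic by reusing whatever Host header the client sent, assuming this *:80 virtual host is the one that catches the customers' domains (certificate coverage for those domains on the HTTPS side is a separate problem):

        RewriteEngine On
        RewriteCond %{HTTPS} !=on
        # %{HTTP_HOST} carries the domain the client actually asked for,
        # so each site redirects to the https version of itself
        RewriteRule ^/?(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]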

    Read the article

  • dns hosting - url forwarding - hiding forwarded url?

    - by jeremycollins
    I have free DNS hosting with the domain registrar, and I'd like the DNS-hosted domain www.example.com to display the contents of www.myotherlongdomain.com. I only have 301/302/iframe forwarding options, but I want to mask the redirected (longdomain) URL. If I use frames, users can view the source and see the (longdomain) URL the contents are coming from. How can I hide it so the browser always displays www.example.com? There is no cloaking/masking option with the registrar. Thanks.
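
    Registrar-level forwarding alone cannot mask the target without frames; hiding the long domain generally needs a server you control answering for www.example.com and fetching the content itself. A hedged sketch, assuming an Apache box with mod_proxy and mod_proxy_http enabled and example.com's A record pointed at it:

        <VirtualHost *:80>
            ServerName www.example.com
            # Fetch pages from the long domain and serve them under example.com
            ProxyRequests Off
            ProxyPass / http://www.myotherlongdomain.com/
            ProxyPassReverse / http://www.myotherlongdomain.com/
        </VirtualHost>

    Absolute links inside the proxied pages would still expose the long domain unless they are rewritten as well.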

    Read the article

  • How do you find all the links to disavow for a Google reconsideration request? [duplicate]

    - by QF_Developer
    This question already has an answer here: "How to identify spammy domains giving backlinks to my site (to submit in disavow links in WMT)" (2 answers). A few months ago I received the following notification on Google Webmaster for a website I look after: "Unnatural links to your site—impacts links. Google has detected a pattern of unnatural, artificial, deceptive, or manipulative links pointing to pages on this site. Some links may be outside of the webmaster's control, so for this incident we are taking targeted action on the unnatural links instead of on the site's ranking as a whole. Learn more." The question here is: should we actively attempt to disavow these links, given that the action is seemingly targeted at just a bunch of keywords? I've downloaded the inbound links sample from Google Webmaster, and so far I've been through the disavow and reconsideration request process 6 times, each taking 2-3 weeks, only to be supplied with just 2 more links that Google doesn't approve of. At this rate it will take me the rest of my natural life to clean up all these spammy links! It seems disavowing is futile, as they haven't implemented broad actions against the website as a whole and (from what I can gather) have already nullified the value of those offending links. Under the quoted statement above, however, is a reconsideration request button that seems to imply I should be actively doing something here? UPDATE 14th October: I have since created a small .NET application into which you can feed the CSV sample links file from Google Webmaster. What this tool does is crawl all the links and look for specific linking patterns, as per some configurable match strings. I realised that many of the links Google is taking issue with were created by a rogue SEO firm we hired several years ago. All the links are appended with one of five different descriptions. The application I built uses some regexes to isolate any link sources with these matching appendages and automatically builds the disavow txt file. In the end it had to come down to an algorithm, as manually disavowing links on this scale would take weeks! I will post the app here once I've cleaned it up.

    Read the article

  • Apache returns 304, I want it to ignore anything from client and send the page

    - by Ayman
    I am using Apache HTTPD 2.2 on Windows. mod_expires is commented out. Most other settings are unchanged from the defaults. gzip is on. I made some changes to my .js files. My client gets one 304 response for one of the .js files and never gets the rest. How can I force Apache to, in effect, flush everything and send all the new files to the client? The main HTML file includes these scripts in the head section of the main page:
        <script src="js/jquery-1.7.1.min.js" type="text/javascript"> </script>
        <script src="js/jquery-ui-1.8.17.custom.min.js" type="text/javascript"></script>
        <script src="js/trex.utils.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.core.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.codes.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.emv.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.b24xtokens.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.iso.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.span2.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.amex.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.abi.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.barclays.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.bnet.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.visa.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.atm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.apacs.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.pstm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.stm.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.thales.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.fps-saf.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.fps-iso.js" type="text/javascript" charset="utf-8"></script>
        <script src="js/trex.app.js" type="text/javascript" charset="utf-8"></script>
    The Apache access log has the following:
        [07/Jul/2013:16:50:40 +0300] "GET /trex/index.html HTTP/1.1" 200 2033 "-"
        [07/Jul/2013:16:50:40 +0300] "GET /trex/js/trex.fps-iso.js HTTP/1.1" 304
        [08/Jul/2013:07:54:35 +0300] "GET /trex/index.html HTTP/1.1" 304 - "-"
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.iso.js HTTP/1.1" 200 12417
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.amex.js HTTP/1.1" 200 6683
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.fps-saf.js HTTP/1.1" 200 2925
        [08/Jul/2013:07:54:35 +0300] "GET /trex/js/trex.fps-iso.js HTTP/1.1" 304
    Chrome request headers are as below. This file is OK (latest):
        Request URL: http://localhost/trex/js/trex.iso.js
        Request Method: GET
        Status Code: 200 OK (from cache)
    This file is OK (latest):
        Request URL: http://localhost/trex/js/trex.amex.js
        Request Method: GET
        Status Code: 200 OK (from cache)
    This one is also OK:
        Request URL: http://localhost/trex/js/trex.fps-iso.js
        Request Method: GET
        Status Code: 200 OK (from cache)
    The rest of the scripts all have 200 OK (from cache).
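
    A hedged sketch of one way to stop clients from reusing stale copies while the scripts are still changing, assuming mod_headers is loaded; it forces revalidation of .js files on every request instead of relying on the browser's heuristic caching:

        <FilesMatch "\.js$">
            # Clients may keep a copy but must revalidate it with the server each time
            Header set Cache-Control "no-cache, must-revalidate"
        </FilesMatch>

    Versioning the script URLs (for example js/trex.core.js?v=2) is another common way to force a fresh fetch after a change.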

    Read the article

  • Firefox does not print flash content

    - by Rochelle
    I am using Firefox 3.6.15 on Windows 7 Professional (64-bit), Intel Core i7 CPU, 3.33GHz, 10GB RAM, by Hewlett-Packard. Firefox does not print Flash content (SWF objects), nor does it show it in the print preview pane. I want to print out the entire web page including the Flash content. I seem to be able to see Flash and HTML content together in print preview, and to print them, only in IE8. I have tried to google this issue but could not find a solution. I was trying to print preview/print the following site: http://www.discovertheponds.com/. Flash content will display in print preview and print in IE8, but does neither in Firefox. I have also updated the Java on my computer to the most recent update and ran the Firefox plug-in checker at http://www.mozilla.com/en-US/plugincheck/. I do run Firebug and Web Developer, but have currently disabled them. Is this problem on my end, meaning some issue with my computer... or is it because of how the website was programmed in HTML/Flash... or is this a bug in Firefox? I am a website designer and am also concerned that others will not be able to print sites I develop, or have already developed, that have Flash content from Firefox. I used to think Firefox was better than IE at everything. What happened here? Was it some change in a Firefox version that caused this problem?

    Read the article

  • Ranking hit after site migration

    - by Ben
    I migrated my site from its old domain over a month ago. I followed Google Webmaster Tools completely, including 301 redirects from every existing URL to the new domain, and then submitted a change of address. Traffic continued as normal, but a few days after submitting the change of address, traffic plummeted to about 20-30% of what it was previously. Most of my traffic comes from organic search, and I can see that I am now ranking much lower for the keywords I had targeted before and performed well with. In some cases, for low-competition keywords, I've only lost a few places; for higher-competition terms I have really suffered. This has started to pick up a bit (for one of my keywords I have risen from 195 to 100 in the last week), but it seems to be a very slow process. How seamless is this process normally? I was under the impression that it would not affect my rankings too severely, but it has now been a month since the move and recovery seems to be very slow, if it is happening at all. Is it likely that I've missed something? The only change is that I have moved what was the home page to be more of a sub-page, and in its place is now a magazine-style home page. I understand that links to the old site will now be pointing to the latter, which means that rankings for some keywords attributed to the old home page will take a hit, but even on other pages that fit exactly the same page structure as the previous site I have seen a drop in rankings.

    Read the article

  • Problem with DNS

    - by dotNET
    Hey, I bought a new website, and the company gave me another free domain name. When I asked for the second one they created it and told me to change its DNS settings to match the first one. It's been a week waiting for it to propagate; today when I type the URL I get this error message: "If you are the web site owner, it is possible you have reached this page because: * The IP address has changed. * There has been a server misconfiguration. * The site may have been moved to a different server. If you are the owner of this website and were not expecting to see this page, please contact your hosting provider." When I try to add the second domain to my cPanel (addon domain) I also get another error: "The addon domain “abcdef.com” has been created. An account with that login already exists." Do you have any ideas about this problem? Thanks. EDIT: I tried to flush the DNS with ipconfig /flushdns, but it's not changing anything.

    Read the article

  • What is the best Apache log analyzer?

    - by Evgeny
    What real-time log analyzer can you suggest for Apache access and error logs? There is a list of web analytics software on Wikipedia, but it would be great to hear opinions from people with experience without having to try all of them. Please don't suggest Google Analytics or any other hosted/JavaScript analytics suite; we're already using them. GA is not real-time and it is missing some data that the logs show, for example 404 errors, script errors, the full query string of the referrer, IP addresses, visitor paths through the website, etc.
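
    For what it's worth, most self-hosted analyzers parse Apache's standard "combined" log format, which already records the referrer (with its query string) and the user agent alongside the IP address and status code; a sketch, assuming mod_log_config:

        # NCSA combined format: host, identity, user, time, request, status, bytes, referrer, user agent
        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        CustomLog logs/access_log combined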

    Read the article

  • Restrict SSL access for some paths on an apache2 server

    - by valmar
    I wanted to allow access to www.mydomain.com/login through SSL only. E.g., whenever someone accessed http://www.mydomain.com/login, I wanted him to be redirected to https://www.mydomain.com/login so it's impossible for him/her to access that page without SSL. I accomplished this by adding the following lines to the virtual host for www.mydomain.com on port 80 in /etc/apache2/sites-available/default:
        RewriteEngine on
        RewriteCond %{SERVER_PORT} ^80$
        RewriteRule ^/login(.*)$ https://%{SERVER_NAME}/login$1 [L,R]
        RewriteLog "/var/log/apache2/rewrite.log"
    Now I want to restrict SSL use for the rest of www.mydomain.com. That means whenever someone accesses https://www.mydomain.com, I want him to be redirected to http://www.mydomain.com (for performance reasons). I tried this by adding the following lines to the virtual host of www.mydomain.com on port 443 in /etc/apache2/sites-available/default-ssl:
        RewriteEngine on
        RewriteCond %{SERVER_PORT} ^443$
        RewriteRule ^/(.*)$ http://%{SERVER_NAME}/$1 [L,R]
        RewriteLog "/var/log/apache2/rewrite.log"
    But when I now try to access www.mydomain.com/login, I get an error message that the server has caused too many redirects. That does make sense: obviously the two RewriteRules are playing ping-pong against each other. How could I work around this?
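
    One way to break the loop is to exclude /login from the HTTPS-to-HTTP rule, so each virtual host only redirects the paths the other one should serve; a sketch for the port-443 virtual host (the port-80 rule can stay as it is):

        RewriteEngine on
        # Never bounce /login back to plain HTTP; everything else goes to HTTP
        RewriteCond %{REQUEST_URI} !^/login
        RewriteRule ^/(.*)$ http://%{SERVER_NAME}/$1 [L,R]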

    Read the article

  • Terms and conditions for a commerce site

    - by Mantorok
    I am developing a website for my partner, who is currently a sole trader, presently selling on eBay, but we have opted to create our own site. I've noticed that there are many sites allowing you to purchase baseline T&Cs to be used on websites. I'm tempted to give these a go, but I've heard nothing on whether they are any good or not. I know that in this position it's best to seek legal advice, but the budget is tight, so we really can't afford that. Has anyone had experience with these sites? e.g. http://www.netlawman.co.uk/ecomm-it/website-terms-and-conditions.php?gclid=CPL4g8D3q6cCFQoa4Qodhj5UBg. Thanks

    Read the article

  • How to get ip-address out of SPAMHAUS blacklist?

    - by ???????? ????? ???????????
    I frequently read that it is possible to remove individual IP addresses from SPAMHAUS blacklisting. OK. Here is 91.205.43.252 (91.205.43.251 - 91.205.43.253), used by back3.stopspamers.com (back2.stopspamers.com, back1.stopspamers.com) in a geo-cluster on dedicated servers in Switzerland. The queries http://www.spamhaus.org/query/bl?ip=91.205.43.251 http://www.spamhaus.org/query/bl?ip=91.205.43.252 http://www.spamhaus.org/query/bl?ip=91.205.43.253 tell that 91.205.43.251 - 91.205.43.253 are all listed in the SBL80808 blacklist. And the SBL80808 listing says: "Ref: SBL80808 91.205.40.0/22 is listed on the Spamhaus Block List (SBL) 01-Apr-2010 05:52 GMT | SR04 Spamming and now seems this place is involved in other fraud". 91.205.43.251 - 91.205.43.253 are not listed amongst criminal IP addresses individually, but there is no way to remove them individually from the blacklisting. How do I remove these individual addresses (91.205.43.251 - 91.205.43.253) from the SPAMHAUS blacklist? And why on earth is SPAMHAUS blacklisting a spam-stopping service? This is only one example of a bunch. My related posts: Blacklist IP database. Update: From the answer provided I realized that my question was not even understood. These IP addresses (91.205.43.251 - 91.205.43.253) are not blacklisted individually; they are blacklisted through their supernet 91.205.40.0/22. Also note that the dedicated server, the ISP and the customer are in very different, distant countries. Update 2: http://www.spamhaus.org/sbl/sbl.lasso?query=SBL80808#removal says: "To have record SBL80808 (91.205.40.0/22) removed from the SBL, the Abuse/Security representative of RIPE (or the Internet Service Provider responsible for supplying connectivity to 91.205.40.0/22) needs to contact the SBL Team". There are dozens of "abusers" in that SBL80808 blacklist. The company using that dedicated server is not an ISP or a RIPE representative able to handle these issues. Even if it were handled, it is just a matter of someone pressing "Report spam" on the internet to be blacklisted again; this is a fruitless approach. These techniques are broadly used by criminals and spammers. See also my other post on blacklisting. This is just one specific example, but there are many, many more.

    Read the article

  • Blog/CMS software with editing style like Stack Exchange

    - by Merlyn Morgan-Graham
    I have been updating a WordPress blog lately and found the turnaround time for content creation and editing is much worse than for Stack Overflow posts. Part of this has to do with writing original compositions rather than riffing off a question. But part of it is the software. I am looking for CMS/blog software that has an overall editing experience similar to Stack Overflow. The most important features I'm looking for: inline editing (mostly) and real-time preview on the same page, which are important for speeding up data entry; Markdown support (with inline and block-level code support); and syntax highlighting. The features I must maintain from my self-hosted WordPress: somewhat popular/supported software, with extensibility support; self-hostable; works with MySQL. WordPress has plugins for all of these, but they don't necessarily work together. For example, I've found a few markdown-on-save plugins, but I doubt those have a chance of ever supporting inline editing or real-time previews. Also, the most popular syntax highlighting plugins don't support inline code blocks, and I doubt previews would work with other syntax highlighting methods. If I get a wiki/web page content creation system along with it, or somehow integrate this into GitHub (with all the features I requested), I'll accept those as side benefits :) Formed as a question: are there any pieces of content creation software for making a blog that support an editing style like Stack Exchange and Stack Overflow? Or magic combinations of WordPress plugins that offer the same?

    Read the article

  • Which open-source license should I release my new YouTube video download project under, so that no one can claim the project is against YouTube?

    - by John Smiith
    I have designed a new open-source YouTube video downloader, but downloading videos from YouTube is against YouTube's terms. How can I include this type of message: "This is an open-source project that is used to download videos from YouTube, but downloading videos from YouTube is against YouTube's terms, and use of this project is for study purposes only." Or, which open-source license can I release it under so that no one can claim that it is against YouTube?

    Read the article

  • htaccess code for maintenance page redirect

    - by Force Flow
    I set up a maintenance page that I could enable through an htaccess file. The html file is located in a folder called "maintenance". The html file has some images in it. However, visitors to the page see no images, even though I added a line to allow them. If I try to visit an image in the browser directly, it redirects to the maintenance.htm page. Am I missing something?
        # Redirects visitors to maintenance page except for specific IP addresses
        # uncomment lines when redirecting visitors to maintenance page; comment when done.
        # Also see the section on "redirects visitors from maintenance page to homepage"
        #
        #RewriteCond %{REMOTE_ADDR} !^127.0.0.1$
        #RewriteCond %{REMOTE_ADDR} !^111.111.111.111$
        RewriteCond %{REQUEST_URI} !/maintenance/maintenance\.htm$ [NC]
        RewriteCond %{REQUEST_URI} !\.(jpg|jpeg|png|gif|css|ico)$ [NC]
        RewriteRule ^(.*)$ /maintenance/maintenance.htm [R=302,L]
        #
        # end redirects visitors to maintenance page
        # Redirects visitors from maintenance page to homepage
        # comment lines when redirecting visitors to maintenance page; uncomment when done
        #
        #Redirect 301 /maintenance/maintenance.htm /
        #
        # end redirects visitors from maintenance page to homepage
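
    As an aside, a hedged alternative sketch that serves the maintenance page with a 503 status (which tells crawlers the outage is temporary) and whitelists the whole /maintenance/ folder so its images are never redirected; it assumes mod_rewrite and mod_headers are available and reuses the placeholder IP addresses from the question:

        RewriteEngine On
        # Let the allowed addresses through
        RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.1$
        RewriteCond %{REMOTE_ADDR} !^111\.111\.111\.111$
        # Anything outside the maintenance folder gets a 503
        RewriteCond %{REQUEST_URI} !^/maintenance/ [NC]
        RewriteRule ^ - [R=503,L]
        ErrorDocument 503 /maintenance/maintenance.htm
        # Hint to clients and crawlers when to retry (seconds)
        Header always set Retry-After "3600"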

    Read the article

  • Which comes first: DNS or nameservers?

    - by Thomas Clayson
    Which comes first, DNS or nameservers? It's just that I'm editing a domain and want to point it to different hosting servers. Now, normally I'd just set the nameservers to the new host's nameservers, but this doesn't seem to be working (the control panel's not brilliant - it doesn't say "accepted" or "success" - and whois searches are turning up nothing, although I should leave it a while longer). Anyway, the point is that in the DNS records there are two A records (one for " " and one for "www") which have an IP address associated with them. So this is what I don't understand: if I change the nameservers, do I have to remove the DNS records? Will nameservers "overwrite" a DNS record? Or vice versa? Thank you.

    Read the article

  • How do I block a user-agent from Apache

    - by rubo77
    How do I implement a user-agent string block by regular expression in the config files of my Apache web server? For example, I would like to block all bots from Apache on my Debian server that have the regular expression /\b\w+[Bb]ot\b/ or /Spider/ in their user agent. Those bots should not be able to see any page on my server, and they should not appear in either the access logs or the error logs. http://global-security.blogspot.de/2009/06/how-to-block-robots-before-they-hit.html suggests using mod_security for that, but isn't there a simple directive for httpd.conf?
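
    A minimal httpd.conf sketch using SetEnvIf, assuming Apache 2.2-style Order/Deny access control (2.4 would use Require with mod_authz_core instead); note that denied requests can still leave traces in the error log depending on LogLevel:

        # Tag requests whose User-Agent matches the patterns from the question
        SetEnvIf User-Agent "\b\w+[Bb]ot\b" bad_bot
        SetEnvIf User-Agent "Spider" bad_bot
        <Location />
            Order Allow,Deny
            Allow from all
            Deny from env=bad_bot
        </Location>
        # Condition the existing access log on the same variable to keep the bots out of it
        CustomLog /var/log/apache2/access.log combined env=!bad_bot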

    Read the article

  • The number of soft 404 errors is increasing because of redirects to the home page

    - by Stevie G
    I have an increase in soft 404 errors. Using Apache, in my .htaccess file I have:
        Redirect 301 /test.html? /page/pop/test
        Redirect 301 /about.html? /about
    I have also tried:
        Redirect 301 http://www.example.co.za/test.html? http://www.example.co.za/services/test
    However, whenever I go to http://www.example.co.za/test.html or http://www.example.co.za/about.html it just redirects to the home page. I also have this in .htaccess:
        RewriteRule ^.*$ index.php [NC,L]
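
    One possible explanation is the interaction between mod_alias (Redirect) and mod_rewrite: the catch-all RewriteRule sends every request to index.php independently of the Redirect lines. A hedged sketch that does the old-URL redirects with mod_rewrite instead, ahead of the catch-all, and only rewrites requests that don't match a real file:

        RewriteEngine On
        # Old URLs first, as explicit 301s
        RewriteRule ^test\.html$ /page/pop/test [R=301,L]
        RewriteRule ^about\.html$ /about [R=301,L]
        # Front-controller catch-all, skipped for real files
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^.*$ index.php [NC,L]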

    Read the article

  • Issues with timed out downloads via TomCat?

    - by Ira Baxter
    We get, in our opinion, a lot of failed download attempts and want to understand why. We offer downloads via an email link (typical): http://www.semanticdesigns.com/deliverEval/<productname> This is processed by Tomcat on Linux via a JSP file, with the following code:
        response.addHeader( "Content-Disposition", "attachment; filename=" + fileTail );
        response.addHeader( "Content-Type", "application/x-msdos-program" );
        byte[] buf = new byte[8192];
        int read;
        try {
            java.io.FileInputStream input = new java.io.FileInputStream( filename );
            java.io.OutputStream o = response.getOutputStream();
            while( ( read = input.read( buf, 0, 8192 ) ) != -1 ){
                o.write( buf, 0, read );
            }
            o.flush();
        } catch( Exception e ){
            util.fatalError( request.getRequestURI(), "Error sending file '" + filename + "' to client", e );
            throw e;
        }
    We get a lot of reported errors (about a 50% error rate):
        URI: /deliverEval/download.jsp
        Code Message: Error sending file '/home/sd/ShippingMasters/DMS/Domains/C/GCC3/Tools/TestCoverage/SD_C~GCC3_TestCoverage.1.6.12.exe' to client
        Stack Trace:
        null
        at org.apache.coyote.tomcat5.OutputBuffer.realWriteBytes(byte[], int, int) (Unknown Source)
        at org.apache.tomcat.util.buf.ByteChunk.append(byte[], int, int) (Unknown Source)
        at org.apache.coyote.tomcat5.OutputBuffer.writeBytes(byte[], int, int) (Unknown Source)
        at org.apache.coyote.tomcat5.OutputBuffer.write(byte[], int, int) (Unknown Source)
        at org.apache.coyote.tomcat5.CoyoteOutputStream.write(byte[], int, int) (Unknown Source)
        at org.apache.jsp.deliverEval.download_jsp._jspService(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) (Unknown Source)
        at org.apache.jasper.runtime.HttpJspBase.service(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) (Unknown Source)
        at javax.servlet.http.HttpServlet.service(javax.servlet.ServletRequest, javax.servlet.ServletResponse) (Unknown Source)
        at org.apache.jasper.servlet.JspServletWrapper.service(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse, boolean) (Unknown Source)
        at org.apache.jasper.servlet.JspServlet.serviceJspFile(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse, java.lang.String, java.lang.Throwable, boolean) (Unknown Source)
        at org.apache.jasper.servlet.JspServlet.service(javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse) (Unknown Source)
        at javax.servlet.http.HttpServlet.service(javax.servlet.ServletRequest, javax.servlet.ServletResponse) (Unknown Source)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(javax.servlet.ServletRequest, javax.servlet.ServletResponse) (Unknown Source)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(javax.servlet.ServletRequest, javax.servlet.ServletResponse) (Unknown Source)
        at org.apache.catalina.core.StandardWrapperValve.invoke(org.apache.catalina.Request, org.apache.catalina.Response, org.apache.catalina.ValveContext) (Unknown Source)
    We don't understand why this rate should be so high. Is there any way to get more information about the cause of the error? It is useful to know that these are pretty big documents, 3-50 megabytes. They reside on the Linux server, so reading them is just a local disk read and is unlikely to be a contributor to the problem. But sheer size might be an issue for the recipient's browser? Is this kind of error rate typical for downloads? My personal experience downloading others' documents suggests no; our internal attempts show this to be very reliable, but we're operating on our internal network for such experiments, so we're missing the complexity of the intervening internet.

    Read the article

  • www.foobar.com works but foobar.com results in a 'Server not found' error

    - by Homunculus Reticulli
    I have just set up a minimal (hopefully secure? comments welcome) Apache website using the following configuration file:
        <VirtualHost *:80>
            ServerName foobar.com
            ServerAlias www.foobar.com
            ServerAdmin [email protected]
            DocumentRoot /path/to/websites/foobar/web
            DirectoryIndex index.php
            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foobar.errors.log"
            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>
            <Directory /path/to/websites/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>
    I am able to access the website by using www.foobar.com, but when I type foobar.com I get the error 'Server not found' - why is this? My second question concerns the security implications of this directive in the configuration above:
        <Directory /path/to/websites/>
            Options -Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>
    What exactly is it doing, and is it necessary? From my (admittedly limited) understanding of Apache configuration files, this means that anyone will be able to access (write to?) the /path/to/websites/ folder. Is my understanding correct? And if yes, how is this not a security risk?
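
    On the second question, a commented variant of the same blocks may help clarify the intent; the comments are interpretation rather than changes in behaviour, except that recent Apache versions reject an Options line that mixes prefixed (+/-) and unprefixed options, so the prefixes below are made consistent:

        <Directory />
            # Filesystem-wide default: serve nothing and ignore .htaccess files
            AllowOverride None
            Order Deny,Allow
            Deny from all
        </Directory>
        <Directory /path/to/websites/>
            # Re-open access for the web tree only; "allow from all" grants HTTP read
            # access through Apache, not filesystem write access
            Options -Indexes +FollowSymLinks +MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    The 'Server not found' error for the bare domain is usually a DNS problem (no A record for foobar.com itself) rather than anything in this virtual host, since ServerName foobar.com is already set.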

    Read the article
