Search Results

Search found 50062 results on 2003 pages for 'http'.


  • Moved sitemaps to a different subdomain and losing search referrals around the same time. Red herring or correlation?

    - by er1234
    We started to lose search referral traffic around the same time that I moved some of our sitemaps to a subdomain. Could this have hurt us? I followed Google's steps for creating a sitemap under a different subdomain. The new sitemaps.foo.com subdomain is being crawled and indexed well. Both www.foo.com and sitemaps.foo.com have been verified in Google Webmaster Tools, and they appear as distinct sites. Is this correct? I can't find a way in Webmaster Tools to say "Hey, sitemaps.foo.com is really owned by www.foo.com, so show them together and make sure to attribute sitemaps.foo URLs to www.foo." Our www.foo.com/robots.txt contains:

        Sitemap: http://www.foo.com/sitemap.xml
        Sitemap: http://sitemaps.foo.com/subdir/sitemap.xml.gz

    Read the article

  • Directory access control with Apache: do I need to use a specific .htaccess?

    - by Mirror51
    I have an Apache webserver, and in the Apache configuration I have:

        Alias /backups "/backups"
        <Directory "/backups">
            AllowOverride None
            Options Indexes
            Order allow,deny
            Allow from all
        </Directory>

    I can access files via http://127.0.0.1/backups. The problem is that everyone can access them. I have a web interface, e.g. http://localhost/adminm, that is protected with .htaccess and a password. I don't want a separate .htaccess and .htpasswd for /backups, and I don't want a second password prompt when a user clicks on /backups in the web interface. Is there any way to use the same .htaccess and .htpasswd for the backups directory?
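
    One approach, sketched below: browsers cache HTTP Basic auth credentials per realm, so if /backups reuses the same AuthName and AuthUserFile as the /adminm area, a user already logged in is not prompted again. The realm string and the .htpasswd path here are placeholders for whatever the existing admin protection uses.

        Alias /backups "/backups"
        <Directory "/backups">
            AllowOverride None
            Options Indexes
            AuthType Basic
            AuthName "Admin Area"                      # must match the realm protecting /adminm
            AuthUserFile /path/to/existing/.htpasswd   # reuse the admin password file
            Require valid-user
        </Directory>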

    Read the article

  • Question about mod_rewrite rule for redirecting failing pages

    - by SimpleCoder
    I'm setting up a mod_rewrite rule that redirects failing pages to a custom Page Not Found page. This is with WordPress. I'm using the guide here: http://httpd.apache.org/docs/2.2/rewrite/rewrite_guide_advanced.html#redirect404. My rule so far looks like this:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.+) http://example.com/?page_id=254 [R]

    This works. It seems to be a combination of the first and second suggestions that worked, since the -U flag did nothing. My question, out of curiosity, is why the following happens: when I change REQUEST_FILENAME to REQUEST_URI (as the second example suggests), the page loads, but none of the style sheets load. All of my formatting is gone, and this happens on every page. Can anyone think of why this might happen?
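
    A likely explanation, hedged: the -f test checks for an existing file on disk. %{REQUEST_FILENAME} has already been mapped to a local filesystem path, but %{REQUEST_URI} is only the URL path (e.g. /wp-content/themes/x/style.css), which almost never exists as a path from the filesystem root; so with REQUEST_URI every request, stylesheets included, passes the !-f condition and gets swept into the redirect. A sketch that also lets real directories through:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+) http://example.com/?page_id=254 [R,L]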

    Read the article

  • Getting a 404 when using the Nexus 7 installer PPA, how do I fix this? [duplicate]

    - by Vitaliy
    This question already has an answer here: How can I fix a 404 Error when using a PPA? (2 answers)

    Ubuntu 13.10. After sudo add-apt-repository ppa:ubuntu-nexus7/ubuntu-nexus7-installer (OK), sudo apt-get update reports:

        W: Failed to fetch http://ppa.launchpad.net/ubuntu-nexus7/ubuntu-nexus7-installer/ubuntu/dists/saucy/main/binary-amd64/Packages 404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/ubuntu-nexus7/ubuntu-nexus7-installer/ubuntu/dists/saucy/main/binary-i386/Packages 404 Not Found

    Thanks for the answer.
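
    A hedged fix, assuming the 404s mean the PPA simply publishes no packages for saucy (13.10): remove the PPA so apt stops checking it.

        sudo add-apt-repository --remove ppa:ubuntu-nexus7/ubuntu-nexus7-installer
        sudo apt-get update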

    Read the article

  • WordPress Multisite (Subfolders) - Google Analytics Tracking

    - by mmundiff
    I have a WordPress multisite subfolder instance that I would like to track via Google Analytics. Ideally this would be a plugin with which I could track each site in two places: the main tracking code, which totals all traffic from the multisite instance, and the individual site tracking code, to see how each site specifically is doing. I think this plugin would have worked for me if I had a subdomain multisite instance: http://wordpress.org/extend/plugins/google-analytics-multisite-async/installation/. I know I can manually place the dual tracking code (http://www.markinns.com/articles/full/adding_two_google_analytics_accounts_to_one_page), but that would involve editing a theme, and I have multiple sites using the TwentyEleven theme. I don't think I can edit the theme and not have it wreak havoc on the rest of the sites using TwentyEleven. So has anyone done this? Is there a technique I'm missing? Is there a plugin available to do this in multisite subfolder installations? Is there a way to manually insert GA codes into themes which are used by multiple sites? Any insight is appreciated.
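
    One hedged way to inject the dual tracker without touching TwentyEleven: a must-use plugin (a PHP file dropped into wp-content/mu-plugins/) runs on every site in the network and can print the classic ga.js dual-account snippet from the markinns article on wp_head. The property IDs and the my_site_ga_id option below are placeholders, not real values.

        <?php
        // mu-plugin sketch: network-wide tracker plus an optional per-site tracker,
        // emitted on every front-end page without editing any theme.
        add_action('wp_head', function () {
            $network_id = 'UA-XXXXXXX-1';              // placeholder: network-wide property
            $site_id    = get_option('my_site_ga_id'); // placeholder: per-site property
            ?>
            <script type="text/javascript">
              var _gaq = _gaq || [];
              _gaq.push(['_setAccount', '<?php echo $network_id; ?>']);
              _gaq.push(['_trackPageview']);
              <?php if ($site_id) : ?>
              // second, named tracker for the individual site
              _gaq.push(['b._setAccount', '<?php echo $site_id; ?>']);
              _gaq.push(['b._trackPageview']);
              <?php endif; ?>
              (function() {
                var ga = document.createElement('script');
                ga.type = 'text/javascript'; ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www')
                         + '.google-analytics.com/ga.js';
                var s = document.getElementsByTagName('script')[0];
                s.parentNode.insertBefore(ga, s);
              })();
            </script>
            <?php
        });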

    Read the article

  • Problems after installing Ubuntu LiveUSB on my 2nd HD

    - by user113106
    I decided to create an Ubuntu USB installer using the Universal USB Installer, selecting an Ubuntu 12.10 ISO. I selected my D: drive, the drive NOT carrying Windows 7, as the install target. After this I rebooted my system, and my PC began to run the Ubuntu boot start-screen every time I power up the machine, giving errors like this: Root=Unknown. I used my girlfriend's laptop to create, in exactly the same way, a real USB Ubuntu installer. Booting from that USB and picking the option to run Ubuntu from the USB, I get the following error: http://postimage.org/image/63qkv98c1/. Let's try installing it from that USB to a hard disk: http://postimage.org/image/usqbwymfx/. As I said, I do not have the option to pick my boot section; at this very moment I can only access the Ubuntu installation and nothing else. I've read about 90% of the other questions that could be related, but I could not find a solution. BTW I'm running an Acer Aspire quad-core with 4 GB DDR2 and an ATI Radeon HD, 64-bit, and I've set my BIOS OS-usage setting from Windows to "Others".

    Read the article

  • How can I create multiple mini-sites with similar/duplicate content without hurting my search engine rank?

    - by ekpyrotic
    Essential background: I run a small company that lets members of the public post handwritten letters to their local politician (UK-based). Every week a number of early stage bills (called Early Day Motions) are submitted for debate in the House of Commons, and supporters of the motion will contact their local Members of Parliament, asking them to sign the motion. The crux: I want to target these EDMs with customised mini-sites, so when people search "EDM xxx", they find my customised mini-site, specifically targeting that EDM (i.e., "Send a handwritten letter to your MP asking them to sign EDM xxx"). At the moment, all these mini-sites (and my homepage) have duplicate content with only the relevant EDM name, number, and background image changed. (For example, http://mailmymp.com and http://mailmymp.com/edm/teaching-life-saving-skills-at-school-edm-550.php). The question: Firstly, will this hurt my potential search engine ranking? And, if so, what's the best way to target these political campaigns in an efficient manner without hurting my SEO prospects?

    Read the article

  • What is the SEO-recommended method for using underscores and dashes in URLs that contain geographic locations?

    - by ElHaix
    In reading through this article, "In Subfolder & File Names, Use Dashes, Not Underscores":

        Good: http://www.domain.com/sub-folder/file-name.htm
        Bad:  http://www.domain.com/sub_folder/file_name.htm

    In my URLs, I may have one or two city names, ending with the province/state: Burnaby_New_Westminister-BC/[some search term]. My URL rules are currently defined such that everything after the dash is the province/state. Some geographic locations already contain dashes: Notre-Dame-de-Grâce (in QC), which I would convert to ~/Notre_Dame_de_Grace-QC/. I thought of placing the province/state after another "/"; however, in some cases the province/state name may not exist, thus ~/Notre_Dame_de_Grace/, so the first term after the domain name contains the geo location {city, city_name-state}. I am now revisiting this and wondering if this rule set should change, and if so, what is the recommended way of implementing it?

    -- UPDATE --

    After reviewing this video, I see that I should be using dashes rather than underscores. However, since I still want to have my geo locations in the first URL section, is there anything wrong with using a double-dash separator, i.e. /city-name--state/?

    Read the article

  • Is there a way I can sort traffic by page type based upon URL structure in Google Analytics or Google Webmaster Tools?

    - by Felix
    I have a local business directory site. I'm trying to segment my incoming traffic by page type so that I can find out what percentage of traffic is going to zip code pages exclusively and what percentage is going to city/state-level pages. I basically want to filter by URL structure to find out what percentage of total traffic zip code pages account for. The reason for doing this is to find out if... Does Google Tag Manager help with this? Here are the two URL paths:

        http://www.example.com/ny/new-york/10011/
        http://www.example.com/ny/new-york

    Thanks all!
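
    A hedged sketch of how the segmentation could work without Tag Manager: both GA advanced segments and the filter box above the Content reports accept regular expressions against the page path, and the two page types above differ cleanly in structure. The patterns below assume two-letter state codes and five-digit zips, as in the examples.

        # zip code pages:    ^/[a-z]{2}/[^/]+/[0-9]{5}/?$
        # city/state pages:  ^/[a-z]{2}/[^/]+/?$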

    Read the article

  • We've Moved!

    I've moved my blog to a new home at http://SilverlightGeek.me. Within a few days, all the links here should automatically redirect you there. Thanks for sticking with me, and I look forward to your feedback. (You can also reach the new blog at http://slgeek.com/wordpress.) This work is licensed under a Creative Commons license.

    Read the article

  • Redirect packets directed to port 5000 to another port

    - by tdc
    I'm trying to use eboard to connect to the FICS servers (http://www.freechess.org), but it fails because port 5000 is blocked (company firewall). However, I can connect to the server through the telnet port (23):

        telnet freechess.org 23   (succeeds)
        telnet freechess.org 5000 (fails)

    Unfortunately the port number is hardcoded (see here: http://ubuntuforums.org/archive/index.php/t-1613075.html). I'd rather not have to hack the source code as the author of that thread ended up doing. Can I just forward the port on my local machine using iptables? I tried:

        sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 5000 -j REDIRECT --to-port 23

    and

        sudo iptables -t nat -I OUTPUT --src 0/0 -p tcp --dport 5000 -j REDIRECT --to-ports 23

    but these didn't work. Note that sudo iptables -t nat -L shows:

        Chain PREROUTING (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:5000 redir ports 23

        Chain INPUT (policy ACCEPT)
        target     prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source    destination
        REDIRECT   tcp  --  anywhere  anywhere     tcp dpt:5000 redir ports 23

        Chain POSTROUTING (policy ACCEPT)
        target     prot opt source    destination
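
    A hedged diagnosis: REDIRECT always rewrites the destination to the local machine, so both rules point the connection at this host's own port 23 rather than at freechess.org. Locally generated packets also never traverse PREROUTING, only the nat OUTPUT chain. A DNAT sketch that keeps the remote address and rewrites only the port, assuming freechess.org resolves to a single stable IP:

        # Resolve once; iptables matches on literal addresses.
        FICS_IP=$(dig +short freechess.org | head -n1)

        # Rewrite locally generated packets bound for $FICS_IP:5000 to port 23.
        sudo iptables -t nat -A OUTPUT -p tcp -d "$FICS_IP" --dport 5000 \
            -j DNAT --to-destination "${FICS_IP}:23"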

    Read the article

  • Failing to install Ubuntu 13.04

    - by Kayven Riese
    I have a new Windows 8 Sony Vaio SVF14A15CXB laptop that has UEFI, and I have been struggling through an Ubuntu installation. I have a bootable Ubuntu DVD+R and have managed to mess with my BIOS/UEFI so that it boots. I have used Windows 8 to create the desired hard disk partition and installed Ubuntu there, and have burned a Boot-Repair disk (http://sourceforge.net/p/boot-repair-cd/home/Home/) and a rEFInd disk (http://www.rodsbooks.com/refind/getting.html), and neither will boot. I know I should continue googling and struggling, but I am getting frustrated. Thanks to anyone who gives me the time of day.

    Read the article

  • .htaccess 301 Redirect for wildcard subdomains

    - by Steve
    I run WordPress in network mode, which means I can have multiple websites running off one installation of WordPress. Each website runs as a subdomain. WordPress handles this using .htaccess and a wildcard subdomain pointing to the location of WordPress, so there are no actual subdomains created in cPanel; just a wildcard subdomain in cPanel, and WordPress handles the rest. I want to 301 redirect http://one.example.com/portfolio to http://two.example.com/portfolio. If I only have one .htaccess file in the web root of example.com, how do I achieve this?
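
    A minimal sketch, assuming the hostnames from the question: even with a single shared .htaccess, mod_rewrite can tell the subdomains apart by matching HTTP_HOST, so a rule placed above WordPress's own rewrite block should fire only for one.example.com.

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^one\.example\.com$ [NC]
        RewriteRule ^portfolio(/.*)?$ http://two.example.com/portfolio$1 [R=301,L]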

    Read the article

  • Is it OK to have 2 sitemaps on 1 website?

    - by user615041
    Do I have to have a sitemap page on my index page for bots to read it, or can I just have it anywhere on my server? I have a phpBB/WordPress integration, and I need two sitemap mods, one for each (or I need to have them somehow integrated together into one XML sitemap). Is this possible? What's my best option? I would have the phpBB one something like this: http://www.example.com/phpbb/sitemap.html, and the WordPress one something like this: http://www.example.com/wordpress/sitemap.html, and then I would submit both off, but not have the links in my footer to confuse anyone; the sitemaps would strictly be for search engines. Is this a good idea? What are your thoughts?
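
    A hedged note with a sketch: sitemaps do not need to be linked from the index page; crawlers find them through robots.txt or direct submission. One sitemaps.org convention worth knowing is that a sitemap may only list URLs at or below its own directory unless it is declared in robots.txt, so declaring both files at the web root covers both installs (the paths below assume the standard XML format rather than .html):

        # robots.txt at http://www.example.com/robots.txt
        Sitemap: http://www.example.com/phpbb/sitemap.xml
        Sitemap: http://www.example.com/wordpress/sitemap.xml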

    Read the article

  • Github Feed affecting my WordPress installation? [on hold]

    - by saul
    Any idea how this fork is affecting my site? I went to verify my website log stats and realized this may be the cause of a strange redirect constantly happening on my WordPress installation. Here's a line I found in my log:

        54.81.91.95 - - [07/May/2014:22:52:08 -0400] "GET /category/selfie/feed/ HTTP/1.1" 200 1826 "-" "feedzirra http://github.com/pauldix/feedzirra/tree/master"

    And this is the GitHub fork (or however these are called): https://github.com/feedjira/feedjira/tree/master. Basically, I think every time I update my categories (selfie in this case), I get redirected to install.php, probably by triggering some GET function on that feed. To the best of my knowledge, this feed parses all URLs with this structure, blocking them, kind of like a DDoS attack? Any ideas how to go about it?
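
    Whatever is triggering the install.php redirect, the crawler itself can be turned away at the web server. A hedged .htaccess sketch using mod_setenvif (Apache 2.2 syntax), keyed to the user-agent string in the log line above:

        # Tag requests whose User-Agent mentions feedzirra, then refuse them.
        BrowserMatchNoCase feedzirra bad_bot
        Order Allow,Deny
        Allow from all
        Deny from env=bad_bot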

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find one. I have a virtual subdomain www.static.example.com, which is a mirror site of www.example.com. This means I have just one root folder for the subdomain and the domain alike. I want to redirect crawlers to a different robots.txt file - robots_static.txt - when they see .static in the URL, in which I will forbid indexing via a Disallow rule. I want to do this because I have duplicated content in Google search results: the subdomain shows exactly the same content as the main domain. Does anyone know how I could make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks.

    EDIT: This is my .htaccess file:

        ##
        # @package    Joomla
        # @copyright  Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
        # @license    GNU General Public License version 2 or later; see LICENSE.txt
        ##

        ##
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        ##

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        ## Mod_rewrite in use.
        RewriteEngine On
        RewriteEngine On

        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

        ## Begin - Rewrite rules to block out some common exploits.
        # If you experience problems on your site block out the operations listed below
        # This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        # Block out any script trying to base64_encode data within the URL.
        RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
        # Block out any script that includes a <script> tag in URL.
        RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL.
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL.
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Return 403 Forbidden header and show the content of the root homepage
        RewriteRule .* index.php [F]
        #
        ## End - Rewrite rules to block out some common exploits.

        ## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ## End - Custom redirects

        ##
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root).
        ##
        # RewriteBase /

        RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
        RewriteCond %{THE_REQUEST} !/system/.*
        RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
        RewriteCond %{THE_REQUEST} ^GET

        ## Begin - Joomla! core SEF Section.
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for something within the component folder,
        # or for the site root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ## End - Joomla! core SEF Section.

        <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
        Header set Cache-Control "public"
        </FilesMatch>

        <ifModule mod_headers.c>
        Header set Connection keep-alive
        </ifModule>

        ########## Begin - Remove Etags
        #
        FileETag none
        #
        ########## End - Remove Etags

    Read the article

  • URL slugs: ideal length, and the real SEO effects of these slugs

    - by tattvamasi
    This question is addressed widely on SO and outside it, but for some reason, instead of taking it as a good load of great advice, all this information is confusing me.

    ** Problem **

    I already had, on one of my sites, "prettified" URLs. I had taken out the query strings and rewritten the URLs, and the link was short enough for me, but it had a problem: the ID of the item or post in the URL isn't good for users. One of the users asked if there's a way to get rid of the numbers, and I thought it was better for users to just see a clue of the page content in the URL.

    ** Solution **

    With this in mind, I am trying it out on a section of the site. Armed with 301 redirects, some parsing work, and a lot of patience, I have added URL slugs to some blog entries, and the slug of the URL reports the title of the article (something close to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/).

    ** Problems after Solution **

    The problem, as I see it, is that the URL of those blog articles is now very descriptive for sure, but it is also impossible to remember. So this brings me back to the issue I had before: if numbers say nothing and can't be remembered, what's the use of these slugs? I prefer to see http://example.com/my-news/1/ rather than http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/. To avoid forcing my users to memorize my URLs, I have added a script that finds the closest match to the URL you type and redirects there. This is something I like, because the page now acts as a sort of little search engine, and users can play with the URLs to find articles.

    ** Open questions **

    I still have some open questions and don't seem to be able to find an answer, because answers tend to contradict one another.

    1) How long should a URL ideally be? I've read the magic number 115 and am sticking to that, but am not sure.

    2) Is this really good for SEO? One of those blog articles I have redirected, with the ID number in the URL and all, ranked second on Google. I've just found this question, and the answer seems to be consistent with what I think: URL slug and SEO - structure (but see this other question with the opposite opinion).

    3) To make the question concrete with a specific example: would this URL risk being penalized? Is it acceptable? Is it too long? StackOverflow seems to have comparably long URLs, but I'm not sure it's a winning strategy in my case. I just wanted to help my users without running afoul of Google's algorithms.

    Read the article

  • How do I install drivers for a Konica Minolta 200?

    - by th3pr0ph3t
    This copy machine / scanner / network printer works with Windows, but no drivers are available for Linux. When Ubuntu supports a printer it works fine, but this one is not supported. I found the drivers at http://onyxftp.mykonicaminolta.com/download/SearchResults.aspx?productname=bizhub%20200, but I don't know how to install them, nor which one to download. How can I install this driver?

    EDIT: The file with the driver is here: http://onyxftp.mykonicaminolta.com/DownloadFile/Download.ashx?fileid=18571&productid=865. Inside the archive there is a .deb package that installs correctly but doesn't work. So far the question is: "How can I make it work?"
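
    For reference, a hedged sketch of the usual .deb install dance (the filename is a placeholder for whatever is inside the downloaded archive); after installing, the printer still has to be added through the CUPS web interface at http://localhost:631, picking the newly installed PPD.

        sudo dpkg -i bizhub200-driver.deb   # placeholder filename
        sudo apt-get -f install             # pull in any missing dependencies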

    Read the article

  • Windows 8 Location Services

    - by ryanabr
    I spent the afternoon with the Geolocator object in WinRT and the Windows 8 platform. I have also been doing Windows Phone 7 development, and first had to wrap my head around the fact that while similar, Geolocator is not the same as the GeoCoordinateWatcher in that environment. I found a nice example here: http://code.msdn.microsoft.com/windowsapps/Geolocation-2483de66. But the behavior of my app wasn't the same. Even after ensuring that location services were enabled by following these instructions, http://msdn.microsoft.com/en-us/library/windows/desktop/hh768219.aspx, Location Services was still disabled. From everything I read, it sounded like the first time you try to use the Geolocator object, the user would be prompted to allow "Access to your location". After nosing around I found the issue: you need to add the location service as a capability in the Package.appxmanifest file. After checking the box, I was prompted to allow access to location services, as expected, the first time I needed to use it.
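
    For readers editing the manifest XML directly rather than ticking the checkbox in the designer, the fragment in question looks something like this sketch (the rest of the manifest is omitted):

        <Capabilities>
          <!-- Location is a device capability in the Windows 8 manifest schema;
               without it, Geolocator reports the location service as disabled. -->
          <DeviceCapability Name="location" />
        </Capabilities>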

    Read the article

  • What measures can be taken to make sure Google is aware of the existence of a newly created page?

    - by knorv
    Consider a website with a large number of pages. New pages are published regularly. When publishing a new page, the website operator wants to get the newly created page indexed by Google as soon as possible, minimizing the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00. I want to get important-page.html indexed as soon as possible after 12:00, ideally within seconds or minutes. What options are available to try to get Google to index a specific newly created page as soon as possible?
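
    One concrete option, sketched under the assumption that the new page is added to an XML sitemap at publish time: Google accepts an HTTP "ping" announcing that a sitemap has changed, which is cheap to fire from a publish hook (the sitemap URL below is a placeholder, URL-encoded as the ping endpoint expects).

        curl "http://www.google.com/ping?sitemap=http%3A%2F%2Fwww.example.com%2Fsitemap.xml"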

    Read the article

  • Is it possible to track redirects to external sites from our subdomains?

    - by ChaBuku
    I have a handful of subdomains set up as redirects because we are using them for QR codes. I want to be able to track the QR code redirects (which are already set up and printed, so no changing them at this point) and see the effectiveness of each. Here are two examples: http://qr.glorkianwarrior.com and http://ad.glorkianwarrior.com are set up to forward to our iTunes page (later this year they may forward to Google Play or a specific landing page). Is there any way on my server to track the redirect from the subdomain to iTunes and see where traffic is coming from first? I have the redirects set up through cPanel presently, using subdomains.

    Edit: From the research I've seen, I can't track a 301 directly. If I redirect to an internal page and then do a timed redirect to the iTunes link, how long will it take for the tracking script to record a hit?

    Read the article

  • I want to try and find an RFC for Business Listings.

    - by nc01
    I'm trying to figure out whether there's a good standard format for sharing business information such as:

        Business Name
        Address - well-defined fields
        Lat,Lng Coords
        Business Type - maybe from a well-defined enum; my starting point contains Retail, Food, Drink, Coffee, Service
        Hours of operation - including a spot for 'except laksdasd' or 'sometimes we open late', which could be just plain language
        Business Keywords - don't know if this is asking too much. How well do HTML meta tags work in practice?

    So, if no such thing exists, is this something I can submit to the IETF? I can't currently find it at http://www.rfc-editor.org/cgi-bin/rfcsearch.pl, and vCard doesn't suit my needs. Thanks!

    Read the article

  • Footer not showing in website depending on which item is loaded [on hold]

    - by samyb8
    I designed a website which is having an issue; I checked the HTML tagging very carefully but cannot fix it. If you go to this item, the footer displays normally: http://www.tahara.es/store/headbands/11/Ivory-Turquoise-headband. However, if you go to this other item, the footer does not show: http://www.tahara.es/store/headscarves/15/Grey-and-ivory-with-stoned-flower-Headscarf. Any clue what I am missing or adding? The footer DIV is like this: <div id="footer">

    Read the article

  • How to pass information across domains to ask for newsletter only once?

    - by Michal Stefanow
    Let's assume the following scenario. I have two sites:

        example1.com
        example2.com

    When a user visits 1, there is a prompt: "please sign up to a newsletter". The same thing happens when a user visits 2. However, when navigating from 1 to 2, I don't want the signup form to be shown. My first thought was 3rd-party cookies, but it seems that they are blocked / not working: http://stackoverflow.com/questions/4701922/how-does-facebook-set-cross-domain-cookies-for-iframes-on-canvas-pages?rq=1 and http://stackoverflow.com/questions/172223/how-do-i-set-cookies-from-outside-domains-inside-iframes-in-safari?rq=1. Another thought is to append #noshow to each URL, but that would require some work - for instance, a script that would intercept click / tap events and modify the URL structure depending on the address (but that seems hacky). I wonder if you know a robust, well-established solution to this issue? Thanks

    Read the article

  • BleachBit: How to Completely Clear URL History in Firefox?

    - by tSquirrel
    14.04 / Firefox 29.0

    I've been using BleachBit to clear usage/file history, and for the most part it works great. However, it doesn't seem to clear the website hostnames out of the URL bar at all. These addresses are not bookmarked. Also, the full URL isn't preserved, just the hostname.

        1. Visit http://www.bluesnews.com/some_random_URL_string
        2. Exit Firefox
        3. Run BleachBit with ALL Firefox options selected
        4. Restart Firefox
        5. Check history: completely empty, other than bookmarked sites (www.bluesnews is NOT bookmarked)
        6. Type "blue", which Firefox automatically completes as "http://www.bluesnews.com/"

    Alternate step 3: use Firefox's built-in "Clear History" and select ALL entries with a time frame of "Everything". Same result as above. My inquiry in the BleachBit forums hasn't been answered. I found Dan's proposed solution; however, changing autocomplete in about:config only turns off the function, it doesn't actually stop Firefox storing URLs.

    Read the article
