Search Results

Search found 55652 results on 2227 pages for 'http response'.

  • Repeater vs. ListView

    - by MoezMousavi
    I really do hate the Repeater. I was more of a GridView lover, but since .NET 3.5 was born I prefer the ListView. The first problem with the Repeater is paging: you need to write code to handle it. The second common problem is the empty data template. Have a look at this:

        if (rptMyRepeater.Items.Count < 1)
        {
            if (e.Item.ItemType == ListItemType.Footer)
            {
                Label lblFooter = (Label)e.Item.FindControl("lblEmpty");
                lblFooter.Visible = true;
            }
        }

    I found the above code useful when you need to show something like "There is no record" if your data source has no records, whereas the ListView has a template for this. If you combine a ListView with a DataPager you will be in heaven, as it sorts out the paging for you without writing code (http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.datapager.aspx). Note: you have to bind the ListView in PreRender; it doesn't work properly in Page_Load. More: http://www.4guysfromrolla.com/articles/061009-1.aspx

    Read the article

  • is it ok to have 2 sitemaps on 1 website?

    - by user615041
    Do I have to link a sitemap from my index page for bots to read it, or can I just have it anywhere on my server? I have a phpBB/WordPress integration and I need a sitemap mod for each one (or I need to have them somehow integrated into one XML sitemap). Is this possible? What's my best option? I would have the phpBB one at something like http://www.example.com/phpbb/sitemap.html and the WordPress one at something like http://www.example.com/wordpress/sitemap.html, and then I would submit both, but not put the links in my footer to confuse anyone; the sitemaps would strictly be for search engines. Is this a good idea? What are your thoughts?
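
    If it helps: the sitemaps protocol also allows a single sitemap index file that points at several child sitemaps, so one possible setup (a sketch only, assuming XML sitemaps rather than the .html pages above, and with invented filenames) is an index at the domain root referencing both the phpBB and the WordPress sitemaps, which you then submit once:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://www.example.com/phpbb/sitemap.xml</loc>
          </sitemap>
          <sitemap>
            <loc>http://www.example.com/wordpress/sitemap.xml</loc>
          </sitemap>
        </sitemapindex>

    Alternatively, robots.txt can carry more than one Sitemap: line, so submitting the two sitemaps separately also works; neither approach requires linking them from the index page or the footer.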

    Read the article

  • Convert Currencies Dynamically using PHP, Google and cURL [closed]

    - by LizO
    I want to be able to allow users to dynamically change the currency of the product prices in my webstore, right there on the page. For example, 300 USD will change to 221.61 EUR when the user selects Euros from a dropdown. I found a few sites with PHP code in a calculator/input format (the user inputs a value and the converted currency is output): http://www.chazzuka.com/blog/?p=104 http://www.pixel2life.com/publish/tutorials/1166/currency_conversion_in_php/page-3/ I was hoping someone could help me figure out how to modify the PHP script. Thanks in advance.
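
    As a rough illustration of the cURL part, the conversion usually boils down to fetching a rate and multiplying. The snippet below is only a sketch: the rate-feed URL and its response format are placeholders (the Google feed used by the linked tutorials has changed over time), so it would need to be pointed at whichever rate service is actually used.

        <?php
        // Sketch: fetch a conversion rate and convert an amount.
        // The feed URL is a placeholder - substitute the rate service you actually use.
        function convert_currency($amount, $from, $to) {
            $rateUrl = 'http://rates.example.com/?from=' . urlencode($from) . '&to=' . urlencode($to);
            $ch = curl_init($rateUrl);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            $body = curl_exec($ch);
            curl_close($ch);
            $rate = (float) trim($body);   // assumes the feed returns a bare number
            return round($amount * $rate, 2);
        }
        // e.g. echo convert_currency(300, 'USD', 'EUR');

    For the dropdown itself, the selected currency can be posted back (or sent via AJAX) and each displayed price passed through a helper like this.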

    Read the article

  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have been googling for an answer all day and still can't find one. I have a virtual subdomain www.static.example.com, which is a mirror of www.example.com; that is, the subdomain and the domain share a single root folder. I want to redirect crawlers to a different robots.txt file - robots_static.txt - whenever they see .static in the URL, and in that file I will forbid indexing with a Disallow directive. I want to do this because I have duplicated content in Google search results: the subdomain shows exactly the same content as the main domain. Does anyone know how I can make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks.

    EDIT: This is my .htaccess file:

        ##
        # @package Joomla
        # @copyright Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
        # @license GNU General Public License version 2 or later; see LICENSE.txt
        ##

        ##
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that dissallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        ##

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        ## Mod_rewrite in use.
        RewriteEngine On
        RewriteEngine On

        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

        ## Begin - Rewrite rules to block out some common exploits.
        # If you experience problems on your site block out the operations listed below
        # This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        # Block out any script trying to base64_encode data within the URL.
        RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
        # Block out any script that includes a <script> tag in URL.
        RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL.
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL.
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Return 403 Forbidden header and show the content of the root homepage
        RewriteRule .* index.php [F]
        #
        ## End - Rewrite rules to block out some common exploits.

        ## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ## End - Custom redirects

        ##
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root).
        ##
        # RewriteBase /

        RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
        RewriteCond %{THE_REQUEST} !/system/.*
        RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
        RewriteCond %{THE_REQUEST} ^GET

        ## Begin - Joomla! core SEF Section.
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for something within the component folder,
        # or for the site root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ## End - Joomla! core SEF Section.

        <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
        Header set Cache-Control "public"
        </FilesMatch>

        <ifModule mod_headers.c>
        Header set Connection keep-alive
        </ifModule>

        ########## Begin - Remove Etags
        #
        FileETag none
        #
        ########## End - Remove Etags
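
    For reference, the per-host robots.txt switch is usually written as a plain internal rewrite, kept above the Joomla SEF rules, along the lines of the untested sketch below (note the escaped dots in the host pattern). As far as I know, crawlers always request the literal URL /robots.txt on each hostname, so the goal is only to change what that URL serves on the static host, and Webmaster Tools has to be checked against www.static.example.com itself (added as its own site) to see the alternate file rather than the main domain's robots.txt.

        # Untested sketch: serve robots_static.txt for any request for robots.txt
        # on the static subdomain; keep this block above the SEF rules.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.static\. [NC]
        RewriteRule ^robots\.txt$ /robots_static.txt [L]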

    Read the article

  • Wordpress Multisite (Subfolders) - Google Analytics Tracking

    - by mmundiff
    I have a WordPress multisite (subfolder) instance that I would like to track via Google Analytics. Ideally this would be a plugin with which I could track each site in two places: the main tracking code, which totals all traffic from the multisite instance, and the individual site tracking code, to see how each site specifically is doing. I think this plugin would have worked for me if I had a subdomain multisite instance: http://wordpress.org/extend/plugins/google-analytics-multisite-async/installation/ I know I can manually place the dual tracking code (http://www.markinns.com/articles/full/adding_two_google_analytics_accounts_to_one_page), but that would involve editing a theme, and I have multiple sites using the TwentyEleven template. I don't think I can edit the theme without it wreaking havoc on the rest of the sites using TwentyEleven. So has anyone done this? Is there a technique I'm missing? Is there a plugin available to do this in multisite subfolder installations? Is there a way to manually insert GA codes into themes which are used by multiple sites? Any insight is appreciated.
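
    For what it's worth, one route that avoids touching the TwentyEleven theme at all is a tiny network-activated plugin (or a file dropped into wp-content/mu-plugins/) that prints both trackers from the wp_head hook and picks the per-site property from the blog ID. The sketch below is just an illustration: it assumes the classic asynchronous ga.js snippet with a second named tracker, the UA-XXXXXXX IDs are placeholders, and the function name is made up.

        <?php
        /*
        Plugin Name: Dual GA tracking (sketch) - network-activate or drop into mu-plugins
        */
        function my_dual_ga_tracking() {                        // hypothetical name
            $network_ua = 'UA-XXXXXXX-1';                       // placeholder: network-wide property
            $per_site   = array(                                // placeholder: per-blog properties, keyed by blog ID
                1 => 'UA-XXXXXXX-2',
                2 => 'UA-XXXXXXX-3',
            );
            $blog_id = get_current_blog_id();
            $site_ua = isset($per_site[$blog_id]) ? $per_site[$blog_id] : '';
            ?>
            <script type="text/javascript">
              var _gaq = _gaq || [];
              _gaq.push(['_setAccount', '<?php echo $network_ua; ?>']);
              _gaq.push(['_trackPageview']);
              <?php if ($site_ua) : ?>
              _gaq.push(['site._setAccount', '<?php echo $site_ua; ?>']);
              _gaq.push(['site._trackPageview']);
              <?php endif; ?>
              (function() {
                var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
              })();
            </script>
            <?php
        }
        add_action('wp_head', 'my_dual_ga_tracking');

    Since it lives outside the theme, the same code keeps working for every site regardless of which template they use.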

    Read the article

  • Failing to install Ubuntu 13.04

    - by Kayven Riese
    I have a new Windows 8 Sony Vaio SVF14A15CXB laptop that has UEFI, and I have been struggling through an Ubuntu installation. I have a bootable Ubuntu DVD+R and I have managed to mess with the BIOS/UEFI enough that it boots. I used Windows 8 to create the hard disk partition I wanted and installed Ubuntu there, and I have burned both a Boot-Repair disc (http://sourceforge.net/p/boot-repair-cd/home/Home/) and a rEFInd disc (http://www.rodsbooks.com/refind/getting.html), and neither will boot. I know I should keep googling and struggling, but I am getting frustrated. Thanks to anyone who gives me the time of day.

    Read the article

  • Getting a 404 when using the Nexus 7 installer PPA, how do I fix this? [duplicate]

    - by Vitaliy
    This question already has an answer here: How can I fix a 404 Error when using a PPA? (2 answers)
    Ubuntu 13.10.

        sudo add-apt-repository ppa:ubuntu-nexus7/ubuntu-nexus7-installer

    runs OK, but sudo apt-get update reports:

        W: Failed to fetch http://ppa.launchpad.net/ubuntu-nexus7/ubuntu-nexus7-installer/ubuntu/dists/saucy/main/binary-amd64/Packages  404 Not Found
        W: Failed to fetch http://ppa.launchpad.net/ubuntu-nexus7/ubuntu-nexus7-installer/ubuntu/dists/saucy/main/binary-i386/Packages  404 Not Found

    Thanks for the answer.

    Read the article

  • Github Feed affecting my WordPress installation? [on hold]

    - by saul
    Any idea how this fork is affecting my site? I went to check my website log stats and realized it may be the cause of a strange redirect constantly happening on my WordPress installation. Here's a line I found in my log:

        54.81.91.95 - - [07/May/2014:22:52:08 -0400] "GET /category/selfie/feed/ HTTP/1.1" 200 1826 "-" "feedzirra http://github.com/pauldix/feedzirra/tree/master"

    And this is the GitHub fork (or however these are called): https://github.com/feedjira/feedjira/tree/master Basically, I think every time I update my categories (selfie in this case), I get redirected to install.php, probably by triggering some GET on that feed. To the best of my knowledge, this feed parses all URLs with this structure, blocking them, kind of like a DDoS attack?? Any ideas how to go about it??

    Read the article

  • I want to try and find an RFC for Business Listings.

    - by nc01
    I'm trying to figure out whether there's a good standard format for sharing business information such as:
    Business Name
    Address - well-defined fields
    Lat,Lng Coords
    Business Type - maybe from a well-defined enum; my starting point contains Retail, Food, Drink, Coffee, Service
    Hours of operation - including a spot for 'except laksdasd' or 'sometimes we open late', which could be just plain language
    Business Keywords - don't know if this is asking too much; how well do HTML meta tags work in practice?
    So, if no such thing exists, is this something I can submit to the IETF? I can't currently find it on http://www.rfc-editor.org/cgi-bin/rfcsearch.pl, and vCard doesn't suit my needs. Thanks!
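
    As one existing reference point (not an RFC, just an illustration of how most of the fields above are already modelled): the schema.org LocalBusiness vocabulary covers name, address, coordinates, opening hours and a free-text description, with the business type expressed by choosing a more specific @type; keywords are the one item without an obvious slot. All values below are invented.

        {
          "@context": "http://schema.org",
          "@type": "CafeOrCoffeeShop",
          "name": "Example Coffee House",
          "address": {
            "@type": "PostalAddress",
            "streetAddress": "123 Example St",
            "addressLocality": "Springfield",
            "addressRegion": "IL",
            "postalCode": "62701"
          },
          "geo": { "@type": "GeoCoordinates", "latitude": 39.78, "longitude": -89.65 },
          "openingHours": "Mo-Fr 08:00-17:00",
          "description": "Sometimes we open late."
        }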

    Read the article

  • .htaccess 301 Redirect for wildcard subdomains

    - by Steve
    I run WordPress in Network mode, which means I can have multiple websites running off one installation of WordPress. Each website runs as a subdomain. WordPress handles this using .htaccess and a wildcard subdomain pointing to the location of WordPress, so there are no actual subdomains created in cPanel; just a wildcard subdomain in cPanel, and WordPress handles the rest. I want to 301 redirect http://one.example.com/portfolio to http://two.example.com/portfolio. If I only have one .htaccess file in the web root of example.com, how do I achieve this?
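
    For illustration only - untested, and assuming the standard WordPress multisite .htaccess - a host-specific redirect placed near the top of that single .htaccess, before the WordPress rules, would look something like this:

        # Redirect one.example.com/portfolio (and anything under it) to two.example.com
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^one\.example\.com$ [NC]
        RewriteRule ^portfolio(/.*)?$ http://two.example.com/portfolio$1 [R=301,L]

    Because the condition matches on the Host header, the rule only fires for the one subdomain even though every subdomain is served from the same file.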

    Read the article

  • What is the SEO-recommended method for using underscores and dashes in URLs that contain geographic locations?

    - by ElHaix
    In reading through this article, "In Subfolder & File Names, Use Dashes, Not Underscores":
    Good: http://www.domain.com/sub-folder/file-name.htm
    Bad: http://www.domain.com/sub_folder/file_name.htm
    In my URLs, I may have one or two city names, ending with the province/state: Burnaby_New_Westminister-BC/[some search term]. My URL rules are currently defined such that everything after the dash is the province/state. Some geographic locations already contain dashes: Notre-Dame-de-Grâce (in QC), which I would convert to ~/Notre_Dame_de_Grace-QC/. I thought of placing the province/state after another "/", however in some cases the province/state name may not exist, thus ~/Notre_Dame_de_Grace/, so the first term after the domain name contains the geo location {city, city_name-state}. I am now revisiting this and wondering if this rule set should change, and if so, what is the recommended way of implementing it? -- UPDATE -- After reviewing this video, I see that I should be using dashes rather than underscores. However, since I still want to have my geo locations in the first URL section, is there anything wrong with using a double-dash separator, i.e. /city-name--state/?
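
    On a side note, a slug in that shape can be generated mechanically. The sketch below is plain PHP, purely to illustrate the double-dash idea rather than to answer the SEO question itself; the function name is made up, and iconv's transliteration of accented characters depends on the system locale.

        <?php
        // Sketch: geo_slug("Notre-Dame-de-Grâce", "QC")  ->  "notre-dame-de-grace--qc"
        function geo_slug($city, $prov = '') {
            $slug = iconv('UTF-8', 'ASCII//TRANSLIT', $city);               // Grâce -> Grace (locale-dependent)
            $slug = strtolower(trim(preg_replace('/[^A-Za-z0-9]+/', '-', $slug), '-'));
            return $prov === '' ? $slug : $slug . '--' . strtolower($prov);
        }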

    Read the article

  • Problems after installing Ubuntu LiveUSB on my 2nd HD

    - by user113106
    I decided to create an Ubuntu USB installer using the Universal USB Installer, selecting an Ubuntu 12.10 ISO. I selected my D: drive - the drive that does NOT carry Windows 7 - as the install target. After this I rebooted my system, and my PC began to run the Ubuntu boot start screen every time I power up the machine, giving errors like "Root=Unknown". I used my girlfriend's laptop to create, in exactly the same way, a proper USB Ubuntu installer. Booting from that USB and picking the option "Run Ubuntu from this USB", I get the following error: http://postimage.org/image/63qkv98c1/ Trying to install it from that USB to a hard disk: http://postimage.org/image/usqbwymfx/ As I said, I do not have the option to pick my boot section; at this very moment I can only access the Ubuntu installation and nothing else. I've read about 90% of the other questions that could be related, but I could not find a solution. BTW I'm running an Acer Aspire, quad-core, 4 GB DDR2, ATI Radeon HD, 64-bit, and I've set my BIOS OS usage from "Windows" to "Others".

    Read the article

  • Integrating with a payment provider; Proper and robust OOP approach

    - by ExternalUse
    History
    We are currently using a so-called redirect model for our online payments (where you send the payer to a payment gateway, where he inputs his payment details - the gateway will then return him to a success/failure callback page). That's easy and straightforward, but unfortunately quite inconvenient and at times confusing for our customers (leaving the site, changing their credit card details with an additional login on another site, etc).
    Intention & problem description
    We are now intending to switch to an integrated approach using an exchange of XML requests and responses. My problem is how to cater for all (or rather most) of the things that may happen during processing - bearing in mind that normally simplicity is robust whereas complexity is fragile.
    Examples
    User abort: The user inputs credit card details and hits submit. An XML message is sent to the provider's gateway and we are waiting for the response. The user hits "stop" in his browser or closes the window. ignore_user_abort() in PHP may be an option - but is that reliable? Might it be better to redirect the user to a "please wait" page, which in turn opens an AJAX or other request to the actual processor that does not rely on the user's connection?
    Database goes away: It sounds over-complicated, but with e.g. a webserver in the States and a DB in the UK, it has happened and will happen again: the user clicks together his order, the payment request has been sent to the provider, but the response cannot be stored in the database. What approach could I use, using PHP, to sort of start an SQL-like "transaction" that only at the very end gets committed or rolled back, depending on the individual steps? Should neither commit nor rollback happen, I could sort of "lock" the user to prevent him from paying again or from improperly accounting for payments - but how? And what else do I need to consider technically? None of the integration examples of e.g. WorldPay, Realex or SagePay offer any insight, and neither Google nor my search terms were good enough to find somebody else's thoughts on this. Thank you very much for any insight on how you would approach this!
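
    On the "database goes away" point, one common shape for this - sketched below in PHP/PDO, with the table and column names and the gateway helper invented purely for illustration - is to commit a locally generated payment row in a 'pending' state before the gateway call, then update that row when the response arrives. A row left in 'pending' then becomes the signal to block a retry and to reconcile manually against the gateway's records, and ignore_user_abort(true) keeps the script running if the user closes the window mid-exchange (although moving the exchange out of the user's request entirely, as in the "please wait" page idea, is the more robust variant).

        <?php
        // Sketch only: $pdo is a connected PDO instance; schema and helper are hypothetical.
        ignore_user_abort(true);                        // keep running if the client disconnects

        $stmt = $pdo->prepare('INSERT INTO payments (order_id, status) VALUES (?, ?)');
        $stmt->execute(array($orderId, 'pending'));
        $paymentId = $pdo->lastInsertId();              // committed before talking to the gateway

        $response = send_xml_to_gateway($xmlRequest);   // hypothetical helper doing the XML/cURL exchange

        $stmt = $pdo->prepare('UPDATE payments SET status = ?, gateway_ref = ? WHERE id = ?');
        $stmt->execute(array($response->ok ? 'paid' : 'failed', $response->ref, $paymentId));
        // If this UPDATE ever fails, the row stays 'pending': refuse further payment
        // attempts for that order and reconcile it manually.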

    Read the article

  • URL slugs: ideal length, and the real SEO effects of these slugs

    - by tattvamasi
    This question is addressed widely on SO and outside it, but for some reason, instead of taking it as a good load of great advice, all this information is confusing me.
    ** Problem ** I already had, on one of my sites, "prettified" URLs. I had taken out the query strings, rewritten the URLs, and the link was short enough for me, but there was a problem: the ID of the item or post in the URL isn't good for users. One of the users asked if there's a way to get rid of the numbers, and I thought it was better for users to just see a clue of the page content in the URL.
    ** Solution ** With this in mind, I am trying this on a section of the site. Armed with 301 redirects, some parsing work, and a lot of patience, I have added URL slugs to some blog entries, and the slug of the URL reports the title of the article (something close to http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/).
    ** Problems after Solution ** The problem, as I see it, is that the URL of those blog articles is now certainly very descriptive, but it is also impossible to remember. So this brings me back to the same issue I had before: if numbers say nothing and can't be remembered, what's the use of these slugs? I prefer to see http://example.com/my-news/1/ rather than http://example.com/my-news/terribly-boring-and-long-url-that-replaces-the-number-I-liked-so-much/ To avoid forcing my users to memorize my URLs, I have added a script that finds the closest match to the URL you type and redirects there (a rough sketch of that script follows below). This is something I like, because the page now acts as a sort of little search engine, and users can play with the URLs to find articles.
    ** Open questions ** I still have some open questions, and don't seem to be able to find an answer, because answers tend to contradict one another. 1) How many characters should a URL ideally be? I've read the magic number 115 and am sticking to that, but am not sure. 2) Is this really good for SEO? One of those blog articles I have redirected, with the ID number in the URL and all, ranked second on Google. I've just found this question, and the answer seems to be consistent with what I think ("URL slug and SEO - structure", but see this other question with the opposite opinion). 3) To ask the question with a specific example, would this URL risk being penalized? Is it acceptable? Is it too long? Stack Overflow seems to have comparably long URLs, but I'm not sure it's a winning strategy in my case. I just wanted to make things easier for my users without running into Google's algorithms.
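
    Since the closest-match redirect script came up above, a bare-bones version of that idea (illustrative only - here the slugs are hard-coded instead of coming from the database) is just a levenshtein() scan over the known slugs:

        <?php
        // Sketch: redirect a mistyped slug to the nearest known one.
        $known     = array('terribly-boring-and-long-url', 'another-article', 'url-slugs-and-seo');
        $requested = isset($_GET['slug']) ? trim($_GET['slug'], '/') : '';
        $best = null;
        $bestDist = PHP_INT_MAX;
        foreach ($known as $slug) {
            $d = levenshtein($requested, $slug);
            if ($d < $bestDist) { $bestDist = $d; $best = $slug; }
        }
        if ($best !== null && $best !== $requested) {
            header('Location: /my-news/' . $best . '/', true, 301);
            exit;
        }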

    Read the article

  • IBM Reinvents x86 Platform with eX5 Servers

    The amount of data involved in the average Web-based workload today doubles every year, increasing costs and straining IT resources. The traditional response to this dilemma from IT organizations is to throw more servers at the problem, which furthers server sprawl and increases power and management costs. As a result, the typical x86 server is only running at 10 percent utilization.

    Read the article

  • Should I Learn C/C++ Even If I Just Want To Do Web Programming?

    - by Daniel
    My goal is to be able to create online apps and dynamic, database-driven websites. For instance, if in the future I get the idea for the next Digg or Facebook, I want to be able to code it myself. To get there I think I have basically two paths:
    Path 1: Start at a basic level, learning C, then C++ for OOP, then algorithms and data structures, with the goal of getting a solid grasp of computer programming. Only then move to PHP/MySQL/HTTP and start working on practical programming projects.
    Path 2: Start directly with PHP/MySQL/HTTP and get my hands dirty with practical projects right away.
    What would you guys recommend?

    Read the article

  • problem showing my website correctly in search engines

    - by dinbrca
    Hello guys, I have a website which I had indexed on Google about 15 days ago. Some of my pages pass arguments, like this: http://www.bla.com/products.php?pro=bla&page=view Then I saw that passing arguments like this isn't good for SEO purposes, so I started using an .htaccess rewrite and changed the URLs to look like this: http://www.bla.com/products/bla/view/ Now my site on Google still shows up as in link number 1. What should I do? I thought I should wait for the search engine to crawl my site again, but nothing has happened. Thanks in advance, Din
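
    One thing that normally goes along with the rewrite is a 301 from the old query-string form to the new path, so the engines can transfer the existing listings instead of waiting to re-discover the pages. A rough, untested .htaccess sketch follows (parameter names taken from the example above, target layout adjusted to taste); it matches on THE_REQUEST so it doesn't fight the internal rewrite that maps the pretty URL back to products.php:

        # old: /products.php?pro=bla&page=view   ->   new: /products/bla/view/
        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /products\.php\?pro=([^&\ ]+)&page=([^&\ ]+)\ HTTP [NC]
        RewriteRule ^products\.php$ /products/%1/%2/? [R=301,L]

    Even with the redirect in place, Google only swaps the listings as it re-crawls the old URLs, so it still takes some days rather than being immediate.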

    Read the article

  • How do I install drivers for a Konica Minolta 200?

    - by th3pr0ph3t
    This copy machine / scanner / network printer works with Windows, but no drivers are available for Linux. When Ubuntu supports a printer it works fine, but this one is not supported. I found the drivers at: http://onyxftp.mykonicaminolta.com/download/SearchResults.aspx?productname=bizhub%20200 But I don't know how to install them, nor which one to download. How can I install this driver? EDIT: The file with the driver is here: http://onyxftp.mykonicaminolta.com/DownloadFile/Download.ashx?fileid=18571&productid=865 Inside the archive there is a .deb package that installs correctly but doesn't work. So now the question is: "How can I make it work?"

    Read the article

  • How to create a JMS durable subscriber in WebLogic Server?

    - by lmestre
    WebLogic Server provides a set of examples that are very helpful for getting started with WebLogic Server. Here you can check how to install the examples: http://docs.oracle.com/cd/E23943_01/doc.1111/e14142/prepare.htm After you have installed the examples, you can find the example you want to review, in this case TopicReceive, here: wlserver_10.3/samples/server/examples/src/examples/jms/topic To review details of the specific example, you can open: wlserver_10.3/samples/server/examples/src/examples/jms/topic/instructions.html To create a durable subscriber, you can just set the client ID and invoke createDurableSubscriber instead of calling createSubscriber, i.e.:

        tconFactory = (TopicConnectionFactory)
            PortableRemoteObject.narrow(ctx.lookup(JMS_FACTORY),
                                        TopicConnectionFactory.class);
        tcon = tconFactory.createTopicConnection();
        // Set Client ID for this Durable Subscriber
        tcon.setClientID("GT2");
        tsession = tcon.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        topic = (Topic)
            PortableRemoteObject.narrow(ctx.lookup(topicName),
                                        Topic.class);
        // Create Durable Subscription
        tsubscriber = tsession.createDurableSubscriber(topic, "Test");
        tsubscriber.setMessageListener(this);
        tcon.start();

    Enjoy! You can read more about this here:
    http://docs.oracle.com/cd/E23943_01/web.1111/e13727/advpubsub.htm#CHDEBABC
    http://docs.oracle.com/cd/E23943_01/web.1111/e13727/manage_apps.htm#i1097671
    http://docs.oracle.com/cd/E23943_01/apirefs.1111/e13943/WebLogic.Messaging.ISession.CreateDurableSubscriber_overload_2.html

    Read the article

  • Is it possible to track redirects to external sites from our subdomains?

    - by ChaBuku
    I have a handful of subdomains set up as redirects because we are using them for QR codes. I want to be able to track the QR code redirects (which are already set up and printed, so there is no changing them at this point) and see the effectiveness of each. Here are two examples: http://qr.glorkianwarrior.com and http://ad.glorkianwarrior.com are set up to forward to our iTunes page (later this year they may forward to Google Play or a specific landing page). Is there any way, on my server, to track the redirect from the subdomain to iTunes and see where the traffic is coming from first? I have the redirects set up through cPanel presently, using subdomains. Edit: From the research I've seen, I can't track a 301 directly. If I redirect to an internal page and then do a timed redirect to the iTunes link, how long will it take for the tracking script to register a hit?
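
    One low-tech option that sidesteps the "301s are invisible to analytics" problem entirely is to point each QR subdomain at a tiny server-side script that logs the hit and then issues the redirect itself, so there is no intermediate timed page and the user still lands on iTunes immediately. A hedged sketch (the file name, log format and target URL are all placeholders):

        <?php
        // qr.php (sketch): log the scan, then send the visitor on to the store page.
        $line = sprintf("%s\t%s\t%s\t%s\n",
            date('c'),
            $_SERVER['HTTP_HOST'],                                              // which QR subdomain was hit
            isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-',
            isset($_SERVER['HTTP_REFERER'])    ? $_SERVER['HTTP_REFERER']    : '-');
        file_put_contents(__DIR__ . '/qr-hits.log', $line, FILE_APPEND);
        header('Location: https://itunes.apple.com/app/idXXXXXXXXX');           // placeholder target
        exit;

    If you go the internal-page-plus-tracking route instead, the tracking request usually fires within a fraction of a second of page load, so a one- or two-second delay before the client-side redirect is generally enough.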

    Read the article

  • How to pass information across domains to ask for newsletter only once?

    - by Michal Stefanow
    Let's assume the following scenario. I have two sites: example1.com example2.com When a user visits 1, there is a prompt: "please sign up to the newsletter". The same thing happens when the user visits 2. However, when navigating from 1 to 2, I don't want the signup form to be shown. My first thought was 3rd-party cookies, but it seems that they are blocked / not working: http://stackoverflow.com/questions/4701922/how-does-facebook-set-cross-domain-cookies-for-iframes-on-canvas-pages?rq=1 http://stackoverflow.com/questions/172223/how-do-i-set-cookies-from-outside-domains-inside-iframes-in-safari?rq=1 Another thought is to append #noshow to each URL, but that would require some work - for instance a script that would intercept click / tap events and modify the URL structure depending on the address (but that seems hacky). I wonder if you know a robust, well-established solution to this issue? Thanks
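
    A variant of the #noshow idea that doesn't require intercepting every click with a script: have site 1 append a query parameter only to the outbound links that point at site 2, and have site 2 turn that parameter into its own first-party cookie. A sketch follows (the parameter and cookie names are made up):

        <?php
        // On example2.com, early in the request, before any output:
        if (isset($_GET['nl']) && $_GET['nl'] === 'seen') {
            setcookie('newsletter_prompt_seen', '1', time() + 60 * 60 * 24 * 365, '/');
            $_COOKIE['newsletter_prompt_seen'] = '1';   // make it visible to this request as well
        }
        $showSignup = empty($_COOKIE['newsletter_prompt_seen']);
        // ...render the signup prompt only when $showSignup is true.

    The same check runs on site 1 with the roles reversed, so each domain keeps its own cookie and nothing cross-domain is needed.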

    Read the article

  • What To Do w Driver Which is "Activated But Not Currently in Use"

    - by John
    Ubuntu 12.04. Graphics card: GeForce-4 MX 420.
    From the Nvidia website, downloaded NVIDIA-Linux-x86-96.43.18.pkg1.run
    Moved it from the Downloads folder to an "Nvidia" folder
    I searched for "Additional Drivers"
    I got a response that Nvidia-(Current) was "Activated But Not Currently in Use"
    6a. Did I download it incorrectly?
    6b. Did I move, open or install it incorrectly?
    6c. I want to update the driver to improve rendering.
    6d. Not sure what to do next.

    Read the article

  • Footer not showing in website depending on which item is loaded [on hold]

    - by samyb8
    I designed a website which is having an issue; I have checked the HTML tagging carefully but cannot fix it. If you go to this item: http://www.tahara.es/store/headbands/11/Ivory-Turquoise-headband you will see the FOOTER display normally. However, if you go to this other item: http://www.tahara.es/store/headscarves/15/Grey-and-ivory-with-stoned-flower-Headscarf the footer does not show. Any clue as to what I am missing or adding? The footer DIV is like this: <div id="footer">

    Read the article

  • BleachBit: How to Completely Clear URL History in Firefox?

    - by tSquirrel
    14.04 / Firefox 29.0. I've been using BleachBit to clear usage/file history, and for the most part it works great. However, it doesn't seem to clear the website hostnames out of the URL bar at all. These addresses are not bookmarked. Also, the full URL isn't preserved, just the hostname.
    1. Visit site http://www.bluesnews.com/some_random_URL_string
    2. Exit Firefox
    3. Run BleachBit, with ALL Firefox options selected
    4. Restart Firefox
    5. Check history: completely empty, other than bookmarked sites. www.bluesnews is NOT bookmarked
    6. Type "blue", which Firefox automatically completes as "http://www.bluesnews.com/"
    Alternate Step #3: Use Firefox's built-in "Clear History" and select ALL entries with a time frame of "Everything". Same result as above. My inquiry in the BleachBit forums hasn't been responded to. I found Dan's proposed solution, however changing autocomplete in about:config only turns off the function; it doesn't actually stop storing URLs.

    Read the article
