Search Results

Search found 45245 results on 1810 pages for 'html content extraction'.


  • Making simple forms in web applications

    - by levalex
    How do you work with forms in your web applications? I am not talking about RESTful applications, and I don't want to build a heavy front end using frameworks like Backbone. For example, I need to add a "contact us" form: I need to validate the data the user filled in and tell them that it was sent. Requirements: I want to use AJAX, and I want to validate the form on the back-end side without duplicating the same code on the front-end side. I have my own solution, but it doesn't satisfy me. On form submit I make an AJAX request with the serialized data, get the response, and then check its "Content-Type" header:
    - html - there are validation errors and the response HTML is the form with error labels; I replace my form with the response HTML.
    - json and response.error_code == 0 - the form was submitted successfully; I show the user a success notification.
    - json and response.error_code != 0 - something broke on the back end (like the connection to the database).
    - other - I display the following message: "We have been notified and have started to work on that problem. Please try again later."
    The problem with that approach is that I can't use it with forms that upload files. What is your practice? What libraries and principles do you use?
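    A minimal sketch of the submit handler described above, assuming a hypothetical /contact endpoint; FormData is used so that file inputs travel with the request, which plain serialization cannot do (element ids and the JSON shape are illustrative, not taken from the post):

        // A sketch, not the poster's code: submit via fetch and branch on the response Content-Type.
        document.getElementById('contact-form').addEventListener('submit', async function (e) {
            e.preventDefault();
            var response = await fetch('/contact', { method: 'POST', body: new FormData(this) });
            var type = response.headers.get('Content-Type') || '';
            if (type.indexOf('text/html') !== -1) {
                this.outerHTML = await response.text();   // server re-rendered the form with error labels
            } else if (type.indexOf('application/json') !== -1) {
                var data = await response.json();
                alert(data.error_code === 0 ? 'Thank you, your message was sent.'
                                            : 'Something went wrong, please try again later.');
            } else {
                alert('We have been notified and have started to work on the problem. Please try again later.');
            }
        });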

    Read the article

  • How to structure git repositories for project?

    - by littledynamo
    I'm working on a content synchronisation module for Drupal. There is a server module, which sits on one website and exposes content via a web service. There is also a client module, which sits on a different site and fetches and imports the content at regular intervals. The server is built on Drupal 6 and the client on Drupal 7. There is going to be a need for a Drupal 7 version of the server, and then for Drupal 8 versions of both the client and the server once it is released next year. I'm fairly new to git and source control, so I was wondering what the best way to set up the git repositories is. Would it be a case of having a separate repository for each instance, i.e.:
    - Drupal 6 server = 1 repository
    - Drupal 6 client = 1 repository
    - Drupal 7 server = 1 repository
    - Drupal 7 client = 1 repository
    and so on? Or would it make more sense to have one repository for the server and another for the client, then create branches for each Drupal version? Currently I have 2 repositories - one for the client and another for the server.
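    If the second option is chosen, a rough sketch of the common Drupal convention of one branch per core version (repository and branch names here are made up for illustration):

        # one repository for the server module, one branch per Drupal core version
        git init sync-server && cd sync-server
        # ...commit the existing Drupal 6 code...
        git branch -m 6.x          # name the current branch after the core version
        git checkout -b 7.x        # start the Drupal 7 port from the 6.x code
        git checkout 6.x           # switch back when maintaining the Drupal 6 version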

    Read the article

  • Rankings dropping after small URL-change WITH 301-redirect

    - by David
    Two weeks ago, we attempted to make the URLs of about 12 pages more search-engine friendly. We changed three things:
    1. Made the URLs more search-engine friendly, from /????-????/brandname.html (meaning /aircon-price/daikin.html) to /????-brandnameinenglish-brandnameinthai.html. We set up 301 redirects from the old to the new URLs. You can find an example and the link to our page here: http://bit.ly/XRoTOK. There are no direct external links to the old URLs.
    2. Added text to the image links from the homepage to the brand pages. Before these changes we only linked to those brands with a picture, so we added some text under the picture. You can see that here, in the left submenu: http://bit.ly/XRpfoF
    3. Made minor changes to the title, h1 tags, meta description, etc., to better match the on-site optimization with the targeted keywords. For example, where we previously used full brand names, we switched to what is actually searched for: from "Mitsubishi Electric Mr. Slim" to "???? Mitsubishi" (meaning "Aircon Mitsubishi").
    Three days after these changes, we noticed a heavy drop (an 80% loss in non-paid search traffic) in rankings and traffic for those pages, and also for all pages that are sub-categorized under them. Rankings for all keywords not affected by the changes stayed the same. Any ideas what happened, and how we can regain our old rankings? What we already did was submit a new sitemap. Help much appreciated. Best regards, David
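    For reference, a permanent redirect for one moved page can be expressed with a single mod_alias line in Apache; this is a generic sketch with a made-up target URL, not the poster's actual configuration:

        # .htaccess or vhost config - one permanent redirect per moved page
        Redirect 301 /aircon-price/daikin.html http://www.example.com/new-daikin-page.html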

    Read the article

  • Saving Dragged Dropped items position on postback in asp.net [closed]

    - by Deeptechtons
    OK, I saw many posts on how to serialize the value of dragged items to get a hash, and they explain how to save it. Now the question is: how do I restore the dragged items the next time the user logs in, using the hash value that I got? E.g.:

        <ul class="list">
            <li id="id_1"> <div class="item ui-corner-all ui-widget ui-widget-content"> </div> </li>
            <li id="id_2"> <div class="item ui-corner-all ui-widget ui-widget-content"> </div> </li>
            <li id="id_3"> <div class="item ui-corner-all ui-widget ui-widget-content"> </div> </li>
            <li id="id_4"> <div class="item ui-corner-all ui-widget"> </div> </li>
        </ul>

    which on serialize will give "id[]=1&id[]=2&id[]=3&id[]=4". Now suppose I saved that to a SQL Server database in a single field called SortOrder. How do I get the items back into that order again? The code that makes them sortable is below, without which people wouldn't know which library I used to sort and serialize:

        <script type="text/javascript">
            $(document).ready(function() {
                $(".list li").css("cursor", "move");
                $(".list").sortable();
            });
        </script>
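    A minimal sketch of restoring the saved order on page load, assuming the saved SortOrder string (e.g. "id[]=1&id[]=3&id[]=2&id[]=4") has been written back into the page as a JavaScript array of element ids; the names are illustrative:

        <script type="text/javascript">
            $(document).ready(function() {
                // e.g. rendered server-side from the saved SortOrder field
                var savedOrder = ["id_1", "id_3", "id_2", "id_4"];
                var list = $(".list");
                // appending an existing element moves it, so this re-sequences the <li>s
                $.each(savedOrder, function(i, id) {
                    list.append($("#" + id));
                });
                $(".list li").css("cursor", "move");
                list.sortable();
            });
        </script>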

    Read the article

  • Exim redirect all unexisting accounts for local domains to a specific account

    - by tntu
    I want to route all incoming email for local domains to a single account whenever no account is set up for that user. I would also like each email to be written to its own file in the user's folder. I have a catchall user with the path /home/catchall/, where I have made a mail folder for this, but so far the emails either fail to deliver (so my rule did not work) or they are delivered to the /etc/mail/catchall file. I have been trying to put something together from the Exim documentation, but so far nothing seems to work. http://exim.org/exim-html-current/doc/html/spec_html/ch20.html
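    A rough sketch of the usual shape of such a setup; the router is placed after the normal local-user routers so it only fires for unknown local parts, and a maildir-style transport gives one file per message. The names and paths below are guesses for illustration, not taken from the original post:

        # routers section (after the normal localuser router, so it only
        # matches local parts that no earlier router accepted)
        catchall_redirect:
          driver = redirect
          domains = +local_domains
          data = catchall@localhost

        # transports section: maildir_format writes one file per message
        # instead of a single mbox file; the router or transport that finally
        # delivers to the catchall user would need to reference it
        catchall_maildir:
          driver = appendfile
          directory = /home/catchall/mail
          maildir_format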

    Read the article

  • How to rotate html5 canvas as page background?

    - by Sebastian P.R. Gingter
    Hi, I want to achieve the following: imagine a white sheet of paper on a black desk. Then rotate the paper a little to the left (say, 25 degrees). You still have the black desk, and a rotated white box on it. In this rotated white box I want to place non-rotated, normal HTML content like text, tables, divs etc. I already have a problem at the very first step: rotating a rectangle. This is my code so far:

        <html>
        <head>
            <script>
                function draw() {
                    var canvas = document.getElementById("myCanvas");
                    var c = canvas.getContext("2d");
                    c.fillStyle = '#00';
                    c.fillRect(100, 100, 100, 100);
                    c.rotate(20);
                    c.fillStyle = '#ff0000';
                    c.fillRect(150, 150, 10, 10);
                }
            </script>
        </head>
        <body onload="draw()">
            <canvas id="myCanvas" width="500" height="500"></canvas>
        </body>
        </html>

    With this, I see only a normal black box. Nothing else. I assume there should be a red, rotated box too, but there's nothing. What is the best approach to achieve this and to have it as a (scaling) background for my web page?
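    For comparison, a small sketch of rotating around a chosen point: canvas rotate() takes radians, not degrees, and rotates the whole coordinate system around the current origin, so the usual pattern is save, translate, rotate, draw, restore (the numbers below are illustrative):

        function drawRotated() {
            var c = document.getElementById("myCanvas").getContext("2d");
            c.fillStyle = '#000000';
            c.fillRect(0, 0, 500, 500);          // the black "desk"
            c.save();
            c.translate(250, 250);               // move the origin to the paper's centre
            c.rotate(-25 * Math.PI / 180);       // degrees converted to radians
            c.fillStyle = '#ffffff';
            c.fillRect(-150, -200, 300, 400);    // the white "paper", centred on the origin
            c.restore();                         // later drawing is unaffected by the rotation
        }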

    Read the article

  • SQL restore from single file db to filegroup

    - by Mauro
    I have a 180GB MOSS 2007 database whose maintenance (i.e. backups and restores) is becoming a problem. Part of the issue can be resolved by splitting the three content sites into their own site collections; however, this will likely still leave me with a 100GB database to deal with. While this isn't entirely problematic for SQL Server, it does mean that backups and restores take far too long. My idea is to split each of the databases into 30GB files and then import the content into them, which should distribute the content across the filegroups, making it much easier and faster to back up and restore. Is there a way to back up from a single file and restore to a filegroup? If I have the wrong understanding of filegroups, then I'm more than happy to find out about other methods of managing the size of databases.

    Read the article

  • What should a developer know before building a public web site?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web site address before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also: I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that, which standards? In what circumstances, and why? Provide a link to the standard's specification. This question is community wiki, so please feel free to edit the answer to add links to good articles that will help explain or teach each particular point. To search in only the answers from this question, use the inquestion:this option.

    Read the article

  • Recover a deleted webpage

    - by rc
    Suppose a blog or a nice article was hosted on a website and it got deleted, or worse, the website was taken down. How do you view that web page? I tried searching for the cached version in Google, but it looks like the content was deleted long ago and is not listed in the search results directly. There are annotations to the link from many other sites, but the actual content is still not fully available. Now, can anybody help me see this page? I am actually looking for http://effectize.com/become-coolest-programmer :) Moreover, in addition to bookmarking a favorite link, is it possible to cache the content of the link as well for later reference in case it gets deleted? EDIT: Looks like a URL can be cached for future reference. Try: http://backupurl.com/

    Read the article

  • I want to be a programmer, work in corporate environment, earn well, learn fast and eventually become a great programmer [on hold]

    - by Shin San
    I'll try to keep this simple: I'm 29, have been dabbling with computers for the past 10 years, have had entry-level jobs in tech support for different apps, have been fixing computers for a while, and now want to specialize in something. I'm not a complete stranger to programming, but I haven't gone past if/then/else with anything: a bit of JavaScript, PHP and Python, and currently checking out the "SELECT" statement in SQL :)) I'm curious about programming, I enjoy it, and I'm thinking of making a living out of it. So, while I'm at it, why not earn a bit more than the average Joe? That's why I'm checking what the best solution, the best learning path and the most useful languages are, considering: a) how easy/fast you can find a job by knowing it, b) how much I would be able to earn, and c) how fast I can learn it. By reading 10-20 articles online I've come up with an example, but I'm here for some expert advice. Example ratings from the a) and b) points of view: #1 SQL; #2 Java; #3 HTML (please don't start the markup language debate); #4 JavaScript. From these ratings, I'd say a good way to go is to learn HTML/CSS/(JavaScript or PHP) for the web part of apps, some SQL/MySQL/whatever-SQL for holding data, and loads of Java for the program itself. Please let me know if this is a good idea and, if so, what the order for learning all of the above should be. Otherwise, please let me know a better way and why it would be better. Many thanks for taking the time to read my question. Best wishes to you guys. Edit: if Java + SQL + HTML&JavaScript is the way to go, does the order I learn them in matter? Or can I try to learn them all at once?

    Read the article

  • Varnish doesn't seem to be caching

    - by Charlie Somerville
    I've set up a Varnish cache mirror to sit in front of a file server, but it seems to be endlessly re-downloading data from the file server. There's about 100GB of data in total, but so far Varnish has downloaded 800GB from my file server. I'm using the default VCL file that comes with Varnish, and the response headers for files served by the file server are similar to the following:

        HTTP/1.1 200 OK
        Cache-Control: max-age=290304000, public
        Content-Type: image/jpeg
        Expires: Wed, 29 Dec 2010 21:38:33 GMT
        Server: Microsoft-IIS/7.0
        E-Tag: "8b4723296ab697530768f18b1378b269"
        Content-Disposition: inline; filename=image046.jpg;
        X-AspNet-Version: 4.0.30319
        X-Powered-By: ASP.NET
        Date: Thu, 23 Dec 2010 05:38:33 GMT
        Content-Length: 100592

    I'm starting varnishd with the following options:

        varnish/sbin/varnishd -a 0.0.0.0:80 -f varnish/etc/varnish/default.vcl -s file,varnish/var/lib/varnish/varnish_storage.bin,100G
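    One way to rule out TTL handling as the cause is to pin a TTL explicitly and then watch the hit rate in varnishstat; a minimal sketch, assuming Varnish 2.1/3.x where vcl_fetch and beresp are available (this overrides whatever the default VCL decided, so treat it as a diagnostic, not a fix):

        sub vcl_fetch {
            # force cacheable backend responses to be kept for a week
            set beresp.ttl = 7d;
            return (deliver);
        }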

    Read the article

  • Java Embedded @ JavaOne Call for Papers

    - by arungupta
    Do you care about the Internet of Things? Interested in sharing your experience at JavaOne about how you are using Java Embedded technology to realize this vision? At Java Embedded @ JavaOne, C-level executives, architects, business leaders, and decision makers from around the globe will come together to learn how Java Embedded technologies and solutions offer compelling value and a clear path forward to business efficiency and agility. The conference will feature dedicated business-focused content from Oracle discussing how Java Embedded delivers a secure, optimized environment ideal for multiple network-based devices, as well as meaningful industry-focused sessions from peers who are already successfully utilizing Java Embedded. Submit your papers for the Business Track or technical content related to Embedded Java to be presented at JavaOne here. Speakers for accepted sessions will receive a complimentary pass to the event for which their session is submitted. Note that the CFP for the main JavaOne conference is over, speakers have been notified, and the content catalog published; this CFP is only for Java Embedded @ JavaOne. Some key dates:
    - Jul 8th: Call for Papers closes
    - Week of Jul 29th: Notifications sent
    - Conference dates: Oct 3-4, 2012
    And the main conference website is oracle.com/javaone/embedded.

    Read the article

  • Why doesn't Firefox cache my images and CSS

    - by Richard A
    I am using IIS7 and have already set up the following, but when I run Firefox it seems not to cache any of my images, even with "remember history" set.

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
            <system.webServer>
                <staticContent>
                    <clientCache cacheControlCustom="public" cacheControlMode="UseMaxAge"
                                 cacheControlMaxAge="7.00:00:00" />
                </staticContent>
            </system.webServer>
        </configuration>

    However, when I use Firebug it still shows Firefox not caching images and CSS. Response headers:

        public,max-age=604800
        Content-Type text/css
        Content-Encoding gzip
        Last-Modified Mon, 27 Jun 2011 03:53:22 GMT
        Accept-Ranges bytes
        Etag "507968c27d34cc1:0"
        Vary Accept-Encoding
        Server Microsoft-IIS/7.5
        X-Powered-By ASP.NET
        Date Mon, 27 Jun 2011 13:06:41 GMT
        Content-Length 5067

    Request headers:

        Host www.xx.com
        User-Agent Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
        Accept text/css,*/*;q=0.1
        Accept-Language en-us,en;q=0.5
        Accept-Encoding gzip, deflate
        Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive 115
        Connection keep-alive
        Referer http://www.xx.com/
        Cookie __utma=62996397.135679654.1309106351.1309159743.1309164158.8; __utmz=62996397.1309106351.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=62996397

    Read the article

  • Best practices for setting lm-factor in Squid refresh patterns

    - by Mpentecost
    I am running a Squid (3.1) cache in front of Django. The content of the site does not change very often, so Squid gives our backend some much-needed breathing room. Currently, this is the refresh pattern we are using to cache the content:

        refresh_pattern . 60 100% 60

    We basically want to cache everything for at least an hour (and only an hour) before Squid re-validates the content. My question is about the "100%" parameter, which sets the lm-factor. I'm not sure that setting it to 100% does what we want. The assumption was that 100% would ensure that objects stay in the cache for the maximum cache time. Is this an incorrect assumption? What best practices should one follow when setting up a refresh pattern like this?
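    For reference, the fields of refresh_pattern are regex, minimum age, percent-of-last-modified-age (the lm-factor), and maximum age; an annotated sketch of the rule above (an annotation of the general syntax, not a recommendation):

        #               regex   min(minutes)  lm-factor  max(minutes)
        refresh_pattern .       60            100%       60
        # min and max are in minutes; the lm-factor is only consulted for
        # responses whose age falls between min and max, so with min equal
        # to max it has no range left to act on, and objects without explicit
        # freshness headers are simply considered fresh for 60 minutes.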

    Read the article

  • Can't get MultiViews to work on Apache 2.2 - negotiation problem

    - by Doe
    Hi, I can't get MultiViews set up properly on my Apache 2.2. When I go to filtered.com/something, I expect it to execute something.pl, but it doesn't: I get a 404 error page. My error log says:

        [Fri Apr 16 13:04:20 2010] [error] [client 78.85.152.94] Negotiation: discovered file(s) matching request: /var/www/html/filtered.net/translate-english (None could be negotiated)., referer: http://filtered.net/

    Would anyone kindly help me get MultiViews properly set up on my server?

        ServerAdmin [email protected]
        ServerAlias *.filtered.net
        DocumentRoot /var/www/html/filtered.net
        ServerName filtered.net
        ErrorLog logs/filtered.net-error_log
        CustomLog logs/filtered.net-access_log common
        Options ExecCGI +Indexes +IncludesNoExec +MultiViews +ExecCGI
        AllowOverride None
        Order allow,deny
        Allow from all
        <IfModule mod_dir.c>
            DirectoryIndex index.php index.html index.pl
        </IfModule>
        </Directory>
        </VirtualHost>

    Read the article

  • Creating IIS Rewrite Rules

    - by Tom Bell
    I'm having a hard time converting old .htaccess rewrite rules to new IIS ones, so I was wondering if anyone could point me in the right direction. Below are some example URLs I would like rewritten:

        http://example.org.uk/about/                ->  http://example.org.uk/about/about.html
        http://example.org.uk/blog/events/          ->  http://example.org.uk/blog/events.html
        http://example.org.uk/blog/2010/11/foo-bar  ->  http://example.org.uk/blog/2010/11/foo-bar.html

    The directories and file names are generic and could be anything. Any help would be greatly appreciated.
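    As a starting point, a sketch of how the last example might look with the IIS URL Rewrite module in web.config; the rule name and patterns are illustrative and untested against the actual URL scheme:

        <system.webServer>
            <rewrite>
                <rules>
                    <rule name="Append html extension" stopProcessing="true">
                        <!-- e.g. blog/2010/11/foo-bar -> blog/2010/11/foo-bar.html -->
                        <match url="^(blog/\d{4}/\d{2}/[^/.]+)$" />
                        <action type="Rewrite" url="{R:1}.html" />
                    </rule>
                </rules>
            </rewrite>
        </system.webServer>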

    Read the article

  • RewriteRule Works With "Match Everything" Pattern But Not Directory Pattern

    - by kgrote
    I'm trying to redirect newsletter URLs from my local server to an Amazon S3 bucket. So I want to redirect from:

        https://mysite.com/assets/img/newsletter/Jan12_Newsletter.html

    to:

        https://s3.amazonaws.com/mybucket/newsletters/legacy/Jan12_Newsletter.html

    Here's the first part of my rule:

        RewriteEngine On
        RewriteBase /
        # Is it in the newsletters directory
        RewriteCond %{REQUEST_URI} ^(/assets/img/newsletter/)(.+) [NC]
        # Is not a 2008-2011 newsletter
        RewriteCond %{REQUEST_URI} !(.+)(11|10|09|08)_Newsletter.html$ [NC]
        ## -> RewriteRule to S3 Here <- ##

    If I use this RewriteRule to point to the new subdirectory on S3, it will NOT redirect:

        RewriteRule ^(/assets/img/newsletter/)(.+) https://s3.amazonaws.com/mybucket/newsletters/legacy/$2 [R=301,L]

    However, if I use a blanket expression to capture the entire file path, it WILL redirect:

        RewriteRule ^(.*)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L]

    Why does it only work with a "match everything" expression but not a more specific expression?
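    One detail worth noting (assuming these rules live in a per-directory .htaccess, which the RewriteBase suggests): in per-directory context mod_rewrite strips the directory prefix, including the leading slash, before matching the RewriteRule pattern, so a pattern anchored at ^/ can never match there. A variant without the leading slash would be:

        RewriteRule ^assets/img/newsletter/(.+)$ https://s3.amazonaws.com/mybucket/newsletters/legacy/$1 [R=301,L]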

    Read the article

  • Wordpress blog penalized by Google search - what's wrong?

    - by pawelbrodzinski
    I have a blog (http://blog.brodzinski.com), which is a wordpress.org blog with the pretty popular Thesis theme and almost no other customizations. Some time ago it was penalized by Google search - it simply stopped appearing in search results, even for terms for which it used to be the top result, like my name - Pawel Brodzinski - which isn't anything close to a popular search term. To be exact, the site was penalized on Nov 18. It started popping up in search results on Dec 23, but only for a few days; since Dec 27 it has been out again. I know the Google guidelines and I'm not aware of breaking any of them. I submitted a reconsideration request after I noticed the penalty. It was processed and there was no change whatsoever (no surprise, as it seems the site was penalized again). I checked the diagnostics in Webmaster Tools, and no malware was detected and no strange search terms popped up. I read related threads on the Google Webmasters forum but found none of the solutions working for me. I posted a thread there (http://www.google.com/support/forum/p/Webmasters/thread?tid=546339f49d4a03bc&hl=en) and the only answer I got was to check for duplicate content. Well, there is some duplicate content published on the web, but that is true for the vast majority of blogs and it doesn't seem to be a reason for a penalty. Also, before Dec 27 I was able to get duplicate content removed from a couple of sites which were republishing my feed, but this doesn't change the situation - the site was penalized again. The problem is I have no idea what can be wrong with the website or how to find out. To make things worse, I'm no webmaster; I just run a WordPress blog, which is supposed to be easy.

    Read the article

  • Set nginx.conf to deny all connections except to certain files or directories

    - by Ben
    I am trying to set up Nginx so that all connections to my numeric IP are denied, with the exception of a few arbitrary directories and files. So if someone goes to my IP, they are allowed to access the index.php file and the phpmyadmin directory, for example, but should they try to access any other directories, they will be denied. This is my server block from nginx.conf:

        server {
            listen       80;
            server_name  localhost;

            location / {
                root   html;
                index  index.html index.htm index.php;
            }

            location ~ \.php$ {
                root           html;
                fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index  index.php;
                fastcgi_param  SCRIPT_FILENAME /srv/http/nginx/$fastcgi_script_name;
                include        fastcgi_params;
            }
        }

    How would I proceed? Thanks very much!
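    A minimal sketch of one way to whitelist specific paths and refuse everything else: exact-match and ^~ prefix locations take precedence over the catch-all. Paths mirror the question but are otherwise illustrative, and this is only one possible layout:

        server {
            listen 80;
            server_name localhost;
            root html;

            # allow only the front page...
            location = / { index index.php; }
            location = /index.php {
                fastcgi_pass  unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /srv/http/nginx/$fastcgi_script_name;
                include       fastcgi_params;
            }

            # ...and the phpmyadmin directory (its .php files would need the
            # same fastcgi setup as above to actually execute)
            location ^~ /phpmyadmin/ { }

            # everything else is refused
            location / { deny all; }
        }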

    Read the article

  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
    Hi, I am in the process of rebuilding a site from static pages to a CMS which will be using mod_rewrite to generate new page URLs. In this process our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g.: a page whose URL is currently "website-name/about_us.html" with a title of "website-name - something not quite page specific" will change to "website-name/about-us/" with the title "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the metadata is to improve our page rankings and to keep in line with some best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranged heading tags, a new menu on all pages, new content in the footer, and extra pieces of dynamic content relating to other pages). For this new site I plan to use 301 redirects for all the old URLs, pointing them to the new URLs. My question is: what can I expect to happen to the page rankings in Google, in the short term and the long term? Will this be like kicking off a new site which will have to build up trust over time, or will the original page rankings carry over?

    Read the article

  • is there a way to automate changing filenames in <link> , <script> tags

    - by nepsdotin
    When we use an Expires header for text files like JS and CSS, the contents are cached in the browser, so to get new content we need to change the file names referenced in the <link> and <script> tags of the HTML files whenever we make changes. How can we automate this? I may have a bunch of HTML files in multiple folders, also in subdirectories. There would be a text file, filelist.txt:

        OldName              NewName
        oldfile1-ver-1.0.js  oldfile1-ver-2.0.js
        oldfile2-ver-1.0.js  oldfile2-ver-2.0.js
        oldfile3-ver-1.0.js  oldfile3-ver-2.0.js
        oldfile4-ver-1.0.js  oldfile4-ver-2.0.js

    The script should change every oldfile1-ver-1.0.js into oldfile1-ver-2.0.js in the HTML and PHP files. I would run this script before I start uploading. Finally, the script could create a list of the files and line numbers where it made an update. The solution can be in Perl/PHP/batch or anything that's nice and elegant.
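    Not Perl or PHP, but as an illustration of the shape of such a script, here is a rough Node.js sketch that walks the directory tree, applies the renames from filelist.txt, and logs file names and line numbers; the file-format assumptions (whitespace-separated columns, one header row) are guesses:

        // replace-refs.js - run from the site root: node replace-refs.js
        const fs = require('fs');
        const path = require('path');

        // parse filelist.txt: skip the header row, split each row into old/new names
        const pairs = fs.readFileSync('filelist.txt', 'utf8')
            .split('\n').slice(1)
            .map(l => l.trim().split(/\s+/))
            .filter(p => p.length === 2);

        function walk(dir) {
            for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
                const full = path.join(dir, entry.name);
                if (entry.isDirectory()) walk(full);
                else if (/\.(html|php)$/i.test(entry.name)) rewrite(full);
            }
        }

        function rewrite(file) {
            const lines = fs.readFileSync(file, 'utf8').split('\n');
            let changed = false;
            lines.forEach((line, i) => {
                for (const [oldName, newName] of pairs) {
                    if (line.includes(oldName)) {
                        lines[i] = line = line.split(oldName).join(newName);
                        changed = true;
                        console.log(file + ':' + (i + 1) + '  ' + oldName + ' -> ' + newName);
                    }
                }
            });
            if (changed) fs.writeFileSync(file, lines.join('\n'));
        }

        walk('.');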

    Read the article

  • Meta refresh tag not working in (my) firefox?

    - by mplungjan
    Code like on this page does not work in (my) Firefox 3.6, and also not in Fx4 (WinXP SP3). It works in IE8, Safari 5, Opera 11, Mozilla 1.7 and Chrome 9.

        <meta http-equiv=refresh content="12; URL=meta2.htm">
        <meta http-equiv="refresh" content="1; URL=http://fully_qualified_url.com/page2.html">

    are completely ignored. Not that I use such back-button-killing things, but a LOT of sites do, possibly including my Linux Apache, it seems, when it wants to show a 503 error page... If I use Firebug or look at the generated content, I do not see the refresh tag changed in any way, so I am really curious what kind of plugin/addon could be blocking me, which is why I googled (in vain) for a known bug. In about:config I have accessibility.blockautorefresh = false, so that is not it. I ran in safe mode and OH MY GOD, STACKEXCHANGE IS FULL OF ADS, but no redirect.

    Read the article

  • Nginx rewrite rules, some work, some don't

    - by Lawrence Goldstien
    Here are the two rewrite rules. This one works:

        rewrite ^/knowledgebase/([0-9]+)/[a-z0-9_-]+.html$ /./knowledgebase.php?action=displayarticle&id=$1 last;

    This one doesn't:

        rewrite ^/announcements/([0-9]+)/[a-z0-9_-]+.html$ /./announcements.php?id=$1 last;

    There is no difference between the two as far as I can see. The URL to be rewritten for announcements is:

        /announcements/2/New-Site-Design.html

    and it should be rewritten to:

        /announcements.php?id=2

    I really can't see why the announcements rule fails when the knowledgebase one works. Any tips would be greatly appreciated.
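    One thing that stands out (an observation, not a confirmed diagnosis): the example announcements URL contains uppercase letters, and the character class [a-z0-9_-] in an nginx rewrite is case-sensitive, so "New-Site-Design" never matches. Widening the class (and, while at it, escaping the literal dot) would look like:

        rewrite ^/announcements/([0-9]+)/[a-zA-Z0-9_-]+\.html$ /./announcements.php?id=$1 last;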

    Read the article

  • Tumblr custom domain not redirecting properly

    - by Manic
    I decided to host my blog at Tumblr, using their custom domain setup (http://blog.smokingfishgames.com/ instead of http://smokingfishgames.tumblr.com). However, it's been 72 hours and I'm still getting spotty redirection. It works some of the time - I go and see the page and blog, and it's all fine. However, it occasionally just stops working and redirects back to my web host, which is a directory with nothing but a single file called BUGGER.html (which I stuck in to make sure that it was my web host and not some empty Tumblr directory). Clearing the Chrome DNS cache makes the problem go away - for a while. After a few minutes, or an hour, or however long, I'll start seeing BUGGER.html again. The thing that's curious to me is that when I clear the cache and get BUGGER.html again (which happens occasionally), I can look at my Chrome DNS cache and see:

        assets.tumblr.com            UNSPECIFIED
        blog.smokingfishgames.com    UNSPECIFIED
        www.tumblr.com               UNSPECIFIED

    (IP addresses and expiration times omitted for brevity's sake - if they're important I'm sure I can replicate the issue.) This implies, to me anyway, that my browser is reaching Tumblr but getting bounced back to my web host. Any reason why this would be happening, or is this a normal symptom of DNS propagation? If it is a problem, should I be bothering Tumblr or my host with it, or is this something I can fix myself?

    Read the article
