Search Results

Search found 806 results on 33 pages for 'ouhsd webmaster'.

Page 7/33

  • Move site from one TLD to another

    - by Amol Ghotankar
    If we want to move a site from, say, xyz.com to xyz.org, what do we need to do to make sure SEO keeps working? I am doing something like: point both xyz.com and xyz.org to the same IP where my site is running; use canonical URLs so pages are referenced as xyz.org/* instead of xyz.com/*; add the site to Webmaster Tools and file a change-of-address request. But the problem is that we are not able to 301 redirect from xyz.com to xyz.org: both are on the same IP, and doing so causes a redirect loop and an error. How do I fix this? Please help.
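
    A redirect loop in this setup usually means the redirect rule fires on every request, regardless of which host name it arrived on. A minimal sketch of a host-conditional 301 for Apache's .htaccess (an assumption: the question doesn't name the web server, and mod_rewrite must be enabled; the domains are the ones from the question):

        RewriteEngine On
        # Fire only when the request arrives on the old host
        RewriteCond %{HTTP_HOST} ^(www\.)?xyz\.com$ [NC]
        # 301 to the new TLD, preserving the requested path
        RewriteRule ^(.*)$ http://xyz.org/$1 [R=301,L]

    Because the condition never matches requests that already arrive on xyz.org, the loop disappears even though both names resolve to the same IP.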

    Read the article

  • Why can the number of 'indexed' images go down?

    - by Roman Matveev
    I have a site with several thousand images, all of which are included in the sitemap submitted to Google Webmaster Tools. The number of 'submitted' images is fine, but the number of 'indexed' images is significantly lower than the number of 'submitted' ones, and it is going DOWN! I could understand not all of my images getting indexed (although that is also unclear and very frustrating for me), but I cannot understand how indexing can move in the negative direction. All the images stay in place, and the pages containing them stay unchanged. At least they are intended to. Any thoughts?

    Read the article

  • HTML 5, Fluid Pages and Google Mobile Index

    - by Bob
    I am currently migrating my site to HTML5 and, at the same time, designing the pages so that they are "fluid" and equally presentable on a mobile device or a large screen. I took the fluid approach so as not to have to develop a separate application for mobile devices, and I'm pleasantly surprised with the results, which look just as good on an iPhone as they do on a large screen. Then I went into Google Webmaster Tools and became aware of the Google Mobile Index. I'm confused now, as HTML5 doesn't seem to be supported by Google mobile indexing. Does this mean that when I go live with my new "pride and joy" HTML5 site, it won't appear in any Google searches made on a mobile, as it's not in the Google Mobile Index?

    Read the article

  • Will Tracking Subdomains as Single Entity with Google Analytics Help SEO? [closed]

    - by Sam Gridley
    Possible Duplicate: Does Google Analytics data affect SEO? We have two subdomains, one for our blog and one for our ecommerce store. The blog serves to bring in traffic, and the store is how we monetize the site. They are designed to appear as one large site, but I know Google sees them as two sites. Here is how the subdomains look: www.example.com (store) and blog.example.com (blog). I believe I can configure Analytics to use subdomain tracking as explained here: http://support.google.com/googleanalytics/bin/answer.py?hl=en&answer=55524 But my question is whether this will cause Google to see our two subdomains as one larger domain for SEO purposes. In other words, is there any relationship between how you configure Google Analytics and how Google indexes and ranks your website(s) and pages? Is there anything I need to do in Analytics or Webmaster Tools to make Google aware that these two subdomains work together as one website? Thanks! Sam
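
    For reference, the subdomain setup described on that help page comes down to one line in the tracking snippet: setting the cookie domain to the root domain. A minimal sketch in the classic asynchronous ga.js syntax of the time ('UA-XXXXX-Y' is a placeholder for the real web property ID):

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXX-Y']);
        // One tracking cookie shared by www.example.com and blog.example.com
        _gaq.push(['_setDomainName', 'example.com']);
        _gaq.push(['_trackPageview']);

        (function() {
          // Standard asynchronous loader for ga.js
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();

    As far as Google has documented, this only changes how Analytics aggregates visits across the subdomains; it is a measurement setting, not a ranking or indexing signal.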

    Read the article

  • Google Webmaster Tools crawl error caused by a URL split across two lines

    - by Shiro
    I am looking into the Crawl Errors section of Google Webmaster Tools. How should I handle URLs that are invalid because another system or application mangled them? For example, the real URL is http://www.example/images/products/s_=enlarge_16gb.jpg but, I don't know what happened on Yahoo Groups, it broke the link across two lines, http://www.example/images/products/s_= and enlarge_16gb.jpg, and only the top part became a hyperlink: http://www.example/images/products/s_= Because of that URL, Google reports a crawl error, and I get a few errors from this kind of result or from other people's typos. How do I prevent this? I'm sure I don't have the right to go and change other people's posts. What is the solution for this? Thanks!
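
    One way to stop the recurring crawl error is to 301 the known truncated URL back to the real file, so the bad inbound links stop producing 404s. A minimal sketch for Apache's .htaccess (assuming mod_alias; the paths are the ones from the question), with the regex anchored so it cannot also match the full, correct URL:

        # Send the exact truncated path (and nothing longer) to the real image
        RedirectMatch 301 ^/images/products/s_=$ /images/products/s_=enlarge_16gb.jpg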

    Read the article

  • Google displaying swf menu in SiteLinks

    - by m90
    I have a website that uses a Flash-based menu as its main navigation. A plain HTML fallback version is "lying underneath", and the swf is embedded using swfobject:

        swfobject.embedSWF('MENU.swf', 'menu', '1000', '600', '8.0.0', 'ext/expressInstall.swf', {}, {wmode: 'transparent', bgcolor: '#666666'}, {});

    Somehow Google has now started displaying a link to the swf file in the SiteLinks (marking it [SWF]), which is pretty ugly, as the Flash content gets all scrambled and all you see is a random string of characters and numbers (it looks "hacked" to me, although I know it is not). Also, the link to the swf is plain useless, as it relies on JavaScript functions in the HTML document. I have already demoted the swf in Webmaster Tools, yet in some situations the link still shows up. Is anyone aware of this problem (I haven't found much about it on the Internet), and do you know how I can keep the search results from linking to the swf?
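
    Beyond demoting the sitelink, you can tell Google not to index the swf file at all. A minimal sketch using an X-Robots-Tag response header in .htaccess (an assumption: this presumes Apache with mod_headers enabled):

        # Serve a noindex directive with every .swf file
        <FilesMatch "\.swf$">
            Header set X-Robots-Tag "noindex"
        </FilesMatch>

    Unlike a robots.txt Disallow, this still lets Googlebot fetch the file and see the directive, so an already indexed swf can drop back out of the results.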

    Read the article

  • Does Submit to Index on a page with new content update Content Keywords for the site?

    - by Dan Kanze
    Using Google Webmaster Tools, I'm trying to update the Content Keywords for my site, and I'm confused about the relationship between Submit to Index and Content Keywords. Does Fetch as Google followed by Submit to Index on a previously indexed page containing new content expedite updating the Content Keywords crawled by the real Googlebot? Does Submit to Index only submit new URLs, so that previously indexed URLs still point to the older cached version until Google crawls the new content on its own? Does Submit to Index have anything to do with Content Keywords or with crawling new content, whether on a previously indexed page or a never-indexed page?

    Read the article

  • Remove third/nth-level domains from the Google index

    - by drakythe
    Somehow Google has indexed some third- (and fourth-!) level domains that I had attached to my server temporarily, e.g. my.domain.root.com. I now have these redirecting properly to where I would like them to go; however, with a carefully crafted search one can still find them, and I'd rather they not be exposed. My Google-fu has failed me in finding an answer, so I come to you wonderful folks: is there a way to remove sub-level domains from Google search results, and how? I have the site in Google Webmaster Tools and verified, but all the URL removal requests I can perform append the URL to the base URL rather than prefixing it. And finally, how can I prevent this in the future?
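
    One common way to keep stray host names out of the index in the future is to serve a blocking robots.txt only on those hosts. A minimal sketch for Apache's .htaccess (assumptions on my part: mod_rewrite is available, and robots-blocked.txt is a hypothetical file in the web root containing "User-agent: *" followed by "Disallow: /"):

        RewriteEngine On
        # On the temporary host, answer robots.txt requests with the blocking file
        RewriteCond %{HTTP_HOST} ^my\.domain\.root\.com$ [NC]
        RewriteRule ^robots\.txt$ /robots-blocked.txt [L]

    For URLs that are already indexed, note that the removal tool works per verified property, so verifying my.domain.root.com as its own site in Webmaster Tools should let you issue removals against that host rather than having the path appended to the base URL.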

    Read the article

  • How do I stop Google indexing my main page as https [duplicate]

    - by user2897488
    This question already has an answer here: "https:// search results appearing on Google for purely http:// site" (2 answers). For historic reasons, we have things set up so that www.mydomain.com redirects to store.mydomain.com. This worked perfectly fine until recently, when Google started sending visitors to https://www.mydomain.com, which doesn't have an SSL certificate (and never has). Strangely, it's only the first link that goes to https://www.mydomain.com; all other links point correctly to http://store.mydomain.com. Because there is no certificate on the www version, users are getting an error message. How do I make Google revert to pointing the main link at http://store.mydomain.com (or even http://www.mydomain.com)? If I remove https://www.mydomain.com in Google Webmaster Tools, will this also remove the redirect target (http://store.mydomain.com)? Thanks.

    Read the article

  • Joomla URL issue with sh404SEF

    - by user5858
    I've been using sh404SEF with my site for a couple of months, but I'm getting URLs of the form: http://www.downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html?task=view If I remove the suffix (?task=view), it takes us to the same page. I raised this issue on the sh404SEF forum and was told that this data is taken as a parameter by search engines and hence ignored. I want to redirect all such URLs, using a rewrite rule in .htaccess, to the URLs without ?task=view, i.e. ....downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html?task=view should be redirected to http://www.downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html So my question is: will this redirection create 404s in Google Webmaster Tools? I have thousands of pages on the site.
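
    A minimal sketch of such a rule for Apache's .htaccess (assuming mod_rewrite; sh404SEF itself is untouched). The query string has to be tested in a RewriteCond, and the trailing "?" on the substitution strips it from the redirect target:

        RewriteEngine On
        # Only act when the query string is exactly task=view
        RewriteCond %{QUERY_STRING} ^task=view$
        # 301 to the same path with the query string removed (the trailing ? drops it)
        RewriteRule ^(.*)$ /$1? [R=301,L]

    A 301 like this should not create 404s on its own: the old URLs report as redirects in Webmaster Tools, and the targets keep returning 200.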

    Read the article

  • How to recover organic position in Google results after server downtime?

    - by ElHaix
    I have several sites that were doing quite well in terms of organic SEO rankings, and I have the important ones set up in Google Webmaster Tools. Long story short, the system was down for about two weeks. Now, in AdSense and Analytics, I can see the page views SLOWLY increasing, and I would like to know if there is anything I can do to expedite regaining those positions. Since there were several errors from that server, is it possible that Google will now rank any site from that IP address lower because of those two weeks of errors? Is this something I just have to ride out? Thanks.

    Read the article

  • Why am I getting this message "Some important page is blocked by robots.txt"?

    - by Rounak
    My site's URL is www.hackinguniverse.org. For some days now, Google Webmaster Tools has been showing a message that says "Some important page is blocked by robots.txt". My robots.txt is:

        User-agent: Mediapartners-Google
        Disallow:

        User-agent: *
        Disallow: /search
        Allow: /

        Sitemap: http://www.hackinguniverse.org/feeds/posts/default?orderby=updated

    For other information: this website is hosted on Blogger. I have some other sites whose robots.txt files likewise only include "/search" under Disallow, just like this one, but those sites are fine; I mean, no message shows on those. So why am I getting a message saying that I have blocked some important page via robots.txt?

    Read the article

  • When does Google give up recrawling a 301 that led to a 404?

    - by Easy Life
    I transferred a domain and made a mistake in the redirects (the URL structure is identical). Even though they went to the new domain, the error caused a 404 when crawled by Googlebot. Ten days later I saw and corrected my redirect mistake, and now the site should (hopefully) redirect to the proper pages. Q1: The URLs of the 404 pages in Webmaster Tools all carry the mistake and will never exist on the new site. I marked them as fixed in the tools. Do I need to do anything else about them, like 301-rewriting them with a condition that fixes the error? Q2: Does Googlebot attempt to recrawl 301 pages that pointed to a 404?

    Read the article

  • Remove home page from Google cache

    - by Steve
    The Remove URL section of Google Webmaster Tools allows you to specify a page URL to be removed from the index, the cache, or both. However, I want to remove just the home page, which is /. I want to remove it from the cache because the cached copy was indexed while the "under construction" page was up. This URL is not recognised by the Remove URL section as an individual page; instead, Google assumes you want to remove the entire website from the index. I've specified /index.php and /index.html to be removed from the cache, but neither is the URL listed in the search results for the home page I want removed from the cache.
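
    Separately from the removal tool, you can ask Google not to keep a cached copy going forward. A minimal sketch: a standard robots meta tag in the home page's <head> (noarchive is a documented robots directive, not specific to any CMS):

        <!-- Ask search engines not to store a cached copy of this page -->
        <meta name="robots" content="noarchive">

    On the next recrawl the cached copy should be dropped, while the page itself stays indexed.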

    Read the article

  • Weird URLs being accessed by Googlebot

    - by Avishai
    Lately I've been seeing all sorts of strange URLs show up as errors in my Webmaster Tools account, but they're URLs that don't actually exist on my site, nor are they linked from the pages Google claims they're linked from:

        URL                                              Response code   Detected
        yR3kna/5RfA4+ndtn/X4zcevudMlXbqbIrnPbH9irw=      404             9/16/12
        OK4iaOVdr6Ocjmz+u1kuR5Q486mhDo/e45nwjl2+y8=      404             9/9/12
        pxGz/oHEA0BS8U3VFBzJcZnnIHMsFXb3/rIxMxh2ws=      404             9/16/12
        Af8tbvQ0HniIpf53I8Txz1hM1/JxxrFQxgqPuErWII=      404             9/9/12
        7Bk7c0LDmm4PHyTjml017EGwNNPCn/p/0xMSWWPDic=      404             9/16/12
        umCwnDvTE8ybpUB19MIb+VRj5xRJncyYGGfAQ2Mxn0=      404             9/1/12
        # etc...

    Do you know how to make these stop? It's not at all clear to me why Googlebot would be requesting these URLs in the first place.

    Read the article

  • Googlebot can't access my site: Webmaster Tools reports "Unreachable robots.txt"

    - by Ahmad Ahmadi
    When I try to fetch my site as Googlebot in Webmaster Tools, it returns "Unreachable robots.txt". After investigating, I found that Googlebot can see my server:

        tcpdump | grep google

    shows that Google reaches my server from IP 66.249.81.172 or 66.249.75.111. But there is nothing in the access log, error log, or any other Apache log:

        cat access_log | grep google
        cat error_log | grep 66.249.81.172

    Other bots (Bing, ...) can reach Apache, but Google can't. There is no problem with my robots.txt or its permissions; in fact, since robots.txt is not mandatory, I deleted it, but Webmaster Tools again returned "Unreachable robots.txt", not "404 Not Found"! Information about the server: OS: CentOS 6; web server: Apache 2.x; firewall: iptables is stopped; SELinux is disabled. There is nothing else securing the server. How can I investigate the problem, and is there any other command that can help me find it?

    Read the article

  • Bingbot seems to be adding "ForceRecrawl: 0" to URLs when crawling my sites

    - by Louis Somers
    I'm seeing this in the IIS logs of two websites that I maintain:

        GET /an/existing/page/on/my/site+ForceRecrawl:+0 - 80 - 207.46.195.105 HTTP/1.1 Mozilla/5.0+(compatible;+bingbot/2.0;++http://www.bing.com/bingbot.htm)

    I get about one or two of these per day from these IP addresses: 207.46.195.105, 65.52.110.190, and more, all belonging to msnbot-ip.search.msn.com. Perhaps Microsoft has a bug in their crawler? Anyway, searching for "ForceRecrawl: 0" in the major search engines turns up a bunch of random sites, while searching on Stack Overflow or here gave no results (to my amazement). Am I the only one seeing this? I first noticed these on the 9th of this month, and I've seen them pass almost daily since. Another thing I find crazy is that the URL http://www.bing.com/bingbot.htm redirects to mail.live.com (Hotmail). Currently I'm returning 404s, but I'm considering catching these requests, stripping the trailing " ForceRecrawl: 0", and processing them as if they were legitimate URLs. Could anyone shed some light on this? Could it have to do with some configuration in Bing's Webmaster Tools?
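
    If you do decide to catch these requests rather than 404 them, one way on IIS is a rule for the URL Rewrite module in web.config. A hedged sketch (assumptions on my part: the URL Rewrite module is installed, and the "+" characters in the log are spaces in the decoded request path):

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="StripForceRecrawlSuffix" stopProcessing="true">
                <!-- Match any path ending in " ForceRecrawl: 0" and capture the real path -->
                <match url="^(.*)\sForceRecrawl:\s0$" />
                <!-- 301 back to the path without the suffix -->
                <action type="Redirect" url="{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>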

    Read the article

  • Google Webmaster Tools: strange 404 errors referred from my own site

    - by Out of Control
    Starting about a month ago, I noticed a sudden increase in 404 errors in Webmaster Tools for one of my sites (over 1400 errors so far). All the errors are referred from my own site, to pages that don't exist. The 404 URLs are all of the same format: http://www.helloneighbour.com/save/1347208508000 The number on the end appears to be a timestamp followed by three zeros (in other words, it looks like a JavaScript-style millisecond timestamp). The referring page in this case is http://www.helloneighbour.com/save/cmw-insurance-insurance-burnaby When I look at the source code of that page, or use Webmaster Tools to view the page as Google sees it, I can't find any link that comes close to the one above. I built the site, and I can't find anything that might be generating these false links either. The server logs (access and error) don't show Google or anyone else trying to access these links. I've marked all these pages as fixed and waited a couple of weeks, only to see the errors come back over the last few days. I'm wondering if anyone else has seen anything this strange, or whether someone can suggest a way for me to debug and replicate this error myself.

    Read the article

  • Major increase in Google "Not able to follow" errors since introducing a 301 to the site

    - by jakob
    Recently we put Varnish in front of our web nodes so that the backend would get some rest from time to time. Since Varnish is case-sensitive and our app was not, we implemented a 301 in Varnish to redirect to lower case. Example: if you search for "PlumBer StockHOLM", you get a 301 redirect to "plumber stockholm", and then "plumber stockholm" is cached. This worked like a charm, but when checking Google Webmaster Tools we suddenly saw a crazy number of "Not able to follow" errors (screenshot omitted). This of course stirred up some panic, and I started to read the documentation once again. Clicking one of the links took me to the help section, where I found an explanation (screenshot omitted), but things remained strange, and as the day progressed more and more errors were reported by Google. We took the decision to make Varnish return 200 instead of the 301. Now, when testing the links that appear in the "Not able to follow" section, I get a 200 back. I have tested with Chrome, curl and the Lynx reader, and everything looks OK, but the number of errors is still increasing. What is a little comforting is that the links that appear in the "Not able to follow" section are dated before the change to 200 in Varnish. Why do I get these errors, and why do they keep increasing? Did Google release something new on October 31? Maybe I do not understand the docs correctly?
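
    For context, a lowercase redirect in Varnish is typically written along these lines. A hedged sketch in Varnish 3 VCL (an assumption: the question doesn't show the actual VCL, and the syntax differs between Varnish versions):

        import std;

        sub vcl_recv {
            # Bounce any URL containing upper-case characters to its lower-case form
            if (req.url ~ "[A-Z]") {
                set req.http.x-redir = "http://" + req.http.host + std.tolower(req.url);
                error 750 "Moved";
            }
        }

        sub vcl_error {
            if (obj.status == 750) {
                set obj.http.Location = req.http.x-redir;
                set obj.status = 301;
                return (deliver);
            }
        }

    One thing worth checking in a setup like this is that the Location header is absolute and is itself lower-cased; otherwise, case-preserving crawlers can see redirect chains or loops.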

    Read the article

  • Huge Google impressions drop after cleaning up HTML

    - by olgatorresfoundation
    Good morning, I am the webmaster of a non-profit organization that awards grants to colorectal cancer research projects and funds various colorectal cancer information campaigns. We have three domains: www.fundacioolgatorres dot org (Catalan), www.fundacionolgatorres dot org (Spanish) and www.olgatorresfoundation dot org (English). So what happened? I redesigned olgatorresfoundation on the 20th and fundacionolgatorres on the 30th of May. In both cases, exactly two days later, the number of impressions on both dropped to almost nothing. Granted, we did not have the traffic of Microsoft, but a 90% decrease is a disaster of incredible proportions for us. My only real change was cleaning up the old, ineffective HTML into a cleaner form (mostly moving away from redundant table-based construction to a table-less layout). Here is a before-and-after snapshot of what the change looks like: Before: http://www.fundacioolgatorres.org/aparell_digestiu/introduccio/ (unchanged page, in Catalan) After: http://www.olgatorresfoundation.org/digestive_system/introduction/ (changed page, in English) Does anybody have a clue as to what just happened? Why should a normal, sane HTML improvement be punished, and so dramatically? No URLs have been changed, and neither have page names or descriptions. Possible secondary question: if Google sees it as a major overhaul and decides to drop the PageRank sharply, does it come back to pre-change levels once the content "checks out", or will each page start over from scratch earning those PageRank points (which would mean we would have to wait six months for the pages to recover to the level they had two weeks ago)? (Duplicated from productforums.google dot com/forum/#!category-topic/webmasters/crawling-indexing--ranking/YsnyX0JzOpY, hoping to reach a wider audience.)

    Read the article

  • GWT: crawl errors reported for links that never existed

    - by pixeline
    Google Webmaster Tools is reporting crawl errors for links that never existed, and if I check the "Linked from" tab for a given error link, it shows another link that never existed either. They all mention joomla/, which is not the CMS used on this domain (it's WordPress, FYI). Example: http://example.com/joomla/index.php/component/user/register Linked from: http://example.com/joomla/component/user/login?return=L2###### What is going on? UPDATE 1: I tried something. I provided one of the faulty URLs to the "Fetch as Google" feature. Instead of returning a 404, it returns a 301 to another Joomla page:

        HTTP/1.1 301 Moved Permanently
        Server: Apache/2.4.3
        X-Powered-By: PHP/5.4.4-10
        X-Pingback: http://example.com/xmlrpc.php
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, max-age=0
        Pragma: no-cache
        Set-Cookie: PHPSESSID=1fgr5v2oip39miibuptd51s8h0; path=/
        Set-Cookie: woocommerce_items_in_cart=0; expires=Sat, 12-Jan-2013 11:44:01 GMT; path=/
        Location: http://example.com/joomla/component/user/register
        Content-Type: text/html; charset=iso-8859-1
        Content-Length: 387
        Date: Sat, 12 Jan 2013 12:44:01 GMT
        Via: 1.1 varnish
        Connection: keep-alive
        Accept-Ranges: bytes
        Age: 0

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="http://example.com/joomla/component/user/register">here</a>.</p>
        <p>Additionally, a 301 Moved Permanently error was encountered while trying to use an ErrorDocument to handle the request.</p>
        </body></html>

    Read the article

  • https:// search results appearing on Google for purely http:// site

    - by hydrurga
    I started weeding through my site's search results on Google today, using a site: search, to determine if there are any links that cause 404s and thus need redirecting. To my amazement, I noticed numerous https:// results relating to various pages. My site doesn't have an SSL certificate, doesn't serve such pages, doesn't internally link to https:// pages, doesn't include any such files in its sitemap.xml, and, for all of these, never has. I did a Google search for https://<my site> and found one site that incorrectly refers to the root of my site with an https:// prefix; I will try to contact them to get this corrected. I'm not sure, however, how Googlebot managed to index the non-root files as https://. I can't find any external links to them, and surely, without certification, Googlebot should have stalled at the first request? I've just added the following lines to the site's .htaccess (although the surfer still has to navigate through the browser's "This site is a security risk. Abandon hope all ye who enter here!" message(s) first to get there):

        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ http://www.<my site>.org/$1 [R=301,L]

    replacing <my site> with my domain name. My big question, though, is this: I would like to use the Remove URLs feature of Google Webmaster Tools to remove the https:// pages from the index. Can I be guaranteed that this will only remove the https:// version of each relevant page and not the valid http:// version? My thanks to anyone who can help me out with this particular question and the issue in general.

    Read the article

  • Why does Google return a soft 404 when I redirect to the sign-in page?

    - by Hettomei
    For the past month, Google Webmaster Tools has been reporting an increasing number of "soft 404" errors, although the pages work well for users. I have made some fixes but can't figure out how to solve it. Configuration (maybe useless): the website is built with Rails 3.1, and authentication is handled by the gem Devise. Problem: on the page http://en.bemyboat.com/yacht-charter/9965-sailboat-beneteau-oceanis-43 when you click on "Ask a Boat request" (a simple form using GET, pointing to http://en.bemyboat.com/boat_requests/new/9965), you are redirected with HTTP status 302 to sign in, and then sent back to the new page after successfully signing in. Google tells me that the link behind "Ask a Boat request" returns a soft 404. I can't make this form use POST (which would solve the problem) because we need to automatically redirect the user to the right page after sign-in (the gem Devise memorizes the GET link). To simplify, the question is: how do I protect a private page behind authentication, reached with a simple GET, without being penalized by Google with a soft 404? Thank you. P.S. This website suffers a lot from poor English translation; please bear with it.
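
    One low-tech mitigation, if you simply don't want crawlers requesting the gated form at all, is to exclude the path in robots.txt (a sketch; the path is the one from the question):

        User-agent: *
        # Keep crawlers away from the authentication-gated form
        Disallow: /boat_requests/new/

    Another angle is to return 401 Unauthorized for unauthenticated requests instead of a 302 to the sign-in page; a redirect from a content URL to a thin sign-in page is a classic trigger for soft-404 classification.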

    Read the article

  • Sudden drop in Total Indexed pages and increase in 'Not Selected' number.

    - by Pravin
    My blog is around one year old and has PR2. Until last week, average daily pageviews were 1800. The total number of posts is 180. Though I have only 180 posts in total, the number of indexed URLs kept increasing and reached as high as 510. But in September 2012, the total number of indexed pages dropped from 510 to 214. The drop was sudden, and the count is now increasing only very slowly. The other main concern is a huge increase in the 'Not selected' number, currently 814. I have never reposted old content and never copied ideas from other blogs, but I do link internally to older posts that are related to new posts. The questions are: (1) Why was there a sudden drop in the 'Total indexed' pages? (2) Why did the total indexed pages increase to 510 even though there are only 180 posts? (3) The drop in 'Total indexed' happened in September 2012, yet I was getting the same organic traffic, steadily increasing, until last week, when there was a 50% drop in total pageviews. Why? Now the traffic is returning to normal, but there is still a problem. (4) Is the increase in the 'Not selected' number a cause of the drop in 'Total indexed'? (5) How do I prevent or reduce the 'Not selected' count, given that I have no duplicate posts within the blog? (6) Is the internal linking to older posts creating the 'Not selected' problem? (7) Should I edit my robots.txt to avoid crawling of label pages that may be creating duplicate content, and if so, what is the correct robots.txt? I have uploaded a screenshot of the graph from Webmaster Tools. Please take a look and give suggestions. Thank you in advance.

    Read the article

  • Is the Facebook Like JavaScript related to an increase in "Time spent downloading a page" in GWT?

    - by donaldthe
    Hi, I installed the JavaScript version of the Facebook Like button on my website on December 15th. Take a look at the crawl stats report from Google Webmaster Central ("Crawl stats: Googlebot activity in the last 90 days"; screenshot omitted). The crawl stats are from Googlebot, which as far as I know doesn't execute JavaScript. Could the Facebook Like JavaScript code, "the XFBML version", be related to the large spike in time spent downloading a page? (By the way, the huge spike in November was caused by a mistake where every image request was getting a 301.) I'm not sure what caused the spike to drop by half somewhere in December; it may have been related to a faulty setting in web.config. I'm at a loss as to what I can do about this, or even how to tell whether this is my problem or a Googlebot crawl problem. Here is the Facebook code I am using to create the Like button. It is right after the opening body tag:

        <div id="fb-root"></div>
        <script>
        window.fbAsyncInit = function() {
          FB.init({appId: 'xxxxx', status: true, cookie: true, xfbml: true});
        };
        (function() {
          var e = document.createElement('script');
          e.async = true;
          e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
          document.getElementById('fb-root').appendChild(e);
        }());
        </script>

    and this creates the Like box:

        <fb:like show_faces="false"></fb:like>

    If the JavaScript can't be the problem, any ideas on where to start looking would be appreciated.

    Read the article
