Search Results

Search found 5028 results on 202 pages for 'india seo analyst'.


  • Recovery from URL structure change?

    - by Dejan Pelzel
    In July this year, we changed the URL structure of the website:
    Old post URL: domain.com/blog/post/986/dance/heart-beats-dance-video-by-chinatsu/
    Old category URL: domain.com/blog/index/cosplay/
    New post URL: domain.com/dance/heart-beats-dance-video-by-chinatsu-986/
    New category URL: domain.com/cosplay/
    Everything was (supposedly) properly redirected with 301 redirects, and at first it seemed that the traffic returned after a couple of days, but it has now been close to 2 months and things keep getting worse, although Google is slowly indexing the changes. What worries me even more is that the "Pages crawled per day" figure in Webmaster Tools started dropping drastically a few days ago and has just reached its lowest point in months (from over 2,000 to 700). Should I be worried, or will things sort themselves out eventually?
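    A minimal .htaccess sketch of the redirect mapping described above, assuming Apache mod_rewrite (the patterns are inferred from the example URLs, not taken from the actual site):

        RewriteEngine On
        # Old post URLs: /blog/post/986/dance/slug/ -> /dance/slug-986/
        RewriteRule ^blog/post/(\d+)/([^/]+)/([^/]+)/?$ /$2/$3-$1/ [R=301,L]
        # Old category URLs: /blog/index/cosplay/ -> /cosplay/
        RewriteRule ^blog/index/([^/]+)/?$ /$1/ [R=301,L]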

    Read the article

  • adding noindex on pagination

    - by Damodar Bashyal
    I have found conflicting opinions about adding noindex to pagination pages. What do pro webmasters have to say about this? I am planning to add a noindex meta tag to all paginated pages in the hope of increasing the website's value, so I would like some pros' feedback on this. For example, here: http://w3tut.org/blog the first few paragraphs of 3 posts are displayed, and the page's meta description is taken from the first post on that page, which causes a duplicate-meta issue. Also, the 3 posts on a page could be unrelated to each other. Is it a good idea to add noindex to these pages, so that the full article pages get more value?
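    For reference, a minimal sketch of the tag this would involve on each paginated page ("follow" keeps the links to the full articles crawlable; whether to add the tag at all is exactly what is being asked):

        <meta name="robots" content="noindex, follow">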

    Read the article

  • Is this form of cloaking likely to be penalised?

    - by Flo
    I'm looking to create a website which is considerably JavaScript-heavy, built with Backbone.js, with most content being passed as JSON and loaded via Backbone. I just need some advice or opinions on the likelihood of my website being penalised for serving plain HTML (text, images, everything) to search engine bots and a JS front-end version to normal users.

    This is my basic plan for my site: the first request to any page returns HTML which gives only about 1/4 of the page, and the remaining 3/4 is then loaded with Backbone.js. That way, non-JavaScript users get a 'bit' of the experience. Once a new user has visited and been detected to have JS, a cookie is saved on their machine, and requests from then on will be AJAX only. Example: If (AJAX || HasJSCookie) { // Pass JSON }

    Search engine content: the entire load-via-AJAX experience will be stripped if, for example, a Google bot is detected; the same content will be served, but all as HTML. I thought about just allowing search engines to index the first 1/4 of content, but as I'm concerned about internal links and picking up every bit of content, I thought it would be better to give search engines the entire content. I plan to do this by checking against a list of user agents to decide whether a request comes from a bot or not. If (Bot) { // serve plain html }

    In addition, I plan to use clean URLs for the entire website despite it being fully AJAX, so serving AJAX content at www.example.com/#/page and normal HTML at www.example.com/page is out of the question. I'd rather avoid the practice of using # when technologies such as HTML5 pushState are around.

    So my question is really just asking the opinion of the masses: is it likely that my website will be penalised? And can you suggest an alternative that avoids the 'noscript' method?
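    For illustration only, the user-agent check described above might look like this in Apache terms (the /static/ path is hypothetical; note that branching on user agent like this is precisely the cloaking pattern the question is asking about):

        RewriteEngine On
        # Internally rewrite known crawlers to the pre-rendered HTML variant
        RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|slurp) [NC]
        RewriteRule ^(.*)$ /static/$1 [L]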

    Read the article

  • How to utilise a newly acquired keyword domain to contribute to an already existing healthy website?

    - by vDog
    My client's website has just reached spot #1 for its most valuable keyword, and we have acquired the domain that previously held that spot. It's a keyword domain (targetkeyword.tld). I am just wondering what would be the best way to make use of it: a permanent redirect, or a single page that links to the brand website? Should I be concerned about anything negative associated with this keyword domain (poor backlinks, and the fact that the website was down for about a month)?
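    If the permanent-redirect route is chosen, a minimal .htaccess sketch (www.brandsite.com stands in for the client's actual domain):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?targetkeyword\.tld$ [NC]
        RewriteRule ^(.*)$ http://www.brandsite.com/$1 [R=301,L]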

    Read the article

  • How to Fix this specific Google "Fetch as Googlebot" error appearing on my Webmaster Tools?

    - by UXdesigner
    Good day. I'm currently trying to find out why my website has lost all of its rank in Google. I don't even appear in Google results for my own domain, yet other sites that link to me do appear in the results. I think it's because I left my site alone for two months and came back to 20k comment-spam entries, which I completely deleted; I then fixed things with filters and by adding a new Disqus comment service. The thing is, I added my site to Google Webmaster Tools and I'm finding out several awful things. For example, when I click on "Fetch as Googlebot" I receive the error message below in response to my request, and I don't know what the real problem is or how to fix it. This is what appears:

        Date: Wednesday, July 20, 2011 9:43:35 AM PDT
        Googlebot Type: Web
        Download Time (in milliseconds): 55

        HTTP/1.1 403 Forbidden
        Date: Wed, 20 Jul 2011 16:43:36 GMT
        Server: Apache
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 248
        Keep-Alive: timeout=2, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        403 Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    Do you know anything about this problem? I need to have Google crawl my site again; I had really good Google results for the past three years, and now there's nothing. Thanks.
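    One possible cause, offered purely as an assumption: an anti-spam rule that denies requests by user agent. A leftover .htaccess rule of this shape would produce exactly the 403 shown above, so it is worth checking for something like:

        # A rule like this blocks Googlebot with a 403 Forbidden
        SetEnvIfNoCase User-Agent "googlebot" block_bot
        Order Allow,Deny
        Allow from all
        Deny from env=block_bot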

    Read the article

  • How to remove duplicate content, which is still indexed, but not linked to anymore?

    - by David
    A bug in the tool we use to create search-engine-friendly URLs changed our whole URL structure overnight, and we only noticed after Google had already indexed the pages. Now we have a massive duplicate-content issue, causing a harsh drop in rankings. Webmaster Tools shows over 1,000 duplicate title tags, so I don't think Google understands what is going on.
    Right URL: abc.com/price/sharp-ah-l13-12000-btu.html
    Wrong URL: abc.com/item/sharp-l-series-ahl13-12000-btu.html (created by mistake)
    After that, we changed all URLs back to the "right" URLs, and a few days later set up a 301 redirect for all "wrong" URLs. Still, a massive number of pages is in the index twice. As we no longer link internally to the "wrong" URLs, I am not sure Google will re-crawl them very soon. What can we do to solve this issue and tell Google that all the "wrong" URLs now redirect to the "right" URLs? Best, David
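    A minimal sketch of the per-URL 301 mapping, using the example pair from the question (mod_alias syntax, one line per wrong/right pair):

        Redirect 301 /item/sharp-l-series-ahl13-12000-btu.html /price/sharp-ah-l13-12000-btu.html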

    Read the article

  • Preventing search engines from indexing duplicate content

    - by umesh awasthi
    I am in the process of migrating my old domain (www.oldurl.com) to a new domain (www.newurl.com). Almost all the content, the URL structure, and the database are the same except for a few URLs; the only real difference will be the domain name. I have made entries in Apache's .htaccess file to set up 301 redirects, and I have currently blocked all search engines from crawling my new domain via the robots.txt file. I am not sure how to handle the duplicate-content issue when I make the new domain live. Should I block search engines from indexing/crawling my old domain? I am new to this field and not sure whether this is actually a duplicate-content issue at all.
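    A minimal .htaccess sketch of the site-wide 301, using the placeholder domains from the question. Note that a crawler can only discover a 301 on the old domain if it is allowed to crawl that domain, so blocking the old domain in robots.txt would hide the redirects themselves:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?oldurl\.com$ [NC]
        RewriteRule ^(.*)$ http://www.newurl.com/$1 [R=301,L]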

    Read the article

  • Do image backlinks count as backlinks?

    - by sam
    If I have lots of images appearing on Tumblr blogs, the sort of tumblelogs with very little text, just reams and reams of images for people to browse through (example: http://whereisthecool.com/), and my image is embedded in their site like this:

        <a href="http://mysite.com" target="_blank">
            <img src="http://cutecatblog.com/cat.jpg" alt="cute cat"/>
        </a>

    so the image is a link back to my site: although there is no anchor text to speak of, does Google take into account the alt text of the image? Would this still count in Google's eyes as a backlink?

    Read the article

  • Is it safe to block redirected (but still linked) URLs with robots.txt?

    - by Edgar Quintero
    I have a website where all URLs have been optimised and 301-redirected from nasty URLs to clean ones. However, the unclean URLs are still linked everywhere throughout the site: in menus, content, products, etc. Google currently has all the clean URLs indexed, along with a few unclean URLs too. So the site still links to the old URLs everywhere (ideally this wouldn't be the case, but this is how it is at the moment). I would like to block the unclean URLs with robots.txt. The question: if I block these unclean URLs with robots.txt while the entire website still links to them (but they all redirect to the clean version), will this affect the indexing status at all?
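    For concreteness, the proposed block might look like the sketch below, with /old/ standing in for whatever prefix the unclean URLs share (an assumption; the real pattern isn't given). One known trade-off: a crawler never fetches a robots.txt-blocked URL, so it also never sees that URL's 301:

        User-agent: *
        Disallow: /old/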

    Read the article

  • Geotargeted subfolder questions (Portugal/Brazil and Switzerland)

    - by Lucy
    We are at the beginning of the process of creating multilingual versions of a website. We will be using subfolders off the core domain (e.g. mydomain.com/fr/), setting the geotargeting in Webmaster Tools, and setting the hreflang attribute. I would really appreciate your help with a couple of questions.
    1. Portuguese: we will have a Portuguese-language version of the site. Our intention is to use this to cover users in both Portugal AND Brazil, i.e. we are not going to create separate folders mydomain.com/pt/ and mydomain.com/br/. Can I use two hreflang attributes for this language version to tell Google it covers Brazil AND Portugal? What country code should I use for this subfolder?
    2. Switzerland: does anyone have best-practice advice on how to do this? On the one hand, the subfolder should be mydomain.com/ch/, but Switzerland covers two language possibilities (French AND German), so what to do? Thanks.
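    A possible hreflang sketch under the subfolder scheme described above (the URLs are assumptions): a language-only "pt" code covers Portuguese speakers in any country, and Switzerland can be handled by pointing each Swiss locale at the matching language folder rather than a single /ch/ folder:

        <link rel="alternate" hreflang="pt" href="http://mydomain.com/pt/" />
        <link rel="alternate" hreflang="de-CH" href="http://mydomain.com/de/" />
        <link rel="alternate" hreflang="fr-CH" href="http://mydomain.com/fr/" />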

    Read the article

  • How to remove HTML code from search result page content

    - by Jack Torris
    I have a music website. There are 46 album pages, and each page has a different player and files. I entered one of the album URLs in a search engine and found that Google is displaying the player code in the search result content. For example, enter this URL in Google and check the results: each result displays an .mp3 file in the content section. I see this:

        This page contains a demo of and documentation for the new jPlayer Playlist add-on, ... mp3:"http://www.jplayer.org/audio/mp3/Miaow-01-Tempered-song.mp3", ...

    I don't want Google to show the player code and mp3 files in the search results. How can I hide the audio files and player code from the search engine? What would be the best solution?
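    One approach, sketched under the assumption that the player is configured inline in the page markup: move the playlist configuration (including the mp3 URLs) into an external JavaScript file so it no longer appears in the page's indexable text. The file name here is hypothetical:

        <script src="/js/player-init.js"></script>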

    Read the article

  • What is the best approach to copy public dynamic pages?

    - by Renan
    Situation: the government is supposed to publish official information online, such as acts and laws. Problem: they're using 90s expertise to do it. You can tell by the constant use of deprecated HTML tags such as <table> and the lack of any compression at all, which makes some documents go way over 700,000 bytes even though they're pure text. Side problem: some companies are actually editing and selling this content that should be public and free. What I need to know is the best approach to offer said official content on my own site for free. I've thought of setting up a mirror to copy the official pages from time to time, since some of them are updated frequently; the mirrored pages would automatically be compressed, as all my pages are, via .htaccess.
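    For the compression side, a minimal .htaccess sketch of what "automatically compressed" typically means, assuming Apache with mod_deflate:

        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
        </IfModule>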

    Read the article

  • Why do some websites change their short, user-friendly URLs to long URLs?

    - by diEcho
    Hello all. I wonder why some websites change their short, user-friendly URLs to long URLs, for example: cricinfo.com to espncricinfo.com, indiafm.com to bollywoodhungama.com, and many others I have seen. I just want to know what the exact need for doing that is. Is there an economic reason, or what? I think users don't like to type long website names; I still type indiafm.com and the browser automatically redirects the URL. (Sorry if the tags are wrong.) Thanks.

    Read the article

  • Google Webmaster Tools "Incorrect rel-alternate-hreflang implementation" warning message

    - by Noam
    I'm getting this warning message in Google Webmaster Tools: "Incorrect rel-alternate-hreflang implementation. In particular, there seems to be a problem with missing or incorrect bi-directional linking (when page A links with hreflang to page B, there must be a link back from B to A as well)." The message seems pretty straightforward, but when checking their example pages I can't find anything wrong. I'm using alternates for translations of the main site menu, titles, etc. In each page I have this:

        <link rel="alternate" hreflang="en" href="http://mydomain.com/page" />
        <link rel="alternate" hreflang="jp" href="http://ja.mydomain.com/page" />
        <link rel="alternate" hreflang="ko" href="http://ko.mydomain.com/page" />
        <link rel="alternate" hreflang="th" href="http://th.mydomain.com/page" />
        <link rel="alternate" hreflang="es" href="http://es.mydomain.com/page" />
        <link rel="alternate" hreflang="pt" href="http://pt.mydomain.com/page" />

    I've double-checked that this exists in all 6 pages. This is the first time I've seen this message, although I implemented this at least 6 months ago and the implementation hasn't changed. Is there any way to check a specific set of pages for these things? Am I missing something in my implementation? We're auto-redirecting people from a location to their specific language and giving them an option to change it manually. I've also just found out about the suggestion to use the Vary HTTP header: is that relevant and important here?
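    A side observation, separate from the bi-directional warning itself: "jp" is a country code, not an ISO 639-1 language code; the language code for Japanese is "ja". A corrected pair, keeping the question's URLs, would look like:

        <link rel="alternate" hreflang="en" href="http://mydomain.com/page" />
        <link rel="alternate" hreflang="ja" href="http://ja.mydomain.com/page" />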

    Read the article

  • Best way to prevent Google from indexing a directory [duplicate]

    - by Gkhan14
    This question already has an answer here: Stopping Google index some web pages (5 answers). I've researched many methods of preventing Google and other search engines from crawling a specific directory. The two most popular ones I've seen are:
    1. Adding it to the robots.txt file: Disallow: /directory/
    2. Adding a meta tag: <meta name="robots" content="noindex, nofollow">
    I want this directory to remain "invisible" to search engines so it does not affect any of my site's ranking. In other words, I want this directory to be neutral/invisible and "just there". Which method is the best way to achieve this?
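    A third option worth sketching: serve the directive as an HTTP header from an .htaccess file inside /directory/, assuming Apache with mod_headers. Unlike robots.txt, this also covers non-HTML files; note it requires the directory to remain crawlable so the header can actually be seen:

        Header set X-Robots-Tag "noindex, nofollow"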

    Read the article

  • Removing existing filtered pages from Google's index: noindex / 301 / canonical to non-filtered page?

    - by Noam
    I've decided to remove some of my site's pages from the Google index, to focus the index on higher-quality pages. The pages I'm going to remove are already in the index. They are filtered pages which will continue to exist; I just don't want them in the Google index, because they add little value over the same page without any filter selected. In Webmaster Tools I've marked the parameters that control these filters as "narrows", but that doesn't seem to change how Google handles these pages. So I'm considering three options:
    1. Adding <meta name="robots" content="noindex" /> to the HTML header of these filtered pages.
    2. A 301 to the non-filtered page that contains the most similar information and will remain in the index.
    3. A canonical tag, which I'm not sure is exactly the mainstream use case, as these aren't really the same pages.
    Which should I use?
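    For completeness, option 3 rendered as markup; the target URL is a placeholder for the unfiltered version of the page:

        <link rel="canonical" href="http://mydomain.com/category/unfiltered" />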

    Read the article

  • Do search engines directly penalize bad grammar?

    - by Nicolas Raoul
    Let's say I have a web page with user-contributed content which is good content, but with bad grammar, slang terms, and an inappropriate tone. I know that bad grammar is also a problem because it drives away visitors and scares people off linking to the page, but let's put that aside. Let's also put aside the fact that incorrectly spelt terms might be ignored by a crawler, potentially leading to fewer text-comparison hits. QUESTION: Do search engines like Google directly recognize and penalize bad grammar, for instance because they might consider bad grammar a sign of low-quality content?

    Read the article

  • Using hreflang to specify a catchall language

    - by adam
    We have a site primarily targeted at the UK market, and we are adding a US-market alternative. As per Google's recommendations: "To indicate to Google that you want the German version of the page to be served to searchers using Google in German, the en-us version to searchers using google.com in English, and the en-gb version to searchers using google.co.uk in English, use rel="alternate" hreflang="x" to identify alternate language versions." Which gives us:

        <link rel="alternate" hreflang="en-gb" href="http://www.example.com/page.html" />
        <link rel="alternate" hreflang="en-us" href="http://www.example.com/us/page.html" />

    We do get enquiries from other areas of the world, particularly where there are expat communities (Dubai, UAE, Portugal, etc.). By adding the above tags, is there a risk that Google will only surface our site for UK and US search users? Do we need to specify a catch-all that will default all other searches to our UK site?
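    The documented catch-all for exactly this case is hreflang="x-default", which designates the page served to searchers matched by no other hreflang entry; added to the set above:

        <link rel="alternate" hreflang="x-default" href="http://www.example.com/page.html" />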

    Read the article

  • How to Avoid Duplicate Content in a WordPress Ecommerce Store

    - by Bhanuprakash Moturu
    Hi, I run a WordPress eCommerce store powered by WooCommerce. I have a large inventory of products, and most of the product description is the same for all products; it is mandatory to include it, so it is creating a lot of duplicate content on the site. Each category has 6 products. I have thought of two solutions; can you suggest which one is good?
    1. noindex, follow the product page and link it to the category page using a canonical tag.
    2. index, nofollow the product page and link it to the category page using a canonical tag.
    Which is the best solution, and is it good practice to use a canonical tag to link to the category page?
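    Option 1 rendered as markup, strictly as a sketch (the category URL is a placeholder; note that combining noindex with a cross-page canonical sends mixed signals to Google, which is part of why the two options differ):

        <meta name="robots" content="noindex, follow" />
        <link rel="canonical" href="http://example.com/product-category/widgets/" />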

    Read the article

  • Proper way to create and work with a subdomain?

    - by Genadinik
    My site was affected by Panda, and I am trying to see if moving content to a subdomain would work. The site is comehike.com, and I have created a subdomain, currently empty, at hiking.comehike.com. I have a directory /outdoors that contains some high-quality, hand-written articles. I want to put those into the new subdomain to see what happens. My questions are:
    1. Should I just copy the files for those pages into the new subdomain's folder and change all the links in all my pages from the original domain to the new subdomain?
    2. Should I just do a 301 redirect to the new subdomain?
    3. Since test.site.com and www.site.com are different domains, will the new pages have to start from scratch in terms of PageRank and their rankings in the SERPs?
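    If the 301 route is taken, a minimal .htaccess sketch on the main domain, using the paths and subdomain named above:

        RewriteEngine On
        RewriteRule ^outdoors/(.*)$ http://hiking.comehike.com/$1 [R=301,L]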

    Read the article

  • How to set default hreflangs for some languages?

    - by user1721135
    I want to make a site with different versions for two countries which share the same language, and then do the same for another language. Basically I want to have 6 versions of the site:
    UK English
    US English
    Default English
    Austrian German
    Germany German
    Default German
    The question is: how do I define the "default" language versions, to be served to any country with that language which isn't already covered? I know there is x-default, but I think you can only use that once, and it applies across all languages and all countries.
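    A sketch of the usual pattern, with placeholder URLs: a bare language code (no region) acts as the default for that language in every country not listed explicitly, so each language can have its own default without relying on x-default:

        <link rel="alternate" hreflang="en-GB" href="http://example.com/en-gb/" />
        <link rel="alternate" hreflang="en-US" href="http://example.com/en-us/" />
        <link rel="alternate" hreflang="en" href="http://example.com/en/" />
        <link rel="alternate" hreflang="de-AT" href="http://example.com/de-at/" />
        <link rel="alternate" hreflang="de-DE" href="http://example.com/de-de/" />
        <link rel="alternate" hreflang="de" href="http://example.com/de/" />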

    Read the article

  • How do bingbot (is that the right spider name?) and Googlebot interpret 301 redirects?

    - by jbcurtin
    I've been looking for documentation on how the Microsoft and Google bots interpret 301 redirects. It seems that Googlebot stores documents in a URL-based index system, but I haven't been able to figure out how Bing works. Should I assume that they are still working towards copying everyone else and use an algorithm close to Google's? Is it best to just forward a page to a new location via JavaScript? I think this might be a blackhat trick, but how would I tell the bots that it's not? Is a 301 redirect my best option, and do I just have to bite the bullet because said pages no longer exist? What other options do I have that I might not be aware of?

    Read the article

  • Why do old (301-redirected) links stay on Google when breaking a site down into multiple domains?

    - by Sampo Sarrala
    Some background: we used to have a single site on a single domain (let's call it mainsite.com) with product information, but things have changed since, and the product database has grown fast. So we decided to move some major products/manufacturers onto their own domains (let's call one of them subsite.com) while still using our main database/codebase. What we've done:
    1. Added the subsite.com domain for product 1 by Great Products Co.
    2. Built some new nice-looking front pages, info pages, etc., plus detail pages that use information from the original database.
    3. Redirected product/group links from mainsite.com using 301 redirects.
    4. Verified that the redirects work as expected.
    5. Waited some time for Google to reindex (over 30 days; I've heard that should be more than enough).
    Results: if I search for our moved products on Google, it finds and lists them, but with old links to our main site, like mainsite.com/group/product1, when it should show the link to the new site, subsite.com/product1. The links from Google redirect as they should; as said, the redirects are verified [301]. Main question: are there any reasons why Google would not follow the 301 redirects and update the links so that they point to our new manufacturer/product site, subsite.com?

    Read the article

  • Better ways to get valuable data indexed which is currently ignored

    - by Sam
    <a title="">.../a> Hi folks. It seems that my title tag which holds extremely valuable and describes contents on my simple design page is currently compeltely denied by search engines and not indexed at all!! Those descriptions should however be indexed as the describe valuable portions to an otherwise empty page with clean glossary (thats neat and organised to the eye of the viewer. So putting all that descriptive data into visible space would ruin the designish less is more fundamental... So, which alternatives to the title tag do I have, in order to put important contents that are relevant for both user as well as search engines? A <a name="">......</> B <p name="">......</> C <a alt="">.......</> D <p alt="">.......</> From the above list, arose my question: Which of the above is advisable alternative in order to get the valuable actual content indexed? Should it be in a a tag or p tag? Or are there even better tags for this which still keep layout clean? You suggestions are Much appreciated!

    Read the article

  • How to write good blog post tags

    - by keruilin
    It seems that you have three choices when deciding how to write tags for your blog posts:
    1. Make them user friendly
    2. Make them highly searchable
    3. A combo of the two
    For example, let's say that I have a blog post with write-ups on the top 10 iPad apps for business travel (e.g., Evernote, Dragon Diction, Instapaper, etc.).
    User-friendly tags: ipad apps, business travel
    Searchable keywords (analyzed with Google Keyword Analyzer): ipad apps, ipad travel apps, evernote ipad, instapaper, instapaper ipad
    Combo: ipad apps, ipad travel apps
    So my question comes down to this: which is really the best choice, 1, 2 or 3? Note: these visible post tags will also serve as the meta keywords for the post page.

    Read the article
