Search Results

Search found 4080 results on 164 pages for '2stroke seo'.

Page 90/164 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • Google is re-indexing pages after redirecting URLs from HTTP to HTTPS incorrectly

    - by SLIM
    I upgraded my site so that all pages have gone from HTTP to HTTPS. I didn't consider that Google treats HTTPS pages differently than HTTP. I recreated my sitemap so that all links now reflect the new HTTPS URLs and let it be for a few days. (Whoops!) Google is now re-indexing all the HTTPS pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new HTTPS pages. The problem is that Google sees all of these as brand new pages when many of them have a long HTTP history. Of course most of you will recognize the problem: I didn't set up a 301 from the old HTTP URLs to the new HTTPS URLs. Is it too late to do this? Should I switch my sitemap back to HTTP URLs and then 301 redirect to the new HTTPS URLs? Or should I leave the sitemap as is and set up 301 redirects anyway? I'm not even sure Google is trying to reach the HTTP site anymore. Currently the site is doing 303 redirects (from HTTP to HTTPS), although I haven't figured out why yet.
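
    For reference, the missing 301 described above is typically a site-wide rule from HTTP to HTTPS - a minimal .htaccess sketch, assuming Apache with mod_rewrite:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]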

    Read the article

  • PHP Calculating Text to Content Ratio

    - by James
    I am using the following code to calculate the text to code ratio. I think it is crazy that no one can agree on how to properly calculate the result. I am looking for any suggestions or ideas to improve this code that may make it more accurate.

        <?php
        // Returns the size of the content in bytes
        function findKb($content){
            $count = 0;
            $order = array("\r\n", "\n", "\r", "chr(13)", "\t", "\0", "\x0B");
            $content = str_replace($order, "12", $content);
            for ($index = 0; $index < strlen($content); $index++){
                $byte = ord($content[$index]);
                if ($byte <= 127) {
                    $count++;
                } else if ($byte >= 194 && $byte <= 223) {
                    $count = $count + 2;
                } else if ($byte >= 224 && $byte <= 239) {
                    $count = $count + 3;
                } else if ($byte >= 240 && $byte <= 244) {
                    $count = $count + 4;
                }
            }
            return $count;
        }

        // Collect size of the entire code ($content is assumed to hold the page's HTML)
        $filesize = findKb($content);

        // Remove anything within script tags
        $code = preg_replace("@<script[^>]*>.+</script[^>]*>@i", "", $content);
        // Remove anything within style tags (applied to $code so the script removal above is preserved)
        $code = preg_replace("@<style[^>]*>.+</style[^>]*>@i", "", $code);
        // Remove all tags
        $code = strip_tags($code);
        // Remove extra whitespace from the content
        $code = preg_replace('/\s+/', ' ', $code);
        // Find the size of the remaining text
        $codesize = findKb($code);

        // Calculate percentage
        $percent = $codesize / $filesize;
        $percentage = $percent * 100;
        echo $percentage;
        ?>

    I don't know the exact calculations that are used, so this function is just my guess. Does anyone know what the proper calculations are, or whether my function is close enough for a good judgement?
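
    As a side note, a sketch of the same steps wrapped in a reusable helper (textToCodeRatio is a hypothetical name, not part of the original snippet; it relies on the findKb() function defined above):

        <?php
        // Hypothetical wrapper: percentage of the page's bytes that are visible text.
        function textToCodeRatio($html) {
            $total = findKb($html); // size of the full markup
            // the "s" flag lets the patterns span multi-line script/style blocks
            $text = preg_replace("@<script[^>]*>.+</script[^>]*>@is", "", $html);
            $text = preg_replace("@<style[^>]*>.+</style[^>]*>@is", "", $text);
            $text = strip_tags($text);
            $text = preg_replace('/\s+/', ' ', $text);
            return $total > 0 ? (findKb($text) / $total) * 100 : 0;
        }

        // Example with a placeholder URL:
        // echo textToCodeRatio(file_get_contents('http://www.example.com/'));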

    Read the article

  • Google's new algorithm: my company has 40 sites on different domains, and some of their articles appear on my main website

    - by user5674576
    Hi, my company has 40 sites on different domains, and some of their articles appear on my main website with a reference to their source. Our articles are written by high-level professionals in the fields they write about - we also pay them well. After a recent Google algorithm change, my main site's ranking dropped very seriously. What should we do to restore the company's main site ranking in Google? Our solutions and ideas that are not working well: rel="canonical" to the source website (we already had it before the Google change, without results); a meta "original-source" tag, but it has no influence on ranking (we already had it before the Google change, without results). Edit: maybe we should delete rel="canonical" from the main website articles that refer to our other small websites (because these articles on the main website are not indexed in Google)? Thanks in advance
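
    For reference, the canonical hint mentioned above is a single link element in the head of the page that republishes the article, pointing at the original source (the URL here is only a placeholder):

        <link rel="canonical" href="http://www.example.com/original-article" />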

    Read the article

  • 30x redirects and Google page ranking

    - by Mechaflash
    I'm building a Drupal site where much of the content is going to be static content placed on specific pages. In Drupal, each piece of content (whether you like it or not) gets its own page (node) created for it. To ensure that users do not view these nodes, I'm thinking about setting up a 30x redirect or a flat-out 30x not found. Will this method affect me negatively with Google? Is there a different method that you could propose that may be better?

    Read the article

  • Do WordPress websites get indexed by search engines quicker than regular websites?

    - by guisasso
    I registered a couple of domains with the names of categories of products we sell. I then installed WordPress on one of those domains, played around with it for a bit, and left it alone for about a month. There was a link on my regular website to that secondary website, and that website was also registered in Google's Webmaster Tools, but that's that. I then searched on Google last week for that product category and, to my surprise, that secondary website showed up on the 2nd or 3rd page of Google. Now my question is: do search engines index WordPress websites quicker? I had given up on using WordPress for that website, since it's so simple, but should I use it, would it give me better results? Thanks in advance for the help, if the question is not deleted.

    Read the article

  • Moved sitemaps to a different subdomain and losing search referrals around the same time. Red herring or correlation?

    - by er1234
    We started to lose search referral traffic around the same time that I moved some of our sitemaps to a subdomain. Could this have hurt us? I followed Google's steps to creating a sitemap under a different subdomain. The new sitemaps.foo.com subdomain is being crawled and indexed well. Both www.foo.com and sitemaps.foo.com have been verified in Google Webmaster Tools. They appear as distinct sites. Is this correct? I can't find a way in Webmaster Tools to say "Hey, sitemaps.foo.com is really owned by www.foo.com, so show them together and make sure to attribute sitemaps.foo urls to www.foo". Our www.foo.com/robots.txt contains:

        Sitemap: http://www.foo.com/sitemap.xml
        Sitemap: http://sitemaps.foo.com/subdir/sitemap.xml.gz

    Read the article

  • How can I track scrolling in a Google Analytics custom report?

    - by SnowboardBruin
    I want to track scrolling on my website since it's a long page (rather than multiple pages). I saw several different methods, with and without an underscore for trackEvent, and with and without spaces between commas:

        <script>
        ...
        ...
        ...
        ga('create', 'UA-45440410-1', 'example.com');
        ga('send', 'pageview');

        _gaq.push(['_trackEvent', 'Consumption', 'Article Load', '[URL]', 100, true]);
        _gaq.push(['_trackEvent', 'Consumption', 'Article Load', '[URL]', 75, false]);
        _gaq.push(['_trackEvent', 'Consumption', 'Article Load', '[URL]', 50, false]);
        _gaq.push(['_trackEvent', 'Consumption', 'Article Load', '[URL]', 25, false]);
        </script>

    It takes a day for counts to load with Google Analytics, otherwise I would just tweak and test right now.
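
    Worth noting: the two syntaxes in the snippet belong to different Google Analytics libraries - _gaq.push(['_trackEvent', ...]) is the older ga.js API, while ga('create', ...) / ga('send', ...) is analytics.js. A rough scroll-depth sketch using only the analytics.js event syntax (the category and action strings are taken from the snippet above; everything else is a placeholder):

        <script>
        // Assumes the analytics.js snippet (ga('create', ...)) is already on the page.
        var marks = [25, 50, 75, 100];   // scroll depths to report, in percent
        var sent = {};                   // depths already reported this pageview
        window.addEventListener('scroll', function () {
          var scrolled = (window.pageYOffset + window.innerHeight) /
                         document.documentElement.scrollHeight * 100;
          marks.forEach(function (mark) {
            if (scrolled >= mark && !sent[mark]) {
              sent[mark] = true;
              ga('send', 'event', 'Consumption', 'Article Load', location.href, mark, {
                nonInteraction: true   // optional: keep scroll events from affecting bounce rate
              });
            }
          });
        });
        </script>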

    Read the article

  • How should I handle search engines auto-correcting the spelling of a site's name?

    - by Nathan G.
    A client's site and company is called 'Tranin Communications' (Tranin is her last name). It ranks well in searches for her name but rather poorly in searches for the name of her site/company. I realized that this is largely due to* search engines (Google especially) assuming that the query was misspelled and automatically including results for both 'train communications' and 'communications training'. Both of those queries yield many high-ranking sites that completely drown out hers. Sometimes Google even shows results for 'communications training' instead of 'tranin communications', hiding her site altogether. Is there a way to report an incorrect auto-correction to Google or something I can do to discourage this behavior (e.g. a meta tag)? My searches have come up cold, any suggestions would be appreciated. *I've come to this conclusion because her site ranks very highly when the same queries are put in quotes.

    Read the article

  • How to purge old links in Google from an old domain.

    - by jbcurtin
    Hey all, recently I uploaded a new site to an existing domain and I'd like to figure out how I can forward all links to said domain to a new domain. I'm looking for a WordPress solution if possible, but in the end I see myself writing a small header script that I will paste into every directory's index file saying header('Location:http://xxx.yyy.zzz'). Is there a cleaner way to do this without having to resort to managing the whole file structure? No, I do not have access to the Apache runtime. Unfortunately it is a shared-host server. Thanks in advance.
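
    A minimal sketch of that kind of header script, assuming PHP on the shared host (the target URL is only a placeholder); sending an explicit 301 status tells search engines the move is permanent rather than a temporary redirect:

        <?php
        // Hypothetical drop-in for each old index file: permanent redirect to the new home.
        header("HTTP/1.1 301 Moved Permanently");
        header("Location: http://www.example.com/");
        exit;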

    Read the article

  • 80,000 hits on irrelevant topic in 1-2 days [closed]

    - by John
    On the 3rd of Nov 2012 somebody started this thread: http://forums.hostgator.com/advice-needed-new-account-t214566.html?t=214566 with the subject "Advice needed on new account". I visited it on the 5th, and when I saw the total views I was stunned to see a figure of 80,000 (while other threads had views ranging from 50-300). I even made a post using the name rag_gupta. Then I simply typed into Google in IE (I work in Firefox): Advice needed on new account ---- and yes, this same page was in the #1 position. Subsequently I tried to create a similarly worded article on HubPages to see how it'd fare. Unfortunately it was not allowed to be published. Then I published it on blogger.com. But hardly any hits. Considering only the first two posts in the thread, what could have driven Google to rank it so highly?

    Read the article

  • Usefulness of blogs for multi language ecommerce site

    - by jawilson
    I have a multi-language ecommerce site for which I am trying to improve its online marketing. There's currently a blog for the english shop, on a subdomain, which doesn't really get very much traffic. I'd been recommended to set up blogs for each of the different language shops and try to generate traffic to each of the blogs. I'm wondering if this is really worth the effort and cost (blog devt costs and ongoing translations required). Does anyone have any experience/advice on this at all? Perhaps where they've used this approach successfully or otherwise?

    Read the article

  • Does the Instant Preview in Google Webmaster Tools take robots.txt into account?

    - by rockyraw
    Is that the way to go if I want to visually see what Googlebot sees? I'm trying to check a folder which I have just blocked in my robots.txt. If I fetch the folder as Googlebot, it fetches OK, so that doesn't tell me anything about whether the block is working. I know there's a tool to check for blocking, but it is dependent on the input of the robots.txt. Therefore I've tried the Instant Preview, and I don't get a preview for what the bot sees ("pre-render"), so I think that means it's because the robots.txt blocks it; however, I don't see that the bot tried beforehand to access my updated robots.txt, so I'm not sure how it knows that this folder is blocked? (It does preview another new folder that is not blocked.)
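
    For context, the block being tested is a plain Disallow rule in robots.txt; a minimal sketch with a placeholder folder name:

        User-agent: *
        Disallow: /blocked-folder/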

    Read the article

  • 301 redirect: Is this good or bad for 2 domains?

    - by Tim
    Since I couldn't find any appropriate answer to my specific question, I wanted to ask you. I've read a lot about 301 redirects for moving pages and so on. A customer of mine registered a new domain last year for better search results (he included his main keyword in the domain; before, he had only a domain with his business name, which said nothing about what he does). I told him that he should do a 301 redirect so he doesn't lose his position in Google, and to redirect all new customers coming from the old domain to the new domain. After about one year in which his site had a good amount of traffic, Google's search results for his keywords got worse and worse. Since he didn't maintain his website (no new content, bad content on all pages and so on), I assumed this was the problem. He gave his website to another company which also makes websites. They told him that this 301 redirection is very bad for his website. They removed it and also updated his content and the template, so now he has the same meta keywords on every page (instead of the specific ones I put there before). He also removed the canonical tag which I placed there to ensure no duplicate content. What I am now afraid of is that without this redirect Google will find duplicate content and therefore kick him out of the index, which would be a nightmare, since most of his customers come through his website. I need verification of the fact that the 301 isn't bad but is in fact the correct way of working with two domains - if possible with good sources I can point him to, since he doesn't want to hear anything about this. If someone also has a few words about the keywords and the canonical tag, I would really appreciate it! Thank you very much!
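
    For reference, the redirect being defended is usually a site-wide 301 from the old domain to the new one. A minimal .htaccess sketch, assuming Apache with mod_rewrite and placeholder domain names:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.new-domain.com/$1 [R=301,L]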

    Read the article

  • Subdomains vs. subdirectory – status as of 2012.

    - by Quintin Par
    The following question by Jeff is from 2010, and I wanted to check how things have changed in the past two years. My problem: I run a site with most of the content distributed to subdomains that are user-based, e.g. Joe.example.com, John.example.com, Jil.example.com. So all of these subdomains have the content, and the main site example.com becomes a mere dummy listing all the subdomains. Now the question is: as of 2012, how is Google treating domain authority and PageRank in this case? I understand the notion of PageRank as being per page, but when it comes to domain authority, will the parent domain get the cumulative effect of the subdomains' authority, or will it be spread out?

    Read the article

  • What is best practice for search engines when a website is under maintenance?

    - by jamescridland
    I need around a week to transition a heavily data-driven website from one back end to another. During that time I do plan to attempt to keep some pages live, but they won't all work well or look brilliant. Some pages won't work at all. What is the best way to ensure I don't scare Google? Should I hide everything via robots.txt, or mark everything that doesn't work as a "503", or are there other things that I should be considering?
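
    A common recommendation for planned maintenance is a 503 status with a Retry-After header, so crawlers treat the outage as temporary rather than dropping the pages; a minimal PHP sketch (the retry window and message are placeholders):

        <?php
        // Hypothetical maintenance response: signal a temporary outage to crawlers.
        header("HTTP/1.1 503 Service Unavailable");
        header("Retry-After: 86400"); // seconds (one day); an HTTP-date is also valid
        echo "This page is temporarily unavailable while we move to a new back end.";
        exit;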

    Read the article

  • Can I use nofollow for offsite links without it affecting my page rank?

    - by Jack
    What I have is a page with almost all offsite links. Each clicked link is forwarded on to the destination. What I would like the search engines to do is to index the text between the anchor tags and not follow the link itself: <a href="somelink">Index This Text Only</a>. I've read several articles and they all seem to contradict each other as to when to use nofollow. What's been happening over the past 2 months that the site has been live is that both Google and Bing are crawling the site as well as all the links on the site that are being forwarded to. The search engines are now generating a lot of 404s for images and files that never existed on my site but rather seem to correlate with the sites the links were forwarded to. The search engines don't seem to honor the 302 header when forwarding. I would like to get a definitive answer on the nofollow attribute as it relates to my situation. Can I use nofollow to stop the 404s, and if so, will it affect my page ranking negatively?
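
    For reference, the attribute in question goes on each outbound anchor rather than in a page-level meta tag; a minimal sketch using the example link from the question:

        <a href="somelink" rel="nofollow">Index This Text Only</a>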

    Read the article

  • Google is displaying "Translate this page" based on a previously registered domain's inbound links

    - by crnm
    I recently started a new project on a newly registered generic TLD domain. As soon as Google started indexing the page, it displayed a "Translate this page" link in the SERPs, which tries to translate the page to the language of a small Eastern European country from the language that the site actually uses. I tried everything to prevent this: language meta headers and attributes, localisation through Google Webmaster Tools... all to no avail - nothing helped. After a couple of weeks I spotted dozens of inbound links popping up in Google Webmaster Tools, all coming from that small Eastern European country, from sub-pages that are not active anymore (either sending out 404s or 301s to the main page) and that had also been written in that other language. So the domain had been registered before, and by the looks of it, it got a lot of possibly spammy links in that language. I can't even ask the sites where those links were to remove them, as they are not physically active anymore, just in Google Webmaster Tools and/or Google's internal data... Now I'm at a loss about what to do. As my site is pretty new, it does not have many links pointing towards it in my targeted language, so those are probably not enough to convince Google to attach the right language to it, and Google ignores all other signals about the page language. I'm also unsure whether I should use the "disavow" tool, or a reconsideration request... or what else to do about this miserable state. I never used these tools before, so I don't have any experience with them. Somehow I have to convince Google about the right language of the page, and also not to count all those historical links from the previous owner. (The domain had been deleted without any traces in Google before I registered it.) Has anyone here ever dealt with a similar "Translate this page" problem? (I've also looked at this thread: How can I prevent Google mistakenly offering to translate a page? but didn't find a solution there.)
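
    For completeness, the on-page language signals mentioned above usually come down to declarations like these (the language code is only a placeholder - use the code of the site's actual language):

        <html lang="en">
        <head>
          <meta http-equiv="Content-Language" content="en">
        </head>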

    Read the article

  • URL is generating a /#!/splash-page

    - by user32642
    My site for some reason is generating a shebang - /#!/splash-page - in the URL. For example, when I type www.modernvintage1005.com, the browser returns www.modernvintage1005.com/#!/splash-page, and every subsequent page is /#!/about, /#!/contact, and so forth. There's absolutely nothing on Google about this. There is a lot of rewrite help for eliminating index.php from the home page, but that's it. How do I rewrite it to just say domain.com and domain.com/about.html, etc.? Here is my .htaccess file if you need to see it:

        # Rewrite Rule
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

        # compress text, html, javascript, css, xml:
        <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/plain
        AddOutputFilterByType DEFLATE text/html
        AddOutputFilterByType DEFLATE text/xml
        AddOutputFilterByType DEFLATE text/css
        AddOutputFilterByType DEFLATE application/xml
        AddOutputFilterByType DEFLATE application/xhtml+xml
        AddOutputFilterByType DEFLATE application/rss+xml
        AddOutputFilterByType DEFLATE application/javascript
        AddOutputFilterByType DEFLATE application/x-javascript
        AddType x-font/otf .otf
        AddType x-font/ttf .ttf
        AddType x-font/eot .eot
        AddType x-font/woff .woff
        AddType image/x-icon .ico
        AddType image/png .png
        </IfModule>

        ## EXPIRES CACHING ##
        <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/jpg "access 1 year"
        ExpiresByType image/jpeg "access 1 year"
        ExpiresByType image/gif "access 1 year"
        ExpiresByType image/png "access 1 year"
        ExpiresByType text/css "access 1 month"
        ExpiresByType application/pdf "access 1 month"
        ExpiresByType text/x-javascript "access 1 month"
        ExpiresByType application/x-shockwave-flash "access 1 month"
        ExpiresByType image/x-icon "access 1 year"
        ExpiresDefault "access 2 days"
        </IfModule>
        ## EXPIRES CACHING ##

    Read the article

  • What measures can be taken to make sure Google is aware of the existence of a newly created page?

    - by knorv
    Consider a website with a large number of pages. New pages are published regularly. When publishing a new page, the website operator wants to get the newly created page indexed in Google as soon as possible, minimizing the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00. I want to get important-page.html indexed as soon as possible after 12:00 - ideally within seconds or minutes. What options are available to try to get Google to index a specific newly created page as soon as possible?
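
    One common step is to list the new URL in an XML sitemap with a fresh lastmod and resubmit the sitemap; a minimal sketch of such an entry, using the example URL from the question (the timestamp is a placeholder):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.example.com/something/important-page.html</loc>
            <lastmod>2012-01-01T12:00:00+00:00</lastmod>
            <changefreq>hourly</changefreq>
          </url>
        </urlset>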

    Read the article

  • My website's Google index count suddenly increased and then suddenly dropped

    - by Jeg Bagus
    Yesterday before I went to sleep, I checked my site's index count and got about 50 pages indexed on Google. This morning when I woke up, I had about 250 pages indexed on Google, and my pages were ranking better for several keywords. Then I added one page and two canonical links, added a 404 page header, and resubmitted the sitemap. After two hours, it went back down to 50 indexed pages, and my rankings rolled back to the previous day's. What actually happened? Is it because I resubmitted the sitemap? Google is still crawling my website - are they trying to refresh the index?

    Read the article

  • Could crosslinking using very general anchor texts be a reason for a drop in rankings?

    - by webmasters
    I have crosslinked 20 sites and thought I had been penalized for this; I asked this question and some experienced members told me that the crosslinking may not necessarily be the reason. The sites are on the same host, on different C-class IPs, and every site is linked to each other. Each site targets long-tail keywords: Site 1 - BMW Used Cars - and my area; Site 2 - WW Used Cars - and my area; and so on... When I crosslinked them (in the sidebar), I did it for the users; instead of repeating the terms "used cars" and my location over and over (since my users are targeted), I just crosslinked them using the brand: BMW, WW. Targeting locally, my niches are not overly competitive, so I did not need too many external links to rank in various positions on the 1st page. I'm thinking that when I chose to link using only the brand, Google might have thought I wanted to actually rank for BMW and WW, hence the drop in my targeted local traffic. Could this be? I have now nofollowed the links and am noticing a slight recovery, but if it's not an interlinking penalty, it would be a shame not to benefit from my links.

    Read the article

  • Preventing pagination pages from appearing higher than real content in the SERPs

    - by WordPress Developer
    I have a gallery-type website with about 20k pages, and naturally it uses pagination. However, sometimes /page/2 appears higher in search results than /post/201339, for example. I'd like to give emphasis to the actual content (posts, videos, whatever the site is about) and not to the pages that merely list this content in a paginated manner. What is the best way to avoid this issue? Maybe a NOINDEX,FOLLOW meta tag on the paginated pages?
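
    For reference, the meta tag floated at the end of the question would go in the head of each paginated listing page; rel="prev"/"next" link elements were another commonly suggested signal for paginated series at the time. A minimal sketch (the /page/ URLs are placeholders following the question's pattern):

        <head>
          <meta name="robots" content="noindex,follow">
          <link rel="prev" href="/page/1">
          <link rel="next" href="/page/3">
        </head>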

    Read the article

  • How to specify the importance of HTML elements?

    - by Julien Fouilhé
    Is it possible to specify which elements of the page are important or, more specifically, which elements of the page are not important? I'm using the new HTML5 elements (nav, header, footer, section, article, aside...), but the Google description of my website's pages sometimes shows my login form (which sits in the header of my page)... Is there a solution to this problem? Thank you.
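
    One commonly suggested mitigation is to give each page an explicit meta description, so the snippet is less likely to be pulled from arbitrary page text such as a login form; a minimal sketch with placeholder text:

        <meta name="description" content="A short, page-specific summary of the page's content.">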

    Read the article

  • you will see the ugg boots outlet of type, color, size

    - by skhtyu skhtyu
    These humans taken the apple through hurricane. Lots added humans accompany in their friends' traces to use these affidavit footwear, they are fabricated from top-grade Foreign merino uggs for cheap. Amazing abundance and aswell amore tend to be assured aloft accustomed materials.bailey button uggs which are at aboriginal acclimated by Australian accept set off a abnormality all over the apple these days. plenty of celebrities are usually spotted putting them on purple uggs this aswell allures abounding individuals to get. For those who alarm for to access due to the fact, go to internet vendors & they can accepting superior twos awash with affordable prices adapted now there.When researching arrangement pink uggs through web food or arrangement sites, ensure to see their own go aback and aswell acquittance procedures afore you achieve your choice. So as you will see the advantage of type, color, admeasurement as able-bodied as absolute acclimated seems to be amaranthine and now application the accession of the uggs cheap and a clog up adaptation you're a lot added ashore for choice. So no amount what, the absolute "in" affair for your chiffonier this advancing year is in achievement affected ugg classic short for ladies, and you're artlessly abiding to acquisition a brace that's aural your budget.

    Read the article

  • Google showing meta descriptions from other pages in the SERPs

    - by ojek
    Recently I added some content to my website and submitted sitemap files to Google. Now that Google has indexed those pages, I've discovered that some of the words and sentences listed in Google that lead to my website have their meta descriptions somehow mixed up. Here is how it works: after I put a sentence into Google to check my website's ranking, I can see the title of page1 in the results, a link to page1, and a description from page2. Since my website is a forum, if Google mixes up the links of threads, it leads my users to a different kind of material than they were looking for. Is there anything I can do about it?

    Read the article
