Search Results

Search found 21,507 results on 861 pages for 'google spreadsheets'.

Page 249 of 861

  • AdWords: Is there a drawback to setting a really high max CPC to learn what works faster?

    - by Rob Sobers
    I'm toying with increasing my max CPC really high on all my keywords to ensure my ad gets shown in the top spot on page one, in order to draw more clicks. I think this will be a good way to quickly figure out whether the ads I'm writing have a decent CTR and, more importantly, whether the landing pages I'm building are converting. Since I can set a max daily budget for my campaign, I won't risk breaking the bank. I can't think of any drawbacks, personally. Am I missing any?

    Read the article

  • Chrome Web Store launch imminent: Google sends an informational email and is holding a Chrome event tomorrow

    Chrome Web Store launch imminent: Google is sending an informational email to Chrome extension developers. Google will send a first round of informational emails to Chrome extension and theme developers, a sign that the opening of the Chrome Web Store online shop is imminent. Gregor Hochmuth, product manager for the Google Chrome Web Store, has himself announced that developers will be informed before the store's official launch. The purpose of these messages will be to describe the changes made to the store, so that developers can check the impact on their code and make changes before publication...

    Read the article

  • Is this a link scheme? If so, what should I do, and what problems can I face?

    - by guisasso
    I was asked to remodel a website, and decided to check its rank on Alexa. Surprisingly, there are many, many different websites linking to it, none of them relevant. One particular thing is that none of these URLs work, and they all display the exact same error when accessed, which to me is a very good indication that this is some sort of linking scheme. (Besides the somewhat obvious names, it even says 'scheme' in one of the URLs!) If so, how should I proceed with this website? What can I do if this is in fact a scheme, how can it hurt the website, what types of problems can I face, and what can I do about it?

        addurlnow . info
        dirlist15.addurlnow . info/Business___Economy/Services/page-12.html
        linkdirectory101 . info
        dirlist16.linkdirectory101 . info/Business___Economy/Services/page-15.html
        seonetblog . info
        dirlist52.seonetblog . info/Business___Economy/Affiliate_Schemes
        addurls . us
        dirlist21.addurls . us/Business___Economy/Services/page-10.html
        webdirectoriessite . info
        dirlist20.webdirectoriessite . info/Business___Economy/Services/page-6.html
        addurlstore . info
        dirlist10.addurlstore . info/business___economy/services/page-14.html
        ukwebdirectorys . info
        dirlist21.ukwebdirectorys . info/Business___Economy/Services/page-13.html

    Read the article

  • Is this DFP error message the reason my ad won't show?

    - by Eric
    I'm setting up DFP to display ads, and I have an ad tag (JavaScript) from adtechus.com. The tag looks like this:

        <script src="http://adserver.adtechus.com/?adrawdata/1.0/1111/11111/0/0/ADTECH;loc=100;noperf=1;"></script>

    When I paste that tag into DFP, I get an error message saying it doesn't recognize the tag format, and, more importantly, my ad isn't showing on the page. DFP seems to take my adtechus ad tag regardless and work with it, despite the error message. But could that be the reason my ad isn't showing? And how can I fix it?
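
    For reference, a minimal sketch of the tag format DFP generates natively (Google Publisher Tags); the network code 1234, the ad unit path, and the slot size are placeholders, not values from the question:

        <script async src="https://www.googletagservices.com/tag/js/gpt.js"></script>
        <script>
          window.googletag = window.googletag || { cmd: [] };
          googletag.cmd.push(function () {
            // Define one 300x250 slot under a placeholder ad unit path.
            googletag.defineSlot('/1234/sidebar-ad', [300, 250], 'div-gpt-ad-sidebar')
                     .addService(googletag.pubads());
            googletag.enableServices();
          });
        </script>
        <div id="div-gpt-ad-sidebar">
          <script>
            googletag.cmd.push(function () { googletag.display('div-gpt-ad-sidebar'); });
          </script>
        </div>

    A third-party tag like the adtechus one above would normally be pasted into a creative inside DFP rather than replacing this page-side tag, which may be why the validator complains about its format.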

    Read the article

  • How to test robots.txt against Googlebot to find out what is being indexed

    - by Amar Jarubula
    This question is a continuation of this answer: How to check if googlebot will index a given url? As suggested there, I went to Webmaster Tools and tested the contents of my robots.txt file. However, that only tells me whether the file's contents are valid. For my scenario, I need to test whether URLs matching a disallowed pattern end up indexed or not. For example, I have something like this in my robots.txt:

        disallow: /pattern*

    My understanding is that URLs starting with /pattern should not be crawled, but how do I test that this pattern is enforced when the site is indexed?
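
    For a quick offline check, here is a minimal sketch - an approximation of how such rules are matched, not Google's actual implementation - treating a Disallow value as a prefix match against the URL path, with '*' matching any run of characters:

        // Approximate robots.txt matching of a path against a Disallow rule.
        function isBlocked(path, rule) {
          // Escape regex metacharacters, then turn the robots.txt '*' into '.*'.
          var pattern = rule.replace(/[.+?^${}()|[\]\\]/g, '\\$&').replace(/\*/g, '.*');
          // Rules are anchored at the start of the path (prefix match).
          return new RegExp('^' + pattern).test(path);
        }

        console.log(isBlocked('/pattern-page.html', '/pattern*')); // true
        console.log(isBlocked('/other-page.html', '/pattern*'));   // false

    As the prefix match suggests, the trailing * is redundant: disallow: /pattern already blocks everything beginning with /pattern. Webmaster Tools' robots.txt tester also lets you enter individual URLs to see whether a given rule blocks them.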

    Read the article

  • Sudden drop in Total Indexed pages and increase in 'Not Selected' number

    - by Pravin
    My blog is around 1 year old and has PR2. Average daily pageviews up to a week ago were 1,800. The total number of posts is 180. Though I have only 180 posts, the total number of indexed URLs kept increasing and was as high as 510. But in September 2012, the total number of indexed pages dropped from 510 to 214. The drop was sudden, and the count is now increasing only very slowly. The other main concern is a huge increase in the 'Not Selected' number, currently 814. I have never published the same post twice and never copied ideas from any other blog, but I do link internally to older posts that are related to the new ones. The questions are:

    1. Why is there a sudden drop in the 'Total Indexed' pages?
    2. Why did total indexed pages rise to 510 when there are only 180 posts?
    3. The drop in 'Total Indexed' was in September 2012, yet I was getting the same organic traffic, steadily increasing until last week, when there was a 50% drop in total pageviews. Why?
    4. Now the traffic is returning to normal, but there is still a problem. Is the increase in the 'Not Selected' number a cause of the drop in 'Total Indexed'?
    5. How do I prevent or reduce the 'Not Selected' number even though I do not have any duplicate posts within the blog? Is the internal linking to older posts creating the 'Not Selected' problem?
    6. Should I edit my robots.txt to avoid crawling of labels that may be creating duplicate posts or something like that, and if so, what is the correct robots.txt?

    I have uploaded a screenshot of the graph from Webmaster Tools. Please take a look and give suggestions. Thank you in advance.
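
    On the last question: assuming a Blogger-style blog where label and search pages live under /search (that path is an assumption - check what your own label URLs look like first), a minimal robots.txt sketch that keeps posts crawlable while blocking the label listings would be:

        User-agent: *
        Disallow: /search
        Allow: /
        Sitemap: http://yourblog.example.com/sitemap.xml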

    Read the article

  • Picasa 3.9 login fails with 2-factor authentication

    - by Paul Pomes
    I've installed Picasa 3.9 via the instructions at webupd8, but the login window keeps failing with the message, "You must be connected to the Internet to use this feature." If I click "Try again", I successfully pass the first login screen (username and password). Next I'm prompted for the verification code, which then takes me back to the "You must be connected to the Internet to use this feature" screen again.

    Read the article

  • How do I get Picasa 3.9 to play .MOV files?

    - by user44405
    I have been using Picasa for years, first in XP, then in Ubuntu from 8.04 onwards, and now in Lubuntu 11.10. For 11.10, I downloaded Picasa 3.9.0 from FileHippo and installed it in Wine. Everything works well, except it doesn't play videos from my camera - it just shows a thumbnail - whereas earlier versions played videos OK. The videos appear as *.MOV (QuickTime video) in the file manager, and can be played by, for instance, GNOME MPlayer or Movie Player (but not properly by Banshee...). Is there some simple way of getting Picasa 3.9 to play them?

    Read the article

  • How to secure a site with robots.txt?

    - by CompilingCyborg
    I would like user agents to index my root-level pages only, without accessing any directory on my server. As an initial thought, I had this version in mind:

        User-agent: *
        Disallow: */*
        Sitemap: http://www.mydomain.com/sitemap.xml

    My questions: Is it correct to block all directories like that - Disallow: */*? Would search engines still be able to see and index my sitemap if I disallowed all directories? What are the best practices for securing the robots.txt file? For reference, here is a good tutorial for robots.txt:

        # Add this if you want to stop Alexa from indexing your site.
        User-agent: ia_archiver
        Disallow: /

        # Add this to stop duggmirror.
        User-agent: duggmirror
        Disallow: /

        # Add this to allow specific agents.
        User-agent: Googlebot
        Disallow:

        # Add this to allow all agents while blocking specific directories.
        User-agent: *
        Disallow: /cgi-bin/
        Disallow: /*?*
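
    If the goal is to keep only root-level pages crawlable, a hedged alternative (wildcard handling varies by crawler, so test it in Webmaster Tools before relying on it) is to block any path containing a subdirectory segment; note that Disallow values conventionally begin with a slash:

        User-agent: *
        # Block URLs with a directory segment, e.g. /dir/page.html,
        # while leaving root-level pages such as /page.html crawlable.
        Disallow: /*/
        Sitemap: http://www.mydomain.com/sitemap.xml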

    Read the article

  • When Googlebot sees a link, will it click it or navigate to it?

    - by FakeRainBrigand
    My site uses pushState and JSON data to display content. So, for example, this might appear on my page:

        <a href="/some/page">some page</a>

    The JavaScript then prevents the default action (following the link) and instead renders a view (using a different API, such as /getjson?some_page):

        $('[href]').click(function(){ history.pushState(...); handleURL(...); });

    Assume my server will respond to requests at /some/page with a pre-rendered version. My questions are:

    1. Will Googlebot receive the pre-rendered version, or allow JavaScript to instead invoke pushState, etc.?
    2. If it doesn't make the direct request, will it wait for AJAX content to be loaded?
    3. Does Googlebot implement pushState, so it will show the proper URL in search results?
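
    For concreteness, a minimal sketch of the click handler described above (assuming jQuery and the handleURL() renderer from the question; the elided arguments are filled in with plausible values):

        // Intercept internal link clicks, update the address bar, and render from JSON.
        $(document).on('click', 'a[href]', function (e) {
          e.preventDefault();              // stop the normal full-page navigation
          var url = $(this).attr('href');
          history.pushState({}, '', url);  // the URL bar now shows /some/page
          handleURL(url);                  // e.g. fetch /getjson?some_page and render
        });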

    Read the article

  • How to SEO-optimize a JavaScript image loader?

    - by skibulk
    I am building an image-centric catalog website. It catalogs collectible gaming cards numbering 100,000+ pages. Competitor sites receive millions of hits each month, so with the possibility of heavy traffic, I need to moderate image bandwidth while also optimizing for image SEO, and I'm looking for some tips on doing so. Each page on the site features one card with appropriate tags and descriptions. There are, however, four images for each card - one on matte cardstock, one on foil cardstock, one digital, and one digital foil. In a world with unlimited bandwidth and no-wait page loads, I'd simply embed all four images on the main product page with titles, alt tags, and captions to rank them according to their version keyword. In reality, a JavaScript gallery image loader seems appropriate. Here is a simplified example of my current code. Would this affect SEO in any way? Should I be doing anything differently? Note that I don't want to create a page for each image, as I'd have to duplicate the card tags and descriptions on each one, diluting PR for the main page. Thanks for any insight!

        <script type="text/javascript">
        document.write(
          '<img src="thumbnail1.jpg" data-src="version1.jpg">' +
          '<img src="thumbnail2.jpg" data-src="version2.jpg">' +
          '<img src="thumbnail3.jpg" data-src="version3.jpg">' +
          '<img src="thumbnail4.jpg" data-src="version4.jpg">'
        );
        </script>
        <noscript>
          <img src="version1.jpg">
          <img src="version2.jpg">
          <img src="version3.jpg">
          <img src="version4.jpg">
        </noscript>
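
    As one data point on the bandwidth side, a minimal sketch of the loader half (plain JavaScript, no library assumed): swap data-src into src on demand, so each large version is downloaded only when it is actually viewed:

        // Load the full-size version only when its thumbnail is clicked.
        var thumbs = document.querySelectorAll('img[data-src]');
        for (var i = 0; i < thumbs.length; i++) {
          thumbs[i].addEventListener('click', function () {
            this.src = this.getAttribute('data-src');
          });
        }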

    Read the article

  • Clicks counting and crawler bots

    - by Dennis
    I am currently running a small affiliate program for Facebook users. We use an auto-poster to publish links to fan pages. Every hit is stored in our database, and we have a 24-hour reload block per IP address. My problem right now is that the PHP script also stores every hit from the bots that crawl my website. I was thinking of blocking those bots with my site's robots.txt, but I am afraid that this will have a negative effect on my AdSense ads. Does anybody have an idea how to work this out?
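
    One hedged option: the AdSense crawler identifies itself as Mediapartners-Google, so a robots.txt along these lines blocks other well-behaved bots from the tracked link paths while leaving AdSense crawling unaffected (/go/ is a placeholder for wherever the tracked affiliate links live):

        # Let the AdSense crawler see everything it needs.
        User-agent: Mediapartners-Google
        Disallow:

        # Keep other bots away from the tracked link paths.
        User-agent: *
        Disallow: /go/

    Note this only deters bots that honor robots.txt; hits from the rest would still need a server-side user-agent check before being counted.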

    Read the article

  • AdWords Keyword Planner CPC completely different from real CPC?

    - by steve
    I'm new to AdWords and trying to figure out the best keywords to use. I went to the AdWords Keyword Planner and typed in an example keyword. It gives me an average CPC of $0.94. But when I go to set up a real campaign and type in the same keyword, I get an error saying 'below first page bid estimate', which is $8.75. What gives? Is there a better way to get more accurate feedback on how much this will cost?

    Read the article

  • My approach to SEO implementation needs improvement [closed]

    - by Eritrea
    I have always copied and pasted the code below as a template for meta tags on projects, but I think the tags are not as effective as they could possibly be. So, I need to know if there is any way I can improve them. Suppose I have a site called coop.com for a company called Coop, and we do import and export as a business in France.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
        <meta name="robots" content="index, follow" />
        <meta name="description" content="coop is a major import and export company" />
        <meta name="keywords" content="coop, coop.com, coop company, import export, import export france" />
        <meta name='REVISIT-AFTER' content='30 DAYS'>
        <title>coop is an import and export company located in France</title>
        </head>

    The reason I am asking is that I want to know whether there are better ways of constructing these SEO tags.

    Read the article

  • Chrome in the launcher does not open with keyboard hotkey (Super + #)

    - by Jin
    I'm running Ubuntu 14.04 LTS and have Chrome as the first application locked in my launcher. When I click it or press Super+1, the Chrome icon just flashes but never opens the app. All other apps open fine. I have to manually find it in Unity and launch it from there, or from a terminal. I've set this up to launch properly on another machine, but I don't know why it's not working on my laptop. Why does this happen?

    Read the article

  • Directing crawlers to per-language content on each language sub-domain

    - by Noam
    I have a multilingual website with many pages (40M). The site has UGC, and each translation actually applies only to the titles: each sub-domain points to the same content with different titles per language. As far as I understand, each sub-domain should be indexed by search engines, meaning they will actually need to crawl 40M pages times the number of supported languages. So I thought it might be best to direct each sub-domain's crawler to pages that are fully in that language (titles + UGC). Is there a way to do this? Should search engines understand this on their own?
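
    One hedged pointer, not from the question itself: the standard mechanism for telling crawlers about per-language variants of the same content is rel="alternate" hreflang annotations on each page (en.example.com and fr.example.com are placeholders for the real sub-domains):

        <link rel="alternate" hreflang="en" href="http://en.example.com/some-page" />
        <link rel="alternate" hreflang="fr" href="http://fr.example.com/some-page" />

    Each language version lists all of its siblings, so crawlers can associate the variants instead of treating them as independent pages.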

    Read the article

  • Sub Domain tracking with Analytics filters

    - by Nick
    Hi all, we currently have Analytics tracking code running throughout our site, including our sub-domains. What I would like to do is create different profiles under the same account, segmenting the sub-domains by means of filters. Currently I am just excluding the hostname of the main website using the following custom filter:

        Exclude: Hostname
        Filter pattern: ^www.mydomain.co.za(.*)

    I know this isn't the proper method, though, and some of the main domain's traffic is still coming through in the data. Ideally I would just like to include anything from sub.domain.co.za. Any help would be greatly appreciated. Thanks
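
    For comparison, a hedged sketch of the include version (the pattern is an assumption based on the sub-domain named above; the dots are escaped so they match literally):

        Filter type:    Custom filter > Include
        Filter field:   Hostname
        Filter pattern: ^sub\.domain\.co\.za$

    Note that an include filter on a profile discards all non-matching hits, which gives exactly the per-sub-domain segmentation described above.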

    Read the article

  • Why is a brand-new website ranking for a top keyword? [on hold]

    - by Prasad EBK
    We've noticed that one of our (new) competitors' websites is ranking 5th for a top keyword with high competition. The website is barely 2 months old. When I checked, not much SEO had been done on the website other than basic title/description tags. No backlinks. The website pushed down our website and took its place for the keyword. The only reason that came to my mind is the latest Penguin update. Or is the ranking just temporary? Will it eventually be pushed back? It's been holding on for at least one month, and it's irritating. Thanks in advance.

    Read the article

  • Can masses of different log-in pages result in SEO duplicate-content and/or low-quality penalties?

    - by Noam
    I have internal pages that rely on an external API, which I would like to build upon user request. Two options I thought about:

    1. Make lots of 'thin' pages that specify that if you want content about X, you need to log in, and then the page will be built. Pros: the user understands what he'll get when logging in. Cons: the SEO implications of such a solution, due to the mass of 'low quality' and 'cross-site duplicate content' pages.
    2. Make them all redirect to one generic log-in page. Pros: no duplicate low-quality content. Cons: lots of internal links to the same log-in page.

    Which would you recommend?

    Read the article

  • Send an E-mail Quickly with the GmailThis! Bookmarklet

    - by Asian Angel
    Sometimes you need to send out a quick e-mail for a project you are working on, something really important that you just remembered, or perhaps just a note to yourself. If you use Gmail and like keeping things simple, join us as we look at the GmailThis! bookmarklet.

    GmailThis! in Action: To get set up, all you need to do is visit the webpage (link provided below) and drag the bookmarklet to your "Bookmarks Toolbar". For our example we decided to go with the "personal note" approach: we selected/highlighted a portion of the text and then clicked on our new bookmarklet. The bookmarklet automatically copies and pastes the name of the webpage, the URL, and any text you selected/highlighted into the new e-mail. A nice feature is that it opens in a new temporary window to help you focus on composing your letter. When you have finished your letter and clicked "Send", the window automatically closes itself after a few seconds, so you do not even have to deal with it afterwards. Looking at our "Inbox", there is our new e-mail, looking oh so nice.

    Conclusion: If you need to send out a quick e-mail using your Gmail account, this bookmarklet makes it as quick and simple as possible. This is definitely one to add to your bookmarklet collection.

    Links: Get the GmailThis! Bookmarklet for your browser

    Read the article

  • Repeat use of Schema / Rich Snippets markup, i.e. LocalBusiness data

    - by bybe
    I am unable to find official wording, and I'm hoping some Rich Snippets/Schema guru can give me insight into the proper usage of repeated content when it comes to markup. I'm building a site that wants to use Schema as the markup type, and the owner would like as much usage as possible. The business name, telephone, and address will appear on every page; now, is it valid, or even useful, to use Rich Snippets on every page where this information is displayed? This information appears in the header and footer of every page of the site, and to give you an example of my current markup, see below:

        <body itemscope itemtype="http://schema.org/LocalBusiness">
        <header>
          <a itemprop="url" href="http://www.domain.co.uk/">
            <img itemprop="logo" src="image.png" alt="Company Name Logo" />
          </a>
          <span itemprop="telephone">01202 000 000</span>
        </header>
        <div> This is where the content will go</div>
        <footer>
          <span itemprop="name">Company Name</span>
          <span itemprop="description"> A small little bit about this company</span>
          <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
            <span itemprop="streetAddress">Address Goes here</span>
            <span itemprop="addressLocality">Area Here</span>,
            <span itemprop="addressRegion">Region Here</span>
          </div>
        </footer>
        </body>
        <!-- LocalBusiness schema now closed -->

    So, as you can see above, this information will be displayed on every single page. Is it valid or bad to repeat this information in Schema format?

    Read the article

  • Is an AdWords ad blocked from the top spots of SERPs until it is reviewed?

    - by Omeoe
    I have an AdWords ad and a keyword with a Quality Score of 10. Judging from the CPC of actual clicks, my max CPC is set way beyond that of the third advertiser in the SERP for this keyword (there are three ads at the top). Still, the ad is shown in the 4th spot, which is located either on the right or at the bottom of the SERP. The only catch is that the ad's status is "under review". Is that why it's blocked from the top spots?

    Read the article

  • Very few visitors on Analytics: incorrect setting?

    - by Akaban
    Analytics has been driving me crazy for quite a long time: I have a 2-year-old website, started with Aruba (an Italian provider) and then transferred to HostGator. It's a WordPress blog + MyBB forum, and on both platforms I have the Analytics code in the footer. The problem is that the stats in Analytics are simply ridiculous compared to the numbers reported by Aruba (before) and HostGator (after). I think the Aruba/HostGator numbers are correct, simply because the number of daily users connected to the forum alone is higher than the Analytics numbers. I know it's a really confusing request, but maybe you can help me understand what the problem is.

    Read the article

  • Disable incognito in Chrome or Chromium

    - by TheIronKnuckle
    I'm addicted to certain websites to the point where it's regularly interfering with my life, and I'm sick of it. I want to install website blockers that aren't easy to circumvent. In Chrome, incognito mode is easily accessible with Ctrl+Shift+N. That is ridiculous. Whenever I feel an urge to go to an addictive website, it doesn't matter what blockers and regulators I've got installed; three keys can get around them in a second. Simply uninstalling Chrome isn't an option either, as it's way too easy to sudo apt-get install it right back. So yes, I want to disable incognito mode completely (and if possible make it totally impossible to get back). I note that someone has figured out how to do it on Windows with a registry entry: http://wmwood.net/software/incognito-gone-get-rid-of-private-browsing/ If it can be done on Windows, it can be done on Ubuntu!

    Read the article
