Search Results

Search found 14789 results on 592 pages for 'pro backup'.

Page 212/592

  • What is more preferable: creating dedicated domains for mobile apps that share different content, or associating them with folders in one domain?

    - by Abdullah Al-Khalidi
    I want to consult you on an SEO matter I am completely lost with. I've built a social mobile application that lets users share text content, and I've made all the content that appears in the application available on the web through dedicated links. However, those links cannot be navigated to from the website; they are only generated when users share content from the app to social media networks. I've implemented this method in three applications with totally different content, and I've directed all generated URLs to come from the main company website, http://frootapps.com, so when a user shares something, the URL looks like http://frootapps.com/qareeb/share.aspx?data=127311. My question: which is preferable, a dedicated website for each app that uses this method, or is it OK to keep doing it the way I am doing it?

    Read the article

  • Why do people crawl sites without downloading pictures?

    - by Michael
    Let me show you what I mean:

      IP               Pages   Hits   Bandwidth
      85.xx.xx.xxx     236     236    735.00 KB
      195.xx.xxx.xx    164     164    533.74 KB
      95.xxx.xxx.xxx   90      90     293.47 KB

    It's very clear that these people are crawling my site with bots. There's no way that you could visit my site and use <1MB of bandwidth. You might say there's the possibility that they are browsing the site using some browser or plug-in that does not download images, js/css files, etc., but the simple fact of the matter is that there are not 90-236 pages linked from the home page (outside of WP files), even if you visited every page twice. I could understand if these people were crawling the site for pictures, but once again, the bandwidth indicates that this isn't what is happening. Why, then, would they crawl the site simply to view the HTML/txt/js/etc. files? The only thing I can come up with is that they are scanning for outdated versions of WordPress, SQL injection vulnerabilities, etc., which makes me inclined to outright ban the IPs, but I'm curious: is it possible that this person is a legitimate user, or at the very least, not intending to be harmful?
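
    If banning turns out to be the right call, one low-effort option on an Apache host is to deny the addresses in .htaccess. A minimal sketch, assuming Apache 2.2-style access directives (the addresses below are placeholders, not the real ones from the logs above):

      # .htaccess: deny specific crawler IPs (placeholder addresses)
      Order allow,deny
      Allow from all
      Deny from 85.0.0.1
      Deny from 195.0.0.1
      Deny from 95.0.0.1

    On Apache 2.4 the equivalent would use Require directives instead.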

    Read the article

  • Transfer page from internal to external

    - by Theo Gulland
    Afternoon all! Currently I have a website with a list of audio products (essentially a search engine for audio deals): http://www.soundplaza.co.uk. Once you go to a details page, e.g. http://www.soundplaza.co.uk/all-deals/113/bookshelf-speakers/acoustic-energy-1, you can press the 'view deal' button to go to the provider's site. This jump between two sites is a bit harsh, and I would like to show a transition page to simply ease visitors into the other site and not scare them off. Within this transition page I will have a simple loading gif and some graphics showing that you are being transferred. QUESTION: What is the best way to send the details (link, product name, etc.) to this transfer page, then wait 5 seconds, then move on to the desired link? This can in NO WAY damage my SEO; if anything, rel="nofollow" would be great if possible. Currently I have seen that you can submit a form to the transition page, then use PHP sleep() and then PHP header() to transfer, however I am not sure whether header() will transfer SEO value to the provider. Any opinions would be great! Thanks
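
    One common pattern for this kind of interstitial is a small PHP page that receives the outbound URL, marks itself noindex, and forwards the visitor with a client-side meta refresh after a delay; links pointing at it can then carry rel="nofollow". A rough sketch under those assumptions (the file name, the ?to= parameter, the allowed provider prefixes, and the loading image path are all made up for illustration):

      <?php
      // transfer.php -- hypothetical interstitial; ?to= carries the outbound deal URL
      $to = isset($_GET['to']) ? $_GET['to'] : '/';

      // Only forward to destinations we know about, to avoid an open redirect.
      $allowed = array('http://provider-a.example.com/', 'http://provider-b.example.com/');
      $ok = false;
      foreach ($allowed as $prefix) {
          if (strpos($to, $prefix) === 0) { $ok = true; break; }
      }
      if (!$ok) { $to = '/'; }

      header('X-Robots-Tag: noindex, nofollow'); // keep the interstitial itself out of the index
      ?>
      <!DOCTYPE html>
      <html>
      <head>
        <meta name="robots" content="noindex, nofollow">
        <!-- client-side redirect after 5 seconds -->
        <meta http-equiv="refresh" content="5;url=<?php echo htmlspecialchars($to); ?>">
        <title>Taking you to the deal...</title>
      </head>
      <body>
        <p>Transferring you to the provider in a few seconds...</p>
        <img src="/images/loading.gif" alt="Loading">
      </body>
      </html>

    The deal buttons would then link to something like /transfer.php?to=... with rel="nofollow" on that link, so no sleep() call is needed on the server side.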

    Read the article

  • Is my Document Type Definition Still Up to Date?

    - by Sam
    Hi folks, currently all my PHP-generated rich HTML webpages start with this line, which I added to my templates years ago... <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> Am I still up to date and 2011-proof, or should I be changing this into something faster-loading / more agile / more flexible? I have heard of XHTML 1.1. When I remove this line everything still works fine. Is it still really needed? What are my alternatives? Thanks very much for your suggestions.
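
    For comparison, the doctype introduced with HTML5 is much shorter and is the usual suggestion for new templates; a minimal sketch of a page head using it (the title and language are placeholders):

      <!DOCTYPE html>
      <html lang="en">
      <head>
        <meta charset="utf-8">
        <title>Example page</title>
      </head>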

    Read the article

  • Increase traffic to a site through a site on subdomain [closed]

    - by user1716672
    Possible Duplicate: Subdomain versus subdirectory. We have two sites: one is mainly a portfolio site (built with the Yii framework) and the other is a digital shop (built with OpenCart) where we sell plugins and themes. The URLs look like www.mydomain.com and www.store.mydomain.com. Both of these sites are on the same server. We use Google Analytics and have no problem getting traffic to our store, but we get very little to our portfolio site, and we want to increase our Google ranking for it. Assuming increased traffic to our site will increase our Google ranking, we were thinking of using URL masking so the link will be www.mydomain.com/shop and this will load www.store.mydomain.com. Will this count as hits for our portfolio site? Because the .htaccess rules will ensure the subdomain is served, I don't know whether these hits will count toward our store or our portfolio site...
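
    For reference, serving the store under a path on the main domain is usually done with a rewrite rule that proxies the request rather than redirecting it. A minimal sketch, assuming Apache with mod_rewrite and mod_proxy enabled (domain names are illustrative):

      # .htaccess on www.mydomain.com: serve /shop/... from the store subdomain
      RewriteEngine On
      RewriteRule ^shop/(.*)$ http://www.store.mydomain.com/$1 [P,L]

    With a proxy rule like this the visitor's browser stays on www.mydomain.com, so the hit is attributed to whichever Analytics property is embedded in the pages the store actually serves, not automatically to the portfolio site.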

    Read the article

  • Easiest solution to setup payments for a conference registration page?

    - by Keith G
    I've got a fair amount of website development experience, but I've been asked to set up a conference registration page in short order. However, I have absolutely zero experience with shopping carts, payment processing, etc. What is the quickest and easiest way to get this thing up and running? Here are my criteria:

    - The site is currently hosted on GoDaddy.com, and someone has suggested using their QuickCart.
    - We cannot use any option that visits the paypal.com domain, because it has been blocked by a large segment of the potential audience (on a military base).
    - We need a $0 option for speakers.
    - Cancellations can be accepted, so something that could handle them would be a bonus.
    - There is no "product" other than a confirmation that they have registered for the conference.

    Read the article

  • Quick question: what exactly does robots.txt Disallow: /*/ do?

    - by Exit
    An SEO firm suggested changing the robots.txt to:

      User-agent: *
      Disallow: /*/
      Allow: /ims/

    I'm not sure what that would do, but my guess is that it would tell all robots to index nothing but the ims folder. I understand the wildcard, but I'm confused by the slashes and don't know how they would play out in conjunction with the wildcard.

    * Update * I didn't mention that there is a sitemap listed in the robots.txt file, but according to one tech blogger, sitemaps trump robots exclusions. So, even though this says in Google Webmaster Tools that everything with a trailing slash will not be indexed, the sitemap contains the important links. I did notice that the link count on Google went from 360 to 336, and the sitemap links under the URL scaled back to 3 from 6. I'm not sure of the cause or which links were removed, though. Perhaps it cleaned out garbage. I'm still clueless why they would add 'Allow: /ims/'; that seems pointless. And a quick list of what would be indexed according to the robots rules above (without the sitemap) using /*/:

      domain.com                   Indexed
      domain.com/page.html         Indexed
      domain.com/folder/           Not Indexed
      domain.com/folder/page.html  Not Indexed
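
    As a rough annotated reading of those three lines (using the wildcard and longest-match extensions that Google honours; not every crawler supports them):

      User-agent: *    # applies to all crawlers
      Disallow: /*/    # blocks any path containing a second slash, i.e. anything inside a subdirectory
      Allow: /ims/     # exception: URLs under /ims/ stay crawlable, since the more specific rule wins

    Strictly speaking these directives control crawling rather than indexing, which is consistent with the list above: the root and top-level pages such as /page.html are unaffected, while /folder/ and deeper paths are blocked.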

    Read the article

  • Forum engine with full LDAP integration [closed]

    - by Andrian Nord
    We are looking for a forum engine that can actually maintain its user data in LDAP, possibly via mods. The core point is the ability to maintain the data, i.e. all user profile settings, like nickname, password, email, avatar, birthday and others (preferably configurable). One example of good LDAP integration, of the level I'm expecting, is Drupal's LDAP integration, which allows mapping any user attribute into LDAP and keeps it in sync with the database. A year ago I did a small survey of existing free/FOSS engines and found a few forum engines with LDAP integration, namely SFM, phpBB and something else. The most maintained solution was provided by phpBB3, which supports LDAP integration out of the box, but it is unable to sync data with changes made on the LDAP server by other software. Actually, it wasn't even propagating changes back, never mind the ability to map additional attributes (other than name/password/email). Also, I haven't found any forum whose architecture has a proper abstraction over user settings, so I doubt that these engines (including phpBB) can be modded to add such functionality without dramatic changes to the core codebase. More recent research showed that even some commercial software, like IPB, is unable to keep its database synced with an LDAP directory and map additional attributes. In other words, all the support I've seen so far is simple user creation upon a user's first login, which is not good for us, as the forum is not the primary site and should not maintain its own user base (to reduce the risk of possible collisions). LDAP import is required because many other services (FTP, email, Jabber, the Drupal site) use the same user base. Currently we have a forum embedded into the Drupal site, but we are unsatisfied with its features. BTW, we are using Linux, and this is not a duplicate of this question, as its author seems to be satisfied with the behaviour described above. So, my question is: are there any (preferably FOSS and free) forum engines that can import, export, keep in sync, or otherwise integrate with an LDAP user database (preferably with the ability to map additional fields to LDAP attributes)?

    Read the article

  • Why does Google mark one e-mail as spam but not the other?

    - by nKn
    I've a Postfix installation which works fine; I don't get any trouble with mails sent through a mail client (in my case, Thunderbird or RoundCube) when the To: address is a GMail account. However, I recently needed to use the PHPMailer tool to send some e-mails to some GMail accounts, so I configured an account to be used via SASL authentication + TLS. I don't mean mass mailing, just 2-3 mails. If I send the e-mail from the Thunderbird or RoundCube clients, the mail is not marked as spam. However, if I use PHPMailer, it always gets catalogued as spam. So I compared both headers and I just can't find the reason why the second is marked as spam while the first one is just ok. The first header, sent from a mail client, which is not marked as spam:

      Delivered-To: [email protected]
      Received: by 10.76.153.102 with SMTP id vf6csp230573oab; Tue, 19 Aug 2014 11:08:19 -0700 (PDT)
      X-Received: by 10.60.23.39 with SMTP id j7mr45544050oef.20.1408471699715; Tue, 19 Aug 2014 11:08:19 -0700 (PDT)
      Return-Path: <[email protected]>
      Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS id t5si27115082oej.10.2014.08.19.11.08.18 for <[email protected]> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:08:19 -0700 (PDT)
      Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X;
      Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected]
      Received: by mail.mydomain.com (Postfix, from userid 111) id D8F69120293D; Tue, 19 Aug 2014 19:08:17 +0100 (BST)
      DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697; bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From; b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4=
      X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net
      X-Spam-Level:
      X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0
      Received: from [192.168.1.111] (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES128-SHA (128/128 bits)) (No client certificate requested) (Authenticated sender: [email protected]) by mail.mydomain.com (Postfix) with ESMTPSA id 910341202624 for <[email protected]>; Tue, 19 Aug 2014 19:08:17 +0100 (BST)
      DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471697; bh=wKMX9gkQ7tCLv8ezrG5t4bICm/SSLQsNfTdZMToksWw=; h=Date:From:To:Subject:From; b=qRNcYVdmk+n3D1uuv0FInTx7/LzH2ojck9DgCmabFPvfke233lkojUOjezCUGx7iV DL8EayZ28mzzzHpB7ETeMzop/5OS3BmvFtGKVD9gzc78cDIFXTDoRFAnkRWDR2IOxI SOn5tiyODTFpkbDgJOndzQ6qL5K0S9ASNGCZrNL4=
      Message-ID: <[email protected]>
      Date: Tue, 19 Aug 2014 19:08:24 +0100
      From: My Name <[email protected]>
      User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.6.0
      MIME-Version: 1.0
      To: My other account <[email protected]>
      Subject: .
      Content-Type: text/plain; charset=ISO-8859-1; format=flowed
      Content-Transfer-Encoding: 7bit

      .

    The second header, sent from PHPMailer, which is always marked as spam:

      Delivered-To: [email protected]
      Received: by 10.76.153.102 with SMTP id vf6csp230832oab; Tue, 19 Aug 2014 11:12:10 -0700 (PDT)
      X-Received: by 10.60.121.67 with SMTP id li3mr44086252oeb.17.1408471930520; Tue, 19 Aug 2014 11:12:10 -0700 (PDT)
      Return-Path: <[email protected]>
      Received: from mail.mydomain.com (X.ip-92-222-X.eu. [92.222.X.X]) by mx.google.com with ESMTPS id w8si27103806obn.30.2014.08.19.11.12.10 for <[email protected]> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Tue, 19 Aug 2014 11:12:10 -0700 (PDT)
      Received-SPF: pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) client-ip=92.222.X.X;
      Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 92.222.X.X as permitted sender) [email protected]; dkim=pass (test mode) [email protected]
      Received: by mail.mydomain.com (Postfix, from userid 111) id 1999D120293D; Tue, 19 Aug 2014 19:12:09 +0100 (BST)
      DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471929; bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From; b=K7tcPyArzSTY91VEw6mAAFtDurSGwgTLGkfUZdC5mqsg0g/1LzmZkgwdjj4NdJa6M E2kDz3dwYN8FcZmbampJYFXxj4NQVtSnzjiWV40rpfOFqD2rXDGNIyB2QOjBZZ4WK3 7s4lyoJ/BrdQH4en8ctLVsDHed/KpHD4iGFEl67E=
      X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on vpsX.ovh.net
      X-Spam-Level:
      X-Spam-Status: No, score=-1.0 required=3.0 tests=ALL_TRUSTED,T_DKIM_INVALID autolearn=ham autolearn_force=no version=3.4.0
      Received: from rpi.mydomain.com (unknown [77.231.X.X]) (using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client certificate requested) (Authenticated sender: [email protected]) by mail.mydomain.com (Postfix) with ESMTPSA id B42AF1202624 for <[email protected]>; Tue, 19 Aug 2014 19:12:08 +0100 (BST)
      DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mydomain.com; s=mail; t=1408471928; bh=N1JuHq1S+8GrjHcEK3xn8P1JS+ygEBv5LKe0BiXuVJo=; h=Date:To:From:Reply-to:Subject:From; b=iXPM0tS36swudPTT4FOHHtPi5Ll6LbR60kNqCinZ8utcWoFE31SFTpoMEq5aCM5ux wQMdFiN8c6vkjRGabmvqFTTIbwJsrToHo/4+Lt5HEBoQQE2Y3T+xGmnmGAHCS6stKB yb7SVmtrIAsVtSMKA8VYIbmu2oYqV3afYt7g0OMQ=
      Date: Tue, 19 Aug 2014 20:12:07 +0200
      To: [email protected]
      From: Trying another account <[email protected]>
      Reply-to: Trying another account <[email protected]>
      Subject: .
      Message-ID: <[email protected]>
      X-Priority: 3
      X-Mailer: PHPMailer 5.1 (phpmailer.sourceforge.net)
      MIME-Version: 1.0
      Content-Transfer-Encoding: 8bit
      Content-Type: text/plain; charset="UTF-8"

      .

    I also tried adding a User-Agent header to match the first one, and removing the X-Mailer header. Neither made a difference. Is there some significant difference that is making the second e-mail get marked as spam by Google?

    Read the article

  • Correct configuration of multiple Analytics trackers per page, spanning domains and subdomains

    - by Eliot Shepard
    My company publishes sites on a somewhat convoluted domain structure, and we're having trouble getting accurate numbers in Analytics when we have multiple trackers on the page. We publish under two brands (A, B). Each brand has a "national" site at A.com, B.com, as well as per-city "local" sites at e.g. ny.A.com, la.A.com, sf.A.com, etc. Right now we're trying to track in these dimensions:

    - Full network (A.com, ny.A.com, B.com, la.B.com, etc.)
    - All sites in a brand (A.com, ny.A.com, la.A.com, etc.)
    - Individual site (ny.A.com)

    Here are the commands we're using on an individual site:

      _gaq.push(
        ['t0._setAccount', 'UA-XXXXXX-1'], // full network
        ['t0._setDomainName', 'none'],
        ['t0._setAllowLinker', true],
        ['t0._trackPageview'],
        ['t1._trackPageLoadTime'],
        ['t1._setAccount', 'UA-XXXXXX-2'], // brand
        ['t1._setDomainName', 'none'],
        ['t1._setAllowLinker', true],
        ['t1._trackPageview'],
        ['t1._trackPageLoadTime'],
        ['t2._setAccount', 'UA-XXXXXX-3'], // individual
        ['t2._setDomainName', 'none'],
        ['t2._setAllowLinker', true],
        ['t2._trackPageview'],
        ['t2._trackPageLoadTime']
      );

    We send the same commands to each account because we've had strange results when trackers were configured differently in the past. However, right now we're seeing inflated numbers for uniques on all three trackers. What is the correct way to configure this setup? Thanks for your time.
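
    For what it's worth, the cross-subdomain setups documented for classic ga.js usually key each tracker's _setDomainName to the widest domain that tracker should cover, rather than 'none' everywhere. A hedged sketch of that idea (account IDs and domains are placeholders; this is one common configuration, not a guaranteed fix for the inflated uniques):

      _gaq.push(
        ['t0._setAccount', 'UA-XXXXXX-1'],   // full network: spans separate domains, so keep the linker
        ['t0._setDomainName', 'none'],
        ['t0._setAllowLinker', true],
        ['t0._trackPageview'],

        ['t1._setAccount', 'UA-XXXXXX-2'],   // brand: share the cookie across *.A.com subdomains
        ['t1._setDomainName', '.A.com'],
        ['t1._trackPageview'],

        ['t2._setAccount', 'UA-XXXXXX-3'],   // individual site: default host-level cookie
        ['t2._trackPageview']
      );

    Since _setDomainName('none') forces per-hostname cookies, a visitor moving between ny.A.com and la.A.com can look like two separate visitors to the brand and network properties, which is one plausible source of the inflated uniques.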

    Read the article

  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure HTML website which I have hosted with "hostmonster". Hostmonster had very good reviews on some comparison website, and in general, so far, they appear to be perfectly good in every way... at least I thought so until just now... I have been making lots of edits to my site on an almost daily basis. My site now appears on the first page (7th on the list) for my most important keyphrase when doing a Google search. But I did notice a problem with the snippet chosen by Google. I asked a question on this site about snippets and got some great answers. I then made some modifications to my metadata, and within 48 hours the Google snippet for my search was perfect. The odd thing, though, was that the "cached" version Google had appeared to be very old, like three weeks previous. This seemed very odd: how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember even making one, but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt and it contained the disturbing text below:

      sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz

    Intuitively this looks like some kind of junk or spam trick, and I had indeed been getting some spam from "attracta". So my questions are:
    1. Should I simply delete this robots.txt?
    2. Was the file there all along, placed there because of some commercial tie-in between attracta and hostmonster?
    3. Does the attracta robots file explain the lack of re-caching?

    Read the article

  • How often does Dreamhost change IP addresses?

    - by pjreddie
    So I just migrated our site to Dreamhost because they are free for non-profits. However, right after I switched the nameservers over to them, they changed the IP address of the site. First they propagated out IP address x.x.x.180, then they switched it to x.x.x.178 and had to propagate that out. The point being, it meant a lot of downtime, since a lot of big DNS servers (like Google's) thought the address was still x.x.x.180 for up to 5 hours after they switched it. This is compounded by the fact that most of our visitors to the site live here in Unalaska, and we have local DNS servers that take a LONG time to update (like a day or more) since we get all our internet over satellite. So every time Dreamhost changes our IP address, it can mean a day of downtime for us in our community. So my question is, how often do these changes take place? I asked Dreamhost support and they gave me a vague response: "I wish I could say, however those changes happen at random times. They're not that frequent, maybe even months between updates, but there's no way to know for sure." First, I find it hard to believe that they don't know their own system well enough to give me at least some estimate or average. Second, is it worth looking at other providers so that I can get a static IP address? We were hosting the site here originally and hadn't run into this problem, since we have a static IP here. We don't get a ton of traffic, usually around 500 hits a day or so, sometimes more if our stories are featured on statewide or national news broadcasts. So hours of downtime every time Dreamhost "randomly" decides to move our server location can be bad for our readership.

    Read the article

  • Affiliate software to attract incoming customers

    - by Steve
    I am close to starting a new website for a small business which imports products from USA to Australia. The wholesaler says he will allow my client to be the sole distributor for Australia & New Zealand. I'm not sure what CMS or shopping cart software to use yet, but it will need to include an affiliate system to allow advertisers to push customers our way. Do you have any suggestions for robust, flexible affiliate software? Thanks.

    Read the article

  • SEO best practices for a web feature that uses geolocation by IP Address

    - by Nick
    I'm working on a feature that tailors content based on a geo location lookup by IP address in order to provide information based on the general area where this visitor is from. I'm concerned that content will be interpreted as focused solely on the search engine spider's geo origin when it is indexed. Are there SEO best practices for geo location by ip address features? I appreciate any specific tips or words of wisdom.

    Read the article

  • How can I decrease relevancy of Creative Commons footer text? (In Google Webmaster Tools)

    - by anonymous coward
    I know that I may just have to link the image to make this happen, but I figured it was worth asking, just in case there's some other semantic markup or tips I could use... I have a site that uses the textual Creative Commons blurb in the footer. The markup is like so:

      <div class="footer">
        <!-- snip -->
        <!-- Creative Commons License -->
        <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/80x15.png" /></a><br />This work by <a xmlns:cc="http://creativecommons.org/ns#" href="http://www.xmemphisx.com/" property="cc:attributionName" rel="cc:attributionURL">xMEMPHISx.com</a> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License</a>.
        <!-- /Creative Commons License -->
      </div>

    Within Google Webmaster Tools, the list of relevant keywords is heavily saturated with the text from that blurb. For instance, 50% of my top-ten most relevant keywords (including the site name) are:

      [site name], license, [keyword], commons, creative, [keyword], alike, [keyword], attribution, [keyword]

    I have not done any extensive testing to find out whether or not this list even matters, and so far this doesn't impact performance in any way. The site is well designed for humans, and it is as findable as it needs to be at the moment. But, mostly out of curiosity: do you have any tips for decreasing the relevancy of the text from the Creative Commons footer blurb?

    Read the article

  • Magento error when Fedex shipping from US to Canada

    - by Tony Lush
    When shipping from the US to Canada, the Magento 1.3.x shipping module allows Fedex ground shipping, but should not. International Ground should be the minimum available going to Canada, since Fedex charges a paperwork fee on top of the transportation costs. Is there a way to fix the module, or do we have to resort to removing Canada from Fedex's permitted countries and setting up an entirely different shipping option for Canada?

    Read the article

  • 403 error on index file

    - by John L.
    When I try to access index.py in my server root through http://domain/, I get a 403 Forbidden error, but I can access it through http://domain/index.py. In my server logs it says "Options ExecCGI is off in this directory: /var/www/index.py". However, my httpd.conf entry for that directory is the same as the ones for other directories, and going to index.py directly works fine. My permissions are set to 755 for index.py. I also tried making a PHP file and naming it index.php, and it works from both domain/ and domain/index.php. Here is my httpd.conf entry:

      <Directory /var/www>
        Options Indexes Includes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        Allow from all
        AddHandler cgi-script .cgi
        AddHandler cgi-script .pl
        AddHandler cgi-script .py
        Options +ExecCGI
        DirectoryIndex index.html index.php index.py
      </Directory>

    Thanks

    Read the article

  • Why is Google not crawling my website?

    - by Aman Virk
    I am running a design and development blog, http://www.thetutlage.com/. Over the last couple of days my search traffic has dropped from 70% to 10%. I am against black hat SEO myself, and all I do is write my own unique content almost every day. Last week my search traffic was really good, but now it is dropping badly. I have checked my Webmaster Tools dashboard and there is no message there from Google. When I checked my server logs I found that the last time Google crawled my website was on 27 September 2012. Really, I have no idea what I am doing wrong. I follow all the Google guidelines like a bible. Please help me.

    Read the article

  • SSL and green address bar

    - by tinab
    I am new to SSL, so can someone explain why my address bar turns green on certain sites beginning with https://, while on other sites it doesn't, even though I know the site has SSL? Maybe these two nuances are not even related, but if I go to GoDaddy and order a new domain, I notice their address bar is green the entire time I'm using the https:// protocol; yet when I go to Victoria's Secret to place an order, even though it says https://, the address bar doesn't turn green.

    Read the article

  • Sharing banner on 3rd party websites, concerned about limited resources

    - by Omne
    I've made a banner for my website and I'm planning to ask my followers to share it on their websites to help improve my rank. My website is hosted on GAE, the banners are less than 5 KB each, and I must say that I don't want to pay for extra bandwidth. I've read the Google App Engine Quotas page, but honestly I don't understand any of it. Would you please tell me which table or data on that page should concern me? Also, do you think it's wise to host such banners, which are going to end up on 3rd party websites, on GAE? Or am I safer if I use a free online service like Google Picasa?

    Read the article

  • How to pay for servers without ads?

    - by Matthieu
    Hi everyone! I have a website whose goal is to provide a free educational service (like Wikipedia does, not on the same scale obviously). I wonder how I will continue to pay for the servers when I reach high traffic. I don't want to use any ads or sponsorships, because that is not the goal of the website (just like Wikipedia, once again). Does "please donate" work? If not, is there any alternative to a private/premium, non-free part of the website? Thanks

    Read the article

  • Should I use the same Google Analytics code for my website and native mobile app?

    - by Drew
    My website is a forum. The native mobile app's only function is to access the forum and provide a better experience than can be provided through the browser. Given that the same content is being accessed, should I use the same Google Analytics tracking code? Or should I create a new property and select "Not a website" when it asks for the website? I was not able to find anything that discussed this and what would be best practice, given that the same content is being accessed. Thanks!

    Read the article

  • Why is Google Webmaster Tools crawling invalid URLs and showing 500 errors?

    - by Amos Kane
    Google Webmaster Tools is reporting 12k+ 500 errors. Eeek! None of the URLs are valid; they all contain www.youtube.com. First, why is Google crawling these URLs if they don't exist? I supplied a sitemap, and they are of course not in the sitemap. I don't have a robots.txt blocking anything. I've checked for invalid redirects: none. I've checked for unclosed tags or anything that would throw www.youtube.com into the URL by accident: none. In every 'linked from', the referring URL is also a bad URL, with www.youtube.com in it. The Google tools report no malware, and I can't check the server logs because the host won't give me access. Really stuck!! Any ideas appreciated!

    Read the article

  • Google Analytics Funnel Step Regular Expression Not Working

    - by scoarescoare
    The first step in a funnel is going to have a dynamic ending fragment. Examples:

      http://mysite.com/invite/tickle-party
      http://mysite.com/invite/pajama-party
      http://mysite.com/invite/puppy-party

    To allow for such dynamism, I provided this URL for step one: \invite(.*) My goals work, but the funnel visualization report shows 0 for everything. I know this problem is due to the regex in the funnel step, because I copied this entire goal except that I replaced \invite(.*) with /invite/puppy-party. When I hardcoded /invite/puppy-party, the funnel worked as expected. Why is my funnel report not working with my original funnel step URL parameter?
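
    For comparison only (a hedged sketch, since the exact matching behaviour of the funnel step isn't shown in the question), a regex funnel step covering all of those invite URLs is normally written against the request path with a forward slash, for example:

      /invite/.*

    whereas \invite(.*) begins with a backslash escape rather than the literal /invite/ path, so it may be worth re-testing the step with the path-style pattern.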

    Read the article

  • Do Google's feed statistics include former users?

    - by jjackson
    I'm currently not using any sort of fancy stat tracking software such as feedburner, but I occasionally look at Google's stats in their Webmaster Tools just to get a rough idea of whether the number of subscribers is going up or down. This only gives the number of users subscribed through Google products, as they explain in their help documents: Subscriber stats display the number of Google users who have subscribed to your feeds using any Google product (such as Reader, iGoogle, or Orkut). Because users can subscribe to feeds using many different aggregators or RSS readers, the actual number of subscribers to your site may be higher. I used to use Google Reader very regularly but haven't opened it in a while now. The way I understand it, this will mean that even though I haven't touched any of those feeds in a long time I'm still technically subscribed to them and will therefore be included in Google's statistic. Is this correct? Also since Google runs Feedburner, does this have any effect on their stats as well?

    Read the article
