Search Results

Search found 9724 results on 389 pages for 'pro zeck'.


  • Does Google treat AWS IP addresses as related?

    - by ElHaix
    We are hosting several websites on one of our servers and are wondering whether they have somehow been penalized because they are on the same subnet. We are not inter-linking between the websites. However, in an attempt to have everything hosted in AWS, we will have some sites that we do want to interlink. If those sites resided on the same subnet, this could be bad. However, with AWS we can allocate multiple Elastic IP addresses that do reside on different subnets. How does Google deal with this?

    Read the article

  • Google Analytics async=true seems wrong in the Google documentation?

    - by leeand00
    In the Google Analytics async example, they state that in order to include more than one tracker, you need to set up your pages for asynchronous tracking, and they do so using the following code:

    <script type="text/javascript">
    _gaq.push(
      ['_setAccount', 'UA-XXXXX-1'],
      ['_trackPageview'],
      ['b._setAccount', 'UA-XXXXX-2'],
      ['b._trackPageview']
    );
    (function() {
      var ga = document.createElement('script');
      ga.type = 'text/javascript';
      ga.async = true;
      ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
      var s = document.getElementsByTagName('script')[0];
      s.parentNode.insertBefore(ga, s);
    })();
    </script>

    The second tracker is not receiving any results. After checking my tracking codes to make sure they are correct, I noticed that the ga.async = true statement is usually written differently elsewhere and is never set to a value of true; it is often written as a bare async attribute, never as true. Could this be stopping my Analytics data from posting to the second tracker, or might it be something else? Also, what calls should I look for in the Net tab in Firebug to ensure that GA is being called when the page loads?
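    A minimal sketch of the two-tracker setup described above, using the legacy _gaq queue (the account IDs are placeholders). With the legacy ga.js tracker, each pageview is reported as a GET request to __utm.gif whose utmac parameter carries the account ID, so that is what to look for in the Net tab:

    // Sketch: queue commands for the default tracker and a second, named tracker "b".
    var _gaq = _gaq || [];
    _gaq.push(
      ['_setAccount', 'UA-XXXXX-1'],   // default tracker
      ['_trackPageview'],
      ['b._setAccount', 'UA-XXXXX-2'], // named tracker "b"
      ['b._trackPageview']
    );
    // Once ga.js has loaded, Firebug's Net tab should show two __utm.gif requests,
    // one with utmac=UA-XXXXX-1 and one with utmac=UA-XXXXX-2.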

    Read the article

  • Backlink anchor text / keyword strategy post-Penguin

    - by sam
    I've heard a lot recently about over-optimisation of off-site backlink anchor text. What I've heard from SEOmoz is that the sites most affected by the Penguin update had over 60% of their backlink anchor text the same, so Google saw this as unnatural and penalized them. That kind of makes sense, as it is not normal to have such a high density on one word or phrase. If I were building links for a gastro pub in London (this is purely hypothetical), the sort of keywords I would go after are "gastro pub in london" and "london gastro pub". If I were to mix up the anchor text by using "gastro pub", "gastro pub in london" and "london gastro pub", would this be seen as OK? Or would these all be seen as broad-match keywords and counted as one phrase, making me fall foul of the Penguin update?

    Read the article

  • Quoting people for website dev. work

    - by Jason
    Hi all, I have recently given quotes to a few people, and I need some advice about how things should be done. Q1: I've seen, heard of, and read about a lot of developers using free resource sites online to obtain free privacy policies, disclaimers, etc. for their customers' websites. A customer I quoted the other day expected me to write or source a disclaimer for their site. Who in their right mind would expect a document like that from a web developer? I just told them that they need to sort that out themselves with a lawyer or similar, and then send it to me so I can put it on a webpage for them. Q2: If you're charging per hour and you estimate that the project will take one week to finish (including testing and releasing), but you soon realise that you'll need more time, do you re-quote them, or do you just finish the site at the original quoted price? Q3: How do you figure out how much to charge your customers? Do you charge per feature, per hour, per day, or all of the above? Thanks :)

    Read the article

  • Nginx and Google Appengine Reverse Proxy Security

    - by jmq
    The scenario is that I have a Google Compute Engine node running Nginx as a reverse proxy to Google App Engine. App Engine is used to service REST calls from a single-page application (SPA). HTTPS is used from the Internet to the Nginx front end. Do I also need to make the traffic from the Nginx reverse proxy to App Engine secure by turning on HTTPS on the App Engine side? I would like to avoid the overhead of HTTPS between the proxy and the backend. My thinking was that once the traffic has arrived at Nginx encrypted, been decrypted in Nginx, and then been sent via the reverse proxy inside Google's infrastructure, it would be secure. Is it safe in this case not to use HTTPS?

    Read the article

  • How to create the right loop / jQuery / picture wipe? [migrated]

    - by Razor
    <script type="text/javascript" src="slider/jquery.touch-gallery-1.0.0.js"></script> <script type="text/javascript" src="slider/jquery.touch-gallery-1.0.0.min.js"></script> <script> $('body').live("click", function() { for (var i=0; i<4; i++) { alert("bla: "+i); } }); $('body').live("swipeleft", function(){ var nextpage = $("#page00002").next('section[data-role="page"]'); $.mobile.changePage(nextpage, 'slide'); }); $('body').live("swiperight", function(){ var prevpage = $("#page1").prev('section[data-role="page"]'); $.mobile.changePage(prevpage, 'slide'); }); </script>

    Read the article

  • SEO - why did my Google search rank drop?

    - by Brian McCarthy
    My nutrition shop websites for Tampa and Brandon were coming up on page one of Google search results, and now they have disappeared. The two websites serve different markets, although they are close geographically and have the same products, keywords, and layouts. Brandon, FL is considered a suburb of Tampa, FL and could be grouped into the Tampa Bay area. There is also a mirror nutrition shop site set up for Orlando. Is the drop in Google search ranking because: 1) a Flash banner was recently added on the front page; 2) the site is just being re-indexed on all search engines because of new content; or 3) it is seen as duplicate content, since there are separate websites for the cities of Tampa and Brandon but with the same content? What can be done to fix this? Thanks!

    Read the article

  • A bounce-rate attack to manipulate SEO?

    - by Denis Volovik
    This is a question for experienced people who might help us shed some light on the issue. We noticed some very strange behavior on our site in Google Analytics. Some user from Finland, namely from the city of Kouvola, is hitting one of our pages, and only one page on our site, about a hundred times per day, all with an average bounce rate of 90%+. This is causing our overall bounce rate to go up by 1 to 3% per day, which is very disturbing, since we're trying to do our best to keep it as low as possible. Obviously, having it jump from ~24% to 27% just because of one rogue visitor does not make us happy at all. We tried implementing a geo-targeted script in order to catch this particular visitor and deliver him a pointed message, and it seemed to help at first; the hits stopped for a day or two, but now he's back. The geo-targeted script was also logging all IP addresses for page requests originating from Finland, in order to find out more details (and to block them at the server level later), but the requests came from various cable and DSL connections with no constantly repeating IPs. We are all wondering what he is really up to. I think this page should be kept updated with ideas on how to combat this, and perhaps someone could also shed light on what it might be. What is the reason for doing this "bounce-rate attack", as I call it? A similar question was asked on Stack Overflow earlier, with no meaningful answer: How to stop bounce rate manipulation.
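    One idea sometimes suggested for keeping a burst of one-page hits from dominating the metric (a sketch of the "adjusted bounce rate" technique for the legacy ga.js tracker, not a way to stop the visitor): fire an interaction event after a delay, so genuinely engaged visits stop counting as bounces. The 15-second threshold and the event names are arbitrary placeholders.

    // After 15 seconds on the page, send an interaction event so this visit
    // is no longer counted as a bounce by Google Analytics (ga.js).
    setTimeout(function () {
      _gaq.push(['_trackEvent', 'Engagement', 'Stayed 15 seconds']);
    }, 15000);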

    Read the article

  • Facebook Connect and Yahoo: what exactly happened? Is there a way to import Facebook friends' email addresses?

    - by Forte
    Hello, I have seen that Yahoo now enables its users to import Facebook friends' email addresses into their Yahoo address book. As far as I know, Facebook doesn't provide any API to fetch the email addresses of any user on external websites. I have also seen that Yahoo imports email addresses only when friends have not restricted their contact email to be visible only to themselves. Many people have tried to implement applications using Facebook's API to import friends' email addresses (only those email addresses which are visible on the user's Facebook profile), but API calls always return NULL for these requests. So I would like to know what exactly happened between Facebook and Yahoo. Has Facebook granted any concessions to Yahoo's address book importer application to import Facebook users' email addresses? Is there any working API, method or way available to fetch the email addresses of Facebook friends who have chosen to display their contact email IDs on their profile with either the "only visible to friends" or the "visible to everyone" privacy setting? I have also seen that the Facebook API page clearly lists that the email/contact_email fields can be fetched using FQL. Nevertheless, there is no official explanation for why email/contact_email returns NULL when requested from any API call. Regards

    Read the article

  • Is there a Content Management System that allows multiple, independent blogs to run on one domain?

    - by Ron
    Hello webmasters, I am a WordPress fan, and I'm now building a new site, but I'm not sure which CMS can achieve what I'm trying to do. I am building a food blog network for a number of cities in the US, and I want my city pages to be independently running blogs themselves. So basically: Home page - its own blog with its own users, talking about food in general; Dallas page (child of home page) - its own blog with its own users; Chicago page... and so on and so forth. The web layout and design will be all the same; I'm just trying to host 25-50 independent blogs on one domain. How can I achieve this? I'm hoping that I don't have to install WordPress into as many subdomains as I create. Thank you for your help in advance. -RP

    Read the article

  • What is the right way to do this HTML: a header with a linked icon

    - by Hell Awaits
    I know how to make these examples look and behave the same, but I would like to know which is the right way to build the HTML structure. <a><img><h1></h1></a> - this looks wrong because a block element (the h1) sits inside an inline element (the a). <a><img></a><h1><a></a></h1> - the same a element is defined twice, and I'm also not sure about markup inside headers. Any other solutions?

    Read the article

  • Chrome "refusing to execute script"

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v27.0.1453.116) and enable the developer tools, it says: Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plain-text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem, and I added type="text/javascript" to the <script> tag, but got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally, I thought the periods in the filename might be throwing the browser off, so in my HTML I changed all the periods to the URL escape sequence %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser gives no error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this, since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually JavaScript?

    Read the article

  • Robots.txt never downloaded but some blocked URLs in GWT

    - by Zistoloen
    There is something I don't understand in Google Webmaster Tools (GWT) for my WordPress site. In the "Blocked URLs" menu, it says that my robots.txt has never been downloaded, yet there are some blocked URLs. That seems weird and not logical. Am I missing something?

    User-agent : *
    Disallow: /*?
    Disallow: /wp-login.php
    Disallow: /wp-admin
    Disallow: /wp-includes
    Disallow: /wp-content
    Allow: /wp-content/uploads
    Disallow: */trackback
    Disallow: /*/feed
    Disallow: /*/comments
    Disallow: /cgi-bin
    Disallow: /*.php$
    Disallow: /*.inc$
    Disallow: /*.gz$
    Disallow: /*.cgi$
    Disallow: /author/*

    I'm afraid my robots.txt doesn't block several URLs I want to block.

    Read the article

  • GEO Tool - commercial use

    - by Jens
    What I want to do: offer my clients the possibility to display their store as a small graphic (like Google Maps), just a small PNG with the location of the store. Only the specific client is able to see these images, and he pays for this feature (not only for that :) ). So I'm looking for an API or something else to geocode the address and render a small (280x160 px) image. Google offers a premium licence (not cheap); there is also MS Bing, and OpenStreetMap, whose licences I'm not able to fully understand ;( Any ideas?
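    A sketch of one way to render such an image with the Google Static Maps API (this does not address the licensing question; the address, element ID and API key are placeholders):

    // Build a 280x160 px static map image centred on a store address and
    // drop it into an <img> element on the client's page.
    var address = '1600 Amphitheatre Parkway, Mountain View, CA'; // placeholder address
    var url = 'https://maps.googleapis.com/maps/api/staticmap' +
              '?center=' + encodeURIComponent(address) +
              '&zoom=15&size=280x160' +
              '&markers=' + encodeURIComponent(address) +
              '&key=YOUR_API_KEY'; // placeholder key
    document.getElementById('store-map').src = url; // assumes an <img id="store-map"> element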

    Read the article

  • "Email This" button with sideways counter

    - by aendrew
    I've been asked to build a design that has a "share this" area like the one shown below. I've built every aspect except the email part of it; any idea how best to do that? I've found http://getmailcounter.com/, but that displays a counter above the button. I'd personally just do a link, but it seems they want some sort of analytics built in. Failing that, does anyone know of a sharing system that looks like that and has all of those options? I'd just use AddThis, but its designs don't look very close to that. Thanks! Related: How to implement an email-this link button

    Read the article

  • Password-free logins using your email address only?

    - by Mario
    The state of logins is horrendous. With each site having its own rules for passwords, it can be very hard to remember what variation you used on any given site. Logins are pure pain. One thing I love about Craigslist is that it did away with logins altogether. I know this design may not suit every site, but there's something to their design that beckons to be repeated. OpenID is great on sites that have adopted it, but it's still not standard. Would it be feasible and wise to use an email address as a login and provide no password? The site would send a short-term key directly to your email address; you click on the link and you're in. When you're done, you "log out" and your key is terminated. I've toyed with this idea before. What concerns (e.g. spammers, bots, etc.) would make this impractical or unsafe, and could they be overcome?
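    A minimal sketch of the flow described above (Node.js with in-memory storage; sendMail, the URL and the 15-minute window are hypothetical placeholders): generate an unguessable single-use key, email a link containing it, and accept it only once within a short window.

    const crypto = require('crypto');

    const tokens = new Map(); // token -> { email, expires }

    function requestLogin(email) {
      const token = crypto.randomBytes(32).toString('hex'); // unguessable short-term key
      tokens.set(token, { email: email, expires: Date.now() + 15 * 60 * 1000 }); // 15-minute window
      sendMail(email, 'https://example.com/login?token=' + token); // hypothetical mailer
    }

    function redeemToken(token) {
      const entry = tokens.get(token);
      tokens.delete(token); // single use: discard whether valid or not
      if (!entry || entry.expires < Date.now()) return null; // unknown or expired
      return entry.email; // caller starts a session for this address
    }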

    Read the article

  • Using an SMTP service for email

    - by Josh S.
    This may be a horribly obvious question, but I'm learning and just need someone to confirm it for me. I am putting together a private social network that needs to email its members (through the social network software, Elgg) regularly. I'm hosting it on a shared HostGator plan (because the site won't receive much traffic), and it will send 10-1000 emails a few times a week. HostGator restricts you to 500 emails per hour. I'm also worried about deliverability. I've been searching up and down for how to throttle the emails so they will all send reliably, but then I came across the idea of an outside SMTP relay service. Would using an SMTP service resolve this issue? If so, any opinions on quality SMTP services?
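    A sketch of the throttling half of the question, under the stated 500-emails-per-hour limit: drain the queue at a fixed interval instead of sending in a burst (sendOne is a hypothetical function that hands a single message to the mailer).

    var LIMIT_PER_HOUR = 500;
    var intervalMs = Math.ceil(3600000 / LIMIT_PER_HOUR); // roughly 7.2 s between sends

    function sendQueued(messages, sendOne) {
      var i = 0;
      var timer = setInterval(function () {
        if (i >= messages.length) { clearInterval(timer); return; }
        sendOne(messages[i++]); // stays under the hourly cap
      }, intervalMs);
    }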

    Read the article

  • How to handle possible duplicate content across multiple sites?

    - by ElHaix
    Let's say I have two sites that cover the same vertical/topic, one in the USA and one in Canada. Both sites have locally related content, which is obviously unique by location. However, they will share common news or blog pages. How do I avoid getting hit with duplicate content on both sites for those news/blog pages? If the content is exactly the same, I'm guessing I would have to pick which site's content I want to noindex,nofollow. Is that correct, and if so, is that all I have to add on the URL links to those pages and in the pages' meta tags?

    Read the article

  • How to use Google Analytics to track development and production versions of the same site on different servers?

    - by Abe
    I have a website with two versions, one for production and one for development (for testing new features). All of the code is under version control, and the websites are on separate servers. Currently, I have the same Google Analytics tracking code on both sites. Since the code is under version control, it would be ideal to have an "if I am on the production server, use this code; else if on the development server, use that code" clause, but I suspect that Google makes it easier to do something like this. I see that there are many ways to configure a GA tracking code, e.g. "a single domain" vs. "multiple top-level domains", but it is not clear to me how to set this up. Also, if a tracking code configured for a single domain has been on the development server, have I been picking up traffic to both sites, or does GA just ignore the second domain that I haven't registered?
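    A sketch of the hostname switch described above, so the same versioned snippet can be deployed to both servers (the hostnames and UA IDs are placeholders):

    // Pick the tracking ID based on where the page is being served from.
    var isProduction = document.location.hostname === 'www.example.com'; // placeholder production hostname
    var trackingId = isProduction ? 'UA-XXXXX-1' : 'UA-XXXXX-2';         // placeholder account IDs

    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', trackingId]);
    _gaq.push(['_trackPageview']);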

    Read the article

  • Google Webmaster Tools crawl error

    - by Shiro
    I am looking into the Crawl Errors section of Google Webmaster Tools. How should I handle URLs that are invalid because of someone else's system or application? For example, I have a URL like http://www.example/images/products/s_=enlarge_16gb.jpg but, for whatever reason, Yahoo Groups breaks the link into http://www.example/images/products/s_= enlarge_16gb.jpg so that only the first part, http://www.example/images/products/s_=, becomes a hyperlink. Because of that URL, Google shows a crawl error, and I get a few errors from this kind of result or from other people's typos. How do I prevent this? I am sure I don't have the right to go and change other people's posts. What is the solution for this? Thanks!

    Read the article

  • Static site generator with web-based file manager?

    - by user234
    I'm looking at the options for static website generators, which has led me to lots of articles about them! However, nothing is said about how to edit files through a browser; it's always assumed you have Dropbox, some FTP-ish access, or terminal access. The only generator I could find that includes a browser-based admin screen is Kirby (getkirby.com, mentioned at modernstatic.com). Besides the application above, what setup would you recommend to get both static HTML generation and web-based file management? Thanks!

    Read the article

  • mysql_real_escape_string is giving me errors when I try to add security to my website

    - by Mike
    I tried doing this:

    @ $db = new myConnectDB();
    $beerName = mysql_real_escape_string($beerName);
    $beerID = mysql_real_escape_string($beerID);
    $brewery = mysql_real_escape_string($brewery);
    $style = mysql_real_escape_string($style);
    $userID = mysql_real_escape_string($userID);
    $abv = mysql_real_escape_string($abv);
    $ibu = mysql_real_escape_string($ibu);
    $breweryID = mysql_real_escape_string($breweryID);
    $icon = mysql_real_escape_string($icon);

    I get this error:

    Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: Access denied for user

    Read the article

  • Yahoo and Bing support for index, image and mobile sitemaps

    - by kishore
    I know Google Webmaster Tools supports submitting image, mobile, video and other types of sitemaps. Yahoo also mentions mobile sitemaps here, but does it support image and video sitemaps? I could not find whether Bing supports any of these types other than plain XML sitemaps. Can someone please point me to any documentation on submitting index, image and mobile sitemaps? Also, do Yahoo and Bing support sitemap index files?

    Read the article

  • htaccess redirect problem

    - by jimbo
    Hi all, I am currently building a site and want to hide all development work on the site. I am using an htaccess file and redirecting everything to my holding page, index.php?id=7:

    Options +FollowSymlinks
    RewriteCond %{REQUEST_URI} !/index.php
    RewriteCond %{REQUEST_URI} !/assets/
    RewriteRule $ /index.php?id=7 [R=307,L]

    This is working for pretty much all pages, but changing index.php?id=7 to another number, id=6 for example, still shows the page with no redirect. Any help welcome...

    Read the article

  • How do I get Google to see keywords on a one-page web application site?

    - by David
    I'm going to have to link to the web site to explain this: http://www.diagram.ly. It's a free service, so I hope this doesn't break advertising rules. Basically, it's a one-page web application; I don't want to create a separate web site for it. Some background text loads, and if JavaScript is enabled, the web application itself then loads. The problem is that Google only seems to be picking up the title of the page and the text in the footer, so the site only appears in Google searches for very limited text (based mostly on the title and meta description). I was hoping that search engines would pick up the background text and index that. The text is factual, not keyword-stuffed. Yahoo seems to pick up the text, just not Google. Does anyone have any experience of how Google would view such a site, and where I could put the text for a better result? Edit: I should mention that Google Webmaster Tools lists the site keywords as "component, diagramly, feed, mxgraph, share and twitter", which is basically the footer and little else.

    Read the article
