Search Results

Search found 5380 results on 216 pages for 'webmasters'.

Page 81 of 216

  • Tor and Google Analytics - how to track?

    - by Jeremy French
    I make a lot of use of Google Analytics. Google has reasonable location tracking, so I can tell where users come from. I know it is not 100% accurate, but it gives an idea. In the wake of PRISM it is likely that more people will make use of networks such as Tor for anonymous browsing. I have no problem with this (people can wear tinfoil hats while browsing my site for all I care), but it will lead to more erroneous stats. Is there any way to flag traffic as coming from Tor, so I can exclude it from location reports and get an idea of the percentage of traffic that uses it? Has anyone actually tried this?
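
    One possible approach, sketched below against the classic ga.js API, is to record Tor visits in a visitor-scoped custom variable and then segment or filter reports on it. The isTorExit flag is an assumption: it would have to be set server-side, for example by comparing the client IP against a periodically downloaded copy of the Tor Project's exit-node list. The account ID is a placeholder.

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-XXXXXXX-1']); // placeholder account ID
          // Assumed to be emitted by the server after checking the client IP
          // against the Tor exit-node list; not something GA detects itself.
          var isTorExit = false;
          // Visitor-scoped custom variable in slot 1.
          _gaq.push(['_setCustomVar', 1, 'network', isTorExit ? 'tor' : 'clearnet', 1]);
          _gaq.push(['_trackPageview']);
        </script>

    A custom segment or filter on the 'network' variable would then keep Tor visits out of location reports and show what share of total traffic they represent.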

    Read the article

  • How can I find unused/unapplied CSS rules in a stylesheet?

    - by liori
    Hello, I've got a huge CSS file and an HTML file. I'd like to find out which rules are not used when displaying the HTML file. Are there tools for this? The CSS file has evolved over a few years and, as far as I know, no one has ever removed anything from it; people just wrote new overriding rules again and again. EDIT: It was suggested to use Dust-Me Selectors or Chrome's Web Page Performance tool. But they both work at the level of selectors, not individual declarations. I've got lots of cases where a declaration inside a rule is always overridden, and this is what I mostly want to get rid of. For example:

        body { color: white; padding: 10em; }
        h1 { color: black; }
        p { color: black; }
        ...
        ul { color: black; }

    All the text in my HTML is inside some wrapper element, so it is never white. body's padding always works, so of course the whole body rule cannot be removed, but I'd like to get rid of useless declarations like that color. EDIT: Another case of a useless declaration: one that duplicates an existing one without changing anything:

        a { margin-left: 5px; color: blue; }
        a:hover { margin-left: 5px; color: red; }

    I'd happily get rid of the second margin-left; again, it seems to me that those tools do not find such things. Thank you,
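
    As a rough first pass, a browser-console snippet along these lines can at least list selectors that match nothing on the current page. It deliberately does not address the harder case above (declarations that always lose the cascade); treat it as a sketch, and note that cross-origin stylesheets and selectors that querySelector cannot parse are skipped.

        // Rough first pass: log selectors that match no element on this page.
        for (var i = 0; i < document.styleSheets.length; i++) {
          var rules;
          try { rules = document.styleSheets[i].cssRules; }
          catch (e) { continue; } // cross-origin stylesheet, skip it
          if (!rules) continue;
          for (var j = 0; j < rules.length; j++) {
            var sel = rules[j].selectorText; // undefined for @media, @font-face, etc.
            if (!sel) continue;
            try {
              if (!document.querySelector(sel)) console.log('unmatched:', sel);
            } catch (e) { /* selector syntax querySelector cannot parse */ }
          }
        }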

    Read the article

  • SKU code as description in Google Analytics

    - by dreagan
    In the Google Analytics ecommerce tracking script you must provide an SKU code for every item. I have such a code for every product I'm selling, and up until now I have always passed it to the _addItem method. But when reviewing that data in the ecommerce module of Google Analytics, I have no readable data about my SKU sales. I know which product has been sold, thanks to the product name I provide, but when clicking through to the SKU level I learn nothing more, since all I can see there are SKU codes. Is it possible and wise to replace the SKU code with the following template? "product-name colour-name size-name" This way it should still be a unique field, but more readable afterwards.
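
    For reference, the classic ga.js ecommerce call takes the SKU as its second argument, so a readable composite key can be passed there as long as it stays unique per variant. A minimal sketch (all values are illustrative):

        _gaq.push(['_addTrans',
          '1234',           // order ID
          '',               // affiliation
          '29.99',          // total
          '',               // tax
          ''                // shipping
        ]);
        _gaq.push(['_addItem',
          '1234',                    // order ID, must match _addTrans
          'classic-boot black 42',   // SKU: "product-name colour-name size-name"
          'Classic Boot',            // product name
          'Footwear',                // category
          '29.99',                   // unit price
          '1'                        // quantity
        ]);
        _gaq.push(['_trackTrans']);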

    Read the article

  • Server overhead caused by bots?

    - by giuseppe
    I have one customer website causing high load (http://www.modacalcio.it/en/by-kind/football-boots.html). With htop open, I tried navigating the website, and most of the load comes from the ajax links placed on the left side of the page. The website is hosted on a VPS with 3 processors and 2 GB of RAM, with plenty of disk space. The real problem is that this website is new and not visited much. From the http-status module I can see that the load is caused by bots (Google bots, Bing bots, hrefs checker and so on). So I thought it is probably those spiders trying to crawl all those links at once - could this be causing the overhead? I have also put rel="nofollow" on those links, but this doesn't keep the bots away. Is there any way, through code or Plesk, to keep the bots off those links?
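
    Since rel="nofollow" is only a hint, robots.txt is the more reliable lever. A minimal sketch, assuming the ajax filter links can be grouped under a common URL prefix (the /ajax-filters/ prefix here is hypothetical; adjust it to the real link structure):

        User-agent: *
        # Hypothetical prefix for the ajax filter links.
        Disallow: /ajax-filters/
        # Honoured by Bing and Yandex; Googlebot ignores Crawl-delay,
        # but its rate can be lowered in Google Webmaster Tools.
        Crawl-delay: 10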

    Read the article

  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure-HTML website which I have hosted with "hostmonster". Hostmonster had very good reviews on some comparison website, and in general so far they appear to be perfectly good in every way... at least I thought so until just now. I have been making lots of edits to my site on an almost daily basis, and my site now appears on the first page (7th on the list) for my most important keyphrase in a Google search. But I did notice a problem with the snippet Google chose. I asked a question on this site about snippets and got some great answers. I then made some modifications to my metadata, and within 48 hours the Google snippet for my search was perfect. The odd thing, though, was that Google's "cached" version still looked very old, from about three weeks previous. This seemed very odd: how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember ever making one, but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt and it contained the disturbing line below:

        sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz

    Intuitively this looks like some kind of junk or spam trick, and I had indeed been getting some spam from "attracta". So my questions are:
    1. Should I simply delete this robots.txt?
    2. Was the file there all along, placed there because of some commercial tie-in between Attracta and Hostmonster?
    3. Does the Attracta sitemap entry explain the lack of re-caching?

    Read the article

  • SSL and green address bar

    - by tinab
    I am new to SSL, so can someone explain why my address bar turns green on certain sites beginning with https:// yet sometimes doesn't, even though I know the site has SSL? Maybe these two things are not even related, but if I go to GoDaddy and order a new domain, I notice their address bar is green the entire time I'm using the https:// protocol; yet when I go to Victoria's Secret to place an order, even though the URL says https://, the address bar doesn't turn green.

    Read the article

  • TinyMCE autoresize plugin not working

    - by user31929
    I want to reproduce this simple behaviour: http://tinymcesupport.com/tutorials/autoresize-automatic-resize-plugin This is my init:

        <!-- TinyMCE -->
        <script type="text/javascript" src="js/jscripts/tiny_mce/tiny_mce.js"></script>
        <script type="text/javascript">
        tinyMCE.init({
            mode : "exact",
            elements : "pagina_testo_colonna1,pagina_testo_colonna2,pagina_testo_colonna3",
            theme : "advanced",
            plugins : "paste,autoresize",
            plugin_preview_width : "100%",
            width : "100%",
            theme_advanced_buttons1 : "pastetext,|,bold,italic,underline,strikethrough,|,bullist,numlist,|,indent,outdent,|,undo,redo,|,justifyleft,justifycenter,justifyright,justifyfull,|,link,unlink,|,charmap",
            theme_advanced_buttons2 : "",
            theme_advanced_buttons3 : "",
            theme_advanced_disable : "image,anchor,cleanup,help,code,hr,removeformat,sub,sup",
            theme_advanced_resizing : true,
            paste_text_use_dialog : true,
            relative_urls : false,
            remove_script_host : false
        });
        </script>
        <!-- /TinyMCE -->

    I have added "autoresize" to the plugins list, but my editors do not resize while I am writing; they simply scroll. I have multiple editors on the same page. What's wrong with my code?

    Read the article

  • n00b needs some PHP syntax guidance [closed]

    - by Michael
    If you look at http://www.cruc.es/?paged=12/ and go to the bottom of the page, you'll see the bottom navigation with the next and previous options. I've been able to make the page numbers work by changing page to paged= in the code. I don't know enough about PHP to get the previous/next options to work. Any advice would be appreciated; I've pasted the code below. Thank you: n00b

        if ( $query->found_posts > $query->query_vars["posts_per_page"] ) {
            echo '<ul class="paging">';
            // Previous link? NOTE: still uses the old /page/N/ format,
            // unlike the numbered links below, which use ?paged= and work.
            if ( $page > 1 ) {
                echo '<li class="previous"><a href="'.$baseURL.'/page/'.($page-1).'/'.$qs.'">previous</a></li>';
            }
            // Loop through pages
            for ( $i = 1; $i <= $query->max_num_pages; $i++ ) {
                // Current page or linked page?
                if ( $i == $page ) {
                    echo '<li class="active">'.$i.'</li>';
                } else {
                    echo '<li><a href="'.$baseURL.'/?paged='.$i.'/'.$qs.'">'.$i.'</a></li>';
                }
            }
            // Next link? NOTE: same old /page/N/ format as the previous link.
            if ( $page < $query->max_num_pages ) {
                echo '<li><a href="'.$baseURL.'/page/'.($page+1).'/'.$qs.'">next</a></li>';
            }
            echo '</ul>';
        }

    Read the article

  • Google doesn't seem to update the description or title of my homepage

    - by Dayson
    Before we launched our website, we had set up a "coming soon" page, and Google picked up the title and description from its contents, so the description in the search results said, "Coming soon! Visit epicwhale.org for updates." It's been a few weeks since we launched the website. We've even created a sitemap and submitted it to Google. In the Google Webmaster panel the pages have been crawled, and all of them appear as expected on Google, EXCEPT the homepage, which is still not updated! The title and description of the homepage in Google search results still say coming soon. The website I am referring to is textmewidget.com, and below are images of the search results. Google: http://i.imgur.com/vAkJg.png I checked on Bing too, but it appears to be fine there. Bing: http://i.imgur.com/Q8O6L.png All other pages seem to be indexed fine on Google. I don't even have any crawl errors in my reports. So what seems to be the problem? I've already waited for 2 weeks. Thanks in advance!

    Read the article

  • How to handle non-existent subdirectories?

    - by Question Overflow
    I have a dynamic website with friendly URLs. For example: instead of /user.php?id=123 I have /user/123, and instead of /index.php?category=fishes I have /fishes. But how do I handle non-existent subdirectories such as /about/123? Currently they return a 200 success instead of a 404 Not Found error. Is there a way to deal with non-existent subdirectories in the Apache config while still allowing friendly URLs, or do I have to handle this individually in each PHP script?
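
    If the friendly URLs come from mod_rewrite, one option is to rewrite only the URL shapes that actually exist, so anything else, such as /about/123, falls through to Apache's normal 404 handling. A sketch, assuming an .htaccess in the document root (the patterns mirror the examples above):

        RewriteEngine On
        # Map only known URL shapes to their scripts; unknown paths 404 normally.
        RewriteRule ^user/([0-9]+)$ user.php?id=$1 [L,QSA]
        RewriteRule ^([a-z]+)$ index.php?category=$1 [L,QSA]

    A catch-all rule like the second one cannot tell real categories from bogus ones, so with that shape index.php itself still has to send a 404 header for unknown category values.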

    Read the article

  • 403 error on index file

    - by John L.
    When I try to access index.py in my server root through http://domain/, I get a 403 Forbidden error, but accessing it directly as http://domain/index.py works. My server logs say "Options ExecCGI is off in this directory: /var/www/index.py". However, my httpd.conf entry for that directory is the same as the ones for other directories, where getting to index.py works fine. Permissions are set to 755 on index.py. I also tried making a PHP file named index.php, and it works from both domain/ and domain/index.php. Here is my httpd.conf entry:

        <Directory /var/www>
            Options Indexes Includes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
            AddHandler cgi-script .cgi
            AddHandler cgi-script .pl
            AddHandler cgi-script .py
            Options +ExecCGI
            DirectoryIndex index.html index.php index.py
        </Directory>

    Thanks

    Read the article

  • Bad Bot blocking Revisited

    - by Tom
    I've read a lot about bad-bot blocking: PHP scripts, .htaccess techniques, etc. Is this a valid method? Since .htaccess can rewrite and send a bad bot a 403 Deny, or forward it to something like Spam Poison, is it possible to Disallow a folder in robots.txt and then, through the .htaccess in that specific folder, redirect to Spam Poison? Since Apache reads each .htaccess independently and follows its instructions, a bad bot that ignores robots.txt would simply be redirected, as would anyone trying to access /badbot/ or whatever I choose to call my trap folder. Thanks Tom
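
    In outline that can work: well-behaved crawlers never enter the disallowed folder, so only robots.txt violators reach the redirect. A minimal sketch under those assumptions (the trap URL is a placeholder, not a real Spam Poison address):

        # robots.txt
        User-agent: *
        Disallow: /badbot/

        # /badbot/.htaccess
        RewriteEngine On
        # Anything that ignored the Disallow above gets bounced out.
        RewriteRule .* http://trap.example.com/ [R=302,L]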

    Read the article

  • Spam link text when searching for company directors' names

    - by Alex
    It was brought to my attention that if you search for the name of one of our directors (with the intent of finding their profile page on our site), they come up as the first link in most search engines, as you would expect, but the link text is pure spam. The search strings I have tested on Google, Bing, Ask, and Yahoo have all returned similar results. Here is a list of the search strings:
    Paolo rossi futex
    Mark rossi futex
    Marco rossi futex
    Dan Goldberg futex
    Any idea what might be causing this? I have searched through as much of the site's code as I can and can't find anything wrong with it.

    Read the article

  • What can I do about Hack Attempts

    - by Matt
    I have an ASP.NET website hosted using the UltiDev Web Server Pro. Every day I get a steady stream of errors generated by my application where page requests were made and denied. This is obviously someone or something probing my website for exploits. Here is an example log:

        28/08/2012 11:37:11 - File not Found:http://MyWebServer/phpmyadmin/index.php
        28/08/2012 11:37:11 - File not Found:http://MyWebServer/phpMyAdmin/index.php
        28/08/2012 11:37:12 - File not Found:http://MyWebServer/phpMyAdmin-2/index.php
        28/08/2012 11:37:12 - File not Found:http://MyWebServer/php-my-admin/index.php
        28/08/2012 11:37:13 - File not Found:http://MyWebServer/phpMyAdmin-2.2.3/index.php
        28/08/2012 11:37:13 - File not Found:http://MyWebServer/phpMyAdmin-2.2.6/index.php
        28/08/2012 11:37:14 - File not Found:http://MyWebServer/phpMyAdmin-2.5.1/index.php
        28/08/2012 11:37:14 - File not Found:http://MyWebServer/phpMyAdmin-2.5.4/index.php
        28/08/2012 11:37:15 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5-rc1/index.php
        28/08/2012 11:37:15 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5-rc2/index.php
        28/08/2012 11:37:15 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5/index.php
        28/08/2012 11:37:16 - File not Found:http://MyWebServer/phpMyAdmin-2.5.5-pl1/index.php
        28/08/2012 11:37:16 - File not Found:http://MyWebServer/phpMyAdmin-2.5.6-rc1/index.php
        28/08/2012 11:37:17 - File not Found:http://MyWebServer/phpMyAdmin-2.5.6-rc2/index.php
        28/08/2012 11:37:18 - File not Found:http://MyWebServer/phpMyAdmin-2.5.6/index.php
        28/08/2012 11:37:18 - File not Found:http://MyWebServer/phpMyAdmin-2.5.7/index.php
        28/08/2012 11:37:19 - File not Found:http://MyWebServer/phpMyAdmin-2.5.7-pl1/index.php
        28/08/2012 13:52:07 - File not Found:http://MyWebServer/admin/pma/translators.html

    Is this normal? Is there anything I can do to protect myself against this?

    Read the article

  • Preventing adult content in a forum

    - by John Doe
    I'm working on a forum that allows images attached to posts and doesn't require registration. Thing is, I'd like to provide a work-safe browsing option in which posts with porn images attached aren't shown. The ideas I've come up with are:
    1. Making the work-safe option the default, treating all posts with attached images as pornographic, and making them visible only if the user unchecks the option.
    2. Marking all posts with attached images as not work-safe by default and changing their status to work-safe only after a moderator has approved it; only then would they be visible when the user has the work-safe option checked.
    Does anyone else have an idea? Also, how do the big web services deal with this (YouTube, Craigslist, even Stack Exchange)? By the way, I don't think "nudity detector" libraries are accurate; they give plenty of false positives and negatives. Thanks!

    Read the article

  • Custom Facebook Connect image - is it a violation of Facebook's policy?

    - by Viruthagiri
    I was going to replace Facebook's default login button with my own custom image, like Mashable does. I mean like this. But I found an article which states that this is against Facebook's policies. Is it really a violation? If it is, how come Mashable uses a custom image? Can someone answer me? Update: This is the exact image I would like to use. Facebook says the following on this page: "While you may scale the size to suit your needs, you may not modify the "f" logo in any other way (such as by changing the design or color). If you are unable to use the correct colour due to technical limitations, you may revert to black and white." So is my "sign in with Facebook" image violating Facebook's policy in any way?

    Read the article

  • Template for terms and conditions for a social-media-based website?

    - by Rubytastic
    I'm looking for a template for a terms-of-use text for a social-media-based website. I'm actually a coder and not into the legal blabla in general. Of course you could spend a thousand or two on a lawyer, but a three- or four-page text shouldn't be too hard to compile yourself with some help. I'm not sure if this is the right spot to ask this question, but I love Stack Overflow, and none of the Stack Exchange sites I could find matched better than this one. My first idea: look at some social media websites, grab some of their text, and rewrite it for my own specific usage. Are there templates for writing such a document? The same goes for a privacy policy, actually.

    Read the article

  • What are some options and methods to link a contact form on WordPress to an existing form processing script?

    - by eirlymeyer
    I'm searching for the best way to link the output of a WordPress contact-form plugin on a WordPress website to an existing MySQL database where contact forms are processed. Scenario: a new site (Site A) is being developed with a contact form. Site B (the old site) uses a contact-form script that processes leads through an existing legacy database and a ColdFusion application. The goal is for Site A's new contact form to feed the same existing process; Site A is to become the new Site B.

    Read the article

  • #id - URLs with a fragment first display the full page, then move to the #id

    - by guisasso
    I've noticed this in the new version of Chrome, and in IE9 and 10. Some URLs in a photo gallery have an #id fragment, as they are supposed to display a full view of a picture. Basically, a div lower down the page has the #id that I link to via a.com/1.html#id. This has never been an issue until lately, when I noticed a bit of lag. The issue: the website loads normally, then the view moves to the #id as expected, but sometimes with noticeable lag, perhaps because of the high resolution of the pictures. Is there any way to avoid this, or to make the page move to the correct #id even before it has fully loaded?
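
    One workaround to experiment with, sketched below, is to jump to the fragment as soon as the DOM is parsed and again once images have loaded, so late-arriving image heights cannot leave the view in the wrong place. This is a sketch, not a guaranteed fix for the browser's own anchor handling:

        <script type="text/javascript">
          function jumpToHash() {
            var id = window.location.hash.slice(1);
            var el = id ? document.getElementById(id) : null;
            if (el) { el.scrollIntoView(); }
          }
          // Jump as soon as the DOM is parsed, before images arrive...
          document.addEventListener('DOMContentLoaded', jumpToHash, false);
          // ...and correct the position once image sizes are known.
          window.addEventListener('load', jumpToHash, false);
        </script>

    Giving the gallery images explicit width and height attributes, so the layout does not shift while they load, attacks the same lag from the other side.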

    Read the article

  • Is there a way to see granular per-visit data in Google Analytics?

    - by jakub.g
    I've started using Google Analytics very recently, and I'm a bit lost in the sea of options (I had been using Sitemeter for some time before). I've clicked through the service a lot but couldn't find what I'm accustomed to. I can see a multitude of aggregated statistics in GA, such as charts of browser share, lists of country share, lists of the most visited URLs within the site, and so on, but I would actually like to analyze the visits themselves. Something like:
    User X, France, Chrome, 7 pageviews between 18:01 and 18:15, entered on a.htm and exited on b.htm
    User Y, UK, Firefox, 1 pageview at 18:20, entered on c.htm
    Is there an easy way to see reports in this form (perhaps by clicking through to a separate page with that particular session's stats)? If so, how do I navigate there?

    Read the article

  • PHP + MySQL account management software?

    - by kdavis8
    I need an account system added to my website as a plugin to all of my HTML pages. The account system needs to register new users, log in current users, remove users who want to cancel their service, and manage all of this via a database on my web server. However, I do not know how to program in PHP or how to create and work with MySQL databases. I want a program that can create and manage the MySQL database for my website automatically and also handle the PHP side automatically. Are there any open-source freeware programs out there that I can use? If so, what are their names?

    Read the article

  • Mysterious subdomains pointing to my site indexed by Google

    - by shouren
    Stackers, we have an issue with strange subdomains pointing to (pages on) our site, such as:
    www2.example.com
    2.example.com
    anothersite.com.example.com
    A few things are perplexing: who created them? Why would they do that? Why does Google index them and show them in search results, when clicking them produces a 5xx error? And how can we get rid of them? It looks like some type of scam that hurts our site's organic search results and user experience. Has anyone had a similar experience and knows the answers? Really appreciate it!
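
    Whatever created the hostnames, one common mitigation (a sketch, assuming Apache with mod_rewrite in the site's .htaccess, with www.example.com standing in for the canonical host) is to 301-redirect every other Host header to the canonical one, so the stray entries eventually drop out of the index:

        RewriteEngine On
        # Any request for a non-canonical hostname is permanently
        # redirected to the same path on the canonical host.
        RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]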

    Read the article

  • Should I be using WAI-ARIA in my HTML website builds?

    - by DBUK
    Should I be using WAI-ARIA in my website builds? Will it have any benefit? Is anyone adding 'role' to their code at the moment? The tab, link, checkbox and slider roles, plus many more, aren't available yet for HTML5. From looking at the list of what is available (see below), and what will be coming in the future, it looks like we might be applying roles to a huge number of tags on our pages. That is not a problem, especially if it brings benefit to users of screen readers etc. Also, a side question: will search engines give any benefit to sites using WAI-ARIA? List of safe roles to use (I think):
    • role="article"
    • role="banner"
    • role="complementary"
    • role="contentinfo"
    • role="form"
    • role="heading"
    • role="main"
    • role="navigation"
    • role="search"
    Examples of usage:
    <header role="banner"></header> (for the main header; banner is only allowed once per page)
    <header role="heading"></header> (for all headers after the main one)
    <aside role="complementary"></aside>
    <form role="search"></form>

    Read the article

  • Google Analytics Funnel Step Regular Expression Not Working

    - by scoarescoare
    The first step in a funnel is going to have a dynamic ending fragment. Examples:
    http://mysite.com/invite/tickle-party
    http://mysite.com/invite/pajama-party
    http://mysite.com/invite/puppy-party
    To allow for such dynamism, I provided this URL for step one: \invite(.*) My goals work, but the funnel visualization report shows 0 for everything. I know this problem is due to the regex in the funnel step, because I copied the entire goal but replaced \invite(.*) with /invite/puppy-party. When I hard-coded /invite/puppy-party, the funnel worked as expected. Why is my funnel report not working with the original regex for the funnel step URL?
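
    For what it's worth, Google Analytics matches funnel steps against the request path, which begins with a forward slash; in \invite(.*) the backslash is a regex escape character, not a path separator. A pattern to test instead (a suggestion, not a confirmed fix) keeps the leading slash of the hard-coded version that worked:

        /invite/.*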

    Read the article

  • Google indexed my main site's content under subdomains

    - by Christie Angelwitch
    Google is indexing my main site's content as though it belongs on subdomains, and I want to stop this. My site has wildcard subdomains enabled, and we also have two subdomains with unique content: the first serves as a blog, the second has only one page. Both have backlinks. Google has indexed content from the main site under the subdomains as well. Let's say we have a page at example.com/page.html. The same page has also been indexed as subdomain.example.com/page.html, and it sometimes ranks better than the one on the main site. The thing is, we never placed this content on the subdomain. I've thought about adding canonical tags on the subdomains to help with the duplicate-content issue. How can I stop Google from indexing those pages? I don't even know how Google found them, since we never placed them on the subdomains.
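
    The canonical-tag idea would amount to one line in the <head> of every page, always pointing at the main-site URL no matter which hostname served the request (example.com stands in for the real domain):

        <link rel="canonical" href="http://example.com/page.html" />

    Because the site answers on wildcard hosts, the href should be built from the page's path plus the canonical domain, never from the incoming Host header; Google then folds the subdomain duplicates into the main-site URL.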

    Read the article
