Search Results

  • Internet Explorer and margins

    - by Hailwood
    Hi there. I have some pretty simple HTML which is meant to make a layout as below. To push the tabs down from the user bar I am using margin-top: 35px; however, in Internet Explorer the tabs are completely misaligned (the top of the tabs is where the bottom should be), so I need to use margin-top: -50px; for Internet Explorer instead. Why is this, and how can I fix it without using an IE-specific stylesheet?

        <div id="pageHead">
            <div id="userBar">
                <span class="bold">Hi Matthew Hailwood | <a href="#">Logout</a>
            </div>
            <a href="http://localhost/buzz/" id="pageLogo"></a>
            <div id="pageTabs" class="clearfix">
                <ul>
                    <li><a href="http://localhost/buzzil/templates">Templates</a></li>
                    <li><a href="http://localhost/buzzil/messaging">Messaging</a></li>
                    <li><a href="http://localhost/buzzil/contacts">Contacts</a></li>
                </ul>
            </div>
        </div>

    with the CSS being:

        #pageHead { height: 100px; }
        #pageLogo { float: left; width: 149px; height: 77px; margin-top: 11px; background: transparent url('../images/logo.png') no-repeat; }
        #userBar { text-align: right; color: #fff; margin-top: 10px; }
        #userBar a:link, #userBar a:visited, #userBar a:active { font-weight: normal; color: #E0B343; text-decoration: none; }
        .clearfix:after { content: "."; display: block; clear: both; visibility: hidden; line-height: 0; height: 0; }
        .clearfix { display: inline-block; }
        html[xmlns] .clearfix { display: block; }
        * html .clearfix { height: 1%; }
        #pageTabs { float: right; margin-top: 35px; }
        #pageTabs ul { position: relative; width: 100%; list-style: none; margin: 0; padding: 0; border-left: 1px solid #000; }
        #pageTabs ul li { float: right; background: url(../images/tabsBg.png) no-repeat 0% 0%; border-left: 1px solid #000; margin-left: -1px; }
        #pageTabs ul li a:link, #pageTabs ul li a:visited, #pageTabs ul li a:active { color: #fff; background: url(../images/tabsBg.png) no-repeat 100% 0%; display: block; font-size: 14px; font-weight: bold; line-height: 42px; text-transform: uppercase; padding: 4px 32px; text-decoration: none; }
        #pageTabs ul li a:hover, #pageTabs ul li a:focus { text-decoration: underline; }
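    If an IE-only override does end up being unavoidable, one way to keep it out of a separate stylesheet is a conditional comment in the page head. A minimal, untested sketch (the -50px value is simply the one from the question):

        <!--[if lt IE 8]>
        <style type="text/css">
            /* Served only to IE 7 and below; other browsers ignore this block */
            #pageTabs { margin-top: -50px; }
        </style>
        <![endif]-->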


  • SKU code as description in Google Analytics

    - by dreagan
    In the Google Analytics ecommerce tracking script you must provide an SKU code for every item. I have this code for every product I'm selling, and up until now I have always provided it in the _addItem method. But when reviewing that data in the ecommerce module of Google Analytics, I have no real, readable data about my SKU sales. I know what product has been sold, thanks to the product name I provide, but when clicking through to the SKU level I know nothing more, since all I can see there are SKU codes. Is it possible and wise to replace the SKU code with the following template? "product-name colour-name size-name" This way, it should still be a unique field, but more readable afterwards.
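    For illustration, a sketch of what that would look like with the classic _gaq ecommerce call; every value here is a made-up placeholder, and the second argument is the SKU slot being discussed:

        _gaq.push(['_addItem',
            '1234',                      // transaction ID (must match the _addTrans call)
            'blue-widget-navy-large',    // SKU slot filled with "product-name colour-name size-name"
            'Blue Widget',               // product name
            'Widgets',                   // category
            '19.99',                     // unit price
            '2'                          // quantity
        ]);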


  • How to check SERP position correctly?

    - by Cengiz Frostclaw
    I wonder how you check your website's SERP position for a certain query. I cannot simply go to Google and search, because it knows I'm looking for my own site and shows it in the first position, while from another browser it might not even be on the first page. So how do you check for the "average user"? I use the Tor browser for that, since it gives me a completely different IP. Do you think it is safe? I mean, does it give useful information?


  • Can a MediaWiki table be dynamically created using other MediaWiki pages?

    - by Ashimema
    OK, so I've created a page on my wiki which contains just a single table listing various details about servers and customers. You can follow links for each customer name in the table to find additional details about said customer. What I want to know is: can the information on a customer's page (page B) be used to dynamically update the table (page A)? Is this something that the Semantic MediaWiki extension can accomplish? Running MediaWiki 1.16.2.
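    If Semantic MediaWiki is the route taken, the usual pattern is to annotate the facts on each customer page as properties (e.g. [[Has server::srv01]]) and then build the table on page A with an inline query. A rough sketch; the category and property names here are made-up placeholders:

        {{#ask: [[Category:Customers]]
         | ?Has server
         | ?Has contact
         | format=table
        }}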


  • How to get tens of millions of pages indexed by Google bot?

    - by Chris Adragna
    We are currently developing a site that has 8 million unique pages, will grow to about 20 million right away, and eventually to about 50 million or more. Before you criticize... yes, it provides unique, useful content. We continually process raw data from public records and, by doing some data scrubbing, entity rollups, and relationship mapping, we've been able to generate quality content, developing a site that's quite useful and also unique, in part due to the breadth of the data. Its PR is 0 (new domain, no links), and we're getting spidered at a rate of about 500 pages per day, putting us at about 30,000 pages indexed thus far. At this rate, it would take over 400 years to index all of our data. I have two questions:

    1. Is the rate of indexing directly correlated to PR? By that I mean, is it correlated enough that purchasing an old domain with good PR would get us to a workable indexing rate (in the neighborhood of 100,000 pages per day)?
    2. Are there any SEO consultants who specialize in aiding the indexing process itself?

    We're otherwise doing very well with SEO, on-page especially; besides, the competition for our "long-tail" keyword phrases is pretty low, so our success hinges mostly on the number of pages indexed. Our main competitor has achieved approx. 20MM pages indexed in just over one year's time, along with an Alexa ranking of around 2000. Noteworthy qualities we have in place:

    - Page download speed is pretty good (250-500 ms).
    - No errors (no 404 or 500 errors when getting spidered).
    - We use Google Webmaster Tools and log in daily.
    - Friendly URLs are in place.
    - I'm afraid to submit sitemaps (see the sketch below). Some SEO community postings suggest a new site with millions of pages and no PR is suspicious. There is also a Google video of Matt Cutts speaking of a staged on-boarding of large sites, in order to avoid increased scrutiny (at approx. 2:30 in the video).
    - Clickable site links reach all pages, no more than four levels deep and typically with no more than 250(-ish) internal links on a page.
    - Anchor text for internal links is logical and adds relevance hierarchically to the data on the detail pages.
    - We had previously set the crawl rate to the highest in Webmaster Tools (only about a page every two seconds, max); I recently turned it back to "let Google decide", which is what is advised.
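    For reference, if sitemaps are eventually submitted, the usual pattern at this scale is a sitemap index file pointing at many gzipped sitemaps (the sitemaps.org protocol allows up to 50,000 URLs per sitemap and 50,000 sitemaps per index). A sketch with placeholder URLs and dates:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
            <sitemap>
                <loc>http://www.example.com/sitemaps/sitemap-000001.xml.gz</loc>
                <lastmod>2011-06-01</lastmod>
            </sitemap>
            <!-- ...one <sitemap> entry per 50,000-URL file... -->
        </sitemapindex>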


  • Spam link text when searching for company directors' names

    - by Alex
    It was brought to my attention that if you search for the name of one of our directors (with the intent of finding their profile page on our site), they come up as the first link in most search engines, as you would expect, but the link text is just pure spam. The search strings I have tested on Google, Bing, Ask, and Yahoo have all returned similar results. Here is a list of the search strings:

        Paolo rossi futex
        Mark rossi futex
        Marco rossi futex
        Dan Goldberg futex

    Any idea what might be causing this? I have searched through as much of the site's code as I can and can't find anything wrong with it.


  • Email links open in a new window [closed]

    - by Dan
    I'm asking this as an opinion question. How does everyone treat email links opening in a new window when the visitor's default email client is web based? This way? <a href="mailto:[email protected]">email me</a> It will open fine for app-based email clients but open in the same window for web-based clients. Or this way? <a href="mailto:[email protected]" target="_blank">email me</a> It will open in a new tab for web-based email clients but leave a blank tab behind for app-based clients. I can't really seem to find the best of both worlds. What does everyone else do?


  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name and domain). All schools in the state are issued a .qld.edu.au domain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. Now, what's interesting is that these domains are crossing each other in sitelinks for searches such as "calvary christian college townsville" (if you check the sitelinks, 2/6 are to the other domain). I put a demotion in for this ages ago (we control calvary.qld.edu.au), but we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime this week. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we would need to tell them what to change, and at this point I'm out of ideas.


  • Drupal + LDAP + automatic authentication

    - by WernerCD
    I've got Drupal 6 set up within a XAMPP test area. I have LDAP authentication, groups and data working against Active Directory. What I want, since I'm on an intranet where users are already logged in with their Windows user names, is automatic authentication, without the need to log in via the website. If it's more difficult than it's worth, it's no major hassle, but I'd like to know if it's possible that when my users visit our intranet they auto-magically authenticate with their already logged-in Windows session. Ultimately, I may switch to IIS, but I do like having a portable, easy to backup/copy/test setup, so for now I'm going to see if I can get this working in XAMPP.


  • Google Analytics reverse transaction not working with Sales Performance

    - by prasad maganti
    We have a Google Analytics account and are trying to do a reverse transaction. We created a transaction on one date and a reverse transaction on another date. After the reverse transaction, the original disappears from the transactions list. Is that the expected behavior or abnormal behavior? But if we check the same order data in the Sales Performance report, the reverse transaction is not reflected on the date we created the original transaction; it is reflected on the date we made the reversal. It should not work like this: the reverse transaction should affect the date of the original transaction.
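    For context, the usual reversal technique with the classic _gaq API is to re-send the original transaction ID with negated amounts. A rough sketch where all IDs and figures are placeholders (note also that a hit is generally attributed to the date it is received, which may be what the Sales Performance report is showing):

        _gaq.push(['_addTrans',
            '1234',        // same transaction ID as the original order
            'Acme Store',  // affiliation
            '-89.97',      // negative total
            '-6.30',       // negative tax
            '-5.00'        // negative shipping
        ]);
        _gaq.push(['_addItem',
            '1234',        // same transaction ID again
            'SKU-001',     // SKU of the item being reversed
            'Blue Widget', // product name
            'Widgets',     // category
            '29.99',       // unit price stays positive
            '-3'           // negative quantity backs the items out
        ]);
        _gaq.push(['_trackTrans']);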


  • Default WordPress site on IIS

    - by Mike
    We have multiple WordPress installations on our IIS 7 (Windows Server 2008) server, as follows:

        http://www.example.com/site_one
        http://www.example.com/site_two
        http://www.example.com/site_three

    These all work properly. However, we would like to configure it so that when users visit the root domain or any page underneath it, i.e.:

        http://www.example.com/
        http://www.example.com/page1
        http://www.example.com/page2

    they would actually see the corresponding pages for site_two:

        http://www.example.com/site_two/
        http://www.example.com/site_two/page1
        http://www.example.com/site_two/page2

    How could we achieve this?
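    One possible direction, assuming the IIS URL Rewrite module is available, is a rewrite (not redirect) rule in the root web.config that maps everything outside the three install folders into site_two. An untested sketch; WordPress's own permalink rules and its site/home URL settings would still have to agree with it:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="Serve site_two from the root" stopProcessing="true">
                  <match url=".*" />
                  <conditions>
                    <!-- Leave requests that already target one of the installs alone -->
                    <add input="{URL}" pattern="^/(site_one|site_two|site_three)/" negate="true" />
                  </conditions>
                  <action type="Rewrite" url="site_two/{R:0}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>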


  • Tor and Google Analytics - how to track?

    - by Jeremy French
    I make a lot of use of Google Analytics. Google has reasonable tracking for the location of users, so I can tell where visitors come from; I know it is not 100% accurate, but it gives an idea. In the wake of PRISM it is possible that more people will make use of networks such as Tor for anonymous browsing. I have no problem with this, people can wear tin foil hats while browsing my site for all I care, but it will lead to more erroneous stats. Is there any way to flag traffic as coming from Tor, so I can filter location reports not to include it, and get an idea of the percentage of traffic which does use it? Has anyone actually tried this?
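    One approach that comes to mind (untested): compare the visitor's IP against a published Tor exit-node list on the server, then record the result as a visitor-scoped custom variable so location reports can be segmented or filtered on it. A rough sketch with the classic _gaq API, where isTorExit is a hypothetical flag rendered into the page by server-side code:

        // isTorExit is assumed to be emitted by the server after checking the
        // client IP against a Tor exit-node list (that lookup is not shown here).
        var isTorExit = false;

        _gaq.push(['_setAccount', 'UA-XXXXX-Y']);
        _gaq.push(['_setCustomVar', 1, 'Tor', isTorExit ? 'yes' : 'no', 1]); // slot 1, visitor scope
        _gaq.push(['_trackPageview']);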


  • PHP Login/Register system [migrated]

    - by Marian
    I found this good tutorial on creating a login/register system using PHP and MySQL. The forum thread is around 5 years old (edited last year) but it can still be useful: Beginner Simple Register-Login system. There seems to be an issue with both the login and register pages.

        <?php
        function register_form(){
            $date = date('D, M, Y');
            echo "<form action='?act=register' method='post'>"
                ."Username: <input type='text' name='username' size='30'><br>"
                ."Password: <input type='password' name='password' size='30'><br>"
                ."Confirm your password: <input type='password' name='password_conf' size='30'><br>"
                ."Email: <input type='text' name='email' size='30'><br>"
                ."<input type='hidden' name='date' value='$date'>"
                ."<input type='submit' value='Register'>"
                ."</form>";
        }

        function register(){
            $connect = mysql_connect("host", "username", "password");
            if(!$connect){
                die(mysql_error());
            }
            $select_db = mysql_select_db("database", $connect);
            if(!$select_db){
                die(mysql_error());
            }
            $username = $_REQUEST['username'];
            $password = $_REQUEST['password'];
            $pass_conf = $_REQUEST['password_conf'];
            $email = $_REQUEST['email'];
            $date = $_REQUEST['date'];
            if(empty($username)){
                die("Please enter your username!<br>");
            }
            if(empty($password)){
                die("Please enter your password!<br>");
            }
            if(empty($pass_conf)){
                die("Please confirm your password!<br>");
            }
            if(empty($email)){
                die("Please enter your email!");
            }
            $user_check = mysql_query("SELECT username FROM users WHERE username='$username'");
            $do_user_check = mysql_num_rows($user_check);
            $email_check = mysql_query("SELECT email FROM users WHERE email='$email'");
            $do_email_check = mysql_num_rows($email_check);
            if($do_user_check > 0){
                die("Username is already in use!<br>");
            }
            if($do_email_check > 0){
                die("Email is already in use!");
            }
            if($password != $pass_conf){
                die("Passwords don't match!");
            }
            $insert = mysql_query("INSERT INTO users (username, password, email) VALUES ('$username', '$password', '$email')");
            if(!$insert){
                die("There's little problem: ".mysql_error());
            }
            echo $username.", you are now registered. Thank you!<br><a href=login.php>Login</a> | <a href=index.php>Index</a>";
        }

        switch($act){
            default;
                register_form();
                break;
            case "register";
                register();
                break;
        }
        ?>

    Once the register button is pressed, the page does nothing: the fields are cleared, no data is added to the database, and no error is given. I thought that the problem might be the switch($act){ part, so I removed it and changed the page to use a require('connect.php'); where connect.php is:

        <?php
        mysql_connect("localhost","host","password");
        mysql_select_db("database");
        ?>

    I also removed the function register_form(){ and its echo, turning it into plain HTML:

        <form action='register' method='post'>
        Username: <input type='text' name='username' size='30'><br>
        Password: <input type='password' name='password' size='30'><br>
        Confirm your password: <input type='password' name='password_conf' size='30'><br>
        Email: <input type='text' name='email' size='30'><br>
        <input type='hidden' name='date' value='$date'>
        <input type='submit' name="register" value='Register'>
        </form>

    And instead of having a function register(){ I replaced it with an if($register){ so that when the Register button is pressed it runs the PHP code, but this edit doesn't seem to work either. So what can the problem be? If needed I can re-add this code on my domain. The login page has the same issue: nothing happens when the button is pressed besides emptying the fields.
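    For what it's worth, one thing that stands out in the snippet above is that $act is never assigned anywhere; the tutorial appears to assume the old register_globals behaviour. A minimal sketch of how the dispatch could read instead; this is a guess at the intent, not code from the tutorial:

        <?php
        // Pull the action from the query string explicitly (register_globals is
        // off in any modern PHP), defaulting to the registration form.
        $act = isset($_GET['act']) ? $_GET['act'] : '';

        switch ($act) {
            case 'register':
                register();
                break;
            default:
                register_form();
                break;
        }
        ?>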


  • #id - URLs with an id first display the full page, then move to the #id

    - by guisasso
    I've noticed this in the new version of Chrome, and in IE 9 and 10. Some URLs in a photo gallery have a #id fragment, as they are supposed to display a full view of a picture. Basically, a div in a lower position on the page has the #id that I link to via a.com/1.html#id. This has never been an issue until lately, when I noticed a bit of a lag. The issue: the website loads normally, then the view moves to the #id as expected, but sometimes with noticeable lag, perhaps because of the high resolution of the picture. Any way to avoid this, or to make the page move to the correct #id even before it has fully loaded?
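    Two things that might help, offered as untested suggestions: give the gallery images explicit width/height attributes so the target element does not shift while images load, and jump to the fragment as soon as the DOM is parsed rather than waiting for the images. A small sketch of the latter:

        // Scroll to the element named in the hash once the DOM is ready,
        // instead of waiting for the large images to finish loading.
        document.addEventListener('DOMContentLoaded', function () {
            if (window.location.hash) {
                var target = document.getElementById(window.location.hash.slice(1));
                if (target) {
                    target.scrollIntoView();
                }
            }
        });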


  • Cost effective way to provide static media content

    - by james
    I'd like to be able to deliver around 50MB of static content, either as about 30 individual files of up to 10MB each or grouped into 3 compressed files, around 5k to 20k times a day. Ideally I'd like to put some sort of very basic security around providing the data, to ensure that a request is from the expected source, but if tossing the security for a big reduction in price is possible then it's an option. Does anyone have any suggestions other than what I've found? Google App Engine is $0.12/GB and I believe has a file size limit of 10MB, so I'd have to break the data up a bit; a rough calculation suggests this would cost me about $30 to $120 a day. Or I've seen what seems to be plain public static content delivery with no logic capabilities, like Usenet.nl, at what I think works out to about $0.025/GB, which would cost me about $6 to $25 a day. Any idea if I'm going about these calculations right, and whether there might be a better option for just static content at decently high volume? Again, some basic security would be great, but if cost is greatly reduced without it then I'm up for that.
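    Sanity-checking the arithmetic in the question, using 1000 MB per GB:

        50 MB per delivery x 5,000 to 20,000 deliveries/day  =  250 to 1,000 GB/day

        at $0.12/GB  : 250 to 1,000 GB x $0.12   =  $30 to $120 per day
        at $0.025/GB : 250 to 1,000 GB x $0.025  =  $6.25 to $25 per day

    which matches the figures quoted above.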


  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure HTML website which I have hosted with "hostmonster". Hostmonster had very good reviews on some comparison website, and in general so far they appear to be perfectly good in every way... at least I thought so until just now. I have been making lots of edits to my site on an almost daily basis. My site now appears on the first page (7th on the list) for my most important keyphrase when doing a Google search, but I did notice a problem with the snippet chosen by Google. I asked a question on this site about snippets and got some great answers; I then made some modifications to my meta data, and within 48 hours the Google snippet for my search was perfect. The odd thing, though, was that the "cached" version Google had still appeared to be very old, like three weeks previous. This seemed very odd: how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember even making one, but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt and it contained the disturbing text below:

        sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz

    Intuitively this looks like some kind of junk or spam trick, and I had indeed been getting some spam from "attracta". So my questions are: 1. Should I simply delete this robots.txt? 2. Was the file there all along, placed there because of some commercial tie-in between Attracta and Hostmonster? 3. Does the Attracta robots file explain the lack of re-caching?


  • Multiple sites redirected to one main site

    - by mattgcon
    I have a client who insists on having multiple website domains all redirected to one main website domain. It is getting out of hand, and his server has become convoluted and riddled with garbage because of it, not to mention confusing at times. Each of these domains that he is setting up has no content; they simply redirect the user to the main website domain. Is this practice of having multiple domains pointing to one main website common? And does anyone know where I can get information to give to this client to let him know this is a bad practice, if it is a bad practice?
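    For what it's worth, parked or vanity domains that do nothing but redirect are fairly common, and the usual low-maintenance way to keep them from cluttering the server is a bare virtual host that issues a permanent redirect and nothing else. A hypothetical Apache sketch (domain names are placeholders):

        <VirtualHost *:80>
            ServerName example-vanity.com
            ServerAlias www.example-vanity.com
            # No content here at all: just a 301 to the main site
            Redirect permanent / http://www.example-main.com/
        </VirtualHost>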


  • SEO best practices for a web feature that uses geolocation by IP Address

    - by Nick
    I'm working on a feature that tailors content based on a geolocation lookup by IP address, in order to provide information relevant to the general area the visitor is from. I'm concerned that the content will be interpreted as focused solely on the search engine spider's geographic origin when it is indexed. Are there SEO best practices for geolocation-by-IP-address features? I appreciate any specific tips or words of wisdom.


  • Browser testing - Ideas on how to tackle it efficiently

    - by Rob
    Browser testing, the bane of any web designer's life! Are there any tools and/or ways in which I can efficiently test different browsers on both Mac and PC? I not only want to test different browsers but also different versions of each browser. My current setup is a Mac running VirtualBox with Windows Vista installed. This allows me to test both Mac and PC, but complications arise when trying to test different versions of each browser. Anyone have any ideas?


  • Phishing alert but file never existed

    - by IMB
    I got an alert from Google Webmaster Tools saying the following file was present on my host: example.com/~jhostgop/identity.php. I checked my files and it never existed at all. I've experienced this problem on two different hosts and domains, but the file never existed in my file system. It appears somebody out there is linking to a random domain with /~jhostgop/identity.php tacked onto it. Google may now have indexed those links, so I get these false phishing alerts. Has anyone experienced this? Is it possible to prevent it?


  • Google Webmaster Tools Index Status: Total Indexed = 0

    - by hammad
    I previously changed my domain from www.visualstudiolearn.blogspot.com to www.visualstudiolearn.com. I had around 300 posts under the previous domain name and most of them were showing up on Google. Now that I have changed my domain name, the Index Status shows Total Indexed as 0, and when I go to the advanced tab it says 304 (not selected) and 217 blocked by robots. I'm really depressed about this situation. Could you please help out?


  • Shifting from no-www to www and browsers' password storage

    - by user1444680
    I created a website with a user-registration system and invited my friend to join it. I gave him the link to the no-www version, http://mydomain.com, but now, after reading this and this, I want to shift to www.mydomain.com. But there's a problem: I saw that my browser stores separate passwords for mydomain.com and www.mydomain.com. So in my friend's browser his password must have been stored for the no-www version. That means that after I shift to www, the next time he opens the login page his browser won't auto-fill the username and password fields, and there will also be an extra entry (for no-www) in his browser's database of stored passwords. Can this be avoided? Can I do something that will convey to browsers that www.mydomain.com and mydomain.com are the same website? I already have a CNAME record for www pointing to mydomain.com, but it seems that while search engines treat a CNAME as an alias, browsers consider the two different websites, and I don't know why.
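    As far as making one hostname canonical goes, the DNS CNAME on its own only maps names to addresses; the usual approach is a server-side permanent redirect from the no-www host to the www host. A hypothetical Apache .htaccess sketch (whether browsers then merge previously stored passwords is a separate question):

        # Redirect every request for mydomain.com to www.mydomain.com with a 301
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]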


  • Double vs Single Quotes in Chrome

    - by Rodrigo
    When you want to embed a Google Docs spreadsheet on a site you are given this chunk of code:

        <iframe width='500' height='300' frameborder='0' src='https://docs.google.com/spreadsheet/pub?hl=en_US&hl=en_US&key=0AiV6Vq32hBZIdHZRN3EwWERLZHVUT25ST01LTGxubWc&output=html&widget=true'></iframe>

    This works fine on my site. If you edit the page, we run the new content through some filters to escape things and make sure it is valid HTML. After that process, the embed above gets converted to this:

        <iframe frameborder="0" height="300" src="https://docs.google.com/spreadsheet/pub?hl=en_US&amp;hl=en_US&amp;key=0AiV6Vq32hBZIdHZRN3EwWERLZHVUT25ST01LTGxubWc&amp;output=html&amp;widget=true" width="500"></iframe>

    This works in every browser except Chrome. Chrome thinks I am running JS in the src. I narrowed it down to the combination of double quotes and escaped '&' symbols: if I revert either one back to its original state, the iframe works. I work in Ruby, where ' and " have different behaviors. Is Chrome doing the same thing? Is there a way to turn that off?


  • Amazon Web Services Free Trial: query about GET and PUT requests

    - by abel
    Amazon recently introduced a free tier for its cloud offering. I signed up for AWS, and while signing up for the free tier of S3 I found this:

        As part of AWS Free Usage Tier, you can get started with Amazon S3 for free. Upon sign-up, new AWS customers receive 5 GB of Amazon S3 storage, 20,000 Get Requests, 2,000 Put Requests, 15GB of bandwidth in and 15GB of bandwidth out each month for one year.

    (source: aws.amazon.com, emphasis mine). 20,000 GET requests and 2,000 PUTs mean at most 20,000 page views and 2,000 file uploads per month. Isn't that lower than what App Engine offers, 43,200,000 requests per day? Am I missing something? Please help.


  • At what visitor share do you stop supporting a given browser?

    - by adam
    I'm lead dev for a large website which has a higher than average percentage of IE6 users - about 4.4% of our audience. Our new version is going to make use of progressive enhancement - including transitions and effects as well as rounded corners, gradients, web fonts and other CSS techniques. Obviously there are cross-browser ways to achieve most of these things which require various amounts of work to implement. What I'm currently looking into - and what I'd like your experiences of - is how to decide at what point we draw the line between providing an enhanced experience vs just supporting the functionality. FYI, I believe that this question meets the six guidelines for great subjective questions as defined in the FAQ. I'm after answers detailing why and how, not too short, with constructive comments, experiences, facts and references. Thanks! Adam

