Search Results

Search found 14789 results on 592 pages for 'pro backup'.


  • What is wrong with this HTML5 <address> element? [closed]

    - by binaryorganic
    <div id="header-container"> <address> <ul> <li>lorem ipsum</li> <li>(xxx) xxx-xxxx</li> </ul> </address> </div> And the CSS looks like this: #header-container address {float: right; margin-top: 25px;} When I load the page, it looks fine in Chrome & IE, but in Firefox it's ignoring the styling completely. When I view source in firefox it looks like above, but in Firebug it looks like this: <div id="header-container"> <address> </address> <ul> <li>lorem ipsum</li> <li>(xxx) xxx-xxxx</li> </ul> </div>

    Read the article

  • Is there a way to specify a CSS3 transition to occur only on :hover and when returning from hover, not on every event? [closed]

    - by Steve
    You could define the transition on the :hover state, which makes the browser animate only the change into the hover and not out of it: a:hover { transition... } Using scale as an example, an image would scale up on hover, but snap straight back down without any transition when the cursor leaves it. Or, you can set the transition on the element directly: a { transition... } which by definition means any change that affects the scale of the element will transition: not just developer-set styles, but also the user zooming the page in and out. All the tutorials being spewed onto the internet at the moment point to the latter, but wouldn't one consider this a usability flaw for anyone resizing the page or taking any other action that triggers similar scenarios? Pages with large amounts of transitional hover scaling can go pretty mental if you zoom in and out of them.
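
    A minimal sketch of the two placements contrasted above (the transform and timing values are illustrative):

        /* transition declared only on :hover - animates in, snaps back instantly */
        img:hover { transform: scale(1.2); transition: transform 0.3s; }

        /* transition declared on the element - animates in both directions,
           and on any change that affects the transform */
        img { transition: transform 0.3s; }
        img:hover { transform: scale(1.2); }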

    Read the article

  • Hide from Google while developing

    - by user210757
    I will be building a (WordPress) web site. While I am developing, other team members will be pushing content. I'd like to keep it hidden from Google while it's under development. It will be hosted on GoDaddy. I have thought of not pointing the domain name to it until it's live and using "preview DNS", or buying a static IP during development. Or hosting the dev site in a sub-directory ("/dev/") until it's ready and then moving it up a level. If it's in the dev directory, I'd add .htaccess rules or a robots.txt to prevent crawling. Is any of this a bad idea? Will Google penalize for any of this - for example, search by IP and then associate that with the domain later on? Any better ideas?
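
    If the dev copy lives under /dev/, a minimal sketch of the two blockers mentioned above (paths and credentials are hypothetical; note robots.txt is only read from the site root):

        # robots.txt at the site root - ask crawlers to skip the dev area
        User-agent: *
        Disallow: /dev/

        # /dev/.htaccess - require a login so the content stays private regardless
        AuthType Basic
        AuthName "Development"
        AuthUserFile /home/user/.htpasswd
        Require valid-user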

    Read the article

  • How to make Google get to know my domain name [closed]

    - by Milosh Belter
    Possible Duplicate: I cannot see my website in google I have a strange problem with my website. I have a website, let's say Abcdefg.com. The website has been live for 2 months and Google still doesn't know it. When searching for my domain name 'abcdefg', Google displays results for a similar phrase ('abcdef') but not for mine. How can I make Google get to know my domain name? The website and sitemaps have been submitted via Google Webmaster Tools.

    Read the article

  • What is the best taxonomy from Google's perspective?

    - by ZakGottlieb
    I was wondering what the best way is to structure a new website in Google's eyes. Currently it contains two top-level categories (X & Y), and clicking a term under either one results in the URL www.nameofsite.com/X/[X type term] or www.nameofsite.com/Y/[Y type term]. Technically it is correct to group all "X type terms" under X and "Y type terms" under Y, but we could probably be more granular and break all articles into 5-6 top-level categories by splitting Y into more specific categories. Given that the current URL structure will eventually put thousands of "X type terms" and "Y type terms" under just two top-level categories, would it be more advisable to have several top-level categories, as suggested? Thank you in advance.

    Read the article

  • Using Subdomains for a Newly Regional Company

    - by Taylord22
    The company I work for is expanding its business to new territories. I've got a lot of stabilization to do in the region/state where we're one of the most well-known companies of our kind. Currently, we have 3 distinct product lines which are distinguished by 3 separate URLs. This is hurting the user flow of our site, so we'd like to clean it up before launching our products into the various regions. The business has decided to grow into 5 new states (one state consisting of one county only), none of which will feature all 3 products. Our home-base state is the only one that will have all 3 products this year. My initial thought was to use subdomains to separate the regions; that way we could use a canonical tag to stabilize the root domain (which would feature home-state content, plus support content for all regions) and keep us clear of potential duplicate-content penalties. Our product content will be nearly identical across the regions for the first year. I second-guessed myself by thinking that it was perhaps better to use a "[product].root/region" URL instead. And I'm currently stuck wondering whether it would be better to build out subdomains for products and for regions, using one modifier or the other as a funnel/branding page into the other. For instance, the user lands on "region.root.com" and sees exactly what products we offer in that region - basically a tailored landing page - while the bulk of the product content actually lives under "product.root.com/region/page". My head is spinning. While searching for similar questions I also bumped into a reference to another tag meant to be used in some cases similar to mine. I feel like there are a lot of risks involved in this subdomain strategy, but I also can't help but see the benefits for the user flow.
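
    For reference, a minimal sketch of the canonical-tag idea described above, with hypothetical URLs - the near-duplicate regional copy declares the root-domain page as canonical:

        <!-- on region.root.com/product-page, whose content mirrors the home-state original -->
        <link rel="canonical" href="http://root.com/product-page">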

    Read the article

  • Should I be concerned about hAtom tags on my blog?

    - by Sid
    I am using a theme that automatically adds hatom-entry and hatom-feed classes on my WordPress blog. I read that such tags/classes should be used for syndicated content. Anyway, I then ran a Rich Snippet Tool, which threw a "HAtomfeed" error. So I removed an "hfeed" div tag. Now, should I be concerned? Can this cause any problems? I still have a couple of these classes (listed below), and I just hope they do not affect my site's ranking. For now, these are the tags the Rich Snippet Tool has detected: hatom-feed hatom-entry: entry-title: entry-content: published: author: fn: person-name: url: Appreciate your help! Edit: All the content on this weblog is unique and written by me and others. Thought I'd share that.

    Read the article

  • Google Stats: how to get more info?

    - by Ant's
    I created a blog very recently and I'm tracking my traffic and audience using the Google Stats feature built into Google Blogger. I have a few questions about it: 1) Is the number of visitors shown by Stats rough or accurate? 2) How do I find out whether my visitors are people or search engines? 3) Is Google Stats best for beginners like me, or is there a better tool? Correct me if I am wrong.

    Read the article

  • Quality web hosts not using cPanel [closed]

    - by J4G
    Possible Duplicate: How to find web hosting that meets my requirements? I was an iPower web hosting user before I encountered major problems with their MySQL databases. I recently tried A Small Orange, whose GUI was not compelling, and I quickly learned to loathe cPanel. I looked into using GoDaddy, but reviews of their service have been very negative. I was satisfied with iPower's control panel, so something similar would be appropriate. Can anyone recommend a quality web host that includes the following features?
    * Unlimited bandwidth (or 200 GB or higher)
    * Unlimited storage (or 10 GB or higher)
    * High uptime (preferably 95% or higher)
    * Does not use cPanel or other difficult-to-use control panels
    * Supports multiple MySQL databases
    * Uses a recent version of phpMyAdmin

    Read the article

  • Does any CMS natively support something like AquaBrowser/VuFind?

    - by nus
    What I'm looking for is to set up a CMS website with a tag cloud/search system where, when you click a tag, it becomes a search filter and you get a new tag cloud that only shows tags from the articles that have the primary tag. This should let the user easily include or exclude tags from the search. I would also like to let the user filter on publication date using a slider. Check this for an example of how AquaBrowser does it (just search for something and play with the tag cloud): http://kidscatalog.columbuslibrary.org It's not exactly what I want and I don't want to use Flash, but I like the concept a lot. If something like this does not exist and I have to code it myself, which CMS is most recommended (e.g. easy to extend, already has tag clouds and search filters, ...)?
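
    For what it's worth, the drill-down itself is straightforward to express. A minimal sketch in Python (the data layout is hypothetical): keep only the articles carrying every selected tag, then rebuild the cloud from the tags that remain:

        # articles carry sets of tags; selected_tags is the current filter
        def refine(articles, selected_tags):
            matching = [a for a in articles if selected_tags <= a['tags']]
            cloud = {}
            for a in matching:
                for tag in a['tags'] - selected_tags:
                    cloud[tag] = cloud.get(tag, 0) + 1
            return matching, cloud

        articles = [
            {'title': 'A', 'tags': {'python', 'search'}},
            {'title': 'B', 'tags': {'python', 'cms'}},
        ]
        print(refine(articles, {'python'})[1])   # {'search': 1, 'cms': 1} (order may vary)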

    Read the article

  • Will uploading our .docx files to Scribd and embedding the files on our website affect search engine rankings?

    - by user1439968
    We have prepared notes for university students which are in .docx format, and we want to put them on our website for viewing. We tried one option: uploading the files to Scribd and embedding them on our website for viewing in the Scribd viewer. Will making the documents available in the Scribd viewer on our website affect search engine rankings? Will search engines treat it as duplicate content, since those files are already uploaded on Scribd and we are embedding them on our website? On Scribd we have set the uploaded documents as 'private', though. And if it does affect rankings, can you suggest a suitable way to make .docx files viewable on our website that doesn't affect search engine rankings?

    Read the article

  • Google Analytics API data for goals (funnels) doesn't match - how do they reconcile?

    - by bkgraham
    I have a Google Analytics account with a well-functioning funnel made up of 4 goals. I can query the API and get the data out, but it does not match the funnel report in Analytics. Without getting into specific values, I can give you an example with faked data. Here's how the funnel might look (read as: entered here > total visits > abandoned, then continued (rate)):

        Shopping Cart   100 > 100 > 20    80 (80%)
        Address Page      5 >  85 > 25    60 (71%)
        Payment Page      2 >  62 > 10    52 (84%)
        Checkout          1 >  53             (49.07% funnel conversion rate)

    Okay, so you would expect the API to output data something like this:

        goal1Starts  goal1Completions  goal1Abandons
            100             80               20
        goal2Starts  goal2Completions  goal2Abandons
             85             60               25
        goal3Starts  goal3Completions  goal3Abandons
             62             52               10
        goal4Starts  goal4Completions  goal4Abandons
             53             53                0

    Instead, it's different. Firstly, the abandons are associated with the following goal (so goal1 always has 0 abandons and goal4 always has 0 abandons). Okay, I can work with that. What's confusing is that the numbers are always a little different. The goal1Completions always match the report, as do the goal4Completions, but everything else is off by a small amount. Sometimes it's only 2 visits, other times it's off by 50. For the report above, here's the kind of result I would tend to get:

        goal1Starts  goal1Completions  goal1Abandons
            100            100                0
        goal2Starts  goal2Completions  goal2Abandons
            105             84               21
        goal3Starts  goal3Completions  goal3Abandons
             90             65               25
        goal4Starts  goal4Completions  goal4Abandons
             58             53                5

    Here's what I know:

        Goal(n)Completions + Goal(n)Abandons = Goal(n)Starts
        Goal(n)Starts = Goal(n-1)Completions
        Goal(n)Starts - Goal(n-1)Completions != reported number entering at that level

    That third one is particularly disappointing. So, here's my question: what data do I need to pull from the API in order to recreate the counts in the Funnel report in Google Analytics? I don't need the pages exited to or entered from - just the counts at every level.
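
    For reference, a minimal sketch (Python, legacy v3 Core Reporting API) of pulling the per-goal metrics discussed above. The profile ID is hypothetical, and `service` is assumed to be an already-authorized Analytics service object built with the google-api-python-client:

        response = service.data().ga().get(
            ids='ga:12345678',             # hypothetical profile ID
            start_date='2014-01-01',
            end_date='2014-01-31',
            metrics='ga:goal1Starts,ga:goal1Completions,ga:goal1Abandons,'
                    'ga:goal2Starts,ga:goal2Completions,ga:goal2Abandons'
        ).execute()
        print(response['totalsForAllResults'])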

    Read the article

  • What kind of redirect (301 or 302) for an email links tracker?

    - by MaxiWheat
    We are developing an email-sending application ("à la" Mailchimp). Hyperlinks inserted by our users in the emails they want to send are replaced by a tracking URL on our application (https://ourdomain.com/trackingurl?blablabla) which then redirects the email reader to the original URL our users included in their emails. This allows us to record statistics about link clicks. Until now we used 301s for those redirections, but we noticed that Google began indexing pages on our application which are in fact redirects to other domains. (The title and snippet in the Google results are from the other domain, but the green link is from our application.) We took action by adding those URLs to our robots.txt, but Google seems to take forever (months!) to remove them from its index, and removing them by hand in Webmaster Tools would take a lot of time since there are a lot of them. I would like to know which kind of HTTP redirect (301 or 302) is best suited for this kind of operation. Do you think switching to 302 redirects could improve the situation, since we don't really want Google to index redirected links from our clients' emails?
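
    A minimal sketch (Python/Flask; the route, parameter name and stats logger are hypothetical, not the actual application) of a tracking redirect that also asks robots not to index the tracking URL itself:

        from flask import Flask, redirect, request

        app = Flask(__name__)

        def record_click(url):
            pass  # hypothetical: write the click to the stats store

        @app.route('/trackingurl')
        def track():
            target = request.args['url']        # the original link from the email
            record_click(target)
            resp = redirect(target, code=302)   # 302 shown for illustration only
            resp.headers['X-Robots-Tag'] = 'noindex'
            return resp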

    Read the article

  • Can I use Google AdSense within a banner rotator?

    - by Derfder
    I have 3 rotating banners in one 728x90 position on my website, so every time a page is loaded a different banner is shown. One of these is AdSense. Is this allowed? I mean, is it strictly prohibited or not? The code is basically stored in a DB table by my CMS module in WordPress, so I guess it is OK, but I am asking to be sure. What is your personal experience? Does Google penalize sites with banner rotators?

    Read the article

  • Using INSERT / OUTPUT in a SQL Server Transaction

    Frequently I find myself in situations where I need to insert records into a table in a set-based operation wrapped inside a transaction, where secondarily, and within the same transaction, I spawn off subsequent inserts into related tables and need to pass in key values that were the outcome of the initial INSERT command. Thanks to a Transact-SQL enhancement in SQL Server, this just became much easier and can be done in a single statement... WITHOUT A TRIGGER!
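
    A minimal sketch of the technique using the OUTPUT clause, with hypothetical table names: the keys generated by the first set-based INSERT are captured into a table variable and reused for the related insert, all inside one transaction and without a trigger:

        BEGIN TRANSACTION;

        DECLARE @NewOrders TABLE (OrderID int, CustomerID int);

        INSERT INTO dbo.Orders (CustomerID, OrderDate)
        OUTPUT inserted.OrderID, inserted.CustomerID INTO @NewOrders (OrderID, CustomerID)
        SELECT CustomerID, GETDATE()
        FROM dbo.PendingOrders;

        -- reuse the captured keys for the secondary insert
        INSERT INTO dbo.OrderAudit (OrderID, Note)
        SELECT OrderID, 'created from pending queue'
        FROM @NewOrders;

        COMMIT TRANSACTION;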

    Read the article

  • How can I make these Google Analytics numbers add up? (Frequency & Recency)

    - by Joe
    Here's a screen shot from Google Analytics. It's my last month's traffic, and this is the 'Frequency & Recency' tab. I believe that if I add up all the numbers under 'visits' I get 11,432, which is right, and if I add up all the numbers under 'pageviews' I get 14,785, and that's right as well. But let's take the last line - it appears to say that 71 people visited more than 51 times each, and they viewed a total of 243 pages between them. That doesn't seem to make any sense - did they view 9% of a page each time? So that's clearly wrong - what's the error in my calculation?

    Read the article

  • WordPress theme usage rights with GPLv2

    - by user3177012
    I've been searching for a great-looking WordPress theme to use for a small magazine website idea that I had, and I've just found one that would be ideal, with lots of blank spaces specifically designed for adverts. But when I came to download it there was a notice: License: GPLv2 or later. Type: Non-Commercial. Does this mean that you can use the theme but not use the advert space? What are the limitations?

    Read the article

  • .htaccess redirect question

    - by user473056
    Hi, I'm trying to set up my .htaccess file to take the displayed link and route it to the destination link as below. Displayed link: http://www.my-website.com/click-4559226-10388358?url=https%3A%2F%2Fdestination-website2.com%2FItem.php%3Fid%3D44350396%26sld%3DA6D7A632-821E-4b78-ACD0-147658B77BD6 Destination link: http://www.destination-website.com/click-4559226-10388358?url=https%3A%2F%2Fdestination-website2.com%2FItem.php%3Fid%3D44350396%26sld%3DA6D7A632-821E-4b78-ACD0-147658B77BD6 Effectively, all that changes is the first URL (http://www.my-website.com); everything after that is the same. Is this possible, and could someone briefly explain how I would go about it? * Just to be clear, I don't want to redirect everything from my-website.com. Just links that start with http://www.my-website.com/click-4559226-10388358
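
    A minimal sketch of one way this might be done in .htaccess with mod_alias - since only the host changes, a prefix Redirect passes the remainder of the path and the query string through unchanged (use 301 only if the mapping is permanent):

        Redirect 302 /click-4559226-10388358 http://www.destination-website.com/click-4559226-10388358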

    Read the article

  • SimpleViewer + Lightbox

    - by singles
    Is it possible to integrate any kind of Lightbox with SimpleViewer? But I don't want to display SimpleViewer in a Lightbox; I want the Lightbox to show when I click on one of the images in SimpleViewer. Has anyone tried that with success? EDIT: I have a SimpleViewer page now. I just want to bind a handler to clicking an image (as normally in HTML-based pages), fetch the big image URL and show that image (not SimpleViewer!) in Lightbox/Thickbox/Fancybox etc.

    Read the article

  • Which status to use for a temporarily inactive page

    - by aji
    I was wondering if someone could help me with how to manage a temporarily inactive page in regards to SEO and search engines. The case: I manage a big ecommerce site, and sometimes I need to take down a page (or pages) - it could be for days, weeks, or months, depending on our vendor. If my visitors land on a page that is temporarily inactive, I can give them a message that the vendor they are looking for is not available at this time and that they can check back later OR check another vendor with similar products - but how do I send that message to search engine robots? If I use a 301 status and forward the URL to another page with similar products, the chance of the current URL being deindexed is huge, while I still want to use that URL in the future if my vendor wants to re-join. Any advice will be highly appreciated.

    Read the article

  • Facebook likes reset after moving to HTTPS

    - by aarondicks
    I've got a question regarding the Facebook Like button. We worked on a piece recently that embeds a number of social share buttons (please see the source code below). When the piece was released it was on HTTP, and it received over 2k likes (the URL 'slug' hasn't changed at all). The site was recently migrated to permanently-on HTTPS, and the like data has been reset: we've been left with 50 new, recent likes. As you can see in the source code, the URL is set explicitly to like the HTTP version, which I believe to be correct. Can anyone help me work out what's happened here? Here's the HTML for the like button: <div class="fb-like" data-href="http://www.harveywatersofteners.co.uk/history-interior-design" data-layout="box_count" data-action="like" data-show-faces="false" data-share="false"></div>

    Read the article

  • When does Google give up recrawling a 301 that led to a 404?

    - by Easy Life
    I've transferred a domain and made a mistake in the redirects (the URL structure is identical). Even though they pointed to the new domain, the error caused a 404 when crawled by the Google bot. Ten days later I saw and corrected my redirect mistake, and now the site should (hopefully) redirect to the proper pages. Q1: The URLs of the 404 pages in Webmaster Tools all bear the mistake and will never be available on the new site. I marked them as fixed in the tools. Do I need to do something more, like 301-rewrite them with a condition that fixes the error? Q2: Does the Google bot attempt to recrawl 301 pages that pointed to a 404?

    Read the article

  • Pages still show up in Google search even after being disallowed in robots.txt [duplicate]

    - by Jota Onasys
    This question already has an answer here: With Robots.txt disallow all, why was my site still getting traffic? (5 answers) Why is it that some pages still show up in Google search even though they are disallowed in robots.txt? Is the best solution here to remove the Disallow from robots.txt and just add noindex, nofollow meta tags to the pages you want blocked? Or should I submit a request to Google directly to remove those pages?
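
    For reference, a minimal sketch of the two mechanisms the question weighs against each other (the path is hypothetical). Note that a noindex meta tag can only take effect on pages crawlers are allowed to fetch:

        # robots.txt - blocks crawling, but a URL linked from elsewhere
        # can still appear in results without a snippet
        User-agent: *
        Disallow: /private-page/

        <!-- per-page alternative: allow crawling, forbid indexing -->
        <meta name="robots" content="noindex, nofollow">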

    Read the article

  • Transferring users and search engines to a new domain

    - by eftpotrm
    I've been asked to take over the maintenance of an existing site that's being reworked. At present it serves localised content for several languages, but via a fairly unhelpful mechanism which means search engines effectively only have it indexed in English, and any deep links will de facto appear in English as well. So, new localised sites are being built under separate domains - not just for this; there are other benefits. What we're then looking to do is redirect users correctly to the new sites, where appropriate. For humans this isn't a problem: we can send them through a gateway page on their first visit, grab their language preference and put it in a cookie, then redirect them to the new localised content as soon as it's available. For search engines, this isn't so good... In principle I'm happy to simply bypass the gateway page and redirect known spiders to the new site, but this means we're serving radically different content (a different URL, even!) to human and robot users. Won't this therefore be regarded as cloaking and cause us grief? Does anyone know a better way to handle this?
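
    A minimal sketch (Python/Flask; the domains and cookie name are hypothetical) of the human-visitor flow described above - gateway on first visit, cookie-driven redirect thereafter:

        from flask import Flask, redirect, request, make_response

        app = Flask(__name__)

        SITES = {'en': 'https://example.com', 'fr': 'https://example.fr'}  # hypothetical

        @app.route('/')
        def gateway():
            lang = request.cookies.get('lang')
            if lang in SITES:
                # returning visitor: go straight to the localised site
                return redirect(SITES[lang], code=302)
            return 'gateway page with a language picker'

        @app.route('/choose/<lang>')
        def choose(lang):
            # remember the choice, then redirect to the localised content
            resp = make_response(redirect(SITES.get(lang, SITES['en']), code=302))
            resp.set_cookie('lang', lang)
            return resp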

    Read the article

  • .htaccess and browser caching

    - by Tim
    I ran across these suggested .htaccess edits. Is this a good practice? Is this something I should implement on my WordPress site?

        <IfModule mod_expires.c>
          ExpiresActive On
          ExpiresByType image/jpg "access plus 1 year"
          ExpiresByType image/jpeg "access plus 1 year"
          ExpiresByType image/gif "access plus 1 year"
          ExpiresByType image/png "access plus 1 year"
          ExpiresByType text/css "access plus 1 month"
          ExpiresByType application/pdf "access plus 1 month"
          ExpiresByType text/x-javascript "access plus 1 month"
          ExpiresByType application/x-shockwave-flash "access plus 1 month"
          ExpiresByType image/x-icon "access plus 1 year"
          ExpiresDefault "access plus 2 days"
        </IfModule>

    Read the article
