Search Results

  • Google Webmaster Central tells me that robots.txt is blocking access to the sitemap

    - by Gaia
    This is my robots.txt:

        User-agent: *
        Disallow: /wp-admin/
        Disallow: /wp-includes/
        Sitemap: http://www.mydomain.org/sitemap.xml.gz

    But Google Webmaster Central tells me that robots.txt is blocking access to the sitemap: "We encountered an error while trying to access your Sitemap. Please ensure your Sitemap follows our guidelines and can be accessed at the location you provided and then resubmit: URL restricted by robots.txt." I have read that Google Webmaster Central caches robots.txt, but the file was updated more than 10 hours ago.
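
    One quick way to check what a crawler is allowed to fetch is to test the sitemap URL against the live robots.txt. A minimal sketch using Python's urllib.robotparser (the domain is the placeholder from the question):

        from urllib.robotparser import RobotFileParser

        # Fetch and parse the live robots.txt, then ask whether a given crawler
        # may fetch the sitemap URL. True means robots.txt is not the blocker.
        parser = RobotFileParser()
        parser.set_url("http://www.mydomain.org/robots.txt")
        parser.read()

        print(parser.can_fetch("Googlebot", "http://www.mydomain.org/sitemap.xml.gz"))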

  • Website not appearing in search engine results because of a term

    - by curiosity
    We have a site named Vialogues (a video + discussion web-based application), at https://vialogues.com. It has been around on the internet for some time and we have also submitted a sitemap.xml to the search engines. However, when we search on Google, Bing, or Yahoo using the keyword Vialogues, we are given results for the keyword dialogues and this message: “showing results for dialogues, search instead for vialogues”. Is it possible to have the site listed without the search engine suggesting “showing results for dialogues, search instead for vialogues”?

  • Google authorship verification issue

    - by Fraser
    I'm trying to get my blog content author-verified so my face gets into the Google search results. I managed to achieve this a few weeks back: when testing my content in the Google authorship testing tool it reported that I had been verified, and I could see my mug in the results. All I had to do was wait a couple of weeks before I started popping up in the search results (I think?). However, I seem to have thrown a spanner in the works. I set up Google Apps for my domain and merged my old Google+ profile into my Google Apps account. This seemed to reset my Google+ profile (no biggie, since it was a new profile and only had 1 connection). I re-set up my G+ account and tied it all in to my blog and its content. I am now seeing some very strange behaviour. If you take a look at one of my blog posts through the snippet testing tool: http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fblog.fraser-hart.co.uk%2Fjquery-fullscreen-background-slideshow%2F&html= you will see that it is not recognising me as an author. However, when you enter my profile URL (https://plus.google.com/108765138229426105004) into the "Authorship verification by email" input, you will see that it does in fact recognise it as verified. Now, if you try to verify the same page again, it reverts back to unverified. I thought I might have to just wait it out, but this has been over a week now, and previously (before I merged my profile) it happened instantaneously. Has anyone experienced this bizarre behaviour before? What is happening here? More importantly, is there anything I can do to resolve it? (Apologies for the long and boring question.) Cheers!
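
    For reference, the by-line markup that Google's authorship system looks for is a link from the post to the Google+ profile, with the profile's "Contributor to" section linking back to the blog. A minimal sketch using the profile URL from the question (the anchor text is illustrative):

        <a href="https://plus.google.com/108765138229426105004" rel="author">Fraser Hart</a>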

  • Are there specific legal issues for web developers working on sex dating sites?

    - by YumYumYum
    Say I have created many ordinary websites which are not related to any dating/sexual content. Are the rules and regulations for a developer the same when making a sex-related dating site? I'm talking about a site where people meet together and get to know each other, with the intent of having a sexual relationship (you know what I mean), also featuring webcam sex, but not explicitly a porno site. Do such sites have any special legal issues for developers compared with non-sexual/dating sites?

  • Manage spam and catch-alls on Google Apps?

    - by acidzombie24
    I use Google Apps as the email system for my website. I have a catch-all that forwards mail to some_account, which in turn forwards mail to my personal account, because it is rare to receive mail on my sites. The problem is that emails caught by the catch-all ALWAYS go to junk. Junk emails are never forwarded, so I don't receive them in my main Gmail account, and thus I don't receive emails sent to the wrong [email protected]. So I wrote a filter on my catch_all_user to never send to spam, which worked: I now get those emails. But in my main account those emails don't show up as spam/junk. How do I keep the forwarding but still have them marked as spam, so they end up in their own junk folder instead of mixed in with my real mail?

  • No date/time shown before my page in Google search results

    - by Ruut
    I know that by changing the meta description of my webpage, I can control the text shown by Google in the search results. However, I do not know how I can control the text shown just before my page in the search results, for example the date when the page was last updated. Which meta tag should I use to accomplish this? UPDATE: My webpage is automatically updated on a roughly weekly basis, at irregular intervals, by a cronjob which makes changes to the MySQL database that holds the content of my webpages. So the question is what (meta) info to add to my page.
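
    Google generally infers this date from a visible date on the page or from structured data rather than from a dedicated meta tag. A hedged sketch of the structured-data route, using schema.org Article microdata (property names from schema.org; the date is an example, and pickup by Google is not guaranteed):

        <article itemscope itemtype="http://schema.org/Article">
          <time itemprop="dateModified" datetime="2012-06-04">Last updated: June 4, 2012</time>
          <!-- page content -->
        </article>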

  • Nginx routing script for NodeJS and Wordpress

    - by Nilay Parikh
    We are moving our blogs and site from WordPress to NodeJS and are ready to move into production. However, I'm not able to figure out how to implement routing from the front server (Nginx) to NodeJS (the preferred web instance), falling back (via reverse proxy) to WordPress to serve the page whenever the data has not yet been synced to the NodeJS site (in which case NodeJS will throw a 404), during the transition period.

    Q1. Is this approach good for the scenario, or can anyone suggest a better one?

    Q2. Should NodeJS act as the reverse proxy itself (using bouncy: https://github.com/substack/bouncy or a similar package) in the event of a fallback, or should we stick with Nginx doing so using the FastCGI approach? Both NodeJS and WordPress are on a single server.

    First scenario:

                         / if the resource is available, serve it directly
        User -> Nginx -> NodeJS (8080)
                         \ if not, reverse-query WordPress and serve its content

    Second scenario:

                         / if the resource is available, serve it directly
        User -> Nginx -> NodeJS (8080)
                         \ if not, NodeJS returns 404 to Nginx and an Nginx rule falls back to WordPress (FastCGI PHP)

    Later we plan to phase out WordPress and PHP from the server environment completely. I'd like to see any examples of Nginx or Varnish scripts and/or NodeJS scripts that you have for me to refer to. Thanks.
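
    A minimal Nginx sketch of the second scenario, assuming NodeJS listens on 127.0.0.1:8080 and WordPress runs under PHP-FPM on a local socket (server name, paths, and socket are placeholders):

        upstream nodejs_app {
            server 127.0.0.1:8080;
        }

        server {
            listen 80;
            server_name example.com;

            location / {
                proxy_pass http://nodejs_app;
                proxy_set_header Host $host;
                proxy_intercept_errors on;      # let Nginx act on the 404 from NodeJS
                error_page 404 = @wordpress;    # fall back to WordPress
            }

            location @wordpress {
                root /var/www/wordpress;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }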

  • Odd search results

    - by Alex
    It was brought to my attention that if you search for the name of one of our directors (with the intent of finding their profile page on our site), they come up as the first link in most search engines, as you would expect, but the link text is just pure spam. The search strings I have tested on Google, Bing, Ask, and Yahoo have all returned similar results. Here is a list of the search strings:

        Paolo rossi futex
        Mark rossi futex
        Marco rossi futex
        Dan Goldberg futex

    Any idea what might be causing this? I have searched through as much of the site's code as I can and can't find anything wrong with it.

  • Are there any good reasons to intentionally serve a new web site in Quirks mode?

    - by wsanville
    I was a little surprised that Amazon's site doesn't specify a doctype, and is rendered in quirks mode. What could possibly be the reason for this? I understand what quirks mode is and why doctypes were introduced, but I can't understand why this would be intentionally left off. I guess it might simplify markup if they're trying to support ancient browsers, but isn't that like shooting yourself in the foot when it comes to modern browsers, especially when their site is so JavaScript-rich? Does this level the playing field when it comes to supporting really old browsers? Is there something else I'm missing?

  • How can I allow robots access to my sitemap, but prevent casual users from accessing it?

    - by morpheous
    I am storing my sitemaps in my web folder. I want web crawlers (Googlebot, etc.) to be able to access the file, but I don't necessarily want all and sundry to have access to it. For example, this site (superuser.com) has a site index, as specified by its robots.txt file (http://superuser.com/robots.txt). However, when you type http://superuser.com/sitemap.xml, you are directed to a 404 page. How can I implement the same thing on my website? I am running a LAMP website, and I am also using a sitemap index file (so I have multiple sitemaps for the site). I would like to use the same mechanism to make them unavailable via a browser, as described above.
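
    One common (if imperfect) approach on a LAMP stack is to return a 404 for the sitemap files unless the request's User-Agent looks like a known crawler. A hedged .htaccess sketch (the bot names are illustrative, User-Agent checks are trivially spoofed, and older Apache versions may need [F] for a 403 instead of R=404):

        <IfModule mod_rewrite.c>
            RewriteEngine On
            # Hide sitemap files from clients that do not claim to be a known crawler
            RewriteCond %{HTTP_USER_AGENT} !(Googlebot|bingbot|Slurp) [NC]
            RewriteRule ^sitemap.*\.xml(\.gz)?$ - [R=404,L]
        </IfModule>

    Submitting the sitemap index directly in Webmaster Tools also works without listing it in robots.txt at all.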

  • Are copyright notices really required?

    - by Alasdair
    Ever since I made my first web page 13 years ago I have followed the pattern of showing a copyright notice in the footer of each page. Over the years the format of this notice has changed in the following way:

        Copyright © <NAME> yyyy. All rights are reserved.
        Copyright © <NAME> yyyy
        © yyyy <NAME>
        © <NAME>

    This has generally mirrored the format used by Google. However, I recently noticed that they no longer display a copyright notice on their home page, nor have one in their source code/meta tags. I see they still display it on most (if not all) other pages. I understand that Google are very keen to keep the word count down on their homepage, which could be the reason for this sacrifice, but my question is more general and relates to all websites. Since I've always just done it out of habit, I'm hoping someone can explain if/when a copyright notice is actually required to protect your content and rights. Also, when it is required, is there a format the notice must adhere to in order to be valid?

  • Sitemaps showing twice in Webmaster Tools

    - by Andrew Lott
    Within my Webmaster Tools account I'm able to view details about sitemaps that are uploaded for the websites I manage. For some reason each sitemap is listed twice, with the exact same details, under both "By me" and "All". Even sites that don't have a sitemap yet tell me I have 0 sitemaps, twice... I originally thought this might apply to sites that have multiple "Users & Site Owners" in Webmaster Tools, but it even happens for sites that only I manage. I've checked other user accounts to compare and this doesn't happen; they just get one set of tabs for By Me/All, not two. What could be causing this?

  • Installing an ASP.NET application on a DNN server

    - by Cody Henrichsen
    I created a registration database/web application in C# for some workshops. The organization requesting it is hosted on a DotNetNuke server. What changes do I need to make to the web.config so it can run under the site? Currently, when I try to go to a page, I get this error:

        Server Error in '/' Application.
        Configuration Error
        Description: An error occurred during the processing of a configuration file required to service this request. Please review the specific error details below and modify your configuration file appropriately.
        Parser Error Message: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS.

  • Sitelinks under Google Webmaster Tools

    - by user18899
    When I use Google Webmaster Tools (www.google.com/webmasters/tools) to check one of my websites, I find that there are 37 links under sitelinks which do not belong to my website at all. The sitelinks should be the inner pages of my website! I have not used Google Webmaster Tools for half a year, and I believe these links are the result of hacking or hacking attempts. Please tell me how to delete them and how to prevent this type of hacking.

  • Why is my Google listing not showing up in maps.google.com searches?

    - by Business Inelligence
    Why is my Google local listing not showing up, even when searched for with its exact title on maps.google.com? The search mostly shows results from the US. My BI solution company's map listing comes up when the company name is searched in Google web search, but it does not show up on maps.google.com. Is a Google Places or Maps listing different from a Google local listing? I have created my listing in Google local. If they are different, is there any way to carry the local listing over to Google Places or Google Maps as well? My listing is claimed, with 100 percent completed profile details.

  • My website gives dns_server_failure when using the university connection

    - by iMohammad
    My website used to open just fine when I used my university connection, but since I transferred my website to another hosting company this problem started to appear. Sometimes the website opens and sometimes I get this, a lot:

        Network Error (dns_server_failure)
        Your request could not be processed because an error occurred contacting the DNS server. The DNS server may be temporarily unavailable, or there could be a network problem. For assistance, contact your network support team.

    Do you have any idea? I checked using website-checking tools and my website runs just fine on any other connection (ADSL, 3G), except my university connection. Thanks in advance :) UPDATE: When I open my website using the server's real IP it opens just fine, but with my domain it does not. Also, even other websites that are hosted on my server cannot be opened.

  • Why does PHP create these log entries on every page access?

    - by HXCaine
    My server's error log (in cPanel) has an entry for every single time a PHP page is loaded that looks like this (where 'username' has been substituted for my actual username):

        [2012-06-09 09:02:07]: info: [usr/grp]: username/username cmd: /home/username/public_html/error.php php: /usr/local/php53/bin/php

    I know it's only an 'info' message but it's filling up my error log. It can't be related to the content of my PHP files because it even happens when I load a page with this as its content:

        <?php phpinfo(); ?>

    Any ideas on what it means or if I can get rid of it?

  • Do Spambots have access to unlimited IP addresses?

    - by Reg Gordon
    I have been attacked for weeks by the same spambot trying to brute-force the login page. I now have a login security module installed on my Drupal 6 website, and it bans an IP after x number of attempts. It's been going on forever and I have banned about 1000 IP addresses. Is there any point in me banning by IP, given that the spambot may have access to unlimited IP addresses, or will they run out of them eventually?

  • Two different domains as one user session in Google Analytics

    - by Mathew Foscarini
    I have two websites that are run as the same service. Each domain offers articles from a different market. At the top of each page the two domains are shown as menu options; if a user clicks one, they can switch to the other domain. See here: http://www.cgtag.com Each domain has a different Google Analytics account, and when a user switches domains Google counts this as a new session, listing the other domain as the "referral" for that new session. When the user switches back to the first domain, Google counts this as a returning visitor. This is messing up my reports, showing returning-visitor values that are higher than reality. It's also increasing hits on landing pages when the user switches, and listing the other domain as a referral site. I've found tips on how to list two domains as one website, but that results in merging the data. I want to keep the two domains separate so that I can track each one's performance, but I don't want to count domain changes as new sessions. Maybe something like treating the two domains as subdomains.
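
    For reference, the mechanism classic Google Analytics (ga.js) provides for carrying a session across domains is cross-domain linking; a minimal sketch (the UA-XXXXX-1 property ID is a placeholder, and this is normally used with a single or roll-up account rather than two fully separate ones):

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXX-1']);   // placeholder property ID
        _gaq.push(['_setDomainName', 'none']);
        _gaq.push(['_setAllowLinker', true]);
        _gaq.push(['_trackPageview']);

        // On the menu link that points at the other domain:
        // <a href="http://www.cgtag.com/" onclick="_gaq.push(['_link', this.href]); return false;">CGTag</a>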

  • How to spread XML Sitemaps over several webservers behind an AWS load balancer?

    - by Jurik
    We have a web portal with almost a million products and many more other URLs. I wrote a script that checks the database: if a new URL is needed or an old one has been updated, the script updates/creates the XML Sitemaps. But we have several servers behind the load balancer in our rented AWS space. Furthermore, the script checks the database for each URL to see whether there was an update, so that it updates the appropriate XML file too. My question is how to spread those XML Sitemaps over all the webservers behind this AWS load balancer. Our approaches/ideas:

    - We could just generate them on one server with a cron job and copy them to the other servers, but this could be difficult because the number of servers rises automatically, and so on.
    - We could put them on our S3, but S3 is not available through our domain, so I guess Google will have a problem with it.
    - I could let my script run on every webserver, but change it so that each time it generates all the XML files if they do not exist. But then I would have conflicts with updated URLs in my database, where I save the timestamp of the last change for every URL.

    Is there another, better solution that I do not know of? Are there any special services by Amazon for such cases?
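
    A hedged sketch of the single-generator idea: a cron job on one instance writes the sitemaps and uploads them to S3 with boto3, and each webserver (or an Nginx location for /sitemap*.xml) proxies those paths to the bucket so the files are still served under the site's own domain. The bucket name and paths are placeholders:

        import glob
        import boto3  # assumes AWS credentials are configured on the generating instance

        s3 = boto3.client("s3")
        BUCKET = "example-sitemap-bucket"  # placeholder bucket name

        # Upload every generated sitemap (and the sitemap index) to S3.
        for path in glob.glob("/var/sitemaps/sitemap*.xml*"):
            key = "sitemaps/" + path.rsplit("/", 1)[-1]
            s3.upload_file(path, BUCKET, key)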

  • Using rel=next and rel=prev with multiple sets of paginated content on the same page

    - by jakejgordon
    We are running into issues with trying to figure out how to implement rel="next" and rel="prev" - coupled with rel="canonical" - with multiple sets of paginated content on the same page, with pages in multiple cultures. In other words, how do we implement these when we have a pager for both Product Reviews and Questions and Answers (aka "Q&A") on the same page, with duplicate content across culture-specific URLs (e.g. /us/en/my-product vs. /ca/en/my-product)? Our current implementation does a full postback when you click Page 2 and adds something to the query string (e.g. website.com/ca/en/my-product?reviewpage=2 or website.com/ca/en/my-product?questionpage=2). If we only had one set of paginated content then the implementation would certainly be more straightforward; adding a second set of paginated content (i.e. Q&A) complicates things. Let's assume that we want the United States English page to be the canonical target (i.e. /us/en/my-product) based on culture. If you go to the /ca/en/my-product page you'll have a rel="canonical" href="/us/en/my-product". So far so good. Let's also assume that we are not implementing a page that lists ALL Product Reviews and Q&A. This would likely solve a number of our problems by using rel="canonical" to that page, but it is not an option for reasons that are out of scope for this discussion. Now if you click on page 2 of Product Reviews, the page reloads with /ca/en/my-product?reviewpage=2 as the URL. Given this scenario, here are my questions:

    1. On page 2 of the my-product page on the Canadian site, should there be a rel="canonical" to /us/en/my-product?reviewpage=2 (assuming the content is identical in the United States and Canada)?
    2. Should the rel="prev" go to /ca/en/my-product?reviewpage=1, or should it go to /ca/en/my-product? The query-string version is really only reachable via the pager and shows the exact same content as the base page. The following two questions are closely related to this one.
    3. Should /ca/en/my-product?reviewpage=1 have a rel="canonical" directly to /us/en/my-product (the United States page with nothing in the query string), since the content is identical?
    4. Given that Q&A content is also paginated, should there be a rel="next" on the base page without a query string? In other words, should the /ca/en/my-product page have a rel="next" to /ca/en/my-product?reviewpage=2 AND a rel="next" to /ca/en/my-product?questionpage=2? So far as I can tell it doesn't make sense to have multiple rel="next" implementations on the same page.

    I suspect that the pages with query-string values should have rel="next" and rel="prev" that only point to other pages with query strings and not to the base page. The ?reviewpage=1 and ?questionpage=1 pages would then just have a rel="canonical" to /us/en/my-product. Thoughts? I know this is a tough one - that's why I brought it to this community. Thanks so much for your help in advance!
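
    As a purely illustrative sketch of the tag mechanics under discussion (one possible arrangement for the Canadian review pagination, not a recommendation), the head of /ca/en/my-product?reviewpage=2 might carry:

        <link rel="canonical" href="http://website.com/us/en/my-product?reviewpage=2" />
        <link rel="prev" href="http://website.com/ca/en/my-product" />
        <link rel="next" href="http://website.com/ca/en/my-product?reviewpage=3" />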

  • Duplicate content in Top Level Domain and country specific website

    - by Ando
    I have myproduct.com, which is my master product page. For the UK I also own myproduct.co.uk, which is a copy of myproduct.com with some localized content: landing page, promotions, prices, and specific tags. But there is also duplicate content: myproduct.com/FAQs/ is the same as myproduct.co.uk/FAQs/. I don't want to do a redirect from myproduct.co.uk/FAQs/ to myproduct.com/FAQs/, as I don't want people to leave the localized website. The myproduct.com/FAQs/ page is my "go-to" FAQ page and it's the most likely to be up to date, so I want this page to be indexed by search engines, whereas I don't care about myproduct.co.uk/FAQs/ being indexed (unless indexing this page would increase my page rank :) ). What should I do now to be SEO-friendly and SEO-optimal? Stop indexing of myproduct.co.uk/FAQs/ via robots.txt? Do some rel="alternate" hreflang="x" configuring on both /FAQs/ pages? Something else?
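
    For reference, the rel="alternate" hreflang annotation mentioned in the question would look roughly like this in the head of both FAQ pages (the locale codes are assumptions):

        <link rel="alternate" hreflang="en-us" href="http://myproduct.com/FAQs/" />
        <link rel="alternate" hreflang="en-gb" href="http://myproduct.co.uk/FAQs/" />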

  • Should a link validator report 302 redirects as broken links?

    - by Kevin Vermeer
    A while ago, sparkfun.com changed their URL structure from

        /commerce/product_info.php?products_id=9266

    to

        /products/9266

    This is nice, right? We don't need to know that it is (or was) a PHP page, and commerce, product_info, and products_id all tell us that we're looking at some products. The latter form seems like a great improvement. However, the change would have broken existing links. So, nicely, they stuck in 302 redirects. Visit http://www.sparkfun.com/commerce/product_info.php?products_id=9266 and your browser will issue

        GET /commerce/product_info.php?products_id=9266 HTTP/1.1

    to which Sparkfun's servers reply

        HTTP/1.1 302 Found
        Location: http://www.sparkfun.com/products/9266

    This 302 redirect is caught by Stack Exchange's link validator as a broken link. It's not broken; it works just fine. Here, try it: http://www.sparkfun.com/commerce/product_info.php?products_id=9266 I understand that a 302 redirect is intended to be a temporary redirect, while a 301 should be used for permanent changes, per RFC 2616. That said, Wikipedia and common practice use it as a redirect. Who is in error in this situation? Is this an error in Sparkfun's redirect implementation or in Stack Exchange's URL validator?
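
    A quick way to confirm which status code the old URL actually returns is to request it without following redirects; a minimal sketch using the third-party requests library:

        import requests  # third-party: pip install requests

        resp = requests.head(
            "http://www.sparkfun.com/commerce/product_info.php?products_id=9266",
            allow_redirects=False,
        )
        print(resp.status_code)              # 302 at the time of the question
        print(resp.headers.get("Location"))  # http://www.sparkfun.com/products/9266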

  • Can I find out the number of searches on a given keyword, per state?

    - by Philippe
    I know that Google tells you how many times a certain keyword is used in a search; you can use the Google Keyword Tool for that. This tool also allows you to find out the number of "local" searches: this is the number of times a person from a given country searches for this keyword. My question: can you also find out how many searches originate from a given American state? In the Keyword Tool, I can only select countries, not states. Are there any other systems I can use to determine where people are searching for a given keyword?

  • "Search Friendly" domain names

    - by Ben
    We bought a few search-friendly domain names for the CPA site that I manage. Each of the domains we bought has the name of a nearby city with the word cpa in front of or behind the city name. The plan is to create a landing page for each of these domains with useful information about business filings, etc. specific to that city, as well as directions to our office from that city. The question is how best to utilize these new domains: Should each domain be set up as a 301 redirect to mainsite.com/city? Should each domain be its own single-page mini-site that links to mainsite.com? What other options are there, and what are the pros/cons? Remember, the goal is to be more relevant in searches that use a nearby city name in their search for CPA/accounting services.
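
    If the 301 option is chosen, a hedged .htaccess sketch for one of the city domains (the city domain name is a made-up placeholder; mainsite.com/city is taken from the question):

        # Redirect the whole city domain to that city's page on the main site.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?citynamecpa\.com$ [NC]
        RewriteRule ^(.*)$ http://mainsite.com/cityname/$1 [R=301,L]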
