Search Results

Search found 5380 results on 216 pages for 'webmasters'.

  • How to block FeedReader from fetching my content to their site?

    - by Wei Kai
    As you can see from the picture below, my site's content is duplicated by FeedReader and indexed by Google. When I click the FeedReader link, it uses some sort of iframe to pull content from my site live. This amounts to content duplication, and I believe it harms my site. (Stack Overflow doesn't allow me to post images with a new account, so please click the link above to see the picture; many thanks.) What can I do to prevent FeedReader from fetching my content to their site? I know robots.txt can do this sort of thing, but I don't know how to set it up. Any help would be much appreciated. I also raised this issue with FeedReader two days ago, but have yet to get a reply from them.
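    A minimal sketch of the two usual levers, assuming FeedReader honours robots.txt and identifies itself with a recognisable user-agent token (the token below is an assumption; check your access logs for the real one), and assuming an Apache host for the header part:

        # robots.txt -- "FeedReader" is a placeholder user-agent token;
        # replace it with whatever the crawler actually reports in your logs
        User-agent: FeedReader
        Disallow: /

        # .htaccess (Apache, requires mod_headers) -- stops other sites from
        # framing your pages regardless of whether their crawler obeys robots.txt
        Header always set X-Frame-Options "SAMEORIGIN"

    Note that robots.txt only governs well-behaved crawlers; the X-Frame-Options response header is what actually prevents the live iframe embedding.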

    Read the article

  • root folder php scripts not running in nginx

    - by Thermionix
    nginx with php-fpm on an Ubuntu 12.04 server. Attempting to access /var/www/test.php (via https://example.net/test.php) downloads the script instead of executing it. If I place test.php in a subdirectory, i.e. /var/www/test/test.php, it executes.

    root.conf:

        root /var/www;
        include php-fpm.conf;

        location ~ /\. {
            access_log off;
            log_not_found off;
            deny all;
        }

    php-fpm.conf:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.socket;
            include fastcgi_params;
        }

    fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_index index.php;
        fastcgi_param HTTPS on;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    Read the article

  • E-commerce for custom orders/customer image upload

    - by ansarob
    We have a client that needs an e-commerce site set up fairly quickly. As I have no experience with e-commerce, I am looking for some guidance. Basically, the two big features we need are: (1) the ability for the customer to add information about the order (for example, the name they want printed on the customizable product they ordered), and (2) the ability for the customer to upload a photo of the product to be customized. I hope this makes sense. Right now I am looking into Shopify, but I can't tell if it does everything we need. I know you can add order notes when checking out, but I'm not sure about image upload (maybe it can be added as an app through the API?).
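    For what it's worth, Shopify's "line item properties" are the usual way to capture per-order text such as a name to print: extra inputs named properties[...] inside the product form travel with the cart line and show up on the order. The sketch below is hypothetical (the field names and the file input are assumptions; native file-upload support varies by theme, so an app may still be needed for photos):

        <form action="/cart/add" method="post" enctype="multipart/form-data">
          <!-- variant id normally comes from the theme; shown as a placeholder -->
          <input type="hidden" name="id" value="{{ product.variants.first.id }}">

          <label for="custom-name">Name to print on the product</label>
          <input type="text" id="custom-name" name="properties[Name to print]">

          <label for="custom-photo">Photo to be customized</label>
          <input type="file" id="custom-photo" name="properties[Photo]">

          <input type="submit" value="Add to cart">
        </form>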

    Read the article

  • Assign subdomains to separate ports on web server

    - by Michael Frank
    I have set up an Abyss web server as a little experiment, and I want to know if it is possible to assign subdomains to different ports on the machine the web server is running on. I have a couple of web UIs that I'd like to assign subdomains to: 192.168.1.1:8000 becomes example.com/webui1/ and 192.168.1.1:8001 becomes example.com/webui2/. The web UIs are currently available by accessing their ports directly, e.g. example.com:8000. I have tried using a reverse proxy, but it seems that this is only usable on one internal IP at a time. What other options do I have? The answer is good, but my current setup doesn't meet the requirements: Abyss Web Server X2 is required to use virtual hosts with Abyss.
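    If switching front ends is an option, the usual pattern is a name-based reverse proxy in front of the web UIs. A minimal sketch with nginx rather than Abyss (the subdomain names are placeholders), mapping each subdomain to its internal port:

        server {
            listen 80;
            server_name webui1.example.com;
            location / {
                proxy_pass http://192.168.1.1:8000;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

        server {
            listen 80;
            server_name webui2.example.com;
            location / {
                proxy_pass http://192.168.1.1:8001;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    Both subdomains also need DNS records pointing at the machine running the proxy.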

    Read the article

  • How search engines see reciprocal links

    - by sam
    Reciprocal links cancel each other out from a search engine's point of view, but what counts as a reciprocal link? Do reciprocal links work at a site level or at an individual page level? If you were to get an inbound link from site-a.com to mysite.com and then linked back from blog.mysite.com, would that be reciprocal? I'm aware Google sees subdomains as different domains altogether, but does that apply in this instance?

    Read the article

  • Is it possible to transfer a domain without a "gap" in Whois privacy protection?

    - by Guest
    I currently own several domains on which I am using a Whois privacy protection service to hide my personal details. In the near future, I would like to transfer some of these domains to a different registrar. It has been many years since I last performed domain transfers, so I am no longer knowledgeable about what it involves. However, I have read from several registrars that they ask their customers to disable Whois protection before effecting a domain transfer. Since there are several websites out there that publish archived versions of Whois information (and ask handsome money for the information to be hidden, of course), I would prefer to avoid having such a "gap" in my privacy protection. I figured that these websites would fetch Whois information mainly when a query is effected through their own website. However, I have found out that at least one of these sites had a copy of the Whois information for a new domain up on their site within hours after I registered it, so they must have some other source (of course I used a Google search to find that out, not their own site). What that tells me is that the time it takes for the domain transfers to go through would be more than enough for these rogue websites to cache my information. If my new registrar offers privacy protection for domains right from the point of registration as well, is there no way to transfer the domain between the two without reverting to my default Whois information in between?

    Read the article

  • Creating a Template Like System in cPanel

    - by clifgray
    I am creating a medium-sized website using cPanel and its File Manager. The majority of my pages are going to be the same apart from a different title and content section, and I wanted to see if there is a system for making one general template file and having all the other pages inherit from it, so that each page only supplies a title and content and the rest (links, headers and so on) can be changed throughout the site by editing one file. Is there anything like this? I have used Jinja2 in Python and a few other systems for other server-side scripting languages, but I am not sure how to implement something similar with cPanel.
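    cPanel itself has no templating layer, but most cPanel hosts run PHP, and plain includes get you most of the way there. A minimal sketch, assuming PHP is available (the file names are placeholders):

        <?php // header.php -- shared top of every page ?>
        <!DOCTYPE html>
        <html>
        <head><title><?php echo htmlspecialchars($title); ?></title></head>
        <body>
          <nav><!-- site-wide links live here, edited in one place --></nav>

        <?php // footer.php -- shared bottom of every page ?>
        </body>
        </html>

        <?php // about.php -- an individual page supplies only title and content
        $title = 'About us';
        include 'header.php'; ?>
        <h1>About us</h1>
        <p>Page-specific content goes here.</p>
        <?php include 'footer.php'; ?>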

    Read the article

  • Wordpress Multisite and Google Analytics in subfolders with mapped domains

    - by David
    I have a WordPress multisite using subfolders. The subfolders are mapped to domains, which are set as primary. I'm using the 'Google Analytics Multisite Async' code to track things. From what I can see it's tracking the sites fine (I'm getting page hits for each site in Google Analytics), except for the original site in the multisite: its content overview lists the other domains as content, with their traffic, alongside the original domain's own traffic. I don't want to track any traffic for my original site other than what actually goes to it, i.e. I don't want it tracking my other sites in the multisite. For example, domain1.com is my original site and I have lots of other sites in the multisite, say domain2.com and domain3.com; in the content overview in Analytics it lists, say, domain2.com as content. Can I tell it to filter these out somehow, either in Analytics or within WordPress? Hopefully that's explained clearly!
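    Two common fixes: add an include-only hostname filter (e.g. ^domain1\.com$) to the original site's view in Analytics, or give each site in the network its own property so the hits never mix. The mu-plugin below is a hypothetical sketch of the second option; the property IDs and blog-ID mapping are placeholders and are not part of the plugin mentioned above:

        <?php
        // Hypothetical sketch: emit a per-site Analytics property so the main
        // site's property stops recording hits for domain2.com / domain3.com.
        add_action('wp_head', function () {
            $properties = array(
                1 => 'UA-XXXXXX-1', // domain1.com (main site)
                2 => 'UA-XXXXXX-2', // domain2.com
                3 => 'UA-XXXXXX-3', // domain3.com
            );
            $id = get_current_blog_id();
            if (empty($properties[$id])) {
                return;
            }
            echo "<script>
              var _gaq = _gaq || [];
              _gaq.push(['_setAccount', '{$properties[$id]}']);
              _gaq.push(['_trackPageview']);
              (function() {
                var ga = document.createElement('script'); ga.async = true;
                ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                document.getElementsByTagName('head')[0].appendChild(ga);
              })();
            </script>";
        });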

    Read the article

  • Fetch as Google error 403

    - by Bojan Vidanovic
    Two weeks ago Google stopped being able to access my website: in Webmaster Tools I can't fetch any page, I always get error 403, and the website has completely disappeared from the Google search results. I can't figure out why it suddenly can't see the site any more. I've checked .htaccess and there's nothing that blocks Google's crawlers, and robots.txt is fine too. The site is accessible normally for users. Has anyone had this problem? Please help!
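    A quick check worth running from a shell: request a page while pretending to be Googlebot and compare it with a normal request. A 403 returned only for the bot user-agent usually points at a security plugin, firewall or hosting-level rule rather than robots.txt. (The URL is a placeholder.)

        # normal request
        curl -I http://example.com/

        # same request with Googlebot's user-agent string
        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://example.com/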

    Read the article

  • Should I add a "nofollow" attribute to download links, or disallow the URLs in robots.txt?

    - by Laurent
    I have a download link very similar to Opera's: it's just a script that sends the file. It doesn't have an extension and there's no obvious way to tell that it's actually a download link. Since I don't want robots to crawl this link, do I need to add it to robots.txt, or maybe add a "nofollow" attribute to it? I see that on Opera's website they did neither, so perhaps it's not necessary?
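    For completeness, the two options look like this (the /download path is a placeholder). They do different things: robots.txt actually stops well-behaved crawlers from fetching the URL, while rel="nofollow" only tells them not to follow or credit that particular link.

        # robots.txt
        User-agent: *
        Disallow: /download

        <!-- or on the link itself -->
        <a href="/download" rel="nofollow">Download</a>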

    Read the article

  • Domain changes required for SSL integration

    - by user131003
    Currently my site supports regular payment options (the user is taken to the payment gateway's (PG) website). Now I'm trying to implement "seamless" PG integration, and I need SSL for this. I have a dedicated server with 5 static IPs from HostGator (HG). Options: (1) I get SSL for www.my_domain.com. According to HG, I need to change the IP of the main site, as the current IP is not really dedicated (it is shared by cPanel etc.), so they need to bind another dedicated IP to the main domain for SSL to work. This would require a DNS change for the main website and hence cause a few hours of downtime (which is OK). (2) I've noticed that most e-commerce websites use subdomains like secure.my_domain.com for SSL/HTTPS. This sounds like a better approach, but I have a few doubts in this case: a) Would I need to re-register with the existing PGs (PayPal, Google Checkout, Authorize.Net) if I switch to a subdomain? Re-registering is not an option for me. b) Would a DNS change be required for www.my_domain.com in this case? This confusion arose because of the following reply from HG: "If the sub domain secure.my_domain.com is added to an existing cPanel it will use the IP for that cPanel so as long as it is a Dedicated IP that will be fine. If secure.my_domain.com gets setup as its own cPanel it will need to be assigned to a Dedicated IP which would have a DNS change involved." Please advise.

    Read the article

  • How can I inform search engines that the usefulness of some content on my site has a limited shelf life?

    - by Tim Post
    Let's say that I run a forum dedicated to computer hardware. Naturally, people are going to ask questions like: What is the best laptop for running [os] Or What is the best video card for under [amount] These may be perfectly fine discussions, but the content loses usefulness over time. An answer to either question asked in 2007 might still be relevant in 2008, but definitely not in 2012. Is there a way that I can tell search engines that certain pages might not give visitors what they're looking for after a certain date, and perhaps hint to a page on my site that would provide good information? Perhaps something I could set in HTTP response headers, meta tags or even a site map?
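    Google does document a directive for exactly this: the unavailable_after rule, which tells Googlebot to stop showing a page in results after a given date. It can be supplied as a meta tag or as an X-Robots-Tag response header; the dates below are placeholders.

        <!-- in the page head -->
        <meta name="googlebot" content="unavailable_after: 31-Dec-2013 23:59:59 EST">

        # or as an HTTP response header sent by the forum software
        X-Robots-Tag: unavailable_after: 31 Dec 2013 23:59:59 EST

    There is no standard way to point crawlers at a "fresher" page beyond ordinary links (or a canonical where the pages are genuinely equivalent), so a prominent in-page link to the newer discussion remains the practical answer to the second half of the question.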

    Read the article

  • Keep search engine from indexing specific content on your site

    - by Jimmy Chopps
    I've got a pretty weird scenario that I was hoping someone could help me out with. I recently created a blog site and noticed that search engines have been including the content of my footer in the description. This is a problem because my footer is basically a brief legal statement saying that the views are my own and don't represent the company I work for (and yada yada yada). So I need a way to prevent search engines from indexing the content in my footer, or even the footer altogether. I've been looking back through some of my SEO books and searching forums, but this doesn't seem possible. Is it possible to keep search engines from indexing only certain content on a page? If it isn't, what alternatives are there to ensure this legal mumbo jumbo doesn't show up in the results?
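    Two things usually help here, sketched below: give every page an explicit meta description so the snippet isn't scraped from arbitrary page text, and mark the footer as off-limits for snippets. Google's data-nosnippet attribute does the latter; note it controls snippet text only, not indexing, and support is Google-specific.

        <!-- explicit description, so the snippet isn't pulled from the footer -->
        <meta name="description" content="A short summary of this particular post.">

        <!-- region excluded from Google's snippets -->
        <footer data-nosnippet>
          The views expressed here are my own and do not represent my employer.
        </footer>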

    Read the article

  • What is a typical example of old-school website design?

    - by Pierre 303
    I want to build a website for a retro thing that was popular in the mid-90s (the beginning of the commercial internet), so I want to use the old design clichés that were popular at that time. The first thing that comes to mind is those "under construction" animated GIFs; people put animated GIFs everywhere. And of course those awful repeating backgrounds. So yes, I want my website to look exactly like the mid-nineties ;) (Please suggest practical and usable features. I guess a Java applet menu would not work today, nor would saying at the bottom that this website is optimized for Netscape 3.) EDIT: for those who want to see the result: Retrology

    Read the article

  • For Google Rich Snippets: Is it 'harmful' to add the same `hreview-aggregate` microformat markup in several places?

    - by Oliver
    We are currently incorporating microformat markup for reviews into a client's web application and were wondering whether it can be harmful to provide the same information on more than one page, e.g. on a dynamic search page and on the actual product page. Does anybody have any experience with this? UPDATE: Actually, I was also wondering: if Google shows a link to the page the review comes from, how would they decide which of the sources of the review to link to? Or don't they?
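    For reference, the per-page markup in question looks roughly like this (a sketch of the hreview-aggregate microformat with placeholder values); the duplication concern is about repeating a block like this on both the search page and the product page:

        <div class="hreview-aggregate">
          <span class="item"><span class="fn">Example product</span></span>
          Rated <span class="rating">
            <span class="average">4.5</span> out of <span class="best">5</span>
          </span>
          based on <span class="count">27</span> reviews.
        </div>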

    Read the article

  • How do you prevent someone from getting a "new free trial period" by just creating a new login?

    - by Clay Nichols
    I am considering several options for the Trial version of our web app. The one I favor the most is the classic trial period. Of course, it's a LOT easier for someone to hack this system. They can defeat the various methods of fingerprinting the user, thusly: Browser cookie: Clear their cookies completely (or just for our site) or use a different device. Although Evercookie may help with the former. Email address: Create a new login (with a new email address) I'm going to monitor things for a while and just see how it goes. If it's a problem I'll consider requiring a credit card number matched to a name and billing zip code. Each such "ID constellation" would be considered one user. Someone could still have multiple credit card numbers but we could flag the same name+zipcode coming up again. Are there any better ways to do this?

    Read the article

  • Migrating from .co.uk to .uk [on hold]

    - by DD.
    I currently run the site https://www.example.co.uk and I'm considering migrating to https://www.example.uk to take advantage of the new shorter domain. When migrating the domain in Google Webmaster Tools...will all the authority from the old domain pass if I use 301 redirects? Does anyone know how all the reviews I have collected will get transferred to the new domain when using the AdWords seller ratings? Any other important issues to be wary of when migrating to the new domain?
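    Assuming the old site stays on an Apache host, the redirect side of the migration is a blanket host-level 301 that preserves paths, along the lines of the sketch below; on top of that, Webmaster Tools' Change of Address feature can be used once both domains are verified.

        # .htaccess on the old domain (assumes Apache with mod_rewrite)
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.co\.uk$ [NC]
        RewriteRule ^(.*)$ https://www.example.uk/$1 [R=301,L]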

    Read the article

  • Browser privacy improvement implications for websites

    - by phq
    On https://panopticlick.eff.org/ the EFF lets you test the number of uniquely identifying bits that the browser gives a website. Among these are HTTP header fields such as User-Agent, Accept, Accept-Language, and perhaps later ETag and If-Modified-Since. There is also a lot of information that JavaScript can get from the browser, such as the time zone, screen resolution, and the complete list of fonts and plugins available. My first impression is: is all this information really usable or used on a majority of websites? For example, how many sites really send different content types depending on the HTTP Accept header, or on which fonts are available (I thought CSS had taken care of this)? Let's say these headers and this JS functionality were gone one day. Which ones would: never be noticed as gone? impact user experience? impact server performance? be immediately reimplemented because the Internet cannot work without them? Extra credit for differentiating between what can be done, what should be done, and what is done in most situations.

    Read the article

  • Does the Adblock Plus extension prevent malicious code from downloading/executing? [closed]

    - by nctrnl
    Firefox and Chrome are my favourite browsers. The main reason is an extension called Adblock Plus. Basically, it blocks all the ad networks if you subscribe to one of the lists, like EasyList. Does it also protect against malicious ads on completely legitimate websites? For instance, several news websites use ad services that may allow a malicious user to insert "evil code". This makes the web very unsafe, especially for those who lack a serious antivirus product.

    Read the article

  • Moving large website to new CMS - URL changes

    - by herrherr
    Hi, I was wondering if you have any tips on the following situation. I'm going to move a large website to a new content management system. Some details on the site: it's an online news magazine with roughly 3,000 articles; the domain is 10 years old; it has been online in its current form since May 2010; about 10,000 pages are indexed; search engines account for under 10% of traffic. Unfortunately a custom-tailored CMS was used for the site. The performance, reliability and SEO capabilities have been really bad, so we are moving to a new and proven open source CMS. All the articles will be kept as they are, but the URL structure as well as the structure of the HTML templates will change. What I want to do is create 301 redirects for all articles from the old to the new schema, i.e.:
    Old: www.example.com/en/html/news/detail/title-of-the-article/
    New: www.example.com/category/title-of-article.html
    Is this a proven way to do something like this? If not, can you recommend a way that has worked for you? Thanks :)
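    Per-article 301s are indeed the standard approach. Because the new URLs contain a category that cannot be derived from the old path, the redirects generally have to come from an explicit old-to-new map (exported from the old CMS and generated by a script) rather than a single rewrite rule. A hypothetical sketch, assuming an Apache front end and placeholder URLs:

        # one explicit 301 per article, generated from an old->new URL dump
        Redirect 301 /en/html/news/detail/title-of-the-article/ /category/title-of-article.html
        Redirect 301 /en/html/news/detail/another-article/ /other-category/another-article.html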

    Read the article

  • When Googlebot sees a link, will it click it or navigate to it?

    - by FakeRainBrigand
    My site uses pushState and JSON data to display content. So, for example, this might appear on my page: <a href="/some/page">some page</a> The JavaScript then prevents the default action (following the link), and instead renders a view (using a different API, such as /getjson?some_page): $('[href]').click(function(){ history.pushState(...); handleURL(...); }); Assume my server will respond to requests at /some/page with a pre-rendered version. My questions are: (1) will Googlebot receive the pre-rendered version, or allow JavaScript to instead invoke pushState, etc.? (2) if it doesn't make the direct request, will it wait for AJAX content to be loaded? (3) does Googlebot implement pushState, so that it will show the proper URL in search results?

    Read the article

  • 250k 404 & 410 errors in Webmaster Tools. Bad backlinks?

    - by Natália
    Our Webmaster Tools account is showing 250,000 errors related to weird links from other sites. These URLs come mostly from non-existent sites or are being generated directly against our website. Here are some examples of these URLs: oursite.com/&q=videos+caseros+sexo+pornos+gratis&sa=X&ei=R638T8eTO8WphAfF2vG8Bg&ved=0CCAQFjAC%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F5%2Fpage%2F4/page/3 Our site is a popular Spanish adult site, yet we don't target the keywords mentioned in this URL, and apparently the link comes from our site. Some more examples: oursite.com/&q=losmejoresvideosporno&sa=X&ei=U__8T-BnqK7RBdjmhYsH&ved=0CBUQFjAA%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3/page/4 Once again: not our queries, not our URLs. oursite/tag/tetonas We think another site might be running an extremely bad SEO policy based on other sites' branding and keyword usage: thirdsite/buscador/tetonas-oursite The questions are: if other sites are generating these URLs, how can we prevent this? Why is the tag page being generated if no link was added on the other site? What should we do with these errors: 301 them, or return 410 Gone? I have read all the similar Q&As here but none of them seems to solve our problem. It is not likely to be a bad ad (we inspected them all). Maybe some old content which Google suddenly decided to recrawl? Maybe a third party's bad SEO policy? Maybe all of them? Any help will be highly appreciated.
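    If the decision is to treat them as permanently gone, one option is to answer the junk pattern with 410 directly at the web server so Webmaster Tools eventually stops re-reporting them. A hypothetical Apache sketch keyed on the "/&q=" prefix visible in the examples above:

        # .htaccess -- return 410 Gone for the scraped "/&q=..." URLs
        RewriteEngine On
        RewriteCond %{REQUEST_URI} ^/&q=
        RewriteRule .* - [G,L]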

    Read the article

  • Foolproof way to ensure Google News pulls the correct image for its thumbnails?

    - by Anthony
    Google News results have an accompanying thumbnail next to the articles that show up in the results. If Google's crawler can't find a thumbnail to pull from our site, it uses its next best guess from another site, thereby linking the image to another site while still using our headline. Example: headline from Reuters, image from Livemint. Our pages absolutely have images, and they are not massive in file size or dimensions, yet they are not being pulled/crawled correctly. We have read up on the suggestions from Google and from others around the web, and nothing is panning out. Has anyone had any experience with ensuring Google News pulls a thumbnail of their choosing?

    Read the article

  • SEO for replacing blog content, but keeping the same page URL

    - by cphill
    This might not have any major impact on SEO, but basically I have a random blog at this URL: http://example.com/blog (not a real URL) that I am removing and replacing with a company blog. I want to keep using the http://example.com/blog URL, but I'm not sure how this would affect my SEO, since the random blog content that I am removing lives under the example.com/blog URL prefix. Would I just add a 301 redirect for those old blog articles and leave the base /blog URL without any redirects?

    Read the article

  • How to fix bad SEO after being hacked

    - by mkprogramming
    About a year ago my WordPress website was hacked, and some company decided to go nuts and actually do some "SEO" on the various links it created. Some of the pages would show up on Google as "payday cash advance" instead of "portfolio". The issue has been resolved, but now, as I've been doing good SEO, I've noticed (when checking backlinks) that there are tons of links still out there (mostly on broken sites now) that point to my website with titles like "get a loan today" and so on. Is there a way to remove these links? Can I tell Google to ignore them? Help!
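    When the linking sites won't take the links down, Google's Disavow Links tool (in Webmaster Tools) is the mechanism for telling Google to ignore them. It takes a plain text file of domains or URLs; the entries below are placeholders:

        # disavow.txt -- placeholder entries only
        # ignore every link from these domains
        domain:spammy-loan-links.example
        domain:hacked-link-farm.example
        # or ignore individual pages
        http://broken-site.example/old-payday-links.html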

    Read the article
