  • Why deny msnbot/bingbot access to a website?

    - by Quandary
    I've seen quite a lot of tutorials that recommend banning user agents containing the strings libwww-perl and msnbot. I understand why one would ban libwww-perl - it's mainly, if not only, used for hacking and spamming - but why do so many sites recommend banning msnbot/bingbot? Since it's a search engine, even if one with only a marginal market share, I would expect one would want this bot to crawl one's sites. What is it that msnbot does that makes people ban it?
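
    For context, the kind of rule those tutorials propose usually looks something like this (a minimal mod_rewrite sketch; only the libwww-perl/msnbot strings come from the tutorials, the rest is assumption):

        # sketch: refuse requests whose User-Agent contains either string
        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} (libwww-perl|msnbot) [NC]
        RewriteRule ^ - [F]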

  • In Joomla, REQUEST_URI in .htaccess always returns index.php instead of the SEF URL

    - by Saumil
    I have installed JoomSEF 3.9.9 with Joomla 1.5.25. I want to serve one section of my site over HTTPS (URIs starting with /events/) while keeping the rest of the URLs on HTTP. I am setting rules in the .htaccess file but am not getting the output I expect: when I check REQUEST_URI for a SEF URL, I always get index.php as the URI. Here is my .htaccess code:

        ########## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ########## End - Custom redirects

        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root)
        # RewriteBase /

        ########## Begin - Joomla! core SEF Section
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} (/[^.]*|\.(php|html?|feed|pdf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ########## End - Joomla! core SEF Section

    And here is my rule (the example site URL is http://mydomain.com/events):

        RewriteCond %{REQUEST_URI} ^/(events)$
        RewriteCond %{HTTPS} !ON
        RewriteRule (.*) https://%{REQUEST_HOST}%{REQUEST_URI}/$1 [L,R=301]

    I don't see why REQUEST_URI refers to index.php even though the URL in the address bar is http://mydomain.com/events. I am using JoomSEF (a Joomla extension for SEF URLs), and if I remove the other rules from the .htaccess file, Joomla stops working. I'm not an expert and haven't found a way to handle this. Please let me know if you have been through the same situation and have a solution, or can suggest a workaround. Thanks
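
    Two things worth noting for readers: mod_rewrite has no %{REQUEST_HOST} variable (the host name is %{HTTP_HOST}), and in a per-directory .htaccess %{REQUEST_URI} reflects the current, already-rewritten URI, which is why it reads index.php once the SEF rule has run. A minimal sketch of how such a per-path HTTPS redirect is usually written, placed before the Joomla SEF block (an untested sketch, not a verified JoomSEF configuration):

        # sketch: force HTTPS for /events before the SEF rules rewrite to index.php
        RewriteCond %{HTTPS} !=on
        RewriteCond %{REQUEST_URI} ^/events [NC]
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]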

  • Showing schema.org aggregate rating in Google rich snippets

    - by nickh
    After adding schema.org microdata markup for reviews and aggregate ratings, I expected review and rating information to show up in rich snippets. Unfortunately, neither is being shown. Google's Structured Data Testing Tool finds the microdata, and there are no errors or warnings on the page. Any idea what's wrong with the microdata markup?

    Example 1:
    Live page: http://www.shelflife.net/ljn-thundercats-series-3/bengali
    Google test: http://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fwww.shelflife.net%2Fljn-thundercats-series-3%2Fbengali

    Example 2:
    Live page: http://www.shelflife.net/star-wars-mighty-muggs/asajj-ventress
    Google test: http://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fwww.shelflife.net%2Fstar-wars-mighty-muggs%2Fasajj-ventress

  • SEO: how can a mobile version using AJAX be properly read by crawlers?

    - by Olivier Pons
    Before anything else, I'd like to emphasize that I've already read this and this. Here's what I can do:

    First choice:
    - create the classical web version with all products on the page - http://www.myweb.com
    - create the mobile web version with all products on the page and use jQuery Mobile to format it all nicely; but this may take long to load and format, and may give a bad user experience - http://m.myweb.com

    Second choice:
    - create the classical web version with all products on the page
    - create a mobile web version that initially shows almost nothing but a "wait" page, then downloads all the products into the page using AJAX and formats them nicely with jQuery Mobile; showing a wait/loading message gives me far more time to do whatever I want and may provide a better user experience - http://m.myweb.com

    Question: if I choose the second solution, Google won't read anything on the mobile version (because all the products are loaded into the page with AJAX), so it won't be properly read by crawlers. What should I do, and how?

  • How to deal with unused CSS issues

    - by Saif Bechan
    I am running some speed tests on a blog, and I always get complaints about unused CSS. But this is not CSS that is never used; it is just not used on that particular page. I work in a structured way, but there will still be some CSS in the file that is not used on a given page, because it is needed on another page. I do not think that using different CSS files on different pages is the way to go; I think you are much better off creating one big file that can be cached. Is there an elegant way of dealing with this, or do you just live with it?

  • XML sitemaps and HTML sitemaps clarification

    - by MSchumacher
    I've got a website with about 170 pages and I want to create an effective sitemap for it, as it is long overdue. The website is internally linked very well, but I still want to take advantage of a sitemap to let search engines crawl my site more easily and, hopefully, to increase my website's PR. However, I am slightly confused about what I must do: Is it necessary to create both an XML sitemap AND an HTML sitemap? I've never worked with XML, so where do I put the file once it's created - in the root folder? I assume that sitemap.xml is ONLY to be read by spiders and NOT by website visitors, i.e. no visitor on my website is going to visit the page sitemap.xml - am I correct? And is that why I should also create an HTML sitemap (sitemap.htm)?

  • What do I change in my domain name's DNS if I get a new Internet provider?

    - by johnny
    My company is about to get a new physical connection to the Internet, replacing the old provider. The new provider, of course, is giving us our allotted IPs, gateway, and DNS servers. My domain name is registered with a provider like Bluehost or GoDaddy. When this new connection goes in, what do I change in the domain name provider's DNS? I know to change what the host record points to: that is my new IP address. But what about the name servers? I am confused because the company gave me IP addresses for the name servers, but at the provider it has a DNS server name like ns1.somedomain.com. Also, how do I update the Internet's DNS servers? Thanks.

  • Methods to reduce HTTP requests for JS and CSS

    - by Giberno
    Can these methods reduce HTTP requests?

    Multiple JavaScript files combined with an & symbol:

        <script type="text/javascript" src="http://yui.yahooapis.com/combo?2.5.2/build/yahoo-dom-event/yahoo-dom-event.js&http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js">
        </script>

    Multiple CSS files with @import:

        <style type="text/css">
        @import url(css/style.css);
        @import url(css/custom.css);
        </style>

  • Need a sanity check on my .htaccess, especially the Limit GET POST block as a possible Google repellent

    - by jose
    I need a sanity check on this .htaccess (from a WordPress site) I inherited; the site is a bit over 5 months old. What's the symptom? Google and Bing crawl, but don't index any of the pages. Let me be clear: I'm not mad about "not ranking high." I think something is (accidentally) rejecting search engine indexing. I am not an expert on .htaccess, but one part especially looked funny: the Limit GET POST block. Is it not weird to have both Allow and Deny from all, with no parameters? Also, I've ruled out robots.txt, but if I were you I'd want to see it, so here it is:

        User-agent: *
        Crawl-delay: 30

    And here's the more suspect .htaccess:

        # temp redirect wordpress content feeds to feedburner
        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC]
        RewriteCond %{HTTP_USER_AGENT} !FeedValidator [NC]
        RewriteRule ^feed/?([_0-9a-z-]+)?/?$ http://feeds.feedburner.com/anonymousblog [R=302,NC,L]
        </IfModule>

        # temp redirect wordpress comment feeds to feedburner
        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC]
        RewriteCond %{HTTP_USER_AGENT} !FeedValidator [NC]
        RewriteRule ^comments/feed/?([_0-9a-z-]+)?/?$ http://feeds.feedburner.com/anonymous_comments [R=302,NC,L]
        </IfModule>

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

        IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti*

        <Limit GET POST>
        order deny,allow
        deny from all
        allow from all
        </Limit>

        <Limit PUT DELETE>
        order deny,allow
        deny from all
        </Limit>

        php_value memory_limit 32M

    Adding the page header by request:

        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <meta name="robots" content="noindex,nofollow" />
        <meta name="description" content="buncha junk i've deleted." />
        <meta name="keywords" content="keywords i've deleted" />
        <meta name="viewport" content="width=device-width" />
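
    For what it's worth, with "order deny,allow" Apache evaluates the deny directives first and the allow directives last, so "deny from all" followed by "allow from all" leaves the block wide open; the conventional explicit form is sketched below (Apache 2.2 syntax). Note also that the quoted page header contains <meta name="robots" content="noindex,nofollow" />, which on its own tells search engines not to index the page, independently of anything in .htaccess.

        # sketch: an explicitly open GET/POST block
        <Limit GET POST>
        order allow,deny
        allow from all
        </Limit>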

  • JavaScript autotab function not working on iPad or iPhone

    - by freddy6
    I have this piece of HTML code:

        <form name="postcode" method="post" onsubmit="return OnSubmitForm();">
          <input class="postcode" maxlength="1" size="1" name="c" onKeyup="autotab(this, document.postcode.o)" />
          <input class="postcode" maxlength="1" size="1" name="o" onKeyup="autotab(this, document.postcode.d)" />
          <input class="postcode" maxlength="1" size="1" name="d" onKeyup="autotab(this, document.postcode.e)" />
          <input class="postcode" maxlength="1" size="1" name="e" />
          <br />
        </form>

    which uses this JavaScript:

        <script>
        /* Auto tabbing script - By JavaScriptKit.com http://www.javascriptkit.com
           This credit MUST stay intact for use */
        function autotab(original, destination) {
          if (original.getAttribute && original.value.length == original.getAttribute("maxlength"))
            destination.focus();
        }
        </script>
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
        <script src="js/scripts.js" type="text/javascript"></script>
        <script type="text/javascript">
        function OnSubmitForm() {
          if (document.postcode.operation[0].checked == true) {
            document.postcode.action = "plans.php";
          }
          else if (document.postcode.operation[1].checked == true) {
            document.postcode.action = "plans_gas.php";
          }
          else if (document.postcode.operation[2].checked == true) {
            document.postcode.action = "plans_duel.php";
          }
          return true;
        }
        </script>

    As soon as you enter one character into a text box, it automatically tabs across to the next text box. This works fine on a PC or Mac, in Safari and also in all other browsers. But when viewing the web page on an iPad or iPhone (using Safari), the autotab function does not work. Any ideas on how to make the autotab work on these mobile devices?

  • Is it fine to put different categories of content under a single domain name?

    - by Fahad Uddin
    I own a website about startups and finance. I am looking to move into WordPress programming, selling WordPress themes. I thought of buying a separate domain name for the WordPress website, but it takes quite a lot of time to set up a website and then do its SEO. Is it fine (in terms of SEO and professionalism) to put the WordPress category inside my old domain, like this?

        Domain: www.startupsandfinance.com
        WordPress subdomain: www.wp.startupsandfinance.com

  • Is it good or bad to have the email address of a domain's registrant on the same domain?

    - by Eric Nguyen
    Say I own the domain abc.com. I think it's a bad idea to use [email protected] as the registrant's (my own) email address. This will cause problems when I need to transfer the domain to another registrar, e.g. GoDaddy. The new registrar will then try to send email to [email protected], which is unlikely to work normally while the DNS settings are undergoing changes. So I believe it's best to use an email address independent of the domains I own as the registrant's email address. (Isn't that the practice Google Apps uses?) Have I missed something here, or am I right?

  • Apache domain redirect to a subfolder

    - by Dennis
    I have a hosting account with GoDaddy. It's a Linux system running Apache. The way they do their setup, your primary domain is the root folder, and when you add a subdomain it lives in a subfolder of the root, which sucks. I want to set up a subfolder structure to organize my domains. I called GoDaddy support and they said to use redirects, but they did not know how to do that. How it's set up now:

        primary domain: www.domain.com -> /
        sub.domain.com                 -> /sub

    I want to create a directory structure and then redirect each domain to its folder, but only show www.domain.com in the URL:

        www.domain.com -> /domain/www
        sub.domain.com -> /domain/sub

    I tried using:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www.)?domain.com$
        RewriteRule ^(/)?$ domain/www [L]

    but it just changes the URL to www.domain.com/domain/www. Can this be done in .htaccess?
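
    For comparison, the usual shape for masking a subfolder is a purely internal rewrite with a guard so the rule does not re-process its own target; if the visible URL still changes, it is often mod_dir adding a trailing slash to the directory, so rewriting to a path that already ends in a slash avoids that. A sketch, assuming mod_rewrite in the root .htaccess:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        # don't rewrite requests already pointing into the subfolder
        RewriteCond %{REQUEST_URI} !^/domain/
        # internal rewrite only (no R flag), target keeps its trailing slash
        RewriteRule ^(.*)$ /domain/www/$1 [L]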

  • Google Webmaster Tools showing duplicate meta descriptions for a search directory

    - by Mike Flynn
    What is the best way to get rid of this error in Google Webmaster Tools? Do I really need to add "- Page 2" at the end of the description?

        Description: Kansas basketball tournaments posted by organizations and teams for youth, AAU, and NCAA certified e
        Pages:
        /youth-basketball-tournaments/kansas
        /youth-basketball-tournaments/kansas?page=2
        /youth-basketball-tournaments/kansas?page=3
        /youth-basketball-tournaments/kansas?page=9

  • Interpretation of an empty User-agent

    - by Amit Agrawal
    How should I interpret an empty User-agent? I have some custom analytics code, and that code has to analyze only human traffic. I have a working list of user agents denoting human traffic and bot traffic, but the empty User-agent is proving problematic, and I am getting lots of traffic with it - about 10%. Additionally, I crafted the human-versus-bot user agent lists by analyzing my current logs, so I might be missing a lot of entries. Is there a well-maintained list of user agents denoting bot traffic, or, the inverse, a list of user agents denoting human traffic?

  • Robots.txt practices with .htaccess redirections (inheritance)

    - by Jayhal
    I have a question about how to write robots.txt files for many domains and subdomains with redirects in place. Our hosting account has a primary domain and add-on domains. All of our domains and subdomains, including the primary domain, are redirected via .htaccess 301s to their own subdirectories in the primary domain's root directory. I'm confused about how to write the robots.txt for certain directories.

    First, I want to confirm that I understand correctly: for domains and subdomains, crawlers look for the crawling rules (robots.txt) in the directory that acts as that URL's root directory. Also, a directory is not affected by a robots.txt in its parent directory if the directory has its own domain/subdomain and that URL is the one crawlers access. (I'm fairly sure, but I wanted to confirm I don't have a fundamentally flawed understanding of robots.txt.)

    Second, what should the robots.txt contain in the original root directory of the account (where the primary domain pointed before the .htaccess was put in place)? When crawlers come to crawl our primary domain, will they look in the original root directory for the robots.txt, or will they use the file in the new subdirectory where all the primary domain's site files are now located? If the former, what should the root's robots.txt include, if anything at all? Would I be right to include a simple 'Disallow: /' for all agents, and then put more specific robots.txt files in each subdirectory with more specific instructions? Would that affect the crawling of the directory where the primary domain is now redirected? Any help is greatly appreciated. Thanks!
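
    On the second set of questions, one common refinement (sketched below with hypothetical paths) is to exempt /robots.txt from the blanket 301 so each domain keeps answering for its own file; a 'Disallow: /' in the old root would otherwise rarely be seen, since crawlers requesting /robots.txt get redirected before fetching it:

        # hypothetical sketch: keep robots.txt out of the blanket redirect
        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/robots\.txt$
        RewriteRule ^(.*)$ http://primary-domain.com/primary-site/$1 [R=301,L]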

  • Sending emails via Google SMTP quit working after some time

    - by Chris
    On a website I use PHPMailer to send automated registration emails etc., plus a newsletter tool (which loops through the emails and sends them one by one). In Gmail, under Settings, I also configured and confirmed @mydomain addresses, so I can send from @mydomain addresses without the Gmail address being displayed. Furthermore, I authorized the website to send mail via this link: https://accounts.google.com/DisplayUnlockCaptcha

    Now, after two months in which everything worked perfectly, users suddenly stopped receiving emails, and most recently emails are not even being sent anymore. I have also received many error messages like this:

        Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 5.4.1 [email protected]: Recipient address rejected: Access Denied (state 13).

    When I check with this tool: https://toolbox.googleapps.com/apps/checkmx/ it reports two non-critical issues:

        Relayhost configuration detected.
        There SHOULD be a valid SPF record.

    So, my questions are: does anybody have a hint why it stopped working and what the error messages mean? What can I do to fix it? Where do I set an SPF record (cPanel?)? What is a relayhost, and how do I fix that? It is about 1000-1400 mails a day (Gmail's limit is 2000). Also, what can I do wrong when setting up an SPF record? I've heard there are testing tools for that. Thank you very much in advance for your help!

  • Advice for moving from a custom domain hosted on Blogspot to a new custom domain on a hosting provider

    - by Chan
    Need some advice :)

    a) Facts:
    1. Currently, we have a blog www.thebigbigsky.net hosted on blogspot.com. The site has been up for around a year and Google has cached some of the posts.
    2. The idea for setting up the blog was to promote/share articles and write-ups about personal development and self-growth, and also to offer counseling services related to them.
    3. There is another subdomain, chinese.thebigbigsky.net, hosting similar blog posts but in Chinese. This is hosted on Blogspot as well.

    b) What we want to do:
    1. We did some research on the internet and understood that, to make a website more popular, it is better to have a domain name related to the content of the site. Hence, in our case, a domain name containing "personal development" would be more ideal and easier for people to remember.
    2. Hence, we are thinking of moving away from Blogspot to a meaningful new domain (example: personaldevelopmentwithxyz.com) and moving all content there. The new domain will be hosted with a hosting provider, e.g. hostgator.com.

    c) Here are the steps we have in mind:
    1. Register the new domain (example: personaldevelopmentwithxyz.com).
    2. Get a hosting provider, hostgator.com.
    3. Set up a new site based on WordPress and import all content from the existing blog to this new site, personaldevelopmentwithxyz.com.
    4. Switch the current blog back to thebigbigsky.blogspot.com.
    5. Switch the DNS of www.thebigbigsky.net to point to the new site on hostgator.com.
    6. On hostgator.com, set up a permanent 301 redirect from www.thebigbigsky.net to personaldevelopmentwithxyz.com by following the tutorial mentioned here: http://www.blogbloke.com/migrating-redirecting-blogger-wordpress-htaccess-apache-best-method/ (a sketch of such a redirect appears after the questions below).

    Questions:
    1. Will it have a major impact on the current Google PageRank or SEO of thebigbigsky.net? As the site is not that popular, we assume the impact would not be a major one.
    2. Is the domain personaldevelopmentwithxyz.com considered too long (for SEO etc.)?
    3. Steps c.4 and c.5 above - will this work for our case?
    4. What about the subdomain chinese.thebigbigsky.net? We would like to keep it as a separate blog on a domain like chinese.personaldevelopmentwithxyz.com.
    5. We added Facebook comments to the existing posts on www.thebigbigsky.net and would not want to lose them. Will the comments still be there after we move to the new domain?
    6. Any better idea for doing it? :)

    Thank you very much!

    regards,
    -chan
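
    For step c.6, the redirect on the old domain's hosting usually comes down to a single mod_alias line in its .htaccess (a sketch using the placeholder names from the question; the linked tutorial's mod_rewrite version achieves the same path-preserving effect):

        # sketch: 301 every path on the old domain to the same path on the new one
        Redirect 301 / http://personaldevelopmentwithxyz.com/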

  • Apache on Windows - splitting vHost logs

    - by Cylindric
    I have a Windows Server 2008 machine running Apache, and it will be hosting several virtual hosts. I'd rather not use the piped logrotate tool (|bin/logrotate), as it seems to create significant extra overhead with all the extra processes. Is there a simple Windows alternative that splits the entries from a combined log file into several per-site files? Preferably with custom output directories, but that is optional.
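
    If splitting after the fact proves painful, the standard alternative is to skip the combined file entirely and give each VirtualHost its own CustomLog, which behaves the same on Windows (a sketch with placeholder names and paths):

        <VirtualHost *:80>
            ServerName site1.example.com
            DocumentRoot "C:/sites/site1"
            # per-site access and error logs; no post-processing needed
            CustomLog "logs/site1-access.log" combined
            ErrorLog "logs/site1-error.log"
        </VirtualHost>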

  • Is this link format correct, or punishable by search engines?

    - by w0rldart
    I have the following dilemma with the links of a WordPress blog I work with: I don't know if the way it creates links to images is fine or not so good. For example:

    Article URL: http://test.com/prima-de-riesgo/
    Image URL belonging to the article: http://test.com/prima-de-riesgo/europa/

    So what I'm worried about is the repeating "prima-de-riesgo" part. Should I worry, or shouldn't I?

    UPDATE: Wow, I can't believe you took test.com for the real domain, hehe!

    Article URL: http://queaprendemoshoy.com/prima-de-riesgo-y-otras-graficas-interesantes-del-ano-2011-deuda-publica-pib-vs-empleo-y-precio-del-oro/
    Image URL belonging to the article: http://queaprendemoshoy.com/prima-de-riesgo-y-otras-graficas-interesantes-del-ano-2011-deuda-publica-pib-vs-empleo-y-precio-del-oro/deuda-publica-eurozona/

    So, as I mentioned, I'm worried that prima-de-riesgo-y-otras-graficas-interesantes-del-ano-2011-deuda-publica-pib-vs-empleo-y-precio-del-oro, the common part of the article URL and the image URL, could be considered duplicate content or anything else punishable by search engines.

  • Should page contents stay the same at all times for SEO?

    - by Ahmet Kemal
    Hi, I have a frequently updated website, so page contents change frequently: the items that are on the 1st page today move to the 2nd page a day later. Similarly, the 192nd page, which is my last page, becomes the 193rd page a day later. So Google finds different content on a specific page than on its previous visit. Is that bad for SEO? My main page is http://www.yemeklog.com/home/ and, as you can see, the pagination pages look like http://www.yemeklog.com/home/10 or http://www.yemeklog.com/home/192. The content on all pages is food recipes. What do you think about it?

  • Referral traffic not appearing properly in Google Analytics

    - by Crashalot
    We have a partnership arrangement with another site where we pay them for users sent to us. However, they claim our referral numbers for them are lower than theirs by 50%. They are tracking clicks in Google Analytics (using events) while we are using visits in Google Analytics. Are we doing something wrong with our Google Analytics installation?

        <!-- Google Analytics BEGIN -->
        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-12345678-1']);
          _gaq.push(['_setDomainName', 'example.com']);
          _gaq.push(['_trackPageview']);

          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>
        <!-- Google Analytics END -->

  • Google Webmaster Tools is showing duplicate URLs based on page title differences

    - by Praveen Reddy
    I have 700+ title tag duplicates showing in WMT, and every first link shown there is a duplicate of the second one. I don't know where Google indexed the first link from, because that link doesn't exist on the site. It shows the title of every page as the link.

    Original link: http://www.sitename.com/job/407/Swedish-plus-Any-other-Nordic-Language-Customer-Service-Representative-Dublin-Ireland
    Duplicate link: http://www.sitename.com/job/407/Swedish-plus-Any-other-Nordic-Language-Customer-Service-Representative-Dublin-Ireland-Ireland

    How can this happen? I have checked the entire site and didn't find anywhere the second version is linked. I have no images linked to with the duplicated version of the URL.
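
    If the only difference really is the doubled trailing token, one stopgap is a 301 from the duplicate form back to the original (an entirely hypothetical sketch - the pattern is guessed from the two example URLs and would need adjusting for other locations):

        # hypothetical: collapse ...-Ireland-Ireland onto ...-Ireland
        RewriteEngine On
        RewriteRule ^(job/.+-Ireland)-Ireland$ /$1 [R=301,L]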

  • DNS lookup when using a CDN

    - by Steven Wu
    Using a CDN can vastly improve the load time of a website, and I've been thinking of using one to host all my external files: CSS, JS, images, videos, etc. However, when linking to a CDN, wouldn't the browser have to do an additional DNS lookup? Wouldn't this be counterproductive? Or does the benefit of hosting every external file on a CDN outweigh the additional cost of a DNS lookup? What are your thoughts?

  • Should I use the same AddThis tag on multiple sites?

    - by ripper234
    I have an AddThis tag for one site:

        <script type="text/javascript" src="http://s7.addthis.com/js/250/addthis_widget.js#pubid=ripper234"></script>

    Now I logged into AddThis and wanted to get my tag again, and I saw it has changed:

        <script type="text/javascript" src="http://s7.addthis.com/js/300/addthis_widget.js#pubid=ripper234"></script>

    Should I use the same tag I got before, or the new one? What's the difference? Is the 250/300 an internal version number?
