Search Results

  • Bad Bot blocking Revisited

    - by Tom
    I've read a lot about bad-bot blocking: PHP scripts, .htaccess techniques, etc. Is this a valid method? Since .htaccess can rewrite and send a bad bot a 403 Forbidden, or forward it to something like Spam Poison, is it possible to Disallow a folder in robots.txt and then, through a .htaccess file in that specific folder, redirect to Spam Poison? Since Apache reads each .htaccess independently and follows its specific instructions, a bad bot that ignores robots.txt would just be redirected, as would anyone trying to access /badbot/ or whatever I choose to call my trap folder. Thanks, Tom
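    A minimal sketch of the trap-folder idea described above; the folder name /badbot/ and the redirect target are placeholders, and the Spam Poison URL should be checked against whatever that service currently publishes:

        # robots.txt: well-behaved crawlers will skip the trap folder
        User-agent: *
        Disallow: /badbot/

        # /badbot/.htaccess: anything that ignores robots.txt and enters the
        # folder gets redirected away; use "Deny from all" instead for a plain 403
        RewriteEngine On
        RewriteRule .* http://english.spampoison.com/ [R=302,L]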

  • How can I make a browser trust my SSL certificate when I request resources from an external server?

    - by William David Edwards
    I have installed an SSL certificate on one of my domains and it works perfectly, but on some pages I include a Google Font, which causes the certificate icon in the address bar to change: a warning icon appears instead of the normal padlock (screenshots omitted). The reason, according to Google Chrome (translated with Google Translate): "Your connection to xxxxxx is encrypted with 128-bit encryption. This page includes other resources which are not secure. These resources can be viewed by others while in transit and can be modified in transit." So how can I make the browser 'trust' my SSL certificate, even though I request an external resource from Google Fonts? And also, does it matter that I use links like this:

        <link rel='stylesheet' id='et-shortcodes-css-css' href='https://xxxxxx/wp-content/themes/Divi/epanel/shortcodes/css/shortcodes.css?ver=3.0' type='text/css' media='all' />

    instead of:

        <link rel='stylesheet' id='et-shortcodes-css-css' href='wp-content/themes/Divi/epanel/shortcodes/css/shortcodes.css?ver=3.0' type='text/css' media='all' />

    Thanks!
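    The warning described above is standard mixed-content behavior: the page is served over https, but at least one resource on it is fetched over plain http. A minimal sketch of the usual fix, requesting the Google Fonts stylesheet over https (the Open Sans family name here is just an example, not taken from the question):

        <link rel='stylesheet' href='https://fonts.googleapis.com/css?family=Open+Sans' type='text/css' media='all' />

    Site-relative URLs like the second link in the question are also safe, since they inherit the page's scheme; only absolute http:// URLs trigger the warning.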

  • Google affecting my SERP Rank?

    - by Asad Moeen
    The following are some of my website's details. Home page: [thebluewaffles].[com]. Keywords: Blue Waffles; the rest of the keywords are post-specific. Site description: health articles blog. Site age: 1.5 years. A short history: when I started my website, my rules for posting content were at least 500 words on each page and to-the-point information in every article. I didn't go very fast with it, which is why I have only about 15 articles in 1.5 years. The SEO strategy was simple: I shared links through social-marketing websites and some article-sharing websites, after which I could see my website ranking in the top 5 SERP results. It ranked well for about 8 months straight, but I didn't keep updating content, and there were about 3 rough months when nothing was posted due to personal commitments. The SERPs dropped to the 2nd page in April and almost disappeared in May. I asked a lot of people about it and most blamed the lack of updates, so I started updating my site again; November has almost started and I still see no sign of my website ranking. Another important point: when I post a new article and do a title search in Google, it ranks well enough for the first 10 hours and then disappears. What could be wrong here?

  • Dotted subdomain name or new domain?

    - by Catalin Ilinca
    I have a company website hosted at www.BRAND.com (where BRAND is a generic name). The company wants to develop a "micro website" for one of its campaigns, named "Inspired By BRAND". I have two directions:
    1. inspired.by.BRAND.com - which I personally don't like too much; I don't know why, but I don't recall any web address of the form subdomain.subdomain.domain.com.
    2. inspired.BRAND.com - which I think is best suited for it: fewer dots, and similar to the "more friendly" subdomain.domain.com pattern.
    Any hints, guidelines, or thoughts are well appreciated. Thanks in advance.

  • Valid robots.txt? [closed]

    - by psot
    I am waiting for Google to crawl my site and display the results in search. Is my robots.txt alright, and will it let Google, Bing, etc. crawl my site? Thanks!

        User-agent: *
        Disallow: /cgi-bin/
        Disallow: /wp-admin/
        Disallow: /wp-includes/
        Disallow: /wp-content/
        Disallow: /build/
        Disallow: /css/
        Disallow: /trackback/
        Disallow: /comments
        Disallow: /assets/graphics/
        Disallow: /assets/visual/
        Disallow: /category/*/*
        Disallow: */trackback
        Disallow: */feed
        Disallow: */comments
        Disallow: /*?*
        Disallow: /*?

        User-agent: Slurp
        Disallow: /

        User-agent: Baiduspider
        Disallow: /

        User-agent: ia_archiver
        Disallow: /

        User-agent: duggmirror
        Disallow: /

        User-agent: Yandex
        Disallow: /

        Sitemap: http://example.com/sitemap.xml.gz

  • How to block FeedReader from fetching my content to their site?

    - by Wei Kai
    As you can see from the picture below, my site's content is duplicated by FeedReader and indexed at Google. When I click the FeedReader link, it uses some sort of iframe to pull content from my site live. This amounts to content duplication, and I believe it harms my site. (Stack Overflow doesn't allow me to post the image because my account is new; please click the link above to see the picture, many thanks.) What can I do to prevent FeedReader from fetching my content onto their site? I know robots.txt can perform such a function, but I don't know how to do it. Any help would be much appreciated. I also raised this issue with FeedReader 2 days ago, but have yet to get any reply from them.
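    Two hedged sketches of the usual approaches. The "FeedReader" user-agent token below is an assumption (the access logs would show the bot's actual name), and the response-header approach addresses the iframe case, which robots.txt alone cannot, because the iframe is fetched by each visitor's browser rather than by a crawler:

        # robots.txt: only helps if FeedReader's crawler honors it,
        # and the user-agent token here is a guess
        User-agent: FeedReader
        Disallow: /

        # Apache .htaccess (requires mod_headers): refuse to be
        # displayed inside another site's iframe
        Header always set X-Frame-Options SAMEORIGIN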

  • Blog not even ranking for exact title match, after domain has been dropped twice [on hold]

    - by Akshay Hallur
    Consider a blog about blogging and SEO. The domain was dropped (expired) twice before acquisition; the current owner is the third owner and has had the domain for 5 months. Blog posts are not ranking, even for exact title matches: Google+ or other shares show up instead of the content, and some blog posts are not even indexed. Let us say it gets around 7 organic visits a day. The dropped domain was less likely used for spam (the Wayback Machine shows 3 captures since 2004 spanning the 2 drops; I don't know whether there was email spam), and there are no manual actions in Webmaster Tools, so no reconsideration request is possible. What could be the reason for this? How can Google be told that ownership has changed and the domain is now spam-free? Is this domain salvageable, or does this only change after relocating to another domain?

  • 302 Redirect causes garbage at end of Wordpress link in Facebook

    - by Joao
    When I try to link my WordPress blog on Facebook, the URL doesn't resolve properly: there's garbage appended at the end, and Facebook is not able to retrieve information from the site. It happens on every page, post, and the main entry. Here's what happens: http://clarissarezende.com.br/ shows up in Facebook as http://clarissarezende.com.br/UPLcS/ (when I copy/paste the link), and no information about the site shows up in FB. I'm using WordPress 3.3.1 with ProPhoto 4. Recently I moved the DNS entry at my ISP: the blog is hosted at clarissarezende.com.br/public_html/blog2, and the DNS used to point to public_html; I changed it to point to public_html/blog2. Note that I did not move any WordPress files. I made what I think are the necessary changes all over Facebook, but still no dice. Any ideas on what could be happening?
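    A quick way to see what Facebook's scraper sees is to inspect the redirect chain by hand; a sketch with curl, using the URL from the question:

        # -I fetches headers only; -L follows redirects so the whole
        # chain (including whatever adds the /UPLcS/ suffix) is visible
        curl -I -L http://clarissarezende.com.br/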

  • WHMCS Fatal error: Out of memory while View Invoice PDF

    - by prakash
    I can log into WHMCS and can access everything I should be able to access, but if I click View PDF Invoice, the following error occurs:

        Fatal error: Out of memory (allocated 67633152) (tried to allocate 76 bytes)
        in /home/xxxx/public_html/whmcs/includes/classes/class.tcpdf.php on line 8419

    I have already set the allowed memory limit to 256MB, but the error still occurs; at the time of the error, the process memory is exceeding the allocation I set. I checked the log file and found the following errors:

        #2 /home/xxxxx/public_html/client/includes/classes/class.tcpdf.php(8453): TCPDF->Image('/home/xxxxx/...', 20, 25, 75, 17.5816023739, 'PNG', '', '', false, 300, '', false, 8)
        #3 /home/xxxxx/public_html/client/includes/classes/class.tcpdf.php(7881): TCPDF->ImagePngAlpha('/home/xxxxx/...', 20, 25, 337, 79, 75, 17.5816023739, 'PNG', '', '', false, 300, '', NULL)

    While I was investigating the issue above, I also noticed the error condition pictured below:
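    One hedged observation: the error reports only about 64MB allocated (67633152 bytes), which suggests the 256MB setting is not reaching the PHP instance that actually serves WHMCS. A sketch of the usual places to set it; paths and the available mechanism vary by host:

        ; php.ini for the web server's PHP (not the CLI php.ini)
        memory_limit = 256M

        # or, where the host allows overrides under mod_php, in the site's .htaccess
        php_value memory_limit 256M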

  • Google sitemap HrefLang tag without the main site url

    - by Rashmi Pandit
    We have websites with multilingual content, e.g.:

        http://www.example.com/about-us/
        http://www.example.com/en-HK/about-us/
        http://www.example.com/en-GB/about-us/
        http://www.example.com/zn-CH/about-us/

    We need to configure the hreflang tags in the sitemap for Google to know that there are alternate links for the same pages in different languages. I know for the above example that my sitemap url tag would look like this:

        <url>
          <loc>http://www.example.com/about-us</loc>
          <xhtml:link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-GB/about-us"/>
          <xhtml:link rel="alternate" hreflang="en-HK" href="http://www.example.com/en-HK/about-us"/>
          <xhtml:link rel="alternate" hreflang="zn-CH" href="http://www.example.com/zn-CH/about-us"/>
          <changefreq>daily</changefreq>
          <priority>0.8</priority>
        </url>

    However, if I don't have the main URL but just the last three ones with en-HK, en-GB and zn-CH, then how should my url tag look? Should I just skip the loc tag and keep the three xhtml:link tags? Or can I specify any URL in the loc tag and put the remaining two in xhtml:link tags? I am new to Google sitemaps. Any help is greatly appreciated. Thanks, Rashmi

    Edit: From the answer posted at http://stackoverflow.com/questions/18423624/sitemap-for-domain-with-multilanguage-site/18423803#18423803, for my example with sites in en-HK, en-GB and zn-CH, should there be three url tags, with each of them assigned a loc and the other two in xhtml:link tags?
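    For reference, Google's hreflang sitemap format expects each language version to get its own url entry whose loc is that version and whose xhtml:link set lists every alternate, itself included. A sketch for the en-GB variant, using the question's URLs; the other two variants follow the same pattern with their own loc:

        <url>
          <loc>http://www.example.com/en-GB/about-us/</loc>
          <xhtml:link rel="alternate" hreflang="en-GB" href="http://www.example.com/en-GB/about-us/"/>
          <xhtml:link rel="alternate" hreflang="en-HK" href="http://www.example.com/en-HK/about-us/"/>
          <xhtml:link rel="alternate" hreflang="zn-CH" href="http://www.example.com/zn-CH/about-us/"/>
        </url>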

  • Make blogger load faster

    - by Wladimir Ivanov
    Hello all. I use Blogger as the platform for an electronic music blog. Because of the blog's subject matter I embed many iframes (YouTube and SoundCloud), and of course this makes the articles load slowly: almost every article on this blog consists of some text and many iframes below it. What should I do in this particular case to make the articles (pages) load faster? Is there a ready-made solution, or should I use something like jQuery lazy loading to load each iframe once the scroll position reaches it? Any help is greatly appreciated.
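    A minimal sketch of that lazy-loading idea in plain JavaScript, assuming each embed is written with a data-src attribute instead of src; the attribute name is this sketch's convention, not a Blogger feature:

        // Swap data-src into src once an iframe scrolls near the viewport
        document.addEventListener('DOMContentLoaded', function () {
          var frames = document.querySelectorAll('iframe[data-src]');
          function loadVisible() {
            for (var i = 0; i < frames.length; i++) {
              var f = frames[i];
              // load when within 200px of the bottom of the viewport
              if (!f.src && f.getBoundingClientRect().top < window.innerHeight + 200) {
                f.src = f.getAttribute('data-src');
              }
            }
          }
          window.addEventListener('scroll', loadVisible);
          loadVisible(); // load anything already visible on page load
        });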

  • indexing and crawling

    - by ricky
    Hello. My site is dailytopup.com. Earlier, my site was indexed immediately whenever I posted anything, but last month the website crashed due to a server problem and I didn't have a backup at that time, so I recovered everything from cached copies. Before doing that, I removed the old URLs via Webmaster Tools and then reposted the content. Since then, however, the website is not indexed properly, which results in no optimization: every time I have to use Fetch as Google, and even that is not very effective. Can you please tell me where I'm going wrong, or what I should do now?

  • need advice for small css issue [migrated]

    - by JaPerk14
    I have a small design problem in my CSS, and I'd like to know if someone could check it out for me. The design problem is in the rollover effect of my horizontal navigation: there seems to be some sort of added margin or padding, but I'm having trouble finding the problem in the CSS. I will paste the code I'm using below, so you can see for yourself. You won't be able to see the problem until you roll over the navigation list items.

    HTML:

        <div class="Horiznav">
          <ul>
            <li id="active"><a href="#">Link #1</a></li>
            <li><a href="#">Link #2</a></li>
            <li><a href="#">Link #3</a></li>
            <li><a href="#">Link #4</a></li>
            <li><a href="#">Link #5</a></li>
          </ul>
        </div>

    CSS:

        .Horiznav {
          background: #1F00CA;
          border-top: solid 1px #fff;
          border-bottom: solid 1px #fff;
        }
        .Horiznav ul {
          font-family: Arial, Helvetica, sans-serif;
          font-weight: bold;
          color: #fff;
          text-Align: center;
          margin: 0;
          padding-top: 5px;
          padding-bottom: 5px;
        }
        .Horiznav ul li {
          display: inline;
        }
        .Horiznav ul li a {
          padding-top: 5px;
          padding-bottom: 5px;
          color: #fff;
          text-decoration: none;
          border-right: 1px solid #fff;
        }
        .Horiznav ul li a:hover {
          background: #16008D;
          color: #fff;
        }
        #active a {
          border-left: 1px solid #fff;
        }
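    A hedged guess at the cause, since the live page isn't available to confirm it: the links are inline-level elements, and vertical padding on inline elements paints a background without affecting line layout, which can look like stray margin or padding on rollover. A sketch of the common remedy:

        /* Making each link inline-block lets the vertical padding take
           part in layout instead of painting outside the line box */
        .Horiznav ul li a {
          display: inline-block;
          padding: 5px 6px; /* the 6px horizontal value is an assumption */
        }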

  • Extremely large spike in traffic on the 1st - 4th of every month from mobile browsers

    - by wsanville
    I've noticed that on the 1st through the 4th of each recent month (since January), several sites I maintain get thousands of requests from mobile browsers, whereas throughout the rest of the month the numbers are in the single or double digits. Has anybody else noticed this sort of behavior? I don't have the exact user agents logged, but my analysis software (WebTrends) reports the traffic as mostly iPhone/iPad/iPod, Android, and BlackBerry.
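    Since the question notes the exact user agents aren't logged, here is a sketch of capturing them in Apache's access log ahead of the next spike; the directives are Apache's stock "combined" log format, and the log path is a placeholder:

        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        CustomLog /var/log/apache2/access.log combined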

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server-side) on a new forum (a brand-new product from a technical point of view) we launched in Germany a few months ago, and I'm quite surprised by the results I get.

    I monitor our response time using Apache logs and our own implementation of a Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today, where it was 700 ms three months ago with our old product. I figured that GWT was taking client-side metrics into account, so I added some measures to our Boomerang beacon, and everything looks just fine. I've also run some random pages through YSlow and Google's Page Speed, and everything looks better than before; we even get 82% on Google's Page Speed tool, which is quite cool for a site with some ads in it :)

    Lately, we signed a deal with Akamai to use two of their products: CDN for our static files (we were using another CDN before, but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that these changes have improved response time from 650 ms to about 500 ms, which is good (still not great, but definitely an improvement). But Webmaster Tools continues to report an increasing average response time, where we see it decreasing over the same period.

    Have you ever seen this kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check that it is what Google wants?

    Edit 2011/07/26: Thanks for your answers, guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one for now. We probably found an issue on our side with some very slow pages (around 3000 ms!) and we are trying to fix them. I'll keep you posted as soon as I have some info. Thanks again!

  • How to secure robots.txt file?

    - by CompilingCyborg
    I would like user agents to index my relative pages only, without accessing any directory on my server. As an initial thought, I had this version in mind:

        User-agent: *
        Disallow: */*

        Sitemap: http://www.mydomain.com/sitemap.xml

    My questions:
    1. Is it correct to block all directories like that, with Disallow: */*?
    2. Would search engines still be able to see and index my sitemap if I disallowed all directories?
    3. What are the best practices for securing the robots.txt file?
    For reference, here is a good tutorial for robots.txt:

        # Add this if you want to stop Alexa from indexing your site.
        User-agent: ia_archiver
        Disallow: /

        # Add this to stop duggmirror
        User-agent: duggmirror
        Disallow: /

        # Add this to allow specific agents
        User-agent: Googlebot
        Disallow:

        # Add this to allow all agents while blocking specific directories
        User-agent: *
        Disallow: /cgi-bin/
        Disallow: /*?*

  • Reversing a Google Analytics transaction

    - by Dan
    I am confused about what code must be executed to reverse a Google Analytics transaction. I have the following code pasted within a test page:

        <body onLoad="function()">
        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-25305776-3']);
          _gaq.push(['_trackPageview']);
          _gaq.push(['_addTrans',
            '11455',   // order ID - required
            '-42.38',  // total - required
            '-2.38',   // tax
            '-15.00'   // shipping
          ]);
          _gaq.push(['_addItem',
            '11455',   // order ID - necessary to associate item with transaction
            'Evan Turner Turningpoint™ Basketball Pants', // product name
            '25.00',   // unit price - required
            '-1'       // quantity - required
          ]);
          _gaq.push(['_trackTrans']);
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>

    Is this correct? Thanks!

  • Google Analytics shows zero for the "Search Engine Optimization" graph

    - by Saeed Neamati
    In Google Analytics' new design, there is an area for the queries and impressions related to your site; you can get there via Traffic Sources > Search Engine Optimization > Queries. However, it now shows zero for the "Site Usage" graph in the top section, while other areas of Google Analytics definitely show that the site has visitors and is being used. No matter how much I search, I can't find the source of the problem. Does anyone know where the problem might be?

  • How to force user to use subdomain?

    - by David Stockinger
    I am hosting a webshop with OpenCart; its current URL is e.g. http://mydomain.com/shop/. I have created two subdomains (http://pg.mydomain.com/ and http://shop.mydomain.com/), and both are already working as they should. However, can I restrict direct access to mydomain.com/shop/ while leaving all the files (index.php, etc.) there? Since both subdomains point to http://mydomain.com/shop/, I thought this would block all direct access. In the end, I would like my two shops to be accessible through http://pg.mydomain.com/ and http://shop.mydomain.com/, but not through http://mydomain.com/shop/, while leaving all the files in http://mydomain.com/shop/.
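    A hedged sketch of one common way to do this with mod_rewrite in an .htaccess inside /shop/, assuming both subdomains are served from that same directory; the host names follow the question's placeholders:

        RewriteEngine On
        # If the request arrived via the bare domain rather than a
        # subdomain, bounce it to the shop subdomain; files stay put
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
        RewriteRule ^(.*)$ http://shop.mydomain.com/$1 [R=301,L]

    Requests arriving via pg.mydomain.com or shop.mydomain.com fail the RewriteCond and are served normally.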

  • Fetch as Google error 403

    - by Bojan Vidanovic
    Two weeks ago, Google stopped being able to access my website: in Webmaster Tools I can't fetch any page, I always get error 403, and the website has completely disappeared from the Google search results. I can't figure out why it suddenly can't see the site anymore; I've checked .htaccess and there is nothing that blocks Google's crawlers, and robots.txt is fine too. The site is accessible normally for users. Has anyone had this problem? Please help!
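    One hedged diagnostic from outside the server: request a page while presenting Googlebot's user-agent string, since some firewalls, security modules, and hosts return 403 based on the user agent alone (example.com stands in for the real site):

        # If this returns 403 while a plain request returns 200, something
        # on the server is filtering requests by user agent
        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://example.com/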

  • Why do Google search results differ between a script and a browser?

    - by Jayapal Chandran
    I wrote code to find the position of my domain in Google's results for a search keyword, and I also did the same search in a browser; the two results are different. Let me explain in detail. I have a website, and I wanted to know on which page of the search results a certain domain appears for a search string. For example, when I search for 'code snippets', I want to find on which page number of the Google results the domain appears. I wrote PHP code to search page by page, starting from page 1 to page n, and did the same task in a browser: the script returned page 4, while in the browser I can see the domain appearing on the second page. Here is the search string I use in my code:

        /search?hl=en&output=search&sclient=psy-ab&q=code+snippets&start=0&btnG=

    For each request I change start=0 to start=1, start=2, etc., and in each response I check whether my domain appears. Any ideas about this difference in search results?
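    Two hedged observations on the approach above. First, Google's start parameter is a result offset, not a page number, so consecutive pages are start=0, 10, 20; stepping by 1 scans overlapping result sets and will skew any page calculation. Second, a logged-in or cookied browser session gets personalized rankings that a bare script does not. A sketch of a query string for unpersonalized, region-pinned page 2; the pws=0 and gl parameters are commonly cited but not officially guaranteed by Google:

        /search?hl=en&q=code+snippets&start=10&pws=0&gl=us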

  • Google analytics: how many visitors have visited n times?

    - by Riley
    I'm trying to guess how many loyal users I have by counting the number of people that have visited the site 10 times. How can I answer this question with Google Analytics? "Visitor Loyalty" is a tempting answer, but the label for loyalty is "Visits that were the visitor's nth visit," and I want something more like "Visitors that visited n times." For example, we have 40 visits in the "51-100" visit range, but I think that could be a single user who visited 91 times. Or two users who visited 71 times each. The whole chart makes a good logic puzzle (I wonder if there's a unique solution) but doesn't easily answer the question I have.

  • How to get httpd to forward to multiple tomcats for different urls, including / ?

    - by Nick Foote
    OK, so I've got multiple Tomcat instances set up on several AJP ports. I also have Apache httpd listening on port 8090 (because another app is already using 8080 at the moment). I've successfully mapped URLs such as mydomain.com:8090/demo and mydomain.com:8090/preprod to their respective Tomcat instances using JkMount and the following vhost config:

        <VirtualHost *:8090>
          JkMount /preprod* preprod
          JkMount /demo* demo
        </VirtualHost>

    But I also want the "root" address to map to another Tomcat instance, what will become live/production; i.e. I want mydomain.com:8090/ to map to a third Tomcat instance. At the moment nothing happens or changes if I just add to the above config the line:

        JkMount /* rootwar

    If I browse to mydomain.com:8090 I just get the same boring Apache httpd landing page letting me know it's running (i.e. index.html in httpd/htdocs). Is it possible to use JkMount to direct the "root" address to a Tomcat instance? I can see that a rule like /* would also match URLs like mydomain.com/preprod, but I was hoping the rules are applied in order, so that /* at the end would effectively mean "if it's not one of the other environments, direct to root/production". Just to be clear, I'm trying to set up the following:

        mydomain.com:8090/preprod  --> myApp running in tomcat1
        mydomain.com:8090/demo     --> myApp running in tomcat2
        mydomain.com:8090          --> myApp running in tomcat3
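    A hedged sketch of the combined config; my recollection is that mod_jk picks the most specific matching mount rather than relying on file order, so the narrower /preprod* and /demo* rules should still win over the catch-all, but treat this as a starting point rather than a confirmed fix (worker names are the question's own):

        <VirtualHost *:8090>
          JkMount /preprod* preprod
          JkMount /demo*    demo
          # Catch-all for everything else, including /
          JkMount /*        rootwar
          # If the stock Apache landing page still appears for /, a
          # DirectoryIndex hit on htdocs/index.html may be short-circuiting
          # mod_jk; renaming that file is one way to test this.
        </VirtualHost>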

  • Meaning of Crawl errors

    - by com
    My question is about the definition of Crawl Errors in Google Webmaster Tools, which is divided into a few sections.

    The HTTP section: I assume all broken links in this section were somehow found by the crawler, and that these are not the links from the sitemap. But if these links were found by scanning pages from the sitemap for links, why doesn't it mention the source page, like the Sitemap section does with its "Linked From" column? Please correct me if I am wrong.

    The Sitemap section: it looks like all these links came from my sitemap. But there is a "Linked From" column, and I already know all these broken links are from the sitemap, so to fix the errors I should revise my sitemap. Am I wrong?

    The "Not followed" section: I don't know what it means. It looks like it accumulates all links that caused a redirect, but for some reason Google considers all these redirects wrong. Do you know if there is a set of rules for determining a wrong redirect? I actually found my mistake: I tried to normalize URLs and redirect them to the right URL, but I did the normalization in a wrong way.

    The "Not found" section: this is like the HTTP section but with 404 errors, and it has a "Linked From" column. But very often "Linked From" shows unavailable. What does that mean, that Google cannot tell me how it found a non-existing page? And how is this section related to the Sitemap section; does it contain all 404 links from the sitemap too? There are far more 404 links here than in the sitemap. I looked at what we have in "Linked From" and saw that one link came from the sitemap two months ago. But why does Google keep it indexed? The link is already dead, and the new sitemap doesn't have it. Is there an expiry date for old links?

    The "Unreachable" section: this looks like the section for 500 errors, and it has no "Linked From" column. There are too many completely meaningless links; I really don't know where this stuff came from, and without "Linked From" I am not able to figure out how to deal with it.

    Sorry for such a big topic, but I just want to make clear what every section stands for, because that is crucial for dealing with all these problems. Hopefully it will be useful not just for me. Thanks!

  • Guiding Management to the Correct Decision

    - by Blumer
    My supervisor (also a developer) and I have a running joke about writing a book called "Managing From Beneath: Subversively Guiding Management to the Right Decision" and including a number of "techniques" we've developed for helping those who make the decisions to make the right ones. So far, we've got (cynicism warning!):

    BIC It! BIC stands for "Bury In Committee." When a bad idea comes up that someone wants to champion, we try to get it deferred to a committee for input. Typically it will either get killed outright (especially if other members of the committee are competing for you as a resource), or it will be hung up long enough that the proponent forgets about it.

    Smart, Stupid, or Expensive? When someone gets a visionary idea, offer them three ways to do it: a smart way, a stupid way, and an expensive way. The hope is that you've at least got a 2/3 shot of not having to do it the way that makes a piece of your soul die.

    All-Pro. It's a preemptive pro/con list in which you get into the mind of the (pr)opponent and think what would be cons against doing it your way. Twist them into pros and present them in your pro list before they have a chance to present them as cons.

    Dependicitis. Link pending decisions together, ideally with the proponent's pet project as the final link in the chain. Use this leverage to force action on those that have been put off.

    Preemptive Acceptance. Sometimes it's clear that management is going to go a particular direction regardless of advice to the contrary, and it's time to make the best of it. Take the opportunity to get something else you need, though. Approach the sponsor out of the blue and take the first step: "You know, I've been thinking about it, and while it's not the route I would advise, as long as we can get the schedule and budget for Project Awesome loosened up, I can work some magic to make your project fly."

    So ... what techniques have you come up with to try to head off the problem projects or make the best of what may come?
