Search Results

Search found 9757 results on 391 pages for 'shekhar pro'.

Page 142 of 391

  • Need to redirect Wordpress category archives

    - by Scott
    I recently changed my WordPress category structure a bit, changing some of the names and placing some under different parent categories. I don't use the category name in my post URLs, so that's not a problem. But my category archive pages are indexed and have page rank I don't want to lose. So I need to redirect "/category/old_cat_name" to "/category/new_cat_name", or in some cases to /new_cat_name/new_sub_cat. I gather that I can't do this through the WP Redirection plugin and that I have to modify my .htaccess. Can someone show me what lines to add there, or is there a better way to do this? Thanks.
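
    A minimal .htaccess sketch of the kind of rule being asked about, assuming Apache's mod_alias is available and using the placeholder slugs from the question:

        # one line per renamed category archive
        Redirect 301 /category/old_cat_name /category/new_cat_name
        # a category that also moved under a new parent
        Redirect 301 /category/another_old_name /new_cat_name/new_sub_cat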

    Read the article

  • Slashdotted web site seeks new home

    - by Arthur Edelstein
    I am maintaining a website that contains mostly simple HTML (just a little PHP). Normally the site receives only 4,000 hits per month, but it was recently slashdotted by the New York Times (30,000 visitors and 30 GB in a day) and the web host provider (Bluehost) throttled the CPU in response, which slowed the website down considerably. Which web host providers would offer a more scalable solution? Ideally I would like a high-quality host that charges by the GB and can absorb sudden bandwidth spikes during slashdotting episodes without a reduction in performance.

    Read the article

  • OpenSearchDescriptions good or bad signal in Google's eyes?

    - by JeremyB
    I noticed a site using this tag: <link rel="search" type="application/opensearchdescription+xml" title="XXXXXXXXX" href="http://www.XXXXXXXXXX.com/api/opensearch" /> As I understand it (based on http://www.opensearch.org/Home), this tag is a way of describing search results (so you use it on pages which contain search results) to make it easier for other search engines to understand and use your results. Given that Matt Cutts has said Google generally frowns on "search results within search results", is using this tag a bad idea on a page that you hope will achieve a good ranking in Google?

    Read the article

  • I used a 301 Permanent Redirect to a 3rd party site by mistake! Can I stop the redirection?

    - by Dees
    Oh Noes! I've been parking a domain name for a friend/client of mine on my hosting provider (Dreamhost, FWIW) for a while, and they eventually asked me to redirect their domain to a 3rd party website which is currently featuring some relevant promotional content. Once this period ends, we will probably go ahead and set up a proper website for the domain on my hosting account. I used Dreamhost's "redirect" hosting option in their domain configuration panel, not realizing that it would implement a 301 Permanent redirect, or what the implications were. Now it seems that for any client that has visited the site anytime recently, the 301 redirect is still cached/in effect, although I have changed the domain settings back to regular Dreamhost full site hosting. It seems that the only thing that can be done is to wait out the TTL/cache expiration for the redirect. I have no idea how long that might be, so I'm wondering if there is any good way to cache-bust the redirect or otherwise undo its long-term effects. I put a simple html meta refresh in the domain folder to replace the 301 to keep the intended functionality in place, but I'm still not able to access the domain's other content normally, even via FTP, etc. Isn't there anything I can do? Otherwise, how long does it take for a cached redirect to expire? It's gonna be a bummer if it's really permanent.
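
    If the promotional redirect still needs to exist once the domain is back under normal hosting, a safer pattern going forward is a temporary redirect that clients are told not to cache. A minimal .htaccess sketch, assuming Apache with mod_rewrite and mod_headers enabled on the account (the target URL is a placeholder):

        RewriteEngine On
        # 302 = temporary, so browsers should not remember it the way they remember a 301
        RewriteRule ^$ http://third-party-promo.example.com/ [R=302,L]
        # discourage caching of the redirect response itself
        Header always set Cache-Control "no-store, no-cache, must-revalidate"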

    Read the article

  • Google analytics - drop in traffic

    - by user1001421
    Bit of a general question here. We are in the process of converting a number of our clients from older web sites to new ones. The problem we are getting, and sorry for being so general here, is a sharp decline in traffic as reported by Google Analytics. It's not a gradual decline; it seems to hit almost as soon as the new site goes live. I've just got a few questions to see if there is something we are doing wrong: a) We are using the same analytics accounts going from old to new site. Is this a bad idea? b) The actual analytics code is integrated into the pages using a server-side include. Is this a bad idea? c) We structure our sites differently to the old sites. The old sites would pretty much have all the web pages in the root directory, and hyperlinks would point to the page files, e.g. <a href="somepage.aspx">Link</a>. Our new sites have a directory structure that pretty much reflects the navigation structure, and hyperlinks point to the page's directory instead of the actual page, e.g. <a href="/new-items/shoes/">New shoes</a>. Is this a bad idea? I'm really searching for a needle in a haystack here. Would appreciate any help or advice as to why we are getting such a sharp and sudden drop in traffic. Again, sorry this is such a general question. Thanks in advance.

    Read the article

  • Clicks counting and crawler bots

    - by Dennis
    I am currently running a small affiliate program for Facebook users. We use an auto-poster to publish links to fan pages. Every hit is stored in our database, and we have included a 24-hour reload block per IP address. My problem right now is that the PHP script also stores every hit from all the bots that crawl my website. I was thinking of blocking those bots with my website's robots.txt, but I am afraid that this will have a negative effect on my AdSense ads. Does anybody have an idea how I can work this out?
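
    A minimal sketch of one common workaround: filter obvious crawlers inside the PHP counter itself instead of blocking them in robots.txt, so AdSense's crawler can still reach the pages. The user-agent pattern list and the record_hit() helper are illustrative placeholders, not part of the original script:

        <?php
        // Skip counting a hit when the user agent looks like a known crawler.
        function is_probably_bot() {
            $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
            return (bool) preg_match('/bot|crawl|slurp|spider|mediapartners/i', $ua);
        }

        if (!is_probably_bot()) {
            // existing logic: store the hit and apply the 24-hour IP reload block
            record_hit($_SERVER['REMOTE_ADDR']);
        }
        ?>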

    Read the article

  • Page appears indexed in Google but not findable for any search terms?

    - by Jeff Atwood
    (Note that I am going to use screenshots here because I suspect writing about this will change the behavior over time.) If you do a Google search for uiviewcontroller best practices either with or without the quotes, you end up with results like this: Note that none of these pages resolve to the actual Stack Overflow question containing those words in the title. They resolve to either a) sites that are mirroring our Creative Commons data and correctly pointing back to the source question without nofollow, as properly specified by our attribution requirements, or b) our own internal links to the question, but not the actual question itself. The actual page with the title ... Custom UIView and UIViewController best practices? ... does exist at this URL ... http://stackoverflow.com/questions/3300183/custom-uiview-and-uiviewcontroller-best-practices ... and apparently it is present in Google's index! But why does it not appear when we search for uiviewcontroller best practices? 1) We know that Google contains this page in its index; 2) our search terms match the title of the question; 3) Stack Overflow has much higher PageRank than the other sites that are mirroring this question under Creative Commons. I don't get it. What are we doing wrong here?

    Read the article

  • root folder php scripts not running in nginx

    - by Thermionix
    nginx with php-fpm on an Ubuntu 12.04 server. Attempting to access /var/www/test.php (via https://example.net/test.php) downloads the script instead of executing it. If I place test.php in a subdirectory, i.e. /var/www/test/test.php, it executes.

    root.conf:

        root /var/www;
        include php-fpm.conf;

        location ~ /\. {
            access_log off;
            log_not_found off;
            deny all;
        }

    php-fpm.conf:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_pass unix:/var/run/php5-fpm.socket;
            include fastcgi_params;
        }

    fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_index index.php;
        fastcgi_param HTTPS on;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;
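
    For comparison, a minimal known-good layout that serves PHP from the document root; the socket path and root are taken from the question, while the location / block and the index directive are assumptions:

        server {
            root /var/www;
            index index.php index.html;

            location / {
                try_files $uri $uri/ =404;
            }

            location ~ \.php$ {
                try_files $uri =404;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.socket;
            }
        }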

    Read the article

  • A good resource to get the most out of Google Analytics

    - by glinch
    I was wondering if anyone could offer me some advice as to the best resources out there (ideally books) on Google Analytics. I have a basic understanding but a lot of room for improvement. The book "Advanced Web Metrics with Google Analytics" by Brian Clifton appears to be a good starting point, but it is already quite dated even though it was published in March 2010. Any advice would be greatly appreciated.

    Read the article

  • Hidden form and SEO

    - by AntonAL
    I'm using hidden forms to collect some statistics. Will this incur any penalty from search engines? Update 1: I'm collecting some statistics based on user interaction with my website. For example, POST requests will be sent to the server when: the user stops a playing video, the user has watched a video to its end, etc. Using form_remote_for in Rails, I'm just rendering the form and keeping it invisible. The reason for doing this is to make use of authenticity tokens and simply have less to code. Via JavaScript I'm only filling in some hidden fields and initiating form submission.
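
    A minimal sketch of the pattern being described, with the form kept invisible and submitted from JavaScript; the field names, the /stats endpoint and the jQuery dependency are assumptions for illustration:

        <form id="stats-form" action="/stats" method="post" style="display:none">
          <input type="hidden" name="video_id" value="">
          <input type="hidden" name="event" value="">
        </form>

        <script type="text/javascript">
          // fill the hidden fields and post the form when the player reports an event
          function reportVideoEvent(videoId, eventName) {
            var form = document.getElementById('stats-form');
            form.video_id.value = videoId;
            form.event.value = eventName;
            $.post(form.action, $(form).serialize());
          }
        </script>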

    Read the article

  • How to SEO Optimize Javascript Image Loader?

    - by skibulk
    I am building an image-centric catalog website. It catalogs collectible gaming cards numbering 100,000+ pages. Competitor sites receive millions of hits each month, so with the possibility of excessive traffic, I need to moderate image bandwidth while also optimizing for image SEO. I'm looking for some tips on doing so. Each page on the site features one card with appropriate tags and descriptions. There are, however, four images for each card: one on matte cardstock, one on foil cardstock, one digital, and one digital foil. In a world with unlimited bandwidth and no-wait page loads, I'd simply embed all four images on the main product page with titles, alt tags, and captions to rank them according to their version keyword. In reality a JavaScript gallery image loader seems appropriate. Here is a simplified example of my current code. Would this affect SEO in any way? Should I be doing anything differently? Note that I don't want to create a page for each image, as I'd have to duplicate the card tags and descriptions on each one, diluting PR for the main page. Thanks for any insight!

        <script type="text/javascript">
        document.write(
          '<img src="thumbnail1.jpg" data-src="version1.jpg">' +
          '<img src="thumbnail2.jpg" data-src="version2.jpg">' +
          '<img src="thumbnail3.jpg" data-src="version3.jpg">' +
          '<img src="thumbnail4.jpg" data-src="version4.jpg">'
        );
        </script>
        <noscript>
          <img src="version1.jpg">
          <img src="version2.jpg">
          <img src="version3.jpg">
          <img src="version4.jpg">
        </noscript>

    Read the article

  • GA and Unique visitors again

    - by DDEX
    I take care of a company intranet and measure the traffic with GA. I am absolutely sure that there are no more than 5,000 URLs in our company, and it is impossible to reach the intranet from outside the company network. Yet when I check the number of Unique Visitors (UV) for the last year, GA says there were 36,500 of them... How is that possible? I thought UV should measure each URL only once in the given time period. Could anybody explain how this actually works? Can it be that the tracking cookies expire after some time and are then counted more than once?

    Read the article

  • CSS specificity: Why isn't CSS specificity weight of 10 or more class selectors greater than 1 id selector? [migrated]

    - by ajc
    While going through the CSS specificity concept, I understood that it is calculated in 4 parts: 1) inline (1000), 2) id (100), 3) class (10), 4) HTML elements (1). The rule with the highest specificity will be applied to the corresponding element. I tried the following example and created more than 10 classes:

        <div class="a1">
          ....
          <div class="a13" id="id1">
            TEXT COLOR
          </div>
          ...
        </div>

    and the CSS:

        .a1 .a2 .a3 .a4 .a5 .a6 .a7 .a8 .a9 .a10 .a11 .a12 .a13 {
          color: red;
        }

        #id1 {
          color: blue;
        }

    Now, even though in this case there are 13 classes, the weight is 130, which is greater than the id. Result - JSFiddle CSS specificity

    Read the article

  • How can I convince IE to honor my explicit instructions to make a table column X pixels wide? [migrated]

    - by AnthonyWJones
    Please consider this small but complete chunk of HTML:

        <!DOCTYPE html>
        <html>
        <head>
          <title>Test</title>
          <style type="text/css">
            span {overflow:hidden; white-space:nowrap; }
            td {overflow:hidden; text-overflow:ellipsis}
          </style>
        </head>
        <body>
          <table cellspacing="0">
            <tbody>
              <tr>
                <td nowrap="nowrap" style="max-width:30px; width:30px; white-space:nowrap; "><span>column 1</span></td>
                <td nowrap="nowrap" style="max-width:30px; width:30px; white-space:nowrap; "><span>column 2</span></td>
                <td nowrap="nowrap" style="max-width:30px; width:30px; white-space:nowrap; "><span>column 3</span></td>
              </tr>
            </tbody>
          </table>
        </body>
        </html>

    If you render the above in Chrome you'll see the effect I'm looking for. However, render it in IE8 or 9 and the width and/or max-width is ignored. So my question is: how do I get IE to simply let me specify the width of a cell explicitly? BTW, I've tried various combinations of table-layout:fixed and using colgroup with cols and all sorts; nothing I've tried convinces IE to do what I'm clearly asking it to do. If I had any hair before starting this I wouldn't have any left by now.

    Read the article

  • Tracking AdWords ads with different text in Google Analytics

    - by at01
    I'm trying to see how the text in my Google AdWords ads affects my metrics in Analytics. I have auto-linking enabled, so I figured I would be able to automatically see this in Analytics. Unfortunately, if I try to add a second dimension of Traffic Sources-Ad Content, the metrics are only split by the ad's Headline. Most of my tests are changing only the ads' descriptions... So I guess I need to add a tracking parameter like ?campaign=special_text to my URLs? Or is there a way to see the ads split by ad descriptions? Should I add the full suite of utm_campaign/utm_medium/etc parameters? What's the proper way to track these ads which are mostly similar except the ad descriptions?
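
    One hedged possibility, assuming manual tagging is acceptable for these destination URLs: put the description variant in utm_content, since that value is what Analytics surfaces as the "Ad Content" dimension. The values below are placeholders:

        http://www.example.com/landing?utm_source=google&utm_medium=cpc&utm_campaign=promo&utm_content=description_variant_a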

    Read the article

  • Will Google Analytics track URLs that just redirect?

    - by Derick Bailey
    I have a link on my site. That links goes to another URL on my site. The code on the server sees that resource being requested and redirects the browser to another website. Will Google Analytics be able to know that the user requested the URL from my server and was redirected? Specifically, I set up a /buy link on my watchmecode.net site to try and track who is clicking the "Buy & Download" button. This link/button hits my server, and my server immediately does a redirect to the PayPal processing so the user can buy the screencast. Is Google Analytics going to know that the user hit the /buy URL on my site, and track that for me? If not, what can I do to make that happen?
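
    Google Analytics only records pages where its JavaScript actually runs, so a server-side redirect at /buy will not show up on its own. A minimal sketch of one common workaround, assuming the classic ga.js (_gaq) tracker of that era and treating the event names as placeholders: record an event from the click itself, before the browser follows the link:

        <a href="/buy"
           onclick="_gaq.push(['_trackEvent', 'screencast', 'buy-click']);">
          Buy &amp; Download
        </a>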

    Read the article

  • foobar.com working, but www.foobar.com not working?

    - by dpmattingly
    I am setting up a web site for a client. She is using GoDaddy for domain registration and a hosting company I have never used before. After setting up the nameservers on GoDaddy's side, the address foobar.com (for example) correctly directs to the new site. However, the address www.foobar.com redirects to a 404 page on the hosting company's side. I've been dealing with customer service on the hosting side, and they have told me various things, including to wait for DNS propagation (which has obviously happened, since the 404 page is on their side) and to make sure that the nameservers on GoDaddy's side were entered in lower case instead of upper case (which I know doesn't matter, since nameservers are case insensitive). I think I'm getting the runaround from the hosting company, but the client signed up with them before I came to the project, so if possible I'd like to resolve this issue with them before we start treating it as a loss. Does anybody know what could cause foobar.com to resolve correctly but www.foobar.com to not resolve? How would I best be able to suggest a fix through the technical support channels of a hosting company?
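
    For reference, a minimal sketch of the records and host configuration that usually need to exist for both names to work; the IP address is a placeholder and the exact setup depends on the hosting control panel:

        ; DNS zone
        foobar.com.        IN  A      203.0.113.10
        www.foobar.com.    IN  CNAME  foobar.com.

        # web server: both hostnames must map to the same site
        ServerName foobar.com
        ServerAlias www.foobar.com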

    Read the article

  • Resize browser window below 400px on OS X

    - by David
    Resizing Firefox windows (by dragging) works fine up until the window is about 400 px wide, at which point the width of the web page content ceases to follow the window width. I'm pretty sure it's not a CSS issue, and the same thing goes for Chrome and Safari as well (they won't even let me resize the window below 400 px wide). I can't understand where this limitation comes from. Is it a setting in the browser? A bug? A limitation of the OS?

    Read the article

  • What sort of phone numbers are allowed as the WHOIS contact?

    - by billpg
    I'm getting a non-trivial number of scam phone calls to the phone number listed as the WHOIS contact. Could I change it to a premium rate line? If the scammers want to talk to me so much, make them pay for the privilege! Seriously though, are there any restrictions on the type of phone number I can give as my WHOIS contact, beyond the obvious requirement that it be a phone number which can be used to contact the domain holder? In the UK, cell phones are more expensive for the caller to reach than landlines, so I suspect a significant number of people are already listing a "premium rate" phone number.

    Read the article

  • Why is a Joomla-based website that was copied from a live server to localhost not showing pictures and throwing a 404 error?

    - by Darius
    I have copied a Joomla-based website via FTP onto my machine, and I am trying to make it run on my localhost, which is provided by the latest version of XAMPP. I have exported and imported the DB with no problems. I have placed all the files and folders into the htdocs folder, but when I go to localhost/examplesite all I get is the text that is on the front page, with no pictures, and it displays a 404 error. Do I need to make changes to .htaccess? If so, can someone point me in the right direction? Thanks
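
    A hedged sketch of the local settings that most often need adjusting after moving a Joomla site into XAMPP; the /examplesite folder and the paths below are assumptions based on the question's setup, not confirmed values:

        # .htaccess (only relevant if the live site used Joomla's SEF URLs via htaccess.txt)
        RewriteBase /examplesite/

        # configuration.php
        public $live_site = '';   // or 'http://localhost/examplesite'
        public $log_path  = 'C:/xampp/htdocs/examplesite/logs';
        public $tmp_path  = 'C:/xampp/htdocs/examplesite/tmp';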

    Read the article

  • Web Hosting Checklist

    - by Chris
    Hello, I am a web developer who is starting to look into hosting his own website. I would like to showcase my programming skills (PHP, MySQL, C#, WordPress). I am OK with the languages themselves, but my knowledge of the actual hosting side is where things start to get a little shaky. I know the basics (bandwidth, sub-domains, rewrite rules), but I would love your input to help me formulate a checklist of things to look out for in a web-hosting service. Also, I was wondering if there are any reliable hosting providers who give you the option to host both C# code-behinds and PHP code, as I would like to have two versions of my site, one in C# and one in PHP. The hope is that if I need to look for another job, this website will help me show possible employers my server-side knowledge. I hope this is enough info. I did some researching online but found a bunch of useless articles, and I've always had luck on the StackExchange sites, so hopefully you can help me. Thanks a lot.

    Read the article

  • How can I tell GoogleBot that a subdirectory is now a subdomain?

    - by cwd
    I had about a million pages of a catalog indexed under a subdirectory, and that content has now moved to a subdomain. GoogleBot is crawling each one of them and getting a 301 redirect to the new location. Even though I have set up the redirect rule in the Apache sites-enabled configuration file (i.e. the redirect happens early, before PHP is even loaded), the server isn't handling the load well. GoogleBot is making around 5 requests per second, and on top of my normal traffic that is pushing up the CPU for a few hours at a time. I checked Webmaster Tools and the corresponding documentation for a way to let Google know that the content had been moved from a subdirectory to a subdomain, but with little luck. Basically the most helpful thing I saw said to just send 301 headers for the new location. How can I tell GoogleBot that a subdirectory is now a subdomain? If that is not an option, how can I more efficiently send 301 redirects out for a particular subdomain? I was thinking perhaps an Nginx server, but I'm not sure that I can run both Apache and Nginx side by side on port 80 for different subdomains.
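
    If a redirect-only front end is worth trying, this is a minimal sketch of the kind of lightweight Nginx server block that could issue the 301s without involving Apache or PHP; the hostnames and the /catalog/ prefix are placeholders, and Nginx would need its own IP or port, since two daemons cannot share the same address on port 80:

        server {
            listen 80;
            server_name www.example.com;

            # send the moved catalog pages straight to the subdomain
            location /catalog/ {
                rewrite ^/catalog/(.*)$ http://catalog.example.com/$1 permanent;
            }
        }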

    Read the article

  • Having a good domain name and using domain aliases (I use notlong.com)?

    - by Michal P.
    I use only free servers, and after creating my website http://pundaquit.republika.pl I decided to make the domain reachable through a simpler domain name. I decided to use the domain alias service http://notlong.com/ and got the simple domain name http://pundaquit.notlong.com. The second advantage of using an alias here was to be independent of my file host, which I will have to change at some point. I haven't found a better alias service than notlong, because notlong.com is easy to remember. After that I encountered many problems: most forums and social services treat a notlong address as spam, Bing so far hasn't accepted the http://pundaquit.notlong.com domain, and others. Is there another way to have a good free domain name? And what about the situation when your hosting server informs you it is about to expire? Only a lasting layer of domain aliases makes you independent of the actual file hosts.

    Read the article

  • HTTP 303 redirection and robots.txt

    - by Ian Dickinson
    On a site I'm working on, we're using the HTTP 303 redirect pattern (see this article for background) to distinguish between information and non-information resources. So: some URL's under /id get redirected to dynamically-created pages under /doc. These dynamic pages are built from a database, and contain links to other /doc/ resources, so in general we don't want them to be crawled. Our robots.txt contains: Disallow: /doc However, we do want the non-redirected pages under /id to get indexed by Google et al: Allow: /id So the question I have, which I can't find an answer to so far, is: if an allowed /id page 303-redirects to a /doc page, will it still be blocked by robots.txt? If yes, we're OK, but otherwise I'm going to disallow all /id resources in the robots file, as having the crawler hammer the db would be worse than losing search indexing for the /id pages.

    Read the article

  • Best practice for bulk eCommerce product upload?

    - by Or W
    I'm thinking about opening a large online store for jewelry. The one thing that really bothers me is managing the actual operation of taking pictures of, uploading, and describing all the products. I'm trying to figure out the best way to do it, in terms of performance and the least time consumed. Just a few things to keep in mind: I'll have over 1,000 items in the online store; I'll have 3-4 pictures for each item (I'm using a DSLR camera, if it makes any difference); and I'm probably going to use Magento, unless you have better experience with another eCommerce platform that will help me get this done quickly. I'll also need to randomly(?) create a product code for each item.
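
    On the last point, a minimal sketch of one way product codes could be generated during a bulk import; the prefix scheme, the helper name and the zero-padding width are assumptions, not Magento conventions:

        <?php
        // Derive a readable SKU from category, material and a running counter.
        function make_sku($category, $material, $counter) {
            return strtoupper(substr($category, 0, 3)) . '-'
                 . strtoupper(substr($material, 0, 2)) . '-'
                 . str_pad($counter, 5, '0', STR_PAD_LEFT);
        }

        echo make_sku('necklace', 'gold', 42);  // NEC-GO-00042
        ?>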

    Read the article
