Search Results

Search found 9757 results on 391 pages for 'shekhar pro'.


  • What is the best way to create a blog?

    - by jogesh_p
    I usually have queries related to my blog, but I am a bit confused about one question: I want to know the best way to set up a blog. I have a domain, http://www.example.com, and I want to host my blog on it. Should I use a subdomain, like http://blog.example.com, or a subdirectory, like http://www.example.com/blog, and which is more beneficial for SEO?

    Read the article

  • Using a front controller design pattern doesn't allow images to be served

    - by MrMe TumbsUp
    I am currently using a front controller; all requests for my website go through it. I have a problem with image links like:

        <img src="img/image.jpg" />

    My front controller tries to dispatch the request to application/controller/ImgController.php, and then the image won't load. I think it has something to do with the .htaccess file:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]
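    One thing worth noting as an editorial aside: the conditions above already pass existing files through, so a likely culprit is the relative src. On a page like /user/profile, the browser requests /user/img/image.jpg, which does not exist and therefore falls through to the front controller. A minimal sketch of two possible fixes (the img/ directory name is an assumption taken from the example above):

        # Option 1: make the image path absolute in the HTML
        #   <img src="/img/image.jpg" />
        # Option 2: never route the asset directory to the front controller
        RewriteEngine On
        RewriteRule ^img/ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^.*$ index.php [NC,L]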

    Read the article

  • How do I remove a LOT of indexed pages from Google?

    - by Thierry
    A few weeks ago we discovered that Google has indexed some information we would rather keep confidential, in the form of individual PDF files. Our assumption was that this was a problem with our robots.txt that we had overlooked. Even though we are not sure whether that is the case, we are certain that the robots.txt file is in a valid format and, according to Google's Webmaster Tools, is blocking the files. However, even weeks after that adjustment was made, Google still has the PDF files indexed, though it does say that further information cannot be provided because of the robots.txt file. As you can hopefully understand, this is unwanted behaviour given the nature of the documents. I am aware that Google provides a removal-request page for this purpose, but there are a lot of files. Is there an easier way to get Google to remove all of the files from its index? If not, is there anything else you could advise us to do besides manually requesting that Google remove every single page? Thanks in advance.
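    A common gotcha worth noting here: while robots.txt blocks the files, Google cannot recrawl them to see a noindex signal, so the stale entries linger. A minimal sketch of the alternative approach, assuming Apache with mod_headers (the PDFs must first be unblocked in robots.txt so the header can actually be fetched):

        # serve a noindex header with every PDF so Google drops
        # them from the index on its next crawl
        <FilesMatch "\.pdf$">
            Header set X-Robots-Tag "noindex, noarchive"
        </FilesMatch>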

    Read the article

  • Is this a link scheme? If so, what should I do, and what problems can I face?

    - by guisasso
    I was asked to remodel a website and decided to check its rank on Alexa. Surprisingly, there are many, many different websites linking to it, none of them relevant. One peculiar thing is that none of these URLs work, and they all display the exact same error when accessed, which to me is a strong indication that this is some sort of linking scheme (besides the somewhat obvious names, one of the URLs even says "scheme"!). If so, how should I proceed with this website? What can I do if this is in fact a scheme, how can it hurt the website, what types of problems can I face, and what can I do about it?

        addurlnow . info
        dirlist15.addurlnow . info/Business___Economy/Services/page-12.html
        linkdirectory101 . info
        dirlist16.linkdirectory101 . info/Business___Economy/Services/page-15.html
        seonetblog . info
        dirlist52.seonetblog . info/Business___Economy/Affiliate_Schemes
        addurls . us
        dirlist21.addurls . us/Business___Economy/Services/page-10.html
        webdirectoriessite . info
        dirlist20.webdirectoriessite . info/Business___Economy/Services/page-6.html
        addurlstore . info
        dirlist10.addurlstore . info/business___economy/services/page-14.html
        ukwebdirectorys . info
        dirlist21.ukwebdirectorys . info/Business___Economy/Services/page-13.html

    Read the article

  • How can I allow robots access to my sitemap, but prevent casual users from accessing it?

    - by morpheous
    I am storing my sitemaps in my web folder. I want web crawlers (Googlebot etc.) to be able to access the files, but I don't necessarily want all and sundry to have access to them. For example, this site (superuser.com) has a site index, as specified by its robots.txt file (http://superuser.com/robots.txt). However, when you type http://superuser.com/sitemap.xml, you are directed to a 404 page. How can I implement the same thing on my website? I am running a LAMP site and am using a sitemap index file (so I have multiple sitemaps for the site). I would like to use the same mechanism to make them unavailable via a browser, as described above.
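    A minimal sketch of one way to do this on a LAMP stack, assuming mod_rewrite is available (note that the User-Agent header can be spoofed, so this is obscurity rather than security; verifying Googlebot by reverse DNS is the stricter option):

        RewriteEngine On
        # pretend the sitemaps don't exist unless the client claims
        # to be a known crawler
        RewriteCond %{HTTP_USER_AGENT} !(Googlebot|bingbot|Slurp) [NC]
        RewriteRule ^sitemap.*\.xml(\.gz)?$ - [R=404,L]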

    Read the article

  • Google doesn't seem to update the description or title of my homepage

    - by Dayson
    Before we launched our website, we had set up a "coming soon" page, and Google picked up the title and description from its contents. So the description in the search results said, "Coming soon! Visit epicwhale.org for updates." It's been a few weeks since we launched the website. We've even created a sitemap and submitted it to Google. In the Google Webmaster panel, the pages have been crawled, and all of them appear as expected on Google, EXCEPT the homepage, which is still not updated! The title and description of the homepage in Google search results still say "coming soon". The website I am referring to is textmewidget.com, and below are images of the search results. Google: http://i.imgur.com/vAkJg.png I checked on Bing too, and it appears to be fine there. Bing: http://i.imgur.com/Q8O6L.png All other pages seem to be indexed fine on Google, and I don't even have any crawl errors in my reports. So what seems to be the problem? I've already waited for two weeks. Thanks in advance!

    Read the article

  • Google Analytics goal funnel visualization issues

    - by Lauren
    This is the goal funnel for checkout. Does anyone have any idea where the "/" step is coming from? The cart page is at site: game on glove dot com (I don't want this page being indexed in Google particularly well). Go to the site, click on the order button, make your selection, and click the button to enter the cart (it resolves to /Cart and /Shop-Cart). I believe I used regular expression matching to match "cart". So why the "/"? I don't know what would cause the home page to reload while users are on the cart page, which sits inside a Colorbox lightbox where the only way back to home ("/") is the exit button in the top right of the lightbox. Here's my one guess for the former question, but it doesn't seem likely: see the "check out with paypal" button? If you hover over it, it defaults to the home page, which might be the "/"... but it really redirects the user to the paypal.com page, so it shouldn't also load the home page.

    Read the article

  • Will using two different tracking codes affect my SERP?

    - by Danny Hefer
    Hello everyone and thanks for your time! I am facing a problem after a site migration. The new site is basically an improved version of the old site, with the same content and some extras. After pointing the domain name to the new site, the old site was still online for a while but didn't get any traffic. The new site has its own tracking code. So, the old tracking code has age (something like 7 years) but no visitors for a month, while the new tracking code is a month old with acceptable traffic. How do you think Google will react if I add the old tracking code to the new site? Thanks in advance!

    Read the article

  • Reverse proxying only a specific URL

    - by Bart Silverstrim
    I have a web server at www.ourcompany.com running Apache2. Using the proxy modules, I am able (for example) to make 172.16.0.5, an internal device, accessible at www.ourcompany.com/device. The trouble is that anyone can play with or explore the device using strings sent to www.ourcompany.com/device/change/settings/here.html. I'd like the reverse proxy to work only for a specific URL, www.ourcompany.com/device/you/must/use/this, while anything else requested is rejected. Is there a setting that can be used to do this, or is it a simple rewrite condition placed in the virtual host for the site under sites-enabled? What is the simplest, most maintainable way to sanitize requests to the internal device through the reverse proxy? Running Apache2 on Ubuntu.
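    A minimal sketch of one way to do this with mod_proxy alone, assuming the device speaks plain HTTP (ProxyPass directives are matched in configuration order, so the specific path comes first and the exclusion catches everything else under /device):

        # proxy only the one allowed path
        ProxyPass        /device/you/must/use/this http://172.16.0.5/you/must/use/this
        ProxyPassReverse /device/you/must/use/this http://172.16.0.5/you/must/use/this
        # exclude everything else under /device from proxying;
        # those requests fall through to the local docroot (404)
        ProxyPass        /device !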

    Read the article

  • PHP login/register system [migrated]

    - by Marian
    I found a good tutorial on creating a login/register system using PHP and MySQL. The thread is around 5 years old (edited last year) but it can still be useful: Beginner Simple Register-Login system. There seems to be an issue with both the login and register pages.

        <?php
        function register_form(){
            $date = date('D, M, Y');
            echo "<form action='?act=register' method='post'>"
                ."Username: <input type='text' name='username' size='30'><br>"
                ."Password: <input type='password' name='password' size='30'><br>"
                ."Confirm your password: <input type='password' name='password_conf' size='30'><br>"
                ."Email: <input type='text' name='email' size='30'><br>"
                ."<input type='hidden' name='date' value='$date'>"
                ."<input type='submit' value='Register'>"
                ."</form>";
        }

        function register(){
            $connect = mysql_connect("host", "username", "password");
            if(!$connect){
                die(mysql_error());
            }
            $select_db = mysql_select_db("database", $connect);
            if(!$select_db){
                die(mysql_error());
            }
            $username  = $_REQUEST['username'];
            $password  = $_REQUEST['password'];
            $pass_conf = $_REQUEST['password_conf'];
            $email     = $_REQUEST['email'];
            $date      = $_REQUEST['date'];
            if(empty($username)){
                die("Please enter your username!<br>");
            }
            if(empty($password)){
                die("Please enter your password!<br>");
            }
            if(empty($pass_conf)){
                die("Please confirm your password!<br>");
            }
            if(empty($email)){
                die("Please enter your email!");
            }
            $user_check = mysql_query("SELECT username FROM users WHERE username='$username'");
            $do_user_check = mysql_num_rows($user_check);
            $email_check = mysql_query("SELECT email FROM users WHERE email='$email'");
            $do_email_check = mysql_num_rows($email_check);
            if($do_user_check > 0){
                die("Username is already in use!<br>");
            }
            if($do_email_check > 0){
                die("Email is already in use!");
            }
            if($password != $pass_conf){
                die("Passwords don't match!");
            }
            $insert = mysql_query("INSERT INTO users (username, password, email) VALUES ('$username', '$password', '$email')");
            if(!$insert){
                die("There's little problem: ".mysql_error());
            }
            echo $username.", you are now registered. Thank you!<br><a href=login.php>Login</a> | <a href=index.php>Index</a>";
        }

        switch($act){
            default;
                register_form();
                break;
            case "register";
                register();
                break;
        }
        ?>

    Once the register button is pressed, the page does nothing: the fields are erased, but no data is added to the database and no error is given. I thought the problem might be the switch($act){ part, so I removed it and changed the page to use require('connect.php'), where connect.php is:

        <?php
        mysql_connect("localhost","host","password");
        mysql_select_db("database");
        ?>

    I removed the register_form(){ function and its echo, turning it into plain HTML:

        <form action='register' method='post'>
        Username: <input type='text' name='username' size='30'><br>
        Password: <input type='password' name='password' size='30'><br>
        Confirm your password: <input type='password' name='password_conf' size='30'><br>
        Email: <input type='text' name='email' size='30'><br>
        <input type='hidden' name='date' value='$date'>
        <input type='submit' name="register" value='Register'>
        </form>

    And instead of the function register(){ I replaced it with if($register){, so that when the Register button is pressed the PHP code runs, but this edit doesn't seem to work either. So what can the problem be? If needed, I can put this code back up on my domain. The login page has the same issue: nothing happens when the button is pressed besides the fields being emptied.
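    One likely cause worth checking (an editorial guess, not from the original thread): $act is never assigned, so with register_globals off the switch always falls into the default branch and simply redraws the form, which matches the symptoms. A minimal sketch of the missing piece:

        <?php
        // read the action from the query string explicitly,
        // since ?act=register does not populate $act by itself
        $act = isset($_GET['act']) ? $_GET['act'] : '';
        ?>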

    Read the article

  • Last-Modified/ETags - to include or not?

    - by Kae Verens
    Google's PageSpeed plugin suggests that a website should include Last-Modified and ETag headers: "Specify a cache validator: Resources that do not specify a cache validator cannot be refreshed efficiently. Specify a Last-Modified or ETag header to enable cache validation." However, the AskApache article below suggests that by not including them at all, we speed up websites by eliminating If-Modified-Since and If-None-Match requests: http://www.askapache.com/htaccess/apache-speed-last-modified.html These are in direct opposition; which should be implemented? I'm leaning towards the AskApache suggestion, because when I want a file cached, I don't want it revalidated.
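    For context, a minimal sketch of the AskApache-style setup, assuming mod_headers and mod_expires are available; the idea is to drop validators entirely and rely on a long freshness lifetime, so clients never send conditional requests:

        # remove cache validators
        Header unset ETag
        FileETag None
        Header unset Last-Modified
        # rely on expiry instead of revalidation
        ExpiresActive On
        ExpiresDefault "access plus 1 month"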

    Read the article

  • Easiest solution to setup payments for a conference registration page?

    - by Keith G
    I've got a fair amount of website development experience, but I've been asked to set up a conference registration page in short order. However, I have absolutely zero experience with shopping carts, payment processing, etc. What is the quickest and easiest way to get this thing up and running? Here are my criteria:

    - The site is currently hosted on GoDaddy.com, and someone has suggested using their QuickCart.
    - We cannot use any option that visits the paypal.com domain, because it has been blocked by a large segment of the potential audience (on a military base).
    - We need a $0 option for speakers.
    - Cancellations can be accepted, so something that could handle that would be a bonus.
    - There is no "product" other than a confirmation that they have registered for the conference.

    Read the article

  • How does one block unsupported web browsers?

    - by Sn3akyP3t3
    Web browsers that have reached end of life no longer receive security updates, which not only leaves them vulnerable for the end user but, I imagine, isn't safe for the servers they visit either. Is it practical to block such browsers, or to enforce a policy and notify end users that their browser is unsafe and unsupported? If so, how would one achieve that? I don't know of any official or crowd-sourced listing with that information to parse and keep up to date. I'm aware that the practice can be custom-built with User-Agent parsing and feature detection for HTML5-enabled browsers.
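    By way of illustration, a crude sketch of the User-Agent approach in PHP (the version cutoff and the landing page are assumptions; UA strings can be spoofed, so feature detection is generally more reliable):

        <?php
        // flag ancient Internet Explorer versions via the UA string
        $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        if (preg_match('/MSIE [1-6]\./', $ua)) {
            header('Location: /unsupported-browser.html');
            exit;
        }
        ?>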

    Read the article

  • Are bad backlinks causing thousands of 404 and 410 errors in webmaster tools?

    - by Natália
    Our Webmaster Tools account is showing 250,000 errors related to weird links from other sites. These URLs come mostly from non-existent sites or are being generated directly by our website. Here are some examples:

        oursite.com/&q=videos+caseros+sexo+pornos+gratis&sa=X&ei=R638T8eTO8WphAfF2vG8Bg&ved=0CCAQFjAC%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F5%2Fpage%2F4/page/3

    Our site is a popular Spanish adult site, yet we don't have the keywords that are mentioned in this URL. Apparently this link comes from our site. Some more examples:

        oursite.com/&q=losmejoresvideosporno&sa=X&ei=U__8T-BnqK7RBdjmhYsH&ved=0CBUQFjAA%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3/page/4

    Once again: not our queries, not our URLs.

        oursite/tag/tetonas

    We think it might be another site with a policy of extremely bad SEO based on other sites' branding and keyword usage:

        thirdsite/buscador/tetonas-oursite

    The question is: if other sites are generating these URLs, how can we prevent it? Why is the tag page being generated if no link was added on the other site? What should we do with these errors: 301 them? Return 410 Gone? I have read all the similar Q&As here, but none of them seems to solve our problem. It is not likely to be a bad ad (I inspected them all). Maybe some old content Google decided to recrawl suddenly? Maybe a third party's bad SEO policy? Maybe all of the above?
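    If 410 turns out to be the right answer, a minimal sketch of returning it wholesale, assuming Apache mod_rewrite and that the junk URLs all begin with /&q= as in the examples above:

        RewriteEngine On
        # answer the scraped search-result URLs with 410 Gone,
        # which Google typically drops faster than a 404
        RewriteRule ^/?&q= - [G,L]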

    Read the article

  • Why would IE8 on a desktop have 'Tablet PC 2.0' in its user-agent string?

    - by ultrajohn
    I am just curious: why would a Windows 7 desktop with IE8 installed have "Tablet PC 2.0" in its user-agent string?

        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; Tablet PC 2.0)

    Is this a feature in Windows 7, and how can I turn it off in IE8? Other browsers on the same computer don't send such a token in their user-agent strings. As a result, one of our web applications mistakes this particular desktop client for a mobile device (because of the "Tablet" token) and returns the mobile version of our website to it. Thank you!

    Read the article

  • Digg alternatives for blog and unpopular users? [closed]

    - by Wladimir Ivanov
    Hello all. I'm struggling to build an audience for an electronic-music blog. The blog is relatively new and has around 30 pages. I get approximately 120 unique visitors daily. I know about directories, RSS, comments, and guest blogging, but is there another, more effective strategy for building a quality audience? As far as I can tell, there aren't enough materials about this in my country. What about Digg and Reddit? Every time I post a link there, no traffic comes to me. Can you suggest other Digg/Reddit/StumbleUpon tactics to get followers, or similar sites that would tend to give me some serious traffic? Can you suggest sites appropriate for linking to a music blog? Best regards.

    Read the article

  • "Search Friendly" domain names

    - by Ben
    We bought a few search-friendly domain names for the CPA site that I manage. Each of the domains contains the name of a nearby city with the word "cpa" in front of or behind it. The plan is to create a landing page for each of these domains with useful information about business filings, etc. specific to that city, as well as directions to our office from that city. The question is how to best utilize these new domains: Should each domain be a 301 redirect to mainsite.com/city? Should each domain be its own single-page mini-site that links to mainsite.com? What other options are there, and what are the pros and cons? Remember, the goal is to be more relevant in searches that use a nearby city name when looking for CPA/accounting services.
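    If the 301 route is chosen, a minimal sketch of the usual .htaccess for one of the city domains (springfieldcpa.com and the /springfield path are made-up placeholders, assuming all the vanity domains point at the same hosting account as the main site):

        RewriteEngine On
        # send the city vanity domain to its landing page on the main site
        RewriteCond %{HTTP_HOST} ^(www\.)?springfieldcpa\.com$ [NC]
        RewriteRule ^(.*)$ http://mainsite.com/springfield/$1 [R=301,L]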

    Read the article

  • n00b needs some PHP syntax guidance [closed]

    - by Michael
    If you look at http://www.cruc.es/?paged=12/ and go to the bottom of the page, you'll see the bottom navigation with the next and previous options. I've been able to make the page numbers work by changing page to paged= in the code, but I don't know enough about PHP to get the previous/next options to work. Any advice would be appreciated; I've pasted the code below. Thank you: n00b

        if ( $query->found_posts > $query->query_vars["posts_per_page"] ) {
            echo '<ul class="paging">';
            // Previous link?
            if ( $page > 1 ) {
                echo '<li class="previous"><a href="'.$baseURL.'/page/'.($page-1).'/'.$qs.'">previous</a></li>';
            }
            // Loop through pages
            for ( $i=1; $i <= $query->max_num_pages; $i++ ) {
                // Current page or linked page?
                if ( $i == $page ) {
                    echo '<li class="active">'.$i.'</li>';
                } else {
                    echo '<li><a href="'.$baseURL.'/?paged='.$i.'/'.$qs.'">'.$i.'</a></li>';
                }
            }
            // Next link?
            if ( $page < $query->max_num_pages ) {
                echo '<li><a href="'.$baseURL.'/page/'.($page+1).'/'.$qs.'">next</a></li>';
            }
            echo '</ul>';
        }
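    An observation that may help: the numbered links that work use the /?paged=N form, while previous/next still build /page/N/ URLs. A minimal sketch of the two echo lines changed to match, under the same assumptions about $baseURL and $qs as the code above:

        // previous link, using the ?paged= form that already works
        echo '<li class="previous"><a href="'.$baseURL.'/?paged='.($page-1).'/'.$qs.'">previous</a></li>';
        // next link, same change
        echo '<li><a href="'.$baseURL.'/?paged='.($page+1).'/'.$qs.'">next</a></li>';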

    Read the article

  • Django & Postgres Linux hosting (with SSH access) recommendations

    - by Justin Grant
    We're looking for a good place to host our custom Django app (a fork of OSQA) and its PostgreSQL backend. Requirements include:

    - Linux
    - Python 2.6 or (ideally) Python 2.7
    - Django 1.2
    - Postgres 8.4 or later
    - DB backup/restore handled by the hoster, not us
    - OS & dev-platform-stack patching/maintenance handled by the hoster, not us
    - SSH access (so we can pull source code from GitHub, install Python eggs, etc.)
    - ability to set up cron jobs (e.g. to send out daily email updates)
    - ability to send up to 10K emails/day
    - good performance (not ganged up with a zillion other sites on one CPU, not starved for RAM)
    - FTP or SCP access to web logs
    - dedicated public IP
    - SSL support
    - costs under $1000/month for a relatively small site (<5M pageviews/month)
    - good customer service

    We already have a prototype site running on EC2 on top of a Bitnami DjangoStack. The problem is that we have to patch the OS, patch Postgres, etc. We'd really prefer a platform-as-a-service (PaaS) offering, like Heroku offers for Rails apps, where all we need to worry about is deploying our code instead of system software patching and maintenance. Google App Engine is closest to what we're looking for, but they don't offer relational DB access (not yet, at least). Anyone have a recommendation?

    Read the article

  • Is it good to buy multiple domains for competitive reasons?

    - by lowestofthekeys
    I am attempting to convince the higher-ups at my company that spending $55 a year to renew a domain is wasteful when they end up holding 3-4 domain names for one website. Their reasoning for doing so is to keep these domain names out of the hands of the competition. For example, the company name is Pie Consulting & Engineering. They want to buy pieforensicconsulting.com to keep it out of the hands of a competitor (we also do forensic engineering). Could a competitor use that domain in any kind of diabolical way? I figure that if someone types pieforensicconsulting into the URL field, they know what they're looking for, and if it redirects to another company, they're not just going to stay on that site.

    Read the article

  • Strange robots.txt - how and why did it get there?

    - by Mick
    I recently created a very simple, pure-HTML website, which I have hosted with HostMonster. HostMonster had very good reviews on a comparison website and in general, and so far they appeared to be perfectly good in every way... at least I thought so until just now. I have been making lots of edits to my site on an almost daily basis. My site now appears on the first page (7th on the list) of a Google search for my most important keyphrase. But I did notice a problem with the snippet chosen by Google. I asked a question on this site about snippets and got some great answers. I then made some modifications to my metadata, and within 48 hours the Google snippet for my search was perfect. The odd thing, though, was that the "cached" version Google had still appeared to be very old, from three weeks previous. This seemed very strange: how could the Google robots have read my new metadata without updating the cache? This puzzled me greatly. Just now it occurred to me that maybe I had some goofy setting in my robots.txt file. I didn't actually remember even making one, but I thought I'd have a look just in case. Much to my horror, I saw that there was a robots.txt, and it contained the disturbing text below:

        sitemap: http://cdn.attracta.com/sitemap/728687.xml.gz

    Intuitively this looks like some kind of junk or spam trick, and I had indeed been getting some spam from "Attracta". So my questions are: 1. Should I simply delete this robots.txt? 2. Was the file there all along, placed there because of some commercial tie-in between Attracta and HostMonster? 3. Does the Attracta robots.txt explain the lack of re-caching?

    Read the article

  • Changed URL from non-www to www... Google indexing

    - by user20321
    I recently (about 1 week ago) changed my URL from the non-www version to the www version. I told my hosting company to do this, and they did it successfully: all my URLs are directed to the www version. But Google is still indexing the non-www version in the search results. I have updated new content on my website, and Google indexes that content with the changed URL, i.e. with the www prefix, but the main page, i.e. the site name, is still shown without www and is not updated. I have checked that my www.sitename.com is listed on Google, but it is not shown when I type www.sitename.com. So how much time does it take to remove the old URLs from the index and replace them with the new ones?
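    For completeness, the redirect the host should have put in place looks something like this sketch, with sitename.com standing in for the real domain as in the question:

        RewriteEngine On
        # 301 every bare-domain request to the www host so Google
        # consolidates the two versions onto one
        RewriteCond %{HTTP_HOST} ^sitename\.com$ [NC]
        RewriteRule ^(.*)$ http://www.sitename.com/$1 [R=301,L]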

    Read the article

  • SEO for replacing blog content, but keeping the same page URL

    - by cphill
    This might not have any major impact on SEO, but basically I have a random blog at this URL: http://example.com/blog (not a real URL) that I am removing and replacing with a company blog. I want to keep using the http://example.com/blog URL, but I'm not sure how this would affect my SEO, since the random blog content I am removing lives under the example.com/blog URL prefix. Would I just add 301 redirects for those old blog articles and leave the base /blog URL without any redirects?

    Read the article

  • Does Google sometimes prevent new white-hat sites from ranking at all in some verticals?

    - by JVerstry
    Assume someone wants to launch a new viagra or acai-berry e-commerce website. There is a lot of competition, and this site does not really bring anything new, other than another online counter where the products can be bought at a nice price. Assume this site does not use any black-hat techniques and stays within Google's quality guidelines, and assume it has no (or few) backlinks, and those from non-authoritative websites. Assume this website's pages are indexed properly in Webmaster Tools, that no penalties are reported, that no site improvements are suggested, that Google crawls the site daily as reported in GWT, and that there are no robots.txt configuration issues. Does Google sometimes decide not to rank such a site for any user query (for weeks) because of a lack of original content? The reason I am asking is that I am trying to understand the possible cause of a similar situation I am observing with two sites. If so, what is the way out, to start ranking for these sites? If not, does it mean the cause is certainly elsewhere? Any confirmed information to get out of this maze is welcome.

    Read the article

  • Safety of purchasing country-specific domains from registrars?

    - by Marc Bollinger
    None of the previous questions tackle some of the more obscure countries' registries (beyond .co.uk, .it, et al.), or else I'd have found an answer myself. Is it safe to buy a domain under a foreign country's TLD from a registrar? http://www.iana.org/domains/root/db/ I'm just looking for information for a vanity domain, so obviously I'm all right without an answer, but it's an unasked (or at least unanswered) question, and I'm not exactly eager to give my credit card information across country lines, sight unseen.

    Read the article
