Search Results

Search found 14789 results on 592 pages for 'pro backup'.


  • mod_rewrite works within a directory but not at the root

    - by Anvesh Saxena
    I am having a problem with my RewriteRule for the tags portion. What I have been able to debug is that the rule is at least being triggered, because the page "tags.php" is rendered, but without the URL parameters. The .htaccess file with the rules sits in the root of my sub-domain and has the following content for the tags portion:

      # Rewrite rule for tags
      RewriteRule ^tags/(\w+)/(\d+)/?$ tags.php?tag_name=$1&tag_id=$2
      RewriteRule ^tags/(\w+)/?$ tags.php?tag_name=$1
      RewriteRule ^tags/?$ tags.php?tag_name=

    Another problem that I am not able to debug is that a similar .htaccess file exists for a directory within my sub-domain and works as expected, with the necessary URL parameters available. The .htaccess file within the directory reads as follows:

      # Rewrite rule for tags
      RewriteRule ^tags/(\w+)/(\d+)/?$ restAPI.php?type=tags&tag_name=$1&tag_id=$2
      RewriteRule ^tags/(\w+)/?$ restAPI.php?type=tags&tag_name=$1
      RewriteRule ^tags/?$ restAPI.php?type=tags&tag_name=

    Could anyone point out the problem in my rewrite rules? I am also sometimes facing an Internal Server Error, which I suspect is related. Note: I have Apache version 2.2.23 on my shared hosting.
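
    A minimal sketch of a root .htaccess for the sub-domain that is often needed to make rules behave the same at the root as in a sub-directory; the RewriteEngine On, RewriteBase and flags below are assumptions for illustration, not taken from the post:

      RewriteEngine On
      RewriteBase /

      RewriteRule ^tags/(\w+)/(\d+)/?$ tags.php?tag_name=$1&tag_id=$2 [L,QSA]
      RewriteRule ^tags/(\w+)/?$ tags.php?tag_name=$1 [L,QSA]
      RewriteRule ^tags/?$ tags.php?tag_name= [L]

    The [L] flag stops processing of further rules in the same pass, which keeps the three rules from interfering with each other.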

    Read the article

  • Is it possible to use canonical tag in Blogger posts?

    - by John Sanjay
    I found that one of my blog posts was cached by Google (www.example.com/post.html), and that the comment page of the post was also cached (www.example.com/post.html?showComment=1372054729698). Both pages show up in the Google SERP when I check my blog's cached posts. Is it possible to use a canonical tag on www.example.com/post.html?showComment=1372054729698 so that Google won't penalize my original post? Is there any other way to redirect a blog post?
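
    For reference, the canonical link element itself is a single line in the page's head; a minimal sketch, assuming the Blogger template's <head> can be edited and that www.example.com/post.html is the preferred URL:

      <head>
        ...
        <link rel="canonical" href="http://www.example.com/post.html" />
      </head>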

    Read the article

  • What is a good robots.txt for WP?

    - by Steven
    What is the "best" setup for robots.txt? I'm using the following permalink structure in Wordpress: /%category%/%postname%/. My robots.txt currently looks like this (copied from somewhere a long time ago): User-agent: * Disallow: /cgi-bin Disallow: /wp-admin Disallow: /wp-includes Disallow: /wp-content/plugins Disallow: /wp-content/cache Disallow: /wp-content/themes Disallow: /trackback Disallow: /comments Disallow: /category/*/* Disallow: */trackback Disallow: */comments I want my comments to be indext. So I can remove this, right? Do I want to disallow indexing categories because of my permalinkstructure? An article can have several tags and be in multiple categories. This may cause duplicates in google. How should I work around this? Would you change anything else here?

    Read the article

  • Help selecting a dedicated server with good disk I/O & network

    - by JP19
    Hi, I am looking for a cheap dedicated server. (I was happy with a VPS until I realized that the disk I/O is not at all reliable and depends on what your neighbours are up to at the moment.) I was browsing through http://www.lowenddedi.net/the-database but I don't understand the memory speed and NIC speed columns at all. What is their effect? Do I need to worry about them? Also, can someone suggest a provider with the following criteria: 1) good and reliable network, 2) price <= $60/month. Thanks, JP

    Read the article

  • Finding a Payment Gateway?

    - by Lynda
    I have a client who would like to sell glass pipes online. The problem I run into is the payment gateway: glass pipes fall into one of two categories, drug paraphernalia or tobacco products. This leads me here to ask: does anyone know of a payment gateway that will process payments for glass pipes? Note: from some Google searching I read that Authorize.net will accept tobacco, but when I spoke with them they said they do not.

    Read the article

  • Which hosted chat solutions offer the following?

    - by David
    I am looking for a chat room solution similar to the one on Stack Exchange, to facilitate more responsive communication between the contributors on Open-Org.com. My criteria are the following:

      - No Flash (this rules out more than half)
      - Full history (meaning it is possible to access all previous conversations for future reference)
      - Very customizable
      - No ugly IRC noise filling up the chat view (I do not want to see who joined and who left, etc.)
      - No private conversations possible (this is just not in the spirit of Open-Org.com)
      - A hosted solution at a reasonable price

    These criteria are different enough from those of the similar question that this is not a duplicate. The service that matches them most closely is Chatroll.com; however, at $199 per month their prices are outrageous.

    Read the article

  • Is there a CDN for backbone.marionette?

    - by Thunder Rabbit
    Getting started with Backbone and Marionette, I was about to copy the file at https://github.com/marionettejs/backbone.marionette/blob/master/lib/backbone.marionette.js to my local server, but wondered if there is a CDN-hosted version of it. For Underscore and Backbone development I'm including these two files, respectively:

      http://documentcloud.github.com/underscore/underscore-min.js
      http://documentcloud.github.com/backbone/backbone-min.js

    Is there a similar URL for backbone.marionette.js?

    Read the article

  • How to 301 redirect from old query string urls to CakePHP Canonical urls?

    - by Daniel Bingham
    I currently have a .htaccess file that looks like this:

      RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
      RewriteRule ^index\.php$ /index.php?url=item/%1 [R=301]
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]

    It is meant to 301 redirect my old query-string based URLs to the new CakePHP URLs, and it successfully sends users to the correct page. However, Google doesn't seem to like it (see below). I previously tried this:

      RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
      RewriteRule ^index\.php$ /item/%1 [R=301]
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]

    But that fails: the second rewrite rule doesn't seem to catch the redirected URL; it goes straight through. Using the first version wouldn't be a problem, except that I suspect it is what is choking up Google. It hasn't indexed my sitemap full of the new URLs. My old sitemap had been fully indexed and all its URLs are in Google's index, but Google isn't following the redirects from the old URLs to the new ones; I have a "not followed" error for every one of the query URLs that was in my old sitemap. Am I properly using a 301 redirect here? Is it the weird rewrite rule? What can I do to send both Google and users to the proper page and save my page rank?
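
    One detail worth keeping in mind: when the substitution in a RewriteRule carries no query string of its own, Apache appends the original query string to the redirect target. A minimal sketch of the second variant with the old query string explicitly discarded; the trailing "?" and the added L flag are assumptions about what might help, not taken from the post:

      RewriteCond %{QUERY_STRING} ^action=view&item=([0-9]+)$
      # The trailing "?" drops the old action/item query string from the redirect target
      RewriteRule ^index\.php$ /item/%1? [R=301,L]

      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]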

    Read the article

  • I want to host clients' websites, but not their email. What's the easiest way to handle this?

    - by Phil
    My company lets non-technical users build their own niche industry websites on our server, which we host. They can currently point their nameservers at their registrar to us, which ends up with them no longer having access to their email if they've already set it up through that registrar. We don't want to interfere with their existing email, nor do we want to get into the business of setting up email for them through our service. Having them point A records or a CNAME to us would work, but is that too complex for a non-techie user? We thought of having them point nameservers to us but pointing the MX records back to them, yet this is also beyond their skill level. Is there an easy way to 'point records' and leave everything else in its initial state? Any other ideas or feedback?
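
    For illustration, a minimal sketch of the records a client would keep at their registrar if only the website moves; the hostnames, IP address and mail server below are hypothetical placeholders:

      ; Web traffic goes to the hosting company's server
      example-client.com.      IN  A      203.0.113.10
      www.example-client.com.  IN  CNAME  example-client.com.

      ; Mail stays exactly where it is today
      example-client.com.      IN  MX 10  mail.registrar-example.net.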

    Read the article

  • How does Wikipedia's SEO work?

    - by Josh Siegl
    I'm sorry if this question is misplaced or doesn't belong here. I'm currently developing an app for Android and iOS, and of course I'm thinking about the best ways to market it. Last night I Googled somebody else's app and the third result was a Wikipedia page on it. I had never even thought of apps having Wikipedia pages, but there it was, and of course it was very helpful in determining exactly what the app did and in what cases it was useful (something that's absolutely crucial for potential customers to understand). So I got to thinking that I should create a Wikipedia page for my app, but how does Wikipedia handle SEO? I know the question could be overly complicated or specific; I'm just looking for general answers. For instance, when somebody Googles my app, where does Wikipedia appear in the results? When I create a page for my app, how do I ensure that it shows in the search results (is there any way to do that)? I'm sure I'll find all of this out later when I create the page; I guess I'm just asking out of curiosity. So how does Wikipedia's search engine optimization work, on a page-by-page basis?

    Read the article

  • Garbled text in server logs

    - by Glenn Dayton
    I recently looked over my server's logs and I found a bunch of garbled text. Here is a link to the full log, and here is a snapshot of what it looks like: ¹^œÌÓûFF™ÃŒ-ôÚÏàÃÒNRs§cÝi ~F#J"|³Ôq0ã~QQbA ¼¹¦’š¶É3œßå<ú€Ç©XAwdL?R°ÝbÒt©ôÇ·Æ…÷q˜ÇѺ| Þ,߯¡Êr yR¤Q¹Jêlš‘AzP\ ¦ÂY„ÉÉ,æ™ U™»ì³ÔÝáCÿ42‹Ö.nŽÉ2%ÓN8i4Œ®¿‘•"-se•䎿ÊÁ§€þ 8åv%'#Äpžs/ÙÍ:¡1ÑÖÃå ºu|Q®!ÏyÆ,­NR@¶ËȯRDkã=ÿÀܸ ›¼Ô ’ð>ÓÌBftdÃ8–é}‰[øbãÝÁ嘲b¾W n´tT­œpäNëëÔ ·RUÓP+ÅuKÁ£¬\âÌ®:J<ÍÁ0:Q%ª(Œ˜E-ÁI:ï™4®hæœT†«);°Çda@´#èì}‡£ü•{57ý]¼|øÓñð÷ÈÌð‡MkŠâ•C~$Óô#ÙV¾Núå.#Á]vôžóæ» V&8)%øVSž“±ÔQLåÓý1–ŽÃßQ$¹ýž")ÈûQcÄý_ÔüGP=s‹vq#Pmoo.tigertutorialscomµÐOKÃ0ð»Ÿâ‘ØH“ What is this? and is someone trying to do something to my website?

    Read the article

  • Multisite Network SEO: Can a self-referencing canonical tag (rel="canonical") inside an article improve Google ranking?

    - by user5674576
    Hi, can a self-referencing canonical tag (rel="canonical") inside an article improve Google ranking? The case: a company has 40 sites with original content and 1 main site that republishes some of the 40 sites' articles. The main site has rel="canonical" on each republished article. Should the article on the original site also carry a self-referencing rel="canonical"? Example:

      inside the main network site (reference to the original site): <link href="http://site7.com/article25" rel="canonical" />
      inside the original network site (self-reference): <link href="http://site7.com/article25" rel="canonical" />

    Thanks in advance.

    Read the article

  • REL ME tag - trying to figure it out

    - by nekdo
    This concerns http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1229920, specifically the section ''Examples'', point ''1.'', second code line:

      <a rel="me" href="https://plus.google.com/105240469625818678725/"> <img src="//www.google.com/images/icons/ui/gprofile_button-16.png"></a>

    The page says I have to add this line to the Contact Me page of my own website in order to get a Google Profile button. The exact code to copy and paste is available here: http://www.google.com/webmasters/profilebutton/

    Questions:

    1) As you can see at the second URL, to make a Google Profile button I need to use the "author" tag, not the "me" tag. But the first URL (the line quoted above) shows that I have to use the "me" tag, and even without width="32" height="32". I am already aware that I have to enter my own Google Profile URL (second URL). So do I just manually change this:

      <a rel="author" href="https://profiles.google.com/109412257237874861202"> <img src="http://www.google.com/images/icons/ui/gprofile_button-32.png" width="32" height="32"></a>

    to this (note: two changes made):

      <a rel="me" href="https://profiles.google.com/109412257237874861202"> <img src="http://www.google.com/images/icons/ui/gprofile_button-32.png"></a>

    Is this correct? I assume plus.google.com is the same as profiles.google.com (both are Google Profile URLs).

    2) If I was wrong in my first question, then the second answer probably won't even be useful, but still: where exactly should I paste the code above inside the Author page of my own website? I think it doesn't matter where. Also, will the icon alone be enough, or do I also have to make an anchor with rel="me" in the shape of text (a phrase such as ''Look At My Google Profile'')? Or is just the icon really enough?

    3) In the same section (''1.'') of the same page (the first link above) it says that I first need to use an author tag linking to the Author/Contact Me page of my own website in order to later use the me tag. But I think there is a small mistake in the explanation. Shouldn't

      <a rel="author" href="http://www.cnet.com/profile/iamjaygreene/">Jay Greene</a>

    instead be

      <a rel="author" href="http://www.cnet.com/profile/iamjaygreene.html">Jay Greene</a> ?

    Read the article

  • https:// search results appearing on Google for purely http:// site

    - by hydrurga
    I started weeding through my site's search results from Google today, using a site: search, to determine if there are any links that cause 404s and thus need redirecting. To my amazement I noticed numerous https:// results relating to various pages. My site doesn't have a SSL certificate, doesn't serve such pages, doesn't internally link to https:// pages, doesn't include any such files in its sitemap.xml and, for all of these, never has. I decided to do a Google search for https://<my site> and found one site that incorrectly refers to the root of my site with a https:// prefix - I will try to contact them to get them to correct this. I'm not sure however how Googlebot managed to index the non-root files as https://. I can't find any external links to them and surely, without certification, Googlebot should have stalled at the first request? I've just added the following lines to the site's .htaccess (although the surfer still has to navigate through the browser's "This site is a security risk. Abandon hope all ye who enter here!" message(s) first to get there):

      RewriteEngine On
      RewriteCond %{HTTPS} on
      RewriteRule ^(.*)$ http://www.<my site>.org/$1 [R=301,L]

    replacing <my site> with my domain name. My big question is this though - I would like to use the Google Webmaster Tools Remove URLs feature to remove the https:// pages from the index. Can I be guaranteed that this will only remove the https:// versions of each relevant page and not the valid http:// versions? My thanks to anyone who can help me out with this particular question and the issue in general.

    Read the article

  • Google ranking, page crawl

    - by Nawaf Mubarak
    Please don't mind me asking this newbie question about Google ranking. I know that in order to get ranked a page has to be crawled by Google's bots, and I have a page I'd like to use as an example to better understand how the system works. I made a page on my website last month. It got indexed pretty quickly, and at first it appeared on page 15 of Google for my keyword; the next day it made it to page 13, then after a week it was jumping back and forth between pages 17/18 and 20. Now a month has passed and it isn't listed at any position for that keyword; sometimes I find it on page 30, but later I won't find it anywhere, and it keeps happening this way. Even when it isn't listed anywhere for my keyword, a search for "site:thepageaddress" does list it, which means I'm not penalized and my page is there for Google to see; it just isn't in the results for my keyword. But when I search "site:thepageaddress", open the "Search tools" option and click "Past day" or "Past week", it isn't listed; it only appears under "Past month". I think that means Google indexed the page, looked at it once when I published it, and never looked at it again. Is that a fair statement? So two questions come to mind: 1) Should Google keep looking at a page even if I haven't changed anything on it, and is that an indication that my page is doing fine, or is it normal that Google sees it once and that's it? 2) Why does my page keep jumping back and forth in the rankings for my keyword, sometimes not even listed, and how do I fix it? Sorry for the long message; I hope somebody can help me with this. Thank you!

    Read the article

  • Why is <my site url> not indexed by search engines? [closed]

    - by Henrik Erlandsson
    The site was indexed fine until about a year ago. The only thing I can think of is that search engines throw up at using h5 before h4, or that some person (fantasizing now) has reported my site as unsafe to every search engine. However, I'm not here to speculate. The site validates, and has an RSS feed on the front page, for Pat Morita's sake! To me, it looks like the kind of site search engines would feast on. It's got more than a dozen blogs on it, if nothing else. Hah. :) I was hoping you could identify basically what has changed in search engines (currently Google, Yahoo and Bing, which used to work fine) in the last year to make them not find news and blog articles on this site. The site was submitted to Google, oh, way back in 2006. With online crawler tests I get mixed results: some crawlers index fine, some come back blank. I don't really know which ones are reliable and am looking to you guys for advice on that. Yes, I am prepared to verify my site with Google again and upload a sitemap, but that's not the topic here. I really would first like to know what change on the site in the last year could make search engines not index it. (Yes, the robots.txt is fine; there should be nothing to discourage bots there.) It's a very intriguing problem, one whose reason I have yet to find but would like to know. Any and all input appreciated, but I would most enjoy pertinent advice. ;) Edit: Some Google searches that don't show up include - aca630 - all of which are posted in the news and blogs that are on the front page there. Now, these search terms are extremely specific, as the term is almost unique on the web and ACA630 is also a very qualified search term that can't be confused with mainstream search terms.

    Read the article

  • How to secure robots.txt file?

    - by CompilingCyborg
    I would like user agents to index only my relative pages, without accessing any directory on my server. As an initial thought, I had this version in mind:

      User-agent: *
      Disallow: */*
      Sitemap: http://www.mydomain.com/sitemap.xml

    My questions: Is it correct to block all directories like that, with Disallow: */*? Would search engines still be able to see and index my sitemap if I disallowed all directories? What are the best practices for securing the robots.txt file? For reference, here is a good tutorial for robots.txt:

      # Add this if you want to stop Alexa from indexing your site.
      User-agent: ia_archiver
      Disallow: /

      # Add this to stop duggmirror
      User-agent: duggmirror
      Disallow: /

      # Add this to allow specific agents
      User-agent: Googlebot
      Disallow:

      # Add this to allow all agents while blocking specific directories
      User-agent: *
      Disallow: /cgi-bin/
      Disallow: /*?*
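
    As a point of comparison, a minimal sketch of a robots.txt that blocks a few directories while leaving pages crawlable; the directory names are placeholders, and note that patterns in the original robots exclusion standard start with "/", while wildcards such as */* are a non-standard extension:

      User-agent: *
      Disallow: /cgi-bin/
      Disallow: /private-directory/
      Disallow: /another-directory/
      Sitemap: http://www.mydomain.com/sitemap.xml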

    Read the article

  • MSSQL or MySQL: learning

    - by Yehuda
    I have been using MySQL for about 9 months now for websites, and I have become quite good at getting what I want out of the database. However, I am still missing most of the complicated parts. I have an excellent tutorial, but it is for SQL Server 2008. 1) Is it worth switching over to MSSQL (I understand the SQL dialect is different) so that I learn all about SQL and databases in general? 2) Do most people use MySQL or MSSQL? 3) What is best practice? I am talking mainly about websites.

    Read the article

  • Creating an encrypted, web-based proxy

    - by Jason
    I have moved to Asia where my internet connection is censored and I'd like to check my messages from social sites which happen to be blocked. As virtually all proxy servers are blocked in this country, I've decided to attempt to roll my own encrypted proxy server. Please note, the key word here is encrypted—if the sniffer sees anything like f@c3b00k or w:k:p3d:ia travelling down the wire I'm had. I have a website hosted with GoDaddy (Windows with PHP 5.2 & IIS 7). Is there any way I can set up an encrypted proxy through this service? If so, how, and what open source tools are available to use?

    Read the article

  • Page displaying sections using opacity in CSS3 but without navigating or scrolling down [closed]

    - by Senthil Kumaran
    Here is my app: http://www.shalgreetings.com/. I am trying to override the scrollbar jumping down to an image section in CSS, so that the whole app stays visible, with the logo, header and other controls, at all times while people navigate through the different #sections. I am not sure where in the CSS I am making the mistake, as clicking on #sections scrolls the page. The app's original inspiration code has got this right. Can anyone point out where the problem seems to be in my app?

    Read the article

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended.

    I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

      https://example.com (default)
      https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

      # robots.txt generated at http://www.mcanerin.com
      User-agent: *
      Disallow:
      Disallow: /cgi-bin/
      Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML Improvements the previously crawled language-translation URLs remain indexed. The internet says to return a 404 for the removed URLs so that Google knows to de-index them. I checked what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

      https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist; I removed the language translations. Yet the page loads when it should not! I played around and typed example.com?whatever123; it seems that pages always load with any parameter, as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads, and it's the parameter itself that needs to be de-indexed.
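
    A minimal sketch of one way to make those parameterised URLs return an error status instead of a normal page, assuming Apache with mod_rewrite is available; the [G] flag answers 410 Gone, and none of this is taken from the original post:

      RewriteEngine On
      # Answer 410 Gone for any request whose query string contains an l= language parameter
      RewriteCond %{QUERY_STRING} (^|&)l= [NC]
      RewriteRule ^ - [G]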

    Read the article

  • SEO for replacing blog content, but keeping the same page URL

    - by cphill
    This might not have any major impact on SEO, but basically I have a random blog at this URL: http://example.com/blog (not a real URL), which I am removing and replacing with a company blog. I want to keep using the http://example.com/blog URL, but I'm not sure how this would affect my SEO, since the random blog content I am removing lives under the example.com/blog URL prefix. Would I just add 301 redirects for those old blog articles and leave the base /blog URL without any redirects?
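
    A minimal sketch of that approach in Apache, assuming the old posts live under /blog/ and should all point at the new blog index; the pattern is a hypothetical placeholder, not taken from the post:

      # Send any old post URL under /blog/ to the new blog index with a 301
      RedirectMatch 301 ^/blog/.+$ /blog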

    Read the article

  • .htaccess language redirects with seo-friendly urls

    - by jlmmns
    How do I set up my .htaccess file to detect several languages and redirect them to specific SEO-friendly URLs? Basically every URL needs to go to index.php?lang=(...). So, with English language detection, http://mysite.com has to go to http://mysite.com/en/ (index.php?lang=en). My .htaccess as of now (not working):

      RewriteEngine On
      RewriteCond %{HTTP:HOST} http://mysite.com/
      RewriteCond %{HTTP:Accept-Language} ^en [NC]
      RewriteRule ^$ http://mysite.com/en/ [L,R=301]

      RewriteCond %{HTTP:Accept-Language} ^de [NC]
      RewriteRule ^$ http://mysite.com/de/ [L,R=301]

      RewriteCond %{HTTP:Accept-Language} ^nl [NC]
      RewriteRule ^$ http://mysite.com/nl/ [L,R=301]

      RewriteCond %{HTTP:Accept-Language} ^fr [NC]
      RewriteRule ^$ http://mysite.com/fr/ [L,R=301]

      RewriteCond %{HTTP:Accept-Language} ^es [NC]
      RewriteRule ^$ http://mysite.com/es/ [L,R=301]

      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-l
      RewriteRule ^(en|de|nl|fr|es)$ index.php?lang=$1 [L,QSA]
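
    A minimal sketch of a variant that might behave better, assuming the goal is exactly as stated; the main guesses are that %{HTTP:HOST} should be %{HTTP_HOST} matched against the bare hostname, and that the language rule should also accept a trailing slash:

      RewriteEngine On

      # Redirect the bare root to a language directory based on Accept-Language
      RewriteCond %{HTTP_HOST} ^(www\.)?mysite\.com$ [NC]
      RewriteCond %{HTTP:Accept-Language} ^en [NC]
      RewriteRule ^$ http://mysite.com/en/ [L,R=301]

      # ... one block per additional language (de, nl, fr, es) ...

      # Internally map /en/ (and the other codes) onto index.php?lang=...
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-l
      RewriteRule ^(en|de|nl|fr|es)/?$ index.php?lang=$1 [L,QSA]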

    Read the article

  • Adding SPF records in GoDaddy

    - by Mayank swami
    I have GoDaddy hosting and send mail using the following code:

      $to = "[email protected]";
      $subject = "Test mail";
      $message = "Hello! This is a simple email message.";
      $from = "[email protected]";
      $headers = "From:" . $from;
      mail($to,$subject,$message,$headers);
      echo "Mail Sent.";

    When the mail arrives at its destination I see the "via server" annotation (shown in the red outline). I don't want "via server" to show, and to remove it there is an option to add an SPF record. To do this I followed the instructions on the "Managing DNS for Your Domain Names" page, but it's not working. After that I tried:

      v=spf1 include:_spf.google.com ~all

    as described in http://support.google.com/a/bin/answer.py?hl=en&answer=178723, but I still get the same result. How can I solve this and prevent "via server" from showing?
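
    For illustration, a minimal sketch of an SPF TXT record that authorises the domain's own hosts plus GoDaddy's shared mail servers; the include:secureserver.net part is an assumption about which servers actually send the mail and should be checked against the real sending host before use:

      ; TXT record on yourdomain.com (hypothetical)
      yourdomain.com.  IN  TXT  "v=spf1 a mx include:secureserver.net ~all"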

    Read the article

  • .htaccess URL rewriting problem

    - by letsworktogether
    I'm kind of stuck at this part and was hoping to get some assistance. I'm building a highscores page in PHP; that's going great, it works. However, I dislike the idea of "index.php?skill=name" and therefore wanted a bit of SEO in this. I have successfully replaced the URL with a friendlier version: "highscores/skill/name". This is where the problem starts: I have added pagination to the highscores, and the page is read from the GET page variable ($_GET['page']). I dislike the idea of "highscores/skill/name&page=2" and was hoping you could help me make the URLs look like the following:

      Page 1, accessing the file without declaring a page number: DOMAIN.TLD/highscores/skill/name
      Page 2, where the page variable is now needed: DOMAIN.TLD/highscores/skill/name/2

    As you can tell, the "2" defines page 2 and should load the correct data for page 2. However, I'm having a lot of trouble configuring my .htaccess file this way. My latest attempt:

      # Skills page
      RewriteRule ^highscores\/skill\/(.*?)(\/(.*?)*)$ highscores/skills.php?skill=$1&page=$2 [L]

    Unfortunately it does not work: it makes the page look horrible (the CSS doesn't load) and it doesn't go to the page specified in the URL. I hope you understand my issue, thank you!
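
    A minimal sketch of a rule pair that usually handles this pattern; the details are assumptions, not taken from the post. The broken CSS is typically a separate issue: with the extra path segments, relative stylesheet URLs resolve under /highscores/skill/..., so absolute paths (or a <base href>) are usually needed.

      # Page 1: no page number in the URL
      RewriteRule ^highscores/skill/([^/]+)/?$ highscores/skills.php?skill=$1 [L,QSA]

      # Page N: trailing numeric segment becomes the page parameter
      RewriteRule ^highscores/skill/([^/]+)/([0-9]+)/?$ highscores/skills.php?skill=$1&page=$2 [L,QSA]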

    Read the article
