Search Results

Search found 19109 results on 765 pages for 'canonical url'.

  • Ars Technica .ars URL suffix -- Vanity or SEO Benefit?

    - by yc01
    The technology website Ars Technica has adjusted their URL rewrite rules so that URLs end with .ars. Traditionally, sites have taken advantage of URL rewriting to completely eliminate file suffixes like .html, .php, .aspx, etc., under the theory that this made for better SEO (since the URL text was more relevant to the page content). Ars Technica's URLs, though, look like this: http://arstechnica.com/science/news/2011/03/flow-from-the-poles-drive-sunspot-levels.ars So, is Ars Technica adding the .ars file suffix purely as a vanity play? Or is it an SEO trick to improve the site's SEO by cleverly inserting the site name into every URL slug? And if this is indeed an effective SEO trick, should other sites follow suit?

    Read the article

  • How long does Google Webmaster Tools need to generate content keywords if URL masking is enabled? [closed]

    - by user1439968
    Possible Duplicate: What is domain “masking” or “cloaking”? Why should it be avoided for a new web site? My real domain is domain.in, but URL masking has been enabled and the masked URL is domain2.in. I added the URL bputdoubts.21backlogs.in to Google Webmaster Tools a week ago, but content keywords haven't been generated. In this case, when can I expect the content keywords to be generated? And is there a problem with getting visitors from Google search if URL masking is enabled?

    Read the article

  • [Python] Detect destination of a shortened, or "tiny", URL

    - by conradlee
    I have just scraped a bunch of Google Buzz data, and I want to know which Buzz posts reference the same news articles. The problem is that many of the links in these posts have been modified by URL shorteners, so many distinct shortened URLs may actually all point to the same news article. Given that I have millions of posts, what is the most efficient way (preferably in Python) for me to (1) detect whether a URL is a shortened URL (from any of the many URL shortening services, or at least the largest), and (2) find the "destination" of the shortened URL, i.e., the long, original version? Does anyone know if the URL shorteners impose strict request rate limits? If I keep this down to 100/second (all coming from the same IP address), do you think I'll run into trouble?
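
    A minimal sketch of both steps in Python, assuming the third-party requests library. The shortener list is a hypothetical, non-exhaustive sample, and the pause value is only a guess, since each service publishes its own rate limits (and some answer HEAD with 405, in which case a GET is the fallback):

      import time
      from urllib.parse import urlparse
      import requests

      # Hypothetical, non-exhaustive sample of shortener hosts.
      SHORTENER_HOSTS = {"bit.ly", "t.co", "goo.gl", "tinyurl.com", "is.gd", "ow.ly"}

      def is_shortened(url):
          """Heuristic: does the URL's host belong to a known shortener?"""
          return urlparse(url).netloc.lower() in SHORTENER_HOSTS

      def resolve(url, pause=0.01):
          """Follow redirects with HEAD requests (no body download)
          and return the final destination URL."""
          time.sleep(pause)  # crude throttle, roughly 100 requests/second
          return requests.head(url, allow_redirects=True, timeout=10).url

      if is_shortened("http://bit.ly/abc123"):
          print(resolve("http://bit.ly/abc123"))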

    Read the article

  • ASP.Net URL rewriting and redirecting....

    - by DDiVita
    I am trying to wrap my head around a URL rewrite / redirect project I need to work on. We currently have this URL: www.domain.com/Details/Detail.aspx?param1=8&param2=12345 Here is what the rewritten URL will look like: www.domain.com/Param1/8/Param2/12345 I am using the ISAPI_Rewrite filter to allow for the "nice" URL and make the page think it is still using the old URL. That works fine. Now I need to redirect users to the new URL if they use the old one. I figure I would need to use a combination of the filter and an HttpModule / handler to perform the redirect. Any ideas?
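
    Since ISAPI_Rewrite 3 understands Apache mod_rewrite syntax, here is a hedged sketch of doing both jobs in the filter alone, with no HttpModule: matching on THE_REQUEST (the raw request line, which internal rewrites never change) keeps the redirect from looping. The version-3 syntax and purely numeric parameters are assumptions here:

      RewriteEngine on

      # 301-redirect browsers that still request the old Detail.aspx URL
      RewriteCond %{THE_REQUEST} ^[A-Z]+\ /Details/Detail\.aspx\?param1=(\d+)&param2=(\d+) [NC]
      RewriteRule ^ /Param1/%1/Param2/%2? [R=301,L]

      # Internally map the friendly URL back onto the real page
      RewriteRule ^/?Param1/(\d+)/Param2/(\d+)$ /Details/Detail.aspx?param1=$1&param2=$2 [NC,L]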

    Read the article

  • Transforming a URL with the URI module

    - by sid_com
    Do I gain something when I transform my $url like this: $url = URI->new( $url )?

      #!/usr/bin/env perl
      use warnings;
      use strict;
      use 5.012;
      use URI;
      use XML::LibXML;

      my $url = 'http://stackoverflow.com/';
      $url = URI->new( $url );
      my $doc = XML::LibXML->load_html( location => $url, recover => 2 );
      my @nodes = $doc->getElementsByTagName( 'a' );
      say scalar @nodes;
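
    For what it's worth, a small sketch of one thing URI->new buys you over a plain string: the object normalizes the URL and exposes its parts through accessors:

      use URI;
      my $u = URI->new('HTTP://StackOverflow.COM:80/?q=perl');
      print $u->canonical, "\n";  # http://stackoverflow.com/?q=perl
      print $u->scheme, "\n";     # http (always returned lowercase)
      print $u->port, "\n";       # 80, the default for http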

    Read the article

  • Domain name rewriting and URL rewriting at the same time in .htaccess

    - by Steven
    Ugly URLs:

      www.domainname.com/en/piece/piece.php?piece_id=1
      www.domainname.com/en/piece/piece.php?piece_id=2
      www.domainname.com/en/piece/piece.php?piece_id=3
      ...

    Friendly URLs:

      piece.domainname.com/en/1
      piece.domainname.com/en/2
      piece.domainname.com/en/3
      ...

    I want to present only friendly URLs to website users. When I apply

      RewriteEngine On
      RewriteRule ^en/([^/]*)$ /en/piece/piece.php?piece_id=$1 [L]

    only the path is rewritten. Also, the CSS file cannot be found on pages served under the friendly URL. How do I rewrite the domain name and the URL at the same time? Do I have to use RedirectMatch, and if so, how?
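
    A rough .htaccess sketch, assuming piece.domainname.com already points at the same document root (the DNS record and virtual host are prerequisites that mod_rewrite cannot create). The missing stylesheet is a separate symptom: relative CSS paths break once the visible URL depth changes, so referencing the CSS with an absolute path should fix it:

      RewriteEngine On

      # Send visitors on the old host/path to the friendly subdomain URL
      RewriteCond %{HTTP_HOST} ^www\.domainname\.com$ [NC]
      RewriteCond %{QUERY_STRING} ^piece_id=([0-9]+)$
      RewriteRule ^en/piece/piece\.php$ http://piece.domainname.com/en/%1? [R=301,L]

      # Internally map the friendly URL back to the script
      RewriteCond %{HTTP_HOST} ^piece\.domainname\.com$ [NC]
      RewriteRule ^en/([0-9]+)$ /en/piece/piece.php?piece_id=$1 [L]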

    Read the article

  • Can I use Zoneedit to do a URL rewrite?

    - by chilly-child
    This is our scenario: our DNS is hosted by a company; they don't manage the DNS. We use Zoneedit (www.zoneedit.com) to manage the DNS records such as nameservers, CNAMEs, etc. Then we have our web host, where we just have our files hosted. We have a subdomain created on Zoneedit. We would like to do a URL rewrite so that subdomain.ourdomain.com is displayed as www.ourdomain.com/subdomain. Do I use Zoneedit to do the URL rewrite, or the web host, or the DNS host? I checked the Zoneedit docs but could not find a way to do a URL rewrite. I need some advice. Thanks

    Read the article

  • Disabling URL decoding in nginx proxy

    - by Tomasz Nurkiewicz
    When I browse to this URL: http://localhost:8080/foo/%5B-%5D the server (nc -l 8080) receives it as-is:

      GET /foo/%5B-%5D HTTP/1.1

    However, when I proxy this application via nginx:

      location /foo {
          proxy_pass http://localhost:8080/foo;
      }

    the same request routed through the nginx port is forwarded with the path decoded:

      GET /foo/[-] HTTP/1.1

    The decoded square brackets in the GET path cause errors in the target server (HTTP Status 400 - Illegal character in path...) because they arrive unescaped. Is there a way to disable URL decoding, or to encode the path back, so that the target server gets exactly the same path when routed through nginx? Some clever URL rewrite rule?
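
    One documented workaround: when proxy_pass is given without a URI part, nginx passes the client's request URI through in its original, still-encoded form instead of re-normalizing it:

      location /foo {
          # No path after the port: the original request URI,
          # percent-encoding included, is forwarded unmodified.
          proxy_pass http://localhost:8080;
      }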

    Read the article

  • Squid URL rewrites: https to http

    - by bobfran
    I'm exploring some uses of the Squid proxy 2.7, and I have seen a good number of examples of URL rewrites that take URLs such as http://somesitename.com and change them to https://somesitename.com. Those examples work great. What I'm wondering, though, is whether it's possible to do the reverse with a Squid URL rewriter, that is, to go from https://somesitename.com to http://somesitename.com. Simply editing the script file that handles the rewrites doesn't seem to do the trick, so I was wondering if there are certain things I have to configure Squid to do first, if what I am asking is even possible. I have my browser manually set up to use Squid as a proxy for all requests, and I can see HTTPS requests showing up in my Squid access.log file (via the CONNECT method).

    Read the article

  • Shortcut to open URL from clipboard

    - by good boy
    I want to create a shortcut that opens links from the clipboard. I frequently switch browsers, and it's very annoying to copy/paste hundreds of URLs from one to another. I have created a shortcut to launch a page in each browser, but how can I make the URL field include data from the clipboard, so that when I copy a URL and click on the shortcut, it opens the URL that is currently on the clipboard? If this is not possible, is there an AutoHotkey script or something similar that can accomplish this? I would prefer a desktop shortcut, but whatever works.
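
    An AutoHotkey sketch of the idea (the hotkey choice is arbitrary, and it assumes the clipboard really holds a URL, which Run hands to the default browser):

      ; Ctrl+Alt+U: open whatever URL is currently on the clipboard
      ^!u::Run, %Clipboard%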

    Read the article

  • Remove trailing slash from WordPress URL (the site also doesn't have www)

    - by mrintech
    I need help, as I am very confused by .htaccess. Some months back, I removed www from my domain's URLs using the following .htaccess lines:

      RewriteCond %{HTTP_HOST} !^example.com$ [NC]
      RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

    Now I also want to remove the trailing slash from the URL, because I am using WordPress and a page/post will open whether or not there's a trailing slash. Please provide the .htaccess code to remove the trailing slash. Remember, I don't want www either, and I have already set the .htaccess rule for removing www. Note: three years back, when I started the blog, I set the permalink structure without a trailing slash; now Google Webmaster Tools is suddenly showing warnings, and the URL in rel="canonical" is without the trailing slash. If you require any more details, I will be happy to provide them.
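
    A hedged .htaccess sketch that keeps the existing no-www rule and adds a 301 stripping the trailing slash, guarded so real directories are left alone:

      RewriteEngine On

      # Existing rule: strip www
      RewriteCond %{HTTP_HOST} !^example.com$ [NC]
      RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

      # New rule: strip a single trailing slash, except from directories
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.+)/$ /$1 [R=301,L]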

    Read the article

  • How to get variables from a URL in Flash AS3

    - by Leon
    So I have a URL that my Flash movie needs to extract variables from. Example link: http://www.example.com/example_xml.php?aID=1234&bID=5678 I need to get the aID and bID numbers. I'm able to get the full URL into a string via ExternalInterface:

      var url:String = ExternalInterface.call("window.location.href.toString");
      if (url) testField.text = url;

    I'm just unsure how to manipulate the string to get the 1234 and 5678 values. I'd appreciate any tips, links or help with this!
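
    A hedged AS3 sketch: split off the query string and let flash.net.URLVariables decode the name/value pairs:

      import flash.net.URLVariables;

      var url:String = ExternalInterface.call("window.location.href.toString");
      if (url && url.indexOf("?") != -1) {
          var vars:URLVariables = new URLVariables(url.split("?")[1]);
          testField.text = vars.aID + " " + vars.bID;  // "1234 5678"
      }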

    Read the article

  • jQuery - get URL path?

    - by KittyYoung
    I know I can use window.location.pathname to return the URL's path, but how do I parse that path? I have a URL like this: http://localhost/messages/mine/9889 and I'm trying to check whether "mine" exists in it. If "mine" is the second segment of the path, I want to write an if statement based on that: if (secondSegment == 'mine') { do something }
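
    A sketch in plain JavaScript; note that splitting "/messages/mine/9889" on "/" yields a leading empty string, so "mine" lands at index 2:

      var segments = window.location.pathname.split("/");
      // ["", "messages", "mine", "9889"]
      if (segments[2] === "mine") {
          // do something
      }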

    Read the article

  • URL for SEO link

    - by k0ni
    Hi, I need a function (C#) or regular expression that makes a nice URL out of a string (and replaces invalid characters), something like here on Stack Overflow. Example: "Short URL or long URL for SEO" becomes short-url-or-long-url-for-seo. Thanks
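
    A minimal C# sketch of the usual approach: lower-case the string, drop everything outside [a-z0-9], and collapse runs of whitespace and hyphens into single hyphens (accented characters would need an extra transliteration pass first):

      using System.Text.RegularExpressions;

      public static string ToSlug(string title)
      {
          string slug = title.ToLowerInvariant();
          slug = Regex.Replace(slug, @"[^a-z0-9\s-]", "");  // strip invalid chars
          slug = Regex.Replace(slug, @"[\s-]+", "-");       // collapse separators
          return slug.Trim('-');
      }

      // ToSlug("Short URL or long URL for SEO") -> "short-url-or-long-url-for-seo"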

    Read the article

  • Update Manager unable to get updates

    - by dPEN
    For the last few days, my Ubuntu 11.10 Update Manager has been unable to get new updates. When I checked the update log, I saw that for a couple of updates it says "Network isn't available", while other updates download fine and the internet connection also works fine. (I am unable to attach a screenshot due to the spam prevention policy.) For the Release file gpgv:/var/lib/apt/lists/partial/extras.ubuntu.com_ubuntu_dists_oneiric_Release.gpg it says "Network isn't available"; all other Releases download fine. Because of this, I have not seen any updates available in the last 10 days.

    Log of sudo apt-get update:

      dipen@EIDLCPU1018:~$ sudo apt-get update
      [sudo] password for dipen:
      Ign http://extras.ubuntu.com oneiric InRelease
      Ign http://archive.canonical.com oneiric InRelease
      Ign http://archive.canonical.com lucid InRelease
      Get:1 http://extras.ubuntu.com oneiric Release.gpg [72 B]
      Get:2 http://archive.canonical.com oneiric Release.gpg [198 B]
      Hit http://extras.ubuntu.com oneiric Release
      Get:3 http://archive.canonical.com lucid Release.gpg [198 B]
      Err http://extras.ubuntu.com oneiric Release
      Hit http://archive.canonical.com oneiric Release
      Ign http://archive.canonical.com oneiric Release
      Hit http://archive.canonical.com lucid Release
      Ign http://archive.canonical.com lucid Release
      Ign http://archive.canonical.com oneiric/partner i386 Packages/DiffIndex
      Ign http://archive.canonical.com oneiric/partner TranslationIndex
      Ign http://archive.canonical.com lucid/partner i386 Packages/DiffIndex
      Ign http://archive.canonical.com lucid/partner TranslationIndex
      Hit http://archive.canonical.com oneiric/partner i386 Packages
      Hit http://archive.canonical.com lucid/partner i386 Packages
      Ign http://dl.google.com stable InRelease
      Ign http://archive.canonical.com oneiric/partner Translation-en_IN
      Ign http://archive.canonical.com oneiric/partner Translation-en
      Ign http://archive.canonical.com lucid/partner Translation-en_IN
      Ign http://archive.canonical.com lucid/partner Translation-en
      Ign http://in.archive.ubuntu.com oneiric InRelease
      Ign http://in.archive.ubuntu.com oneiric-updates InRelease
      Ign http://in.archive.ubuntu.com oneiric-security InRelease
      Get:4 http://dl.google.com stable Release.gpg [198 B]
      Get:5 http://in.archive.ubuntu.com oneiric Release.gpg [198 B]
      Get:6 http://in.archive.ubuntu.com oneiric-updates Release.gpg [198 B]
      Get:7 http://in.archive.ubuntu.com oneiric-security Release.gpg [198 B]
      Hit http://in.archive.ubuntu.com oneiric Release
      Ign http://in.archive.ubuntu.com oneiric Release
      Hit http://in.archive.ubuntu.com oneiric-updates Release
      Err http://in.archive.ubuntu.com oneiric-updates Release
      Hit http://in.archive.ubuntu.com oneiric-security Release
      Ign http://in.archive.ubuntu.com oneiric-security Release
      Ign http://in.archive.ubuntu.com oneiric/main i386 Packages/DiffIndex
      Ign http://in.archive.ubuntu.com oneiric/restricted i386 Packages/DiffIndex
      Ign http://in.archive.ubuntu.com oneiric/universe i386 Packages/DiffIndex
      Ign http://in.archive.ubuntu.com oneiric/multiverse i386 Packages/DiffIndex
      Hit http://in.archive.ubuntu.com oneiric/main TranslationIndex
      Hit http://in.archive.ubuntu.com oneiric/multiverse TranslationIndex
      Hit http://in.archive.ubuntu.com oneiric/restricted TranslationIndex
      Hit http://in.archive.ubuntu.com oneiric/universe TranslationIndex
      Ign http://in.archive.ubuntu.com oneiric-security/main i386 Packages/DiffIndex
      Ign http://in.archive.ubuntu.com oneiric-security/restricted i386 Packages/DiffIndex
      Ign http://in.archive.ubuntu.com oneiric-security/universe i386 Packages/DiffIndex
      Ign http://in.archive.ubuntu.com oneiric-security/multiverse i386 Packages/DiffIndex
      Hit http://in.archive.ubuntu.com oneiric-security/main TranslationIndex
      Hit http://in.archive.ubuntu.com oneiric-security/multiverse TranslationIndex
      Hit http://in.archive.ubuntu.com oneiric-security/restricted TranslationIndex
      Hit http://in.archive.ubuntu.com oneiric-security/universe TranslationIndex
      Hit http://in.archive.ubuntu.com oneiric/main i386 Packages
      Hit http://in.archive.ubuntu.com oneiric/restricted i386 Packages
      Hit http://in.archive.ubuntu.com oneiric/universe i386 Packages
      Hit http://in.archive.ubuntu.com oneiric/multiverse i386 Packages
      Hit http://in.archive.ubuntu.com oneiric/main Translation-en
      Hit http://in.archive.ubuntu.com oneiric/multiverse Translation-en
      Hit http://in.archive.ubuntu.com oneiric/restricted Translation-en
      Hit http://in.archive.ubuntu.com oneiric/universe Translation-en
      Hit http://in.archive.ubuntu.com oneiric-security/main i386 Packages
      Hit http://in.archive.ubuntu.com oneiric-security/restricted i386 Packages
      Hit http://in.archive.ubuntu.com oneiric-security/universe i386 Packages
      Hit http://in.archive.ubuntu.com oneiric-security/multiverse i386 Packages
      Hit http://in.archive.ubuntu.com oneiric-security/main Translation-en
      Hit http://in.archive.ubuntu.com oneiric-security/multiverse Translation-en
      Get:8 http://dl.google.com stable Release [1,347 B]
      Hit http://in.archive.ubuntu.com oneiric-security/restricted Translation-en
      Hit http://in.archive.ubuntu.com oneiric-security/universe Translation-en
      Get:9 http://dl.google.com stable/main i386 Packages [1,214 B]
      Ign http://dl.google.com stable/main TranslationIndex
      Ign http://dl.google.com stable/main Translation-en_IN
      Ign http://dl.google.com stable/main Translation-en
      Fetched 3,821 B in 41s (91 B/s)
      Reading package lists... Done
      W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://extras.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 16126D3A3E5C1192 Ubuntu Extras Archive Automatic Signing Key <[email protected]>
      W: GPG error: http://archive.canonical.com oneiric Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>
      W: GPG error: http://archive.canonical.com lucid Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>
      W: GPG error: http://in.archive.ubuntu.com oneiric Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>
      W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://in.archive.ubuntu.com oneiric-updates Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>
      W: GPG error: http://in.archive.ubuntu.com oneiric-security Release: The following signatures were invalid: BADSIG 40976EAF437D05B5 Ubuntu Archive Automatic Signing Key <[email protected]>
      W: Failed to fetch http://extras.ubuntu.com/ubuntu/dists/oneiric/Release
      W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/oneiric-updates/Release
      W: Some index files failed to download. They have been ignored, or old ones used instead.
      dipen@EIDLCPU1018:~$

    Log of sudo apt-get upgrade:

      dipen@EIDLCPU1018:~$ sudo apt-get upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following packages have been kept back:
        ghc6-doc haskell-zlib-doc libghc6-zlib-doc
      0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
      dipen@EIDLCPU1018:~$

    /etc/apt/sources.list:

      deb http://in.archive.ubuntu.com/ubuntu/ oneiric main restricted
      deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates main restricted
      deb http://in.archive.ubuntu.com/ubuntu/ oneiric universe
      deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates universe
      deb http://in.archive.ubuntu.com/ubuntu/ oneiric multiverse
      deb http://in.archive.ubuntu.com/ubuntu/ oneiric-updates multiverse
      deb http://archive.canonical.com/ubuntu oneiric partner
      deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security main restricted
      deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security universe
      deb http://in.archive.ubuntu.com/ubuntu/ oneiric-security multiverse
      deb http://extras.ubuntu.com/ubuntu oneiric main #Third party developers repository
      deb http://archive.canonical.com/ lucid partner
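
    For what it's worth, the BADSIG warnings (rather than a real network failure) look like the blocker here; a commonly suggested remedy is to discard the cached index files and fetch fresh ones:

      sudo rm /var/lib/apt/lists/* -vf
      sudo apt-get update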

    Read the article

  • Category to page and blocking category URLs via robots.txt - good for SEO?

    - by user2952353
    I am using a template which, on pages, allows me to add sidebars and more content below and above the content I want to pull from a category, which is very helpful. If I create pages to display my categories' content, won't the page URLs conflict with the category URLs? By conflict I mean causing a duplicate-content error. What I thought might help was to block the blog's category URLs in robots.txt, e.g. /category/books and /category/music. Would that be a good practice in order to avoid the duplicate-content penalty? Any tips appreciated.
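
    The robots.txt itself would be trivial (sketch below), though as a hedge: Disallow only stops crawling, not indexing, so a rel="canonical" tag on each page pointing at the preferred URL is the more commonly recommended cure for duplicate content:

      User-agent: *
      Disallow: /category/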

    Read the article

  • URL routing with WebForms: what to do with additional querystrings like page number, filter, etc.?

    - by Scott
    I am switching from Intelligencia's UrlRewriter to the new Web Forms routing in ASP.NET 4.0. I have it working great for basic pages; however, in my e-commerce site, when browsing category pages, I previously used querystrings, built into my pager control, to control paging. An old URL (with URL rewriting) would be: http://www.mysite.com/Category/Arts-and-Crafts_17 I have a MapPageRoute defined in global.asax as:

      routes.MapPageRoute("category-browse", "Category/{name}_{id}", "~/CategoryPage.aspx");

    This works great. Now somebody clicks to go to page 2. Previously I would have just tacked on ?page=2 as the querystring. How do I handle this using Web Forms routing? I know I can do something like: http://www.mysite.com/Category/Arts-and-Crafts_17/page/2 But in addition to page, I can have filters, age ranges, gender, etc. Should I just keep defining routes that handle these variables, or should I continue using querystrings, and can I define a route that allows me to use my querystrings like before?
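
    Route values and querystrings can coexist in Web Forms routing, so one option is to keep the route for the category and leave paging and filtering in the querystring. A hedged sketch of reading both in CategoryPage.aspx.cs (the parameter names are assumptions):

      protected void Page_Load(object sender, EventArgs e)
      {
          string id = (string)Page.RouteData.Values["id"];   // "17", from the route
          string page = Request.QueryString["page"] ?? "1";  // from ?page=2
          string filter = Request.QueryString["filter"];     // null if absent
      }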

    Read the article

  • REST/ROA architecture - send database search/querying/filter/sorting parameters in URL? (>, <, IN, etc.)

    - by DutrowLLC
    I'm building a REST interface to my application using ROA (Resource-Oriented Architecture). I'd like to give the client the ability to specify search parameters in the URL, so a client could say "give me all people whose first_name is equal to "BOB" and whose age is greater than "30", sorted by "last_name"". I was thinking something like:

      GET /PEOPLE/{query_parameters}/{sort_parameters}

    or perhaps

      GET /PEOPLE?query=<query_string>&sort=<sort_string>

    but I'm unsure what syntax would be good for specifying the COLUMN_NAME-OPERATOR-VALUE triplets. I was thinking perhaps column_name.operator.value, so the client could say:

      GET /PEOPLE?query=first_name.EQUALS.bob&query=age.GREATER_THAN.30&sort=last_name.ASCENDING

    I really don't want to reinvent the wheel here; are there accepted ways of doing this? I am using Restlets; I don't know if that makes a difference.

    Read the article

  • Is it advisable to have non-ASCII characters in the URL?

    - by Ravi Gummadi
    We are currently working on an I18N project. I was just wondering what the complications are of having non-ASCII characters in the URL. If it's not advisable, what are the alternatives for dealing with this problem? EDIT (in response to Maxym's answer): The site is going to be local to a specific country, and I need not worry about the worldwide public accessing it. I understand that from a usability point of view it is really annoying. What are the other technical problems associated with this?
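
    For what it's worth, the wire format is rarely the real problem: non-ASCII path segments travel percent-encoded as UTF-8, which any modern client produces automatically. A Python sketch of what actually goes over HTTP:

      from urllib.parse import quote, unquote

      path = "/artículos/año-2024"
      wire = quote(path)     # '/art%C3%ADculos/a%C3%B1o-2024'
      print(wire)
      print(unquote(wire))   # back to '/artículos/año-2024'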

    Read the article

  • Mod_rewrite Mask URL

    - by user37143
    Hi, I need to rewrite https://mydomain1.com/chanagepassword to https://mydomain2.com/ssp, then hide the mydomain2.com/ssp URL in the browser and keep showing https://mydomain1.com/chanagepassword. I got the first part working with the rule:

      Options +FollowSymLinks
      RewriteEngine on
      RewriteRule /changepassword https://mydomain2.com/ssp/ [R=301,L]

    However, it shows the https://mydomain2.com/ssp URL in the browser. How can I mask this and show https://mydomain1.com/chanagepassword? Please advise. Thanks.
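
    Masking, as opposed to redirecting, means Apache must fetch the remote content itself: the [P] proxy flag instead of [R]. A hedged sketch, assuming mod_proxy is loaded (and noting that SSLProxyEngine must be set in the virtual host, not in .htaccess, for an https back end):

      RewriteEngine on
      # SSLProxyEngine on   <- required in the vhost for the https target
      RewriteRule ^/?changepassword$ https://mydomain2.com/ssp/ [P,L]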

    Read the article

  • Load balancers, multiple data centers and URL-based routing

    - by kunkunur
    There is one data center, dc1. There is a business need to set up another data center, dc2, in a different geography, and there might be more in the future, say dc3. Within data center dc1: there are two web servers, say WS1 and WS2, which currently share nothing, and no need for more webservers within each DC is foreseen. dc1 also has a local load balancer set up with session stickiness, so if a user u1 lands on dc1 and the load balancer routes his first request to WS1, all of u1's requests from then on get routed to WS1. The local load balancer and webservers are invisible to the user. The local load balancer listens for traffic on a virtual IP assigned to the virtual cluster of webservers WS1 and WS2; the virtual IP is what the host name resolves to in DNS. There are no client-specific subdomains as of now; instead there is a client-specific URL (context), e.g. www.example.com/client1 and www.example.com/client2. Given the above, when dc2 is onboarded I want to route traffic between dc1 and dc2 based on the client. The options I have found so far are: (1) have client-specific subdomains, e.g. client1.example.com and client2.example.com, and assign each of them the virtual IP of the data center to which I want to route them; or (2) assign www.example.com and www1.example.com to the first DC, i.e. dc1, and assign www2.example.com to dc2; all requests first get routed to dc1, where WS1 and WS2 redirect the user to www1.example.com or www2.example.com based on whether the URL ends with /client1 or /client2. I need help with the following: if I set up a global load balancer between dc1 and dc2, do I have any alternative solutions, that is, can a global load balancer route traffic based on the URL? Are there drawbacks to the subdomain-based solution compared to the www1 solution? With the www1 solution, I am worried that it creates a dependency on dc1, at least for the first request, and that the user will see he is being redirected to a different URL.

    Read the article

  • Curly braces in a URL

    - by lipton
    Often I have come across URLs like the following: http://www.isthisahacker.com/{7B643FB915-845C-4A76-A071-677D62157FE07D}.htm Do the curly braces in the URL above indicate some kind of attempt to access the registry, or is that a legitimate URL? It looks kind of suspicious to me.

    Read the article
