Search Results

Search found 9699 results on 388 pages for 'htm links'.


  • How do I delete hard links, symbolic links, junction points, etc.?

    - by jonny
    I could be wrong, but I have yet to hear a convincing argument that the exploitability these features introduce is outweighed by their rather dubious, debatable usefulness. They seem marginally handy at best, and I don't think I have any need for them; I do have a need for security. How can I remove their functionality from my hard drive permanently? Microsoft only has pages on how to create them, which seems almost peculiar to the point of being dubious (at least to me). And a quick command-line question: am I correct in assuming that fsutil hardlink list c: will enumerate every single hard link on that drive?

        C:\Windows\system32>fsutil hardlink list c:
        \Windows\System32

    Also, how do I delete symbolic links? I would really rather have all symbolic linking and recursion-creating functionality removed entirely, if that's possible.

        C:\Windows\system32>fsutil behavior query symlinkevaluation
        Local to local symbolic links are enabled.
        Local to remote symbolic links are enabled.
        Remote to local symbolic links are disabled.
        Remote to remote symbolic links are disabled.
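
    A minimal sketch of the commands involved, assuming an elevated Command Prompt on Windows 7; the paths under C:\example are hypothetical. Directory symbolic links and junctions are removed with rmdir, file symbolic links with del, and system-wide symlink evaluation can be tightened with fsutil behavior set:

        rem list reparse points (symbolic links and junctions) anywhere on the drive
        dir /AL /S C:\

        rem remove a directory symbolic link or junction (the target directory is untouched)
        rmdir "C:\example\dir-link"

        rem remove a file symbolic link
        del "C:\example\file-link"

        rem disable all four symlink evaluation modes (local/remote combinations)
        fsutil behavior set SymlinkEvaluation L2L:0 L2R:0 R2L:0 R2R:0

    Note that fsutil hardlink list expects a file path and lists the hard links for that one file, so by itself it will not enumerate every hard link on the drive.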

  • How to create a good sitemap for a dynamic website

    - by Saif Bechan
    I have a website with dynamic content and different kinds of pages. Some pages rarely change, and others, like blog pages, change often. The blog pages also have links for sorting, for example sorting by date, ascending or descending. On some pages I also have links to different tabbed content, and links that are just anchors. Now, when I use an XML sitemap generator, all of these links are thrown into the sitemap, and I don't think all of them are really relevant. The blog posts written so far are also included in the sitemap. Is this really necessary? I think the links to the blog posts can be indexed just fine on their own. Is the best way to build a sitemap to assign only the main menu links to it manually, or is indexing everything really recommended?
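
    For illustration, a minimal hand-maintained sitemap that lists only the stable, canonical pages might look like the sketch below; the URLs are hypothetical placeholders:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <!-- stable top-level pages, listed explicitly -->
          <url>
            <loc>http://www.example.com/</loc>
            <changefreq>weekly</changefreq>
          </url>
          <url>
            <loc>http://www.example.com/blog/</loc>
            <changefreq>daily</changefreq>
          </url>
          <!-- sorted views (?sort=date), tabbed variants and #anchor links are left out;
               crawlers reach the individual posts through the blog index -->
        </urlset>

    Sorted and tabbed variants of the same content are usually better handled with a canonical link than with extra sitemap entries.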

  • Is it bad for SEO to have internal redirected links? [closed]

    - by Jonas Lindqvist
    I have a large number of pages with similar but not identical content. Example: site.com/dream_dictionary_flying and site.com/dream_interpretation_flying. The problem is that although they are not identical, they are sometimes on the edge of being duplicate content. The fix via a 301 redirect in .htaccess is simple and can be done in a minute, BUT changing all existing links on the whole site from "/something" to "/something_else" would take ages; it would be thousands of manual changes taking hundreds of hours. My question is this: is it bad for SEO to have internal links that are redirected, or rather, HOW bad is it? For a human user it would not matter at all, but from what I have experienced, search engines don't like it. Is there a rule of thumb here? Please come back with your thoughts and experience on this. Thanks!
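
    For reference, the kind of one-line 301 the question refers to could be sketched in .htaccess like this, using the example paths from the question:

        # permanently point the old-style URL at the page that should rank
        Redirect 301 /dream_dictionary_flying /dream_interpretation_flying

    With this in place the old internal links still resolve, just through one extra hop; the open question is how much that hop costs.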

  • Will many links to the same page without nofollow penalize the host site in the search engine rankings?

    - by Evgeny
    Maybe a silly question, but I'll give it a shot. On my forum app I would like to allow users with a sufficiently high reputation to display links to their home pages under every post, without the nofollow attribute (lower-reputation users would still get nofollow). I am happy to help the site's contributors improve their own rankings, but I'm not sure whether this can actually deteriorate the rank of the host (the site that hosts those links), since the same link to a user's home page may end up peppered across many pages of the host. What do you think? Thanks.
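
    A sketch of the per-reputation markup being described, with a made-up destination URL; only the rel attribute differs between the two cases:

        <!-- high-reputation author: followed link under the post -->
        <a href="http://example-author-site.com/">Author's home page</a>

        <!-- low-reputation author: same link, marked nofollow -->
        <a href="http://example-author-site.com/" rel="nofollow">Author's home page</a>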

  • Should mobile webpages have hreflang links to non-mobile pages?

    - by Noam
    My site has multilingual links, which are specified like this on non-mobile pages:

        <link rel="alternate" hreflang="en" href="http://mydomain.com/page" />
        <link rel="alternate" hreflang="jp" href="http://ja.mydomain.com/page" />
        <link rel="alternate" hreflang="ko" href="http://ko.mydomain.com/page" />

    In addition, these non-mobile pages link to a mobile version:

        <link rel="alternate" media="only screen and (max-width: 640px)" href="/mobile/page" />

    Now the question is about what links should be in the mobile page, which isn't translated to different languages now. Is this enough:

        <link rel="canonical" href="/page"/>

    Or should I also have the same group of hreflangs that point to non-mobile pages?
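
    As a point of comparison, a minimal sketch of the canonical/alternate pairing between the two versions, assuming the mobile URLs simply mirror the desktop ones; whether to duplicate the hreflang set on the mobile page is exactly what the question is asking:

        <!-- on http://mydomain.com/page (desktop) -->
        <link rel="alternate" media="only screen and (max-width: 640px)" href="http://mydomain.com/mobile/page" />

        <!-- on http://mydomain.com/mobile/page (mobile) -->
        <link rel="canonical" href="http://mydomain.com/page" />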

  • Multi-language switch links: translated or in the current language?

    - by FFish
    Should I do A, translate the language links into the current language (if I am on the English version):

        <a href="en/">English</a> | <a href="it/">Italian</a> | <a href="fr/">French</a>

    or B, show the links in their native languages:

        <a href="en/">English</a> | <a href="it/">Italiano</a> | <a href="fr/">Français</a>

    From a user perspective option B is the obvious choice, but what about SEO?

  • Do navigation menu links negatively impact SEO for pages' content?

    - by Rodolfo
    I've always had my doubts about the effect of navigation menus on SEO; you know, the menus at the top that appear on every page of the site and link to the main sections and subsections. My concern is that, unless they are injected dynamically (i.e. after the page is loaded), from a search engine's point of view they probably look like a pile of links at the very beginning of the page, links that mostly have nothing to do with the page being analyzed. So they are probably not only confusing the engine, but also passing link 'juice' to the wrong pages or diluting its value. When I've asked SEO people about this, I usually get a "Google is smart, they'll recognize it as a menu and ignore it" response, but I'm not convinced (the 'Google is smart' argument sounds almost like a religious debate to me). So does it affect SEO negatively or not? Are there any official posts on this topic?

  • How to find the top links on a website?

    - by Anil
    I want to know which links on a site are the best, whether by PageRank or by popularity. For example, take http://www.pragprog.com as a site: I want to find out which are its most relevant links. These should not be external, outward-pointing links; they should belong to the same site. Can Google or any similar service provide this kind of information?

  • Need to parse HTML document for links-- use a library like html5lib or something else?

    - by Luinithil
    I'm a very new webpage builder, currently working on a website that needs to change link colours according to the destination page. The links will be sorted into different classes (e.g. good, bad, neutral) by certain user-input criteria: e.g. links to content the user would find interesting are coloured blue, links to things the user (presumably) doesn't want to see are coloured as normal text, and so on. I reckon I need a way to parse the page for links to the content (stored in a MySQL database) and change the classes of all the links in the HTML before sending the adapted page to the user. I've read that regex is not a good way to find those links, so should I use a library, and if so, is html5lib a good fit for what I'm doing?
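
    A minimal Python sketch of the idea, assuming both html5lib and BeautifulSoup are installed (bs4 can use html5lib as its parser); the URL sets and class names are made up for illustration, and in the real application the lookup would come from the MySQL table mentioned above:

        from bs4 import BeautifulSoup

        GOOD = {"/articles/python", "/articles/rails"}   # hypothetical: URLs the user cares about
        BAD = {"/ads/special-offer"}                      # hypothetical: URLs to de-emphasise

        def classify_links(html):
            """Parse with html5lib via BeautifulSoup, retag every <a> and return the page."""
            soup = BeautifulSoup(html, "html5lib")        # html5lib gives browser-like parsing
            for a in soup.find_all("a", href=True):
                if a["href"] in GOOD:
                    a["class"] = "good"                   # e.g. styled blue in CSS
                elif a["href"] in BAD:
                    a["class"] = "bad"                    # e.g. styled like normal text
                else:
                    a["class"] = "neutral"
            return str(soup)

        page = '<p><a href="/articles/python">Python tips</a> <a href="/ads/special-offer">Ad</a></p>'
        print(classify_links(page))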

  • How to save some values from an array in a controller in Rails?

    - by Alfred Nerstu
    I've got a links array that I'm saving to a database. The problem is that the records aren't saved in the order of the array, i.e. links[1] is saved before links[2] and so on... This is an example from the view file:

        <p>
          <label for="links_9_label">Label</label>
          <input id="links_9_name" name="links[9][name]" size="30" type="text" />
          <input id="links_9_url" name="links[9][url]" size="30" type="text" />
        </p>

    And this is my controller:

        def create
          @links = params[:links].values.collect { |link| @user.links.new(link) }
          respond_to do |format|
            if @links.all?(&:valid?)
              @links.each(&:save!)
              flash[:notice] = 'Links were successfully created.'
              format.html { redirect_to(links_url) }
            else
              format.html { render :action => "new" }
            end
          end
        end

    Thanks in advance! Alfred
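
    The usual cause is that params[:links] is a hash, and .values does not guarantee the order of its keys. A sketch of one way to force the original index order, assuming the keys are the numeric strings generated by the form ("1", "2", ..., "9"):

        def create
          # sort the hash by its numeric key so links[1], links[2], ... are built in order
          ordered = params[:links].sort_by { |index, _attrs| index.to_i }
          @links = ordered.collect { |_index, attrs| @user.links.new(attrs) }

          respond_to do |format|
            if @links.all?(&:valid?)
              @links.each(&:save!)
              flash[:notice] = 'Links were successfully created.'
              format.html { redirect_to(links_url) }
            else
              format.html { render :action => "new" }
            end
          end
        end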

  • How do I do this simple 301 redirect from index.htm to root?

    - by elliot100
    I've read various reference sites on redirection and, to be honest, I understand very little. I currently have the standard WordPress mod_rewrite rules in my .htaccess:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    Quite a few of my referrers go to an old URL, http://www.example.com/index.htm, which gives them an error, and I want them redirected to http://www.example.com/ instead. What do I need to do?
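
    One common way to handle this is a single rule placed above the WordPress block, so the old index.htm address answers with a 301 before WordPress's catch-all rule runs; a sketch, assuming the live domain really is www.example.com:

        # send the legacy index.htm permanently to the site root
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^index\.htm$ http://www.example.com/ [R=301,L]
        </IfModule>

        # BEGIN WordPress
        # ... the existing WordPress rules stay below, unchanged ...
        # END WordPress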

  • Does a large number of internal broken links affect SEO?

    - by TheBigK
    We have a WordPress blog and had the Disqus plugin installed for several months. Around late August this year, the plugin created a ton of URLs pointing to non-existent locations on our website. For example:

        Correct URL:    domain.com/correct-URL/
        Disqus created: domain.com/correct-URL/344322/  - throws 404
                        domain.com/correct-URL/433466/  - throws 404

    So essentially, Google found a LARGE number of broken links pointing to unknown locations on our own domain. As the count of those 404 errors rose, our site suffered a massive drop in traffic and the crawl rate dropped to 10% of what it was earlier. I would like to know: can a large number of internal broken links (we have over 99k of them) cause rankings to drop? I have fixed the issue in one go by creating 301 redirects from each bad URL to the correct URL and removing Disqus. Google, however, only drops the error count by about 1,000 a day as I mark errors as 'fixed' in Google Webmaster Tools. Is there any way to speed this up? Should I set a custom crawl rate of 'Fast' in GWT to make Google crawl our website faster? I'd appreciate your input and experience.
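
    The bulk 301 described above can often be expressed as one pattern rather than one rule per URL; a hedged .htaccess sketch, assuming the bad URLs are always a valid permalink followed by a purely numeric segment:

        # strip the trailing numeric segment that Disqus appended, keeping the real permalink
        RewriteEngine On
        RewriteRule ^(.+)/[0-9]+/$ /$1/ [R=301,L]

    As with any rewrite, it would need to sit above the WordPress catch-all rule to take effect.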

  • Can canonical links be used to make 'duplicate' pages unique?

    - by merk
    We have a website that allows users to list items for sale. Think eBay, except we don't actually handle the sale; we just list the item and provide a way to contact the seller. Anyhow, in several cases sellers have multiple units of an item for sale. We don't have a quantity field, so they upload each unit as a separate listing (and adding a quantity field is not an option). So we have a lot of pages with basically the exact same information, where only the item number might differ. The SEO guy we've started using says we should put a canonical link on each page and have it point to itself, so for example www.mysite.com/something/ would have a canonical link of href="www.mysite.com/something/". This doesn't really seem kosher to me; I thought canonical links were supposed to point to other pages. The SEO guy claims that doing it this way tells Google all these pages are indeed unique, even if they do basically have the same content. That seems a little off to me: what's to stop a spammer from putting up a million pages and doing the same? Can anyone tell me whether the SEO guy's suggestion is valid? If it's not, do I need to figure out some way to detect duplicated items and automatically pick one of the duplicates to serve as the original, generating canonical links based off that? Thanks in advance for any help.
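
    For contrast, a sketch of the two patterns being debated, with made-up listing URLs: a self-referencing canonical on every listing, versus pointing the duplicates at one listing chosen as the original:

        <!-- pattern the SEO consultant suggested: each listing canonicalises to itself -->
        <!-- on /listing/12345/ -->
        <link rel="canonical" href="http://www.mysite.com/listing/12345/" />

        <!-- alternative raised in the question: duplicates point at one chosen original -->
        <!-- on /listing/12346/ and /listing/12347/ (same item, different units) -->
        <link rel="canonical" href="http://www.mysite.com/listing/12345/" />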

  • Can I use nofollow for offsite links without it affecting my page rank?

    - by Jack
    What I have is a page consisting almost entirely of offsite links; each clicked link is forwarded on to its destination. What I would like the search engines to do is index the text between the anchor tags but not follow the link itself:

        <a href="somelink">Index This Text Only</a>

    I've read several articles and they all seem to contradict each other about when to use nofollow. What's been happening over the two months the site has been live is that both Google and Bing are crawling the site as well as all the links it forwards to. The search engines are now generating a lot of 404s for images and files that never existed on my site but seem to belong to the sites visitors were forwarded to. The search engines don't seem to honor the 302 header when forwarded. I would like a definitive answer on the nofollow attribute as it relates to my situation: can I use nofollow to stop the 404s, and if so, will it affect my page rank negatively?
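
    For illustration, the markup being asked about, with a hypothetical /out/ redirect path; rel="nofollow" on the anchor keeps the anchor text on the page indexable while telling crawlers not to follow the link:

        <a href="/out/42" rel="nofollow">Index This Text Only</a>

    Disallowing the redirect path in robots.txt (for example, Disallow: /out/) is the usual way to stop crawlers from requesting the forwarded destinations at all.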

  • How do I view Visual Studio BuildLog.htm files without cutting and pasting into an external browser?

    - by bgoodr
    This may or may not be specific to VS2005 (that is the version I'm referring to in this question). I often see this in the Output panel inside Visual Studio:

        2>Build log was saved at "file://c:\\vsdll_example\MyExecRefsDll\Debug\BuildLog.htm"

    Now, since that looks and smells like a URL, I would have thought I could simply click (or double-click) on it and a browser window of some sort would be displayed. No, that doesn't work. So to view it, I have to cut and paste the "file://bla/bla/bla" part into an external window. Is there a way to set up Visual Studio to let me browse to that file directly, or view it inside the Visual Studio IDE, or something to that effect, without the extra fiddling with cutting and pasting? Or is there some type of keybinding I'm not aware of? Thanks, bg

  • 500.19 error in IIS7 when an error occurs

    - by Joel
    Setup: Windows 7, IIS7. I am working on an app that is being viewed through the local IIS server, not the built-in debugging web server. There is NO <customErrors> section in my web.config. When an error occurs, I see the following message:

        HTTP Error 500.19 - Internal Server Error
        Absolute physical path "C:\inetpub\custerr" is not allowed in system.webServer/httpErrors
        section in web.config file. Use relative path instead.

    I haven't changed any IIS7 settings, so I don't know why this is occurring. When I go to applicationHost.config, I see:

        <httpErrors errorMode="Custom" lockAttributes="allowAbsolutePathsWhenDelegated,defaultPath">
            <error statusCode="401" prefixLanguageFilePath="%SystemDrive%\inetpub\custerr" path="401.htm" />
            <error statusCode="403" prefixLanguageFilePath="%SystemDrive%\inetpub\custerr" path="403.htm" />
            <error statusCode="404" prefixLanguageFilePath="%SystemDrive%\inetpub\custerr" path="404.htm" />
            <error statusCode="405" prefixLanguageFilePath="%SystemDrive%\inetpub\custerr" path="405.htm" />
            <error statusCode="406" prefixLanguageFilePath="%SystemDrive%\inetpub\custerr" path="406.htm" />
            <error statusCode="412" prefixLanguageFilePath="%SystemDrive%\inetpub\custerr" path="412.htm" />
            <error statusCode="500" prefixLanguageFilePath="%SystemDrive%\inetpub\custerr" path="500.htm" />
            <error statusCode="501" prefixLanguageFilePath="%SystemDrive%\inetpub\custerr" path="501.htm" />
            <error statusCode="502" prefixLanguageFilePath="%SystemDrive%\inetpub\custerr" path="502.htm" />
        </httpErrors>

    How can I get rid of this configuration error so I can see detailed errors?
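
    One commonly suggested route, sketched here as an assumption rather than a verified fix: switch the site's own web.config to detailed errors, and if IIS complains that the section is locked, unlock httpErrors at the server level first:

        <!-- in the application's web.config -->
        <configuration>
          <system.webServer>
            <httpErrors errorMode="Detailed" />
          </system.webServer>
        </configuration>

        rem run from an elevated prompt if the section is locked at the server level
        %windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/httpErrors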

  • Meaning of Crawl errors

    - by com
    My question is about the definition of crawl errors in Google Webmaster Tools. Crawl errors are divided into several sections; let's consider them in turn.

    HTTP section. I assume that all broken links in this section were somehow found by the crawler, and that these are not links from the sitemap. If these links were found by scanning sitemap pages for links, why doesn't it mention the source page, like the sitemap section does with its Linked From column? Please correct me if I am wrong.

    Sitemap section. It looks like all these links came from my sitemap, yet there is a Linked From column. I already know that all these broken links are from the sitemap, so to fix the errors I should revise my sitemap. Am I wrong?

    Not followed section. I don't know what this means. It seems to accumulate all links that caused a redirect, but for some reason Google considers them wrong redirects. Do you know if there is a set of rules for determining a wrong redirect? I actually found my mistake: I tried to normalize URLs and redirect them to the right URL, but I did the normalization the wrong way.

    Not found section. This section is like the HTTP section but with 404 errors, and it has a Linked From column. But very often Linked From says unavailable. What does that mean? Can Google not tell me how it found this non-existent page? And how is this section related to the sitemap section; does it contain all the 404 links from the sitemap too? There are far too many 404 links, many more than in the sitemap. I looked at what we have in Linked From and saw that a link came from the sitemap two months ago. But why does Google keep it indexed when the link is already dead and the new sitemap doesn't have it? Is there an expiry date for old links?

    Unreachable section. This seems to be the section for 500 errors. It doesn't have a Linked From column. There are too many completely meaningless links; I really don't know where this stuff came from, and without Linked From I can't figure out how to deal with it.

    Sorry for such a big topic, but I just want to make clear what every section stands for, because that is crucial for dealing with all these problems. Hopefully it will be useful not just for me. Thanks!

  • How do I open "apt" links in Firefox on Lubuntu?

    - by cipricus
    Many answers on Ask Ubuntu point to links like this one, which in Xubuntu opened in Ubuntu Software Center. In Lubuntu I get an error message instead. In Firefox, under Preferences/Applications, I cannot see anything resembling apt to associate with a program. Opening the same link in Chromium or Opera brings up a page where clicking "I'm running Ubuntu" also results in an error message, as in Firefox. What's the remedy? Can I install Ubuntu Software Center?
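
    A hedged sketch of the usual remedy on a minimal Lubuntu install, assuming the apt URL handler is simply missing: install apturl, then pick it as the application the next time Firefox asks how to open an apt link:

        # install the lightweight apt: URL handler (much smaller than the full Software Center)
        sudo apt-get install apturl

        # when Firefox prompts for an application to open apt links, point it at:
        #   /usr/bin/apturl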

  • How come Indiegogo links shared on G+ link to their page instead of displaying URL?

    - by Ivan Vucica
    If an Indiegogo link, such as this one, gets shared on G+, their G+ page is displayed in the post in the place where the URL is normally shown. I've tried analyzing the HTML but came up empty-handed: there's Twitter Cards metadata, there's OpenGraph, there is a G+ button, but I found nothing that links to Indiegogo's G+ page, not even rel="publisher". So how does Indiegogo achieve this?

  • Are One Way Links Still Important in Search Engine Optimization?

    Pretty dumb question, huh? But people are beginning to wonder, considering that Google might change its algorithms again. If you doubt it, do a quick search on the keyword "Google Caffeine". This is the new Google search engine, and so far beta testers have said that it is faster, provides more relevant search results, and so on. Anyway, whatever the case may be, it is important to note that one-way links matter right now because the search engines have made them matter.

  • Can I use something other than Ubuntu Software Center to open `apt` links?

    - by cipricus
    Without Ubuntu Software Center installed on Lubuntu, I was unable to make apt links in Firefox open in any program (see this question). After installing Ubuntu Software Center, that problem is solved, but could I use another program instead of Ubuntu Software Center for the same purpose? I find it too heavy, and to install packages I prefer the terminal, gdebi, Lubuntu Software Center, or Synaptic. (Now that I have the apt option in Firefox/Preferences/Applications, I try to change Ubuntu Software Center to Lubuntu Software Center, but this does not change the option.)

  • Should I disallow (robots.txt) archive/author pages with links already available on the front page? [on hold]

    - by WPRookie82
    I am working on a simple WordPress blog where, when an article is published, it appears on ALL of these pages:

        Homepage             - headline (clickable) + 3-line summary
        Parent category page - headline (clickable) + 3-line summary
        Child category page  - headline (clickable) + 3-line summary
        Author page          - headline (clickable)
        sitemap.xml

    I've been told that I should add all author pages to my robots.txt, under Disallow, so that search engine bots do not spider /author/*, since all links on these pages are available elsewhere. Is this a good approach, or is rel=nofollow better, or should I not worry about this at all?
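
    For reference, the robots.txt variant being described might look like the following minimal sketch; whether it is advisable is exactly the open question:

        User-agent: *
        Disallow: /author/

    An alternative sometimes preferred for thin archive pages is a noindex,follow robots meta tag on the author template, which keeps the pages crawlable but out of the index.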
