Search Results

Search found 37688 results on 1508 pages for 'site search'.

Page 345/1508

  • What is the correct frequency of changing content regularly?

    - by SSRB
    What is the correct frequency for changing content regularly? Suppose I have a site "Seven Sea" with five links: Home, About Us, Product, Sitemap, Contact Us. It is good for a site to change its content regularly, but is there a minimum or maximum frequency for doing so? If I change my content daily, is that good from an SEO point of view? Or if I change my content only once a year, is that bad for SEO? What is the better choice? A request: if this type of question has already been answered, please link to that answer rather than closing the question.

    Read the article

  • Free Website Content - Do Articles From Directories Work Anymore? Part 2

    A clever strategy for many SEO experts is to study a highly ranked site and then try to copy what it does to be so successful. Take a close look at highly ranked sites and you will notice that virtually all of them have a very high number of links pointing to other sites. Let me give an example of a site that is ranked very highly and is made up exclusively of links pointing to other sites, billions of them in fact. I am talking about a site that receives over 100 million hits daily. Learn its secrets in this article.

    Read the article

  • Does the e-commerce platform matter to customers?

    - by c s h
    The place where I work is now looking into developing a new e-commerce site on the Magento platform. Magento will fill all of our needs. I was just wondering whether doing it this way is in any way unprofessional (the impression we make is something we are really worried about): will people who visit the site view our business differently knowing we used Magento or any other e-commerce platform? There are ways to find out; I use Chrome Sniffer to identify the platforms used to build each site, and similar tools are available for other browsers.

    Read the article

  • Advantages of country TLD vs. .com

    - by Tschareck
    I want to get a domain for my site. The site's topic is Vienna, but the content will be in English. I am wondering whether I should get a .com domain or a .at domain. .at is both much cheaper and easier to get (there is less chance that my desired phrase is already registered). Is there any disadvantage in terms of SEO and PageRank if my domain does not end in .com? The site will be in English and targeted not just at Austria but globally, mostly at foreign tourists. I don't care whether the address is easy to remember; I expect most traffic to come from search engines anyway.

    Read the article

  • Subdomain Is Redirected and Causing an Error Because www. is Added

    - by user532493
    On my site, say example.com, if I try to access test.example.com, Firefox automatically adds www., making it www.test.example.com, which causes an error. However, if I visit a site like my.ebay.com, no www. is added and no error occurs. What's going on? Just in case, my .htaccess file is as follows:

        Options -Multiviews
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example.com
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ $1.php

    I looked through Firebug and it seems Firefox doesn't even attempt to reach test.example.com before switching to www.test.example.com. One nuance I only noticed now: if I try to access the site using test.example.com/ (with a trailing slash), the www. is added; if the trailing slash is not there, I am sent to the subdomain properly.
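
    One thing worth checking (a hedged note, not a confirmed diagnosis of this setup): Firefox only falls back to guessing a www. prefix when a hostname fails to resolve, so the first suspects are a missing DNS record and virtual host (or ServerAlias) for test.example.com. On the rewrite side, anchoring the host condition is also good hygiene, so the www redirect can never touch subdomain requests. A minimal sketch of that anchoring:

        # Redirect only the bare domain to www; never match test.example.com
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]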

    Read the article

  • How to avoid getting negative points from Google AdSense

    - by Napster
    I have a news-based website whose primary content includes news, image albums, and videos. Of these, I hold the copyright to the images, and the videos are just embedded YouTube videos. As for the news, my site is a kind of mashup: it gathers data from various sites and presents it in a more user-friendly way for quick digestion and access. My problem is that since the news part of the site can also be found on other sites, my site could suffer in the search rankings. Is there any solution to this? One thing I thought of is to put a disallow on all the news article pages so Google does not crawl them. Would that help me? And when I apply to Google AdSense, does Google also crawl these disallowed pages?
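
    Worth keeping in mind: a robots.txt disallow stops crawling but does not by itself keep already-discovered URLs out of the index, and the AdSense crawler (Mediapartners-Google) is a separate user agent with its own robots.txt rules. If the aim is to keep the syndicated news pages out of the rankings while still letting them be fetched, a noindex response header is the more usual tool; the sketch below is an assumption-laden illustration (it presumes Apache with mod_setenvif and mod_headers, and that the articles live under a /news/ path):

        # Mark requests under /news/ and send a noindex hint (hypothetical path)
        SetEnvIf Request_URI "^/news/" NEWS_PAGE
        Header set X-Robots-Tag "noindex, follow" env=NEWS_PAGE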

    Read the article

  • AdSense link units targeting better keywords?

    - by Unavailable
    I have one link unit above my navigation which targets exact keywords for what my site is about; by contrast, my AdSense for content Leaderboard and Wide Skyscraper units show ads that are totally unrelated to my keyword. While my CTR increased roughly tenfold with the link unit, I noticed the CPC is lower as well (though that was based on just one day; today it is higher). Is it a bad idea to get rid of the standard content ads completely and use link units instead? Right now I earn very little from that site, and I suspect that is because of smart pricing (accidental clicks on ads my users are not interested in). In a whole day, with about 25-35 clicks, I earn as much as the keyword tool shows for a single click, even though my site ranks first for its topic. I really don't know what to do. Has anyone had a similar situation, or can anyone give some advice?

    Read the article

  • Joomla url issue with sh404SEF

    - by user5858
    I've been using sh404SEF with my site for a couple of months, but I'm getting URLs in the form: http://www.downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html?task=view If I remove the suffix (?task=view), it takes us to the same page. I raised this issue on the sh404SEF forum and was told that this data is treated as a parameter by search engines and hence ignored. I want to use a rewrite in .htaccess to redirect all such URLs to the URLs without ?task=view, i.e. ....downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html?task=view should be redirected to http://www.downloadformsindia.com/Income-Tax-Forms/all-income-tax-return-itr-forms-2010-2011.html So my question is: will this redirection create 404s in Google Webmaster Tools? I have thousands of pages on the site.
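
    A minimal .htaccess sketch of one way to do this with mod_rewrite (the match has to inspect the query string, which mod_alias cannot do); treat it as an illustration to adapt rather than an sh404SEF-specific fix:

        RewriteEngine On
        # If the query string contains task=view, redirect to the same path with the query stripped
        RewriteCond %{QUERY_STRING} (^|&)task=view(&|$)
        RewriteRule ^(.*)$ /$1? [R=301,L]

    A 301 that lands on an existing page does not create 404s; Webmaster Tools should simply report the old URLs as redirecting.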

    Read the article

  • Google launches Tag Manager, a free tool that simplifies managing tracking and marketing tags on websites

    Google launches Tag Manager, a free tool that simplifies managing tracking and marketing tags on websites. Google has just announced the launch of Google Tag Manager, its new tool for managing the various tags on a website. To better monetize their websites and control how their content is used, site managers rely on statistics and tracking tools such as Google Analytics. For each such service, a snippet of code has to be embedded in every page of the site. Although each snippet is relatively simple to use, the proliferation of these bits of code on a page can make them tedious to manage. Moreover, the requests...

    Read the article

  • Bad links point to old domain - should I disavow on new domain?

    - by user32573
    I am working with a site which we'll call www.newdomain.com, which was hit by Penguin this month despite no unusual practices. I found lots of really spammy links to their old site, www.olddomain.com, which 301s to the new domain. So I've gone through the process of identifying which links are really bad, made contact to ask for removal, and am now at the stage of disavowing links. But wait! None of the bad links point to newdomain.com, and I worry that a disavow request made via this domain in Webmaster Tools will damage something. Do the old bad links affect the new site? If so, where do I disavow those old bad links? In Webmaster Tools for the new domain?

    Read the article

  • How do I get the root index page to redirect to a subdirectory without affecting SEO?

    - by paradroid
    I am reviving/reorganising my personal WordPress blog. It uses a URL that looks like this: http://mydomain.com/blog The webserver 301-redirects www.mydomain.com to mydomain.com. I want to use the blog subdirectory because I plan to add other parts to the site, with the blog being only one part of it. However, at the moment there is nothing there but the blog, so I want the root index page to redirect to the blog for the time being. I have been using this on the root index.html page to do the redirect... <meta http-equiv="REFRESH" content="0;url=./blog"></HEAD> ...but this seems to have stopped the site being indexed by Google and Bing. How do I do this without affecting SEO? Also, what URL should I put in the sitemap.xml?
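
    A minimal sketch of the usual server-side alternative to the meta refresh (assuming Apache with mod_rewrite; a 302 fits a temporary arrangement, and the sitemap would then list the /blog URLs rather than the bare root):

        RewriteEngine On
        # Send only the bare root to the blog; leave every other path alone
        RewriteRule ^$ /blog/ [R=302,L]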

    Read the article

  • Can I redirect the HTTP request towards an old folder to the homepage using .htaccess file?

    - by AndreaNobili
    I have the following situation: I had an old blog built with Joomla (it was indexed well enough by search engines). Because of some problems I deleted it and created it again using WordPress. Now I get many visits (from Google) that lead to specific pages of the old site, pages that don't exist in the new version. For example, I get visits to URLs such as /scorejava/index.php/corso-spring-mvc/1-test that don't exist on my new site. I would like to know whether, using the .htaccess file (or some other mechanism), I can redirect the HTTP requests aimed at a subfolder that no longer exists to the homepage of my new site. For example, for a request to the dead URL /scorejava/index.php/corso-spring-mvc/1-test, I would write a regular expression that says something like: every request to the subfolder corso-spring-mvc (and all of its files and subfolders) has to be redirected to www.scorejava.com. Is that possible?
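
    A minimal .htaccess sketch of that idea using mod_alias (the exact pattern is an assumption, since it depends on how the old Joomla URLs were formed and where the .htaccess lives):

        # Send anything under the old corso-spring-mvc section to the new homepage
        RedirectMatch 301 ^/index\.php/corso-spring-mvc(/.*)?$ http://www.scorejava.com/

    RedirectMatch compares against the full URL path, so if the old URLs really began with /scorejava/, the pattern needs that prefix as well.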

    Read the article

  • Getting "server certificate verification failed" during apt-get update

    - by mydoghasworms
    I am trying to update a system using an HTTPS package mirror located here: https://mirror.ufs.ac.za/os/linux/distros/ubuntu/ubuntu/ However, during apt-get update, I get the following message:

        Packages server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none

    If you visit the site in your browser, you are warned about the site's certificate, but I trust the site, so it's not an issue for me. I assume I must be able to add this exception somewhere for apt to proceed. Can you tell me where and how?

    Read the article

  • How to Build Business Links With SEO Value

    To bring order and planning to a site and make it more recognizable to the surrounding business community, SEO link building calls for membership strategies built around a recognized hub of potential links. The starting point is to obtain a link from the national or regional chamber of commerce through its website. This gives the site a licensed look, and the association with an official body is a confidence boost for other prospective linkers. It also helps market the site as a recognized site that is safe to deal with.

    Read the article

  • Event Aggregator: not getting a response; how to determine completion?

    - by Duncan_m
    I'm rewriting a vehicle tracking application, a Google Maps based thing. Users are able to search for a vehicle by typing a few characters of the vehicle's callsign. My application is built around a sort of "event bus" within Backbone: when a search occurs, I send a message on the bus saying something like "does anyone match this?". If a marker matches the search term, it responds with a sort of "yes, I match!". My challenge arises when no one matches: I get no response at all, and it feels a little hacky to "wait a little while" and then check whether a response has been received. The application is based on Backbone.js and uses the Event Aggregator pattern described in the answer to this Stack Overflow question: http://stackoverflow.com/questions/7708195/access-function-in-one-view-from-another-in-backbone-js Is there a well-defined design pattern that might help here, i.e. sending a request and not getting any responses back?

    Read the article

  • Is the use of hashbang really a good idea? [on hold]

    - by user32642
    I've been working on a WordPress site lately that was designed with hashbangs (shebangs) in its dynamically generated URLs. After doing some research, I noticed that Google had expressed some preferences about their use and how it crawled the site. However, after I ran several sitemap generators and Screaming Frog SEO Spider, I realized that the only page being crawled was the index page. So now I am questioning the use of hashbangs. What do you think? Should I attempt to remove them? Or does it even matter? And does anyone know of an easy way to remove them? The site is www.modernvintage1005.com

    Read the article

  • How do I fix the paths of my website?

    - by EASI
    I have Joomla 2.5.7 on my client's server, recently updated from 1.6 to 1.7. I did not build that site, but I am responsible for it now; I would prefer to build a site from scratch. Now users are clicking the menu options, and when Joomla sends them to a URL like http://iap.pa.gov.br/acervo they get the message "404: The requested URL /acervo was not found on this server." Could that be because the Joomla folder was moved from the root to root/iap (the name of the site)? If so, what configuration needs to change to adapt to the new folder?
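
    If the install really was moved from the web root into an /iap/ subfolder, the lasting fix is usually inside Joomla itself: the $live_site value in configuration.php and the RewriteBase line in the copied .htaccess both need to reflect the new folder. As a stop-gap for links that still point at the old root-level paths, a redirect in the root .htaccess can forward them; this is a sketch with assumptions (Apache with mod_rewrite, and nothing else meant to live at the old root paths):

        RewriteEngine On
        # Forward old root-level URLs into the /iap/ subfolder
        RewriteCond %{REQUEST_URI} !^/iap/
        RewriteRule ^(.*)$ /iap/$1 [R=301,L]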

    Read the article

  • Why are new pages not being indexed and old pages stay in the index?

    - by ZakGottlieb
    I currently have a site that was recently restructured, causing much of its content to be reposted and creating new URLs for each page. To avoid duplicates, all of the existing pages were added to the robots file. That said, it has now been over a week - I know Google has recrawled the site - and when I search for term X, it is still the old page that is ranking, with the new one nowhere to be seen. I'm assuming it's a cached version, but why are so many of the old pages still appearing in the index? Furthermore, all "tags" pages (it's a Q&A site, like this one) were also added to the robots file a few months ago, yet I think they are all still appearing in the index. Does anyone have any ideas about why this is happening, and how I can get my new pages indexed?
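
    One point that often explains this: blocking the old URLs in robots.txt stops Google from recrawling them, but it does not remove them from the index, and it also hides any redirect or noindex placed on them. The more common approach is to leave the old URLs crawlable and 301 each one to its replacement. A minimal .htaccess sketch with purely hypothetical paths:

        # Old URL permanently moved to its restructured equivalent (paths are placeholders)
        Redirect 301 /old-section/some-page /new-section/some-page

    Once the redirects are in place and the robots.txt blocks are lifted, the old entries normally drop out of the index as they are recrawled.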

    Read the article

  • Can't load Wordpress static files on home network

    - by Tosho
    I've just installed WordPress 3.5 on my laptop (LAMP on Ubuntu 12.10), and when I try to access the site from my phone it doesn't load the static files (CSS and images). I tried Opera Mobile Emulator on my laptop and it works perfectly. I also have another Drupal site on my localhost which I can load from my phone without any issues. Both directories have chmod 777 permissions. What can cause this? I just tried to open the site from my sister's laptop as well: aside from the static files, I can't access any post or page from there either.

    Read the article

  • .htaccess mobile redirect issues

    - by val
    I'm trying to set up a mobile redirect for a site with two subfolders, and I cannot get both to work at the same time. This is the structure of the site: www.mysite.com/EN/ and www.mysite.com/ES/ It is a bilingual site, so each subfolder contains the files for one language version. I was using a 301 redirect and setting the index in /EN/ as the main index, so everything was redirected to it. I was using:

        DirectoryIndex index.html
        Redirect /index.html http://www.mysite.com/EN/index.html

    plus several RewriteCond rules to redirect mysite.com and old URLs to the new URLs. It worked fine until I decided to add a mobile version, m.mysite.com. I used the solution provided in http://stackoverflow.com/questions/3680463/mobile-redirect-using-htaccess, and it redirects the mobile version properly, but now the desktop version no longer works. Besides, my mobile version must be bilingual as well.
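
    A minimal .htaccess sketch of one way to keep the two concerns separate (the user-agent list, the 302 for the device redirect, and the EN default are assumptions to adapt):

        RewriteEngine On

        # 1. Device redirect: send phones to the mobile host, preserving the language folder
        RewriteCond %{HTTP_HOST} !^m\. [NC]
        RewriteCond %{HTTP_USER_AGENT} (android|iphone|ipod|blackberry|windows\sphone) [NC]
        RewriteRule ^(EN|ES)/(.*)$ http://m.mysite.com/$1/$2 [R=302,L]

        # 2. Desktop default: send the bare root to the English index
        RewriteCond %{HTTP_HOST} !^m\. [NC]
        RewriteRule ^$ /EN/ [R=301,L]

    Scoping the mobile rules with a host check (and ending each rule with [L]) keeps them from swallowing the desktop redirects.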

    Read the article

  • Display Google display ads to visitors who have visited certain web sites

    - by Source
    With Google AdWords remarketing, display ads are shown to visitors who have been to your site previously, so when they later go to a website that displays AdSense, a remarketing ad is likely to be shown to them. Is there a way to do the same thing when a visitor has visited a competitor's site? I.e., if a visitor goes to one of my competitors' sites, I want the display ads they see to be mine. Is that possible?

    Read the article

  • Crossbrowser issue - navigation-menu [closed]

    - by aztekk
    I'm having issues with cross-browser compatibility on the navigation menu for my site. The issue is that it's not working as expected in MSIE; it bugs out on mouseover. The site runs on WordPress and the theme is called GreenChilli. It's a free theme from MyThemeShop, and they don't seem to be very active in resolving free-theme issues on their forum. Can someone have a look and see whether this is an easy fix, or whether I may have to abandon this theme for something else? The site is: http://lamslagen.com

    Read the article

  • Is there any problem with using two slashes in the middle of a URL? [closed]

    - by joshuahedlund
    Possible Duplicate: What does the double slash mean in URLs? I'm working on a mod_rewrite URL structure as follows: http://example.com/search/filter1/filter2/filter3/filter4 There are some conditions where it is OK for the first attribute to be blank, but i want to keep the other attributes in the same position. (Otherwise I can't assume that the attribute in the second position represents what I want it to represent.) However this results in some URLs like this: http://example.com/search//filter2/filter3/filter4 This seems to work in all browsers I've tested (Chrome,Firefox,IE9,IE compatible) and I'm not seeing any errors on the server side, so I can't think of any problems in using it. But it just looks wrong and weird to me and I'm not used to seeing it. Are there any potential downsides to using a structure that encourages URLs like this, or any major reasons no one seems to use it? (Everything I search in Google assumes I'm talking about the two slashes after http:)

    Read the article

  • How do I forward/redirect a website from a folder in a subdomain to another server?

    - by dozza
    I have a client with a site at: subdomain.theirdomain.com/folder It's a 50 GB gallery site that I've now cloned and have locally in MAMP. Once I've made some changes to it, I need to host it on alternative physical server/hosting (I currently intend to use a dedicated server I have access to, with a technical domain name). However, the client would ideally like to keep the existing URL, as it has been used extensively in marketing. I've done HTTP redirects, forwards, and 301 redirects in the past, but I'm not sure how, or even if, I can do what the client wants. How can I achieve this, possibly using .htaccess and DNS entries? Caveats: I can't host the site at a third-party domain, and the client isn't able/allowed to register any additional domains.
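
    If the original host stays under your control, one way to keep the public URL while serving the content from the new machine is a reverse proxy on the old server. A minimal sketch, assuming Apache with mod_proxy/mod_proxy_http enabled and a purely hypothetical target hostname:

        # In the .htaccess (or vhost) for subdomain.theirdomain.com
        RewriteEngine On
        # Serve /folder/ transparently from the new server; the visitor's URL does not change
        RewriteRule ^folder/(.*)$ http://new-server.example.com/$1 [P,L]

    If proxying is not an option, the alternatives are a DNS change pointing subdomain.theirdomain.com at the new server (which keeps the URL) or a plain 301 redirect (which does not).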

    Read the article

  • Two HTML elements with same id attribute: How bad is it really?

    - by danludwig
    I was just browsing the Google Maps source code. In their header, they have two divs with id="search"; one contains the other, and the outer one also has a jstrack="1" attribute. There is a form separating them, like so:

        <div id="search" jstrack="1">
          <form action="/maps" id="...rest isn't important">
            ...
            <div id="search">...

    Since this is Google, I'm assuming it's not a mistake. So how bad can it really be to violate this rule? As long as you are careful with your CSS and DOM selection, why not reuse ids like classes? Does anyone do this on purpose, and if so, why?

    Read the article
