Search Results

Search found 9935 results on 398 pages for 'pages'.

Page 171/398

  • Googlebot can't access my site when crawling from the root domain

    - by PéCé
    I can't explain why I get this message for my root domain result in Google:

        trocmalin.com/
        A description for this result is not available because of this site's robots.txt – learn more.

    Here are my site's specifics:

        vide-greniers.trocmalin.com is the site address
        www.trocmalin.com redirects (301) to vide-greniers.trocmalin.com
        trocmalin.com also redirects (301) to vide-greniers.trocmalin.com

    And here is my robots.txt:

        User-agent: *
        Disallow: /orga/

        User-agent: *
        Disallow: /sitemap-update

    Google results for vide-greniers.trocmalin.com render correctly, as do the sub-pages that are allowed for bots. But the result for my root domain (trocmalin.com) shows this message. Can you help me?
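    A quick way to see how a crawler interprets that file is Python's standard-library robots.txt parser. The sketch below is purely diagnostic: the rules are copied from the question, and how duplicate "User-agent: *" groups are merged differs between parsers, which may itself be part of the problem.

        from urllib.robotparser import RobotFileParser

        # The robots.txt quoted above: two separate "User-agent: *" groups.
        rules = [
            "User-agent: *",
            "Disallow: /orga/",
            "",
            "User-agent: *",
            "Disallow: /sitemap-update",
        ]

        parser = RobotFileParser()
        parser.parse(rules)

        # Check which paths a generic crawler may fetch. Depending on the
        # parser, the second "User-agent: *" group may be ignored entirely.
        for path in ("/", "/orga/x", "/sitemap-update"):
            print(path, parser.can_fetch("*", path))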

  • Best practices for launching a new software version

    - by steve
    I rebuilt a web app to replace a version that we have been using for the last three or four years. We have a few thousand clients and a few hundred active users per day. The functionality is basically the same: the new version is a little faster, adds a few features, and contains a lot of behind-the-scenes changes that clients will never see. The UI is quite different, but ultimately much easier to use and navigate. How should I go about getting our clients to stop using the old system and start using the new one? I am currently putting together a video that will play on the website as well as within the app; it will go through the pages and focus on some key changes. I was also thinking about an intro page that displays once the user logs in and explains some of the features.

  • Constructor should generally not call methods

    - by Stefano Borini
    I described to a colleague why a constructor calling a method can be an antipattern. Example (in my rusty C++):

        class C {
        public:
            C(int foo);
            void setFoo(int foo);
        private:
            int foo;
        };

        C::C(int foo) {
            setFoo(foo);
        }

        void C::setFoo(int foo) {
            this->foo = foo;
        }

    I would like to support this claim better through your additional contributions. If you have examples, book references, blog pages, or names of principles, they would be very welcome. Edit: I'm talking in general, but we are coding in Python.
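    Since the poster says the team is actually coding in Python, here is a minimal sketch (class names invented for illustration) of the classic failure mode behind the principle: the base constructor calls an overridable method, and a subclass override runs before the subclass has initialized its own state.

        class Base:
            def __init__(self, foo):
                # Risky: set_foo() can be overridden, so this call may
                # dispatch into a subclass before that subclass's
                # __init__ has finished running.
                self.set_foo(foo)

            def set_foo(self, foo):
                self.foo = foo

        class Derived(Base):
            def __init__(self, foo, scale):
                super().__init__(foo)   # conventional order: parent first
                self.scale = scale

            def set_foo(self, foo):
                # Runs during Base.__init__, before self.scale exists.
                self.foo = foo * self.scale

        Derived(2, 10)  # raises AttributeError: 'Derived' object has no attribute 'scale'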

  • How to optimize a one-language website's SEO for foreign languages?

    - by moomoochoo
    DETAILS
    I have a website whose content is in English. It is a niche website with a global market, and I would like users to be able to find it using their own language. The scenario I envision is a searcher who wants the English content but searches in their own language; an example could be someone looking for "downloadable English crosswords."

    MY IDEAS
    Buy ccTLDs and have them permanently redirect to subdirectories on domain.com. The subdirectories would contain HTML sitemaps in the target language, e.g. domain.fr redirects to domain.com/fr. OR perhaps it would be better to maintain domain.fr as an independent site in the target language, with its HTML sitemap linking to pages on domain.com?

    QUESTION
    Are the above methods good or bad? What are some other ways I can optimize SEO for foreign languages?

  • URL rewriting on Plesk using ISAPI_Rewrite 3 Lite

    - by Anusha
    I am using a Plesk-based web server on Windows Server 2008 with IIS 6 for my e-commerce website. I want to rewrite the URLs for all dynamic pages, so I installed ISAPI_Rewrite 3 Lite on the web server and uploaded an .htaccess file with these basic rules:

        RewriteEngine on
        RewriteRule ^contact\.html$ contactus.php? [NC,R]

    I have never worked with ISAPI or URL rewriting before. My question is how to proceed after installation. Should I upload an .htaccess or an httpd.conf file? Or, since the software's ISAPI_Rewrite Manager provides a place to edit httpd.conf, should I write the rules there? I have tried all of these steps but unfortunately couldn't find a remedy. Any solution would be appreciated.

  • How can I clone or mirror a site without SEO penalties for duplicate content?

    - by Amanda
    I am a web developer and I want to create clones of the sites I've developed for clients, so that I have an "original copy" on a subdomain of my own website with which to showcase my work to new clients. What is the best way to avoid getting my clients' original websites penalised for duplicate content? I am planning to have a robots.txt file that disallows all robots, as well as putting <link href="http://www.client-canonical-site.com/" rel="canonical" /> in the <head> of the pages. Is that sufficient? Should I use rel="nofollow" on all the links as well?

  • Salting a public hash

    - by Sathvik
    Does it make any sense at all to salt a hash which might be available publicly? It doesn't really make sense to me, but does anyone actually do that?

    UPDATE (some more info): An acquaintance of mine has a common salted-hash function which he uses throughout his code, so I was wondering whether it makes any sense at all to do so. Here's the function he uses:

        hashlib.sha256(string + SALT).hexdigest()

    UPDATE 2: Sorry if it wasn't clear. By "available publicly" I meant that the hash is rendered in the HTML of the project (for linking, etc.) and can thus be easily read by a third party. The project is a Python-based web app with user-created pages that are tracked by their hashes, as in myproject.com/hash, thus revealing the hash publicly. So my question is whether, under any circumstances, a sane programmer would salt such a hash.

    Question: using hashlib.sha256(string + SALT).hexdigest() vs. hashlib.sha256(string).hexdigest(), when the hash isn't a secret.
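    For concreteness, a minimal sketch of the variants being compared. SALT here is an invented constant, and the hmac variant is included only to show the standard construction when the goal is an identifier outsiders cannot recompute:

        import hashlib
        import hmac

        SALT = b"hypothetical-site-wide-salt"

        def unsalted(data: bytes) -> str:
            # Anyone who knows the input can recompute this identifier.
            return hashlib.sha256(data).hexdigest()

        def salted(data: bytes) -> str:
            # The acquaintance's scheme: a fixed secret suffix. Outsiders who
            # know the input but not SALT cannot recompute the hash, but equal
            # inputs still map to equal hashes (the salt is shared, not per-item).
            return hashlib.sha256(data + SALT).hexdigest()

        def keyed(data: bytes) -> str:
            # The usual tool for an unforgeable public identifier is an HMAC.
            return hmac.new(SALT, data, hashlib.sha256).hexdigest()

        page = b"user-created page content"
        print(unsalted(page), salted(page), keyed(page), sep="\n")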

  • Switching to HTTPS - redirect question

    - by seengee
    Following the recent Google announcement about improved rankings for sites served over HTTPS, we have a number of clients asking about this. Is it safe to just 301-redirect all pages to their SSL equivalents, for example in a common PHP include file:

        if ($_SERVER['HTTPS'] != "on") {
            $redirect = "https://" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
            header("Location: $redirect", true, 301);
            exit();
        }

    I'm aware this is also possible within an .htaccess file, but that cannot be modified in our case. All internal links would be switched to https:// links, but we still need to sort out incoming links from Google and elsewhere. Is this a sound approach? Are there any other gotchas to be aware of?

  • Google Sites page never shows up in Google Search organic results?

    - by gus
    I use Google Sites (i.e. https://sites.google.com/site/EXAMPLE/ ) as a convenient way to maintain up-to-date info on several residential properties, info that's often requested by my property agents. The site has been around for about a year, but I can never get it to appear in the organic results on Google or Bing, even if I search for specific keywords such as the street names. I submitted the URL to the search engines manually, knowing that my Sites page probably has very few incoming links. Is this expected behavior? The content of my page is simple formatted text plus outgoing links to Picasa/G+/imgur photo albums. Am I doing something wrong, or do all Google Sites pages have poor organic search rank? Thank you very much.

  • WebFX: Running JavaFX as a web page

    - by Bruno.Borges
    This weekend I wanted to learn JavaFX, so I decided to code an idea I had a few years ago when I first saw JavaFX Script: a web browser that renders HTML with the awesome, HTML5-capable WebView. But this browser also offers one extra feature: it loads FXML files as if they were HTML. So instead of defining your web page with HTML and running it with WebKit, you can define a web page with FXML+CSS+JS and run it as a JavaFX application. The project is called WebFX and already has a prototype on GitHub. I also uploaded a video on YouTube demonstrating the idea. What do you think about using JavaFX for web pages in the future, instead of HTML?

  • Firefox to integrate Weave natively, offering encrypted, anonymous synchronization by default

    Update of 25/05/10: Firefox will integrate Weave natively to offer encrypted, anonymous synchronization by default. Weave is the first "Mozilla service". It is in fact an extension that synchronizes Firefox data, such as bookmarks and passwords (see above), across several machines. The Mozilla Foundation has just decided to include this service natively in its browser. Nothing new under the sun compared to the competition? Yes and no. No, because it is true that some of Firefox's competitors already offer this. ...

  • Social media exchange strategies

    - by Wladimir Ivanov
    Recently I've stumbled upon some Facebook/Twitter/G+ and other social-site tools which offer like-for-like exchanges. As I know from personal experimentation, following certain people/pages on Twitter also gains you followers. What's your opinion on this type of social-media exchange? (I know the fans/followers you get are only a number that can't help much with growing your site.) Which of these sites are proven to boost any statistics? Are there other, better exchange tactics? Thanks in advance.

  • When JDeveloper IDE doesn't render the visual editor

    - by Frank Nimphius
    Though with Oracle JDeveloper 11g the problem of the IDE not rendering JSF pages properly in the visual editor has become rare, there is always a way for the creative to break IDE functionality. A possible reason for the visual editor in JDeveloper to break is a failed dependency reference, which is often in a custom JSF PhaseListener configured in the faces-config.xml file. To keep this from happening, surround the code in your PhaseListener class with the following statement (for example in the afterPhase method):

        public void afterPhase(PhaseEvent phaseEvent) {
            if (!ADFContext.getCurrent().isDesigntime()) {
                // ... listener code here ...
            }
        }

    The reason the visual editor in Oracle JDeveloper fails to render the WYSIWYG view has to do with how the live preview is created. To produce the visual display of a view, JDeveloper actually runs the ADF Faces view in JSF, which then also invokes any defined PhaseListeners. With the code above, you check whether the PhaseListener code is executed at runtime or at design time; if it is executed at design time, you ignore all calls to external resources that are not available at design time.

  • Broken Links in different browsers

    - by kdorival
    Hi, I'm having problems with our website, http://www.accessiblehomehealthcare.com, which runs WordPress 2.7. All of a sudden our RSS links broke on the right side; this has happened before, and I fixed it within five minutes. Now when I fix it, it doesn't look right in different versions of IE or Firefox. I have IE 8 and Firefox 3.6.15, and the site looks good for the most part, but there are a few places where the links are broken. In one browser the links look fine, but go to another page and the links or logos are broken. Certain parts of the website should be static (identical) across all pages of the site, yet a link that is broken on one page is perfect on another. Is there a secret to keeping a WordPress site compatible with all browser versions, or is there a bigger issue? Any help or suggestions would be appreciated.

  • Multiple TOC with MediaWiki using section headings in single page

    - by user1704043
    I'm running my own installation of MediaWiki, which has been great! I haven't been able to find the answer to this small problem in any post or how-to. Here's the setup:

        Article TOC (limited to showing only H1 and H2)
        ==H1==
        ===H2===
        ====H3====
        ====H3====

    I don't want the H3 headings to show up in the main table of contents, because they would make the list very long. Instead, under each H2 I would like to display another TOC listing all the H3s beneath it. From my understanding, you cannot have multiple tables of contents on a single page. I've thought about making a template for each H2 that holds the H3 links, but that seems to duplicate a lot of work and create loads of pages. I'd love a template that pulls in all subsection names and spits them out, but I don't see how to do that. Alternatively, is there a way to enable multiple TOCs in a custom MediaWiki install that I'm missing? Even that would get closer to what I'm trying to do.

  • Will my site containing duplicate content be accepted into AdSense?

    - by user5858
    I have a new site, just over six months old, with 50 unique visitors daily. It has a good number of duplicated pages whose content is not copyrighted; for example, I've copied related companies' product FAQs "as is" into the site, and I'm not supposed to modify a company's product FAQs. I fear my account may be banned by AdSense if I submit it. So I want to know:

    1) whether I can submit it for an AdSense account;
    2) whether Google can penalize me, and in what way;
    3) how Google would come to know that the duplicate content on my site is not copyrighted.

  • href="x-default" for english version which isn't an auto-redirecting homepage or country selector?

    - by Noam
    For each URL on my site, I auto-redirect according to the Accept-Language request header. The site architecture is:

        English version: http://mydomain.com/page
        Spanish version: http://es.mydomain.com/page
        etc.

    The English version is displayed unless the header specifies a supported language other than en, in which case a redirect occurs. Google says this: "For language/country selectors or auto-redirecting homepages, you should add an annotation for the hreflang value x-default as well." My pages aren't language selectors, nor are they the homepage, but I am auto-redirecting. My question is: should my English version be hreflang="x-default", hreflang="en", or both?
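    For reference, the annotations Google describes are plain link elements, one per language variant of a page. A tiny sketch, generated in Python for concreteness (the URLs are the placeholders from the question; pointing x-default at the English URL is one possible reading, which is exactly what is being asked, not a confirmed answer):

        # Alternate-language annotations for one page, per Google's
        # hreflang documentation.
        variants = {
            "en": "http://mydomain.com/page",
            "es": "http://es.mydomain.com/page",
            "x-default": "http://mydomain.com/page",  # assumption, see above
        }

        for lang, url in variants.items():
            print('<link rel="alternate" hreflang="%s" href="%s" />' % (lang, url))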

  • 301 Redirects for regional variants of a homepage

    - by Adam Jenkin
    I am planning a website which has regional homepage variants, for example:

        mycompany.com/europe
        mycompany.com/us

    The rest of the site is region-agnostic, with content such as:

        mycompany.com/news
        mycompany.com/about-us
        etc.

    For homepage (.com) requests, I plan on redirecting users to the correct homepage variant (via 301). If I cannot determine the correct one, I will fall back to redirecting them to the US homepage (/us). From an SEO point of view, firstly, is this OK, or should I be doing anything more to make search engines aware of the regional differences? As crawlers are region-agnostic, I plan on directing them to the US page with a 301, or should I have something on the .com page for them to use? Given that the regional homepages will likely be the most-visited pages, they should show up as sitelinks when searching for mycompany (which I think is a good thing). Apologies for the slightly open question; I know anything SEO-related is more opinion and best practice than fact, but I am purely looking for advice.

  • How do I analyze vague Google Analytics data re traffic from Facebook?

    - by user6982
    We have one Facebook fan page and two personal profiles that could be sending traffic, and then there are the many Facebook pages of friends, etc. I am also running an ad campaign from my FB account for my husband's business, which is linked from his personal FB profile and his fan page. In Google Analytics for his business we get the following referring pages from Facebook:

        /ajax/emu/end.php, listed under facebook.com / referral
        /l.php (which is a not-found page at FB)
        /ajax/emu/end.php, listed under apps.facebook.com

    Both of the working links send me to the home page of my own profile, which is the account I am working from to create and review the FB ad campaign we are running. Is this information telling me anything useful at all? Is there a best practice for tracking and analyzing Facebook traffic that is a lot more granular? Thanks!

  • Matching my skills with Java and Web Programming

    - by John R
    Here is my main question: what is the most common way that Java is used in web development? The reason I ask: I am currently in the process of finding my first internship, and every employer has a separate set of languages, technologies, and acronyms they want their candidates to know. In school I did well with Java. As a hobby and interest I have developed a handful of web pages, widgets, scripts, etc. My university emphasized Java, C, and theory; my hobbies emphasize HTML, PHP, JavaScript, CSS, and a little jQuery. I can't learn a dozen different technologies to satisfy most prospective employers (in what is left of the summer), so I think my best bet is to combine my skills in Java with my interest in web development. That brings me back to my original question: what is the most common way that Java is used in web development?

  • What is the impact of a CMS on page load time versus a static site?

    - by PleaseStand
    I am creating a 20-page site that will go on shared hosting. Each page will be about 20 KB (including HTML, CSS, and images common to all pages). To avoid manually adding navigation elements to each page, I am considering using a CMS. However, I am concerned that on a busy server, using a CMS would make the site load more slowly. In a shared hosting environment where PHP is run as a CGI binary, how much does a CMS (WordPress, Drupal, etc.) generally affect page load time, compared to both "plain HTML" static sites and those using PHP as merely a templating language?

  • I have a large number of links on every page; for design reasons I want to keep them, but are they hurting my SEO?

    - by Callum Rexter
    The site is http://www.centralsaddlery.co.uk. We have other issues which we are tackling in terms of content, etc., but the question I have is: is my main navigation hurting us in SEO? It is a lot of links, and it's on a lot of pages. If so, is there a way to get Google to ignore the links below the top level? I had thought Google would see that the links are hidden by default and only shown on hover, but I can't verify this at all. We absolutely want to keep the menu; our customers like it and so do we, and we think it is quite usable, as we have a lot of products to look at. Any advice is appreciated (and any tips on any other part of our SEO are welcome too).

  • Optimization of a Hybrid Pagination Scheme

    - by Kaustubh Karkare
    I'm working on a web application using node.js in which I'm building a partial copy of the database on the client side to decrease the load on my server. Right now, I have a function like this (expressed as python-style pseudocode, but implemented in JavaScript):

        def get(table_name, primary_key):
            if primary_key in cache[table_name]:
                return cache[table_name][primary_key]
            else:
                x = get_data_from_server(table_name, primary_key)  # socket.io
                cache[table_name][primary_key] = x
                return x

    While this scheme works perfectly well for caching individual rows, I'd like to extend it to support paginated tables ordered by primary_key, loading additional data via the above function only for the current and possibly the adjacent pages. Now, I don't want to keep the list of primary keys on the server, to be retrieved every time I need to change the page (which, for reasons beyond the scope of this question, will happen very frequently), and keeping it on the client side, subject to real-time create/delete events from the server, doesn't seem like a good idea either, even after compression (using ranges instead of individual values). What is the best way to calculate which items are to be displayed on a random page, while minimizing the space requirements and the need for communication with the server?
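    To make the page-window idea concrete, here is a minimal sketch in the question's python-style pseudocode dialect. get_data_from_server is the poster's call; get_page_keys_from_server is a hypothetical range query (not in the original) that returns only the ordered primary keys for one page, so the client never has to hold the full key list:

        cache = {}        # table_name -> {primary_key: row}
        PAGE_SIZE = 50

        def get(table_name, primary_key):
            # The poster's row cache, with the assignment made valid Python.
            rows = cache.setdefault(table_name, {})
            if primary_key not in rows:
                rows[primary_key] = get_data_from_server(table_name, primary_key)
            return rows[primary_key]

        def get_page(table_name, page_number):
            # Hypothetical server call: fetch just this page's keys, ordered
            # by primary key, instead of the whole key list.
            keys = get_page_keys_from_server(table_name,
                                             offset=page_number * PAGE_SIZE,
                                             limit=PAGE_SIZE)
            return [get(table_name, key) for key in keys]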

  • Is it a bad practice to include all the enums in one file and use it in multiple classes?

    - by Bugster
    I'm an aspiring game developer; I work on occasional indie games, and for a while I've been doing something which seemed like a bad practice at first, but I really want to get an answer from some experienced programmers here. Let's say I have a file called enumList.h where I declare all the enums I want to use in my game:

        // enumList.h
        enum materials_t { WOOD, STONE, ETC };
        enum entity_t { PLAYER, MONSTER };
        enum map_t { MAP_2D, MAP_3D };  // enumerators can't begin with a digit
        // and so on.

        // Tile.h
        #include "enumList.h"
        #include <vector>

        class tile {
            // stuff
        };

    The main idea is that I declare all the enums in the game in one file and then import that file when I need a certain enum from it, rather than declaring the enum in the file where I use it. I do this because it keeps things clean: I can access every enum in one place rather than having pages open solely for accessing one enum. Is this a bad practice, and can it affect performance in any way?

  • Redirect error in Google Webmaster Tools report

    - by Aurelio De Rosa
    I built a CMS and used it to create the website http://www.tkdmontecatini.com. After a few days, Google Webmaster Tools started reporting several "Redirect error" entries for pages such as the following:

        http://www.tkdmontecatini.com/it/photogallery
        http://www.tkdmontecatini.com/it/pagina/9/Informazioni/Corsi/Chi-Siamo
        http://www.tkdmontecatini.com/it/pagina/2/Informazioni/Eventi/Eventi

    The funny things are:

        If I access those links from a browser, everything is fine and there are no redirect loops or similar issues
        If I use the "Fetch as Googlebot" function, I get a "Success" result

    Question: any idea why this happens and how I can fix it?
