Search Results

Search found 9960 results on 399 pages for 'iwork pages'.


  • Switching to HTTPS - redirect question

    - by seengee
    Following Google's recent announcement that sites served over HTTPS will get a ranking boost, we have a number of clients asking about switching. Is it safe to simply 301 redirect all pages to their SSL equivalent, for example in a common PHP include file?

        if ($_SERVER['HTTPS'] != "on") {
            $redirect = "https://" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
            header("Location: $redirect", true, 301);
            exit();
        }

    I'm aware this is also possible within a .htaccess file, but that cannot be modified in our case. All internal links would be switched to https:// links, but incoming links from Google and elsewhere also need to be handled. Is this a sound approach? Are there any other gotchas to be aware of?
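
    For comparison, the .htaccess route mentioned above (not an option in this case, but shown for completeness) would look roughly like this with Apache's mod_rewrite; a sketch using standard directives, not taken from the question:

        RewriteEngine On
        # Redirect any non-HTTPS request to the same URL over HTTPS.
        RewriteCond %{HTTPS} !=on
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]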

    Read the article

  • Where have the Direct3D 11 tutorials on MSDN gone?

    - by Cam Jackson
    I've had this tutorial bookmarked for ages. I've just decided to give DX11 a real go, so I've worked through that tutorial, but I can't find the next one in the series! There are no links from that page to the next tutorial in the series or back up to the table of contents that lists all of them. These are just companion tutorials to the samples that come with the SDK, but I find them very helpful. Searching MSDN from Google and from the MSDN Bing search box has turned up nothing; it's as if all links to these tutorials have been removed, but the pages are still there if you have the URLs. Unfortunately, MSDN URLs are akin to YouTube URLs, so I can't just guess the URL of the next tutorial. Does anyone have any idea what happened to these tutorials, or how I can find the others?

    Read the article

  • How do I deal with content scrapers? [closed]

    - by aem
    Possible Duplicate: How to protect SHTML pages from crawlers/spiders/scrapers? My Heroku (Bamboo) app has been getting a bunch of hits from a scraper identifying itself as GSLFBot. Googling for that name produces various results from people who've concluded that it doesn't respect robots.txt (e.g., http://www.0sw.com/archives/96). I'm considering updating my app to keep a list of banned user agents, serve all requests from those user agents a 400 or similar, and add GSLFBot to that list. Is that an effective technique, and if not, what should I do instead? (As a side note, it seems weird for an abusive scraper to use a distinctive user agent.)
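
    As a rough illustration of the ban-list idea, here is a minimal sketch written as Express middleware in TypeScript; the framework, names, and exact status code are assumptions for the example, not details from the question:

        import express from "express";

        // Hypothetical ban list; GSLFBot is the scraper named above.
        const BANNED_USER_AGENTS = ["GSLFBot"];

        const app = express();

        // Reject requests from banned user agents before any other handling.
        app.use((req, res, next) => {
          const ua = req.get("User-Agent") ?? "";
          if (BANNED_USER_AGENTS.some((banned) => ua.includes(banned))) {
            res.status(400).send("Bad Request");
            return;
          }
          next();
        });

        // ...routes go here...
        app.listen(3000);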

    Read the article

  • How can I clone or mirror a site without SEO penalties for duplicate content?

    - by Amanda
    I am a web developer and I want to create clones of the sites I've developed for clients, so that I have an "original copy" on a subdomain of my own website to showcase my work to new clients. What is the best way to avoid getting my clients' original websites penalised for duplicate content? I am planning to have a robots.txt file that disallows all robots, as well as using <link href="http://www.client-canonical-site.com/" rel="canonical" /> in the <head> of the pages. Is that sufficient? Should I use rel=nofollow on all the links as well?
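
    Concretely, the setup described above would look like this on the mirror subdomain (the domain name is the placeholder from the question); a sketch of the stated plan, not a claim that it is sufficient:

        # robots.txt on the mirror subdomain: block all crawlers.
        User-agent: *
        Disallow: /

    and in the <head> of each mirrored page:

        <link rel="canonical" href="http://www.client-canonical-site.com/" />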

    Read the article

  • Optimization of a Hybrid Pagination Scheme

    - by Kaustubh Karkare
    I'm working on a Web Application using node.js in which I'm building a partial copy of the database on the client side to decrease the load on my server. Right now, I have a function like this (expressed as Python-style pseudocode, but implemented in JavaScript):

        get(table_name, primary_key):
            if primary_key in cache[table_name]:
                return cache[table_name][primary_key]
            else:
                x = get_data_from_server(table_name, primary_key)  # socket.io
                cache[table_name][primary_key] = x
                return x

    While this scheme works perfectly well for caching individual rows, I'd like to extend it to support the creation of paginated tables ordered according to the primary_key, loading additional data via the above function only for the current and possibly the adjacent pages. Now, I don't want to keep the list of primary keys on the server, to be retrieved every time I need to change the page (which, for reasons beyond the scope here, will happen very frequently). Keeping it on the client side, subject to real-time create/delete events from the server, doesn't seem like a good idea either, even after compression (using ranges instead of individual values). What is the best way to calculate which items are to be displayed on an arbitrary page, minimizing the space requirements and the need for communication with the server?

    Read the article

  • SEO in 10 minutes: Google gives its recipe for optimizing a website's search engine ranking

    SEO in 10 minutes: Google gives its recipe for optimizing a website's search engine ranking. Google has just published a 10-minute video explaining the basics of SEO (search engine optimization) for startups. Optimizing a website for search engines while complying with Google's guidelines can be a real challenge for companies. In the video "SEO for startups in under 10 minutes", Maile Ohye, Google Developer Advocate, gives a 10-minute recipe for optimizing a small site of fewer than 100 pages. The video covers the key concepts of optimizing a site's ranking, such as redirects, the str...

    Read the article

  • How mature is FreeBASIC?

    - by David
    A friend of mine is considering using FreeBASIC in a critical production environment. They currently use GW-BASIC, and they want to make a soft transition towards more modern languages. I am just worried that there might be undetected bugs in the software. I see that its version number is 0.22.0, which indicates that it is not quite mature yet. I also read this discussion, without being able to reach a conclusion. Also, on their SourceForge pages there is no indication of whether it is alpha or beta (which anyway is not a very good indicator). Does anyone have their own experience of its maturity, ideas on how to judge the maturity, or knowledge of companies using FreeBASIC in a critical production environment?

    Read the article

  • Matching my skills with Java and Web Programming

    - by John R
    Here is my main question: what is the most common way that Java is used in web development? The reason I ask: I am currently in the process of finding my first internship, and every employer has a separate set of languages, technologies and acronyms they want their candidates to know. In school I did well with Java. As a hobby and interest I have developed a handful of web pages, widgets, scripts, etc. My university emphasized Java, C and theory; my hobbies emphasize HTML, PHP, JavaScript, CSS, and a little jQuery. I can't learn a dozen different technologies to satisfy most prospective employers (in what is left of the summer), so I think my best bet is to combine my skills in Java with my interests in web development. That brings me back to my original question: what is the most common way that Java is used in web development?

    Read the article

  • Will my site containing duplicate content be accepted into AdSense?

    - by user5858
    I have a new site, just over 6 months old, with 50 unique visitors daily. It has a good amount of duplicate pages whose content is not copyrighted. For example, I've copied related companies' product FAQs "as is" into the site; moreover, I'm not supposed to modify a company's product FAQs. I fear my login may be banned by AdSense if I submit it. So I want to know: 1) whether I can submit it for an AdSense account; 2) whether Google can penalize me, and in what way; 3) how Google would come to know that the duplicate content on my site is not copyrighted.

    Read the article

  • Google Sites page never shows up in Google Search organic results?

    - by gus
    I use Google Sites (i.e., https://sites.google.com/site/EXAMPLE/ ) as a convenient way to maintain up-to-date info on several residential properties, info that's often requested by my property agents. The site has been around for about a year, but I can never get it to appear in organic Google or Bing search results, even if I search for specific keywords such as the street names. I submitted the URL manually to the search engines, knowing that my Sites page probably has very few incoming links. Is this expected behavior? The content of my page is simple formatted text, plus outgoing links to Picasa/G+/imgur photo albums. Am I doing something wrong, or do all Google Sites pages have poor organic search rank? Thank you very much.

    Read the article

  • Problem connecting to wifi at long range

    - by user171849
    I am using a Compaq 8510p with internal wifi. The campground supplies an open wifi hotspot, to which I can connect at close range (30 ft) but not at longer range (300 ft). Connecting a USB dongle just confuses things: the dongle tries to lock on to all the wifi networks in my vicinity, and even though they are password protected, my laptop still tries to connect to them. I connected a cantenna via the USB port and got a signal that said I was connected, but all web pages returned the error 'unable to connect to server', despite three bars showing on the wifi icon. I believe this is because the installed wifi card interferes with the USB dongle. If so, what can I do about it? I am using Ubuntu 12.04.

    Read the article

  • Tab navigation and duplicate content

    - by Guisasso
    I have a website in which I use tabs to navigate between pages. For example, page A displays A as the active tab, with B and C as background tabs. If the visitor arrives at the website via page B, I would also like to display page D, but not A and C. Question: I know I can just create a second index page for B, for example, so that when the visitor reaches B from A, I display tabs A, B and C, and when the visitor reaches B from D, I display the other set. Is that bad practice? I know duplicate content isn't good, but in what other way can or should I approach this problem? The tab navigation I designed uses <li> elements and an id attribute to mark the active tab, defined in the <body> tag.

    Read the article

  • I have a large number of links on every page; for design reasons I want to keep them, but are they hurting my SEO?

    - by Callum Rexter
    The site is http://www.centralsaddlery.co.uk. We have other issues which we are tackling in terms of content etc., but the question I have is: is my main navigation hurting us in SEO? It's a lot of links, and it's on a lot of pages. If so, what is a way to get Google to ignore the links below the top level? I had thought Google would see that the links are hidden by default and only shown on hover, but I can't verify this at all. We absolutely want to keep the menu; our customers like it and so do we, and we think it is pretty usable since we have a lot of products to look at. Any advice is appreciated (and any tips for any other part of the SEO are welcome too).

    Read the article

  • 301 Redirects for regional variants of a homepage

    - by Adam Jenkin
    I am planning on implementing a website which has regional homepage variants, for example mycompany.com/europe and mycompany.com/us. The rest of the site is region-agnostic, with content such as mycompany.com/news, mycompany.com/about-us, etc. For homepage (.com) requests, I plan on redirecting users to the correct homepage variant (via 301). If I cannot determine the correct one, I will fall back to redirecting them to the US homepage (/us). From an SEO point of view, firstly, is this OK? Or should I be doing anything in addition to make search engines aware of the regional differences? As crawlers are region-agnostic, I plan on directing them to the US page with a 301; or should I have something on the .com page for them to use? Given that the regional homepages will likely be the most visited pages, they should show up as sitelinks when searching for mycompany (which I think is a good thing). Apologies for the slightly open question - I know anything SEO-related is more opinion/best practice than fact, but I am purely looking for advice.
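
    To make the plan concrete, here is a minimal sketch of the homepage redirect in TypeScript with Express; the framework and the Accept-Language region guess are assumptions for illustration, not details from the question:

        import express from "express";

        const app = express();

        // 301 homepage requests to a regional variant; fall back to /us.
        app.get("/", (req, res) => {
          const lang = (req.get("Accept-Language") ?? "").toLowerCase();
          // Crude region guess, for illustration only.
          const isEuropean = /\b(de|fr|es|it|nl|en-gb)\b/.test(lang);
          res.redirect(301, isEuropean ? "/europe" : "/us");
        });

        app.listen(3000);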

    Read the article

  • hreflang="x-default" for an English version which isn't an auto-redirecting homepage or country selector?

    - by Noam
    For each URL on my site, I'm auto-redirecting according to the Accept-Language header. The site architecture is: English version at http://mydomain.com/page, Spanish version at http://es.mydomaina.com/page, etc. The English version is displayed unless the header specifies a language other than en that I support, in which case a redirect occurs. Google says this: "For language/country selectors or auto-redirecting homepages, you should add an annotation for the hreflang value "x-default" as well." My pages aren't language selectors, nor are they the homepage, but I am auto-redirecting. My question is: should my English version be hreflang="x-default", hreflang="en", or both?
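
    For reference, hreflang annotations are plain <link> elements in the <head>, and the same URL can carry more than one value; a sketch using the URLs from the question (whether en and x-default should indeed point at the same page is exactly what is being asked):

        <link rel="alternate" hreflang="en" href="http://mydomain.com/page" />
        <link rel="alternate" hreflang="es" href="http://es.mydomaina.com/page" />
        <!-- x-default marks the fallback for visitors whose language is not listed. -->
        <link rel="alternate" hreflang="x-default" href="http://mydomain.com/page" />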

    Read the article

  • Chromium and Google Chrome downloading

    - by user286166
    Well, I have sincerely enjoyed having Google Chrome on my Linux machine, and had never had any problems until recently, when I found that Google Chrome simply would not allow redirected downloads to start. I can download from direct links, but not from pages that redirect and say "Start in 3 seconds...". I immediately assumed it was the webpage itself, so I refreshed many times. I then restarted the browser and, after another failed attempt, my computer. At that point, I suspected my internet provider was to blame, so I tried the redirection link in an alternative browser (Midori), and it worked perfectly fine. I then decided it must be the version of Chrome that Google put out, so I quickly installed Chromium, and to my dismay, ran across the same problem. I can live with copying and pasting the URL into Midori for redirected links, but I'd like the convenience of staying in my main browser. Thank you for any advice in advance. c:

    Read the article

  • How do I reuse a state machine in a slightly different way?

    - by JoJo
    Problem: I have a big state machine. The design requirements of the project have changed such that I need to re-use this state machine in another place. All the states remain the same in this new place, but a few states run slightly different stuff. What design pattern allows me to reuse this state machine?

    Motivation: I am building a video player. It is modeled by a state machine with these states: stopped, loading, playing, paused, crashed, and some more... This video player needs to be used on two web pages. When the player crashes on the first page, it should show an error message below. If the player crashes on the second page, the error message should appear in the center of the video and pulsate a few times.
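
    One common answer is to keep the machine itself unchanged and inject the page-specific behavior, essentially the strategy pattern. A minimal sketch in TypeScript; all names are illustrative, not taken from the question:

        // States shared by both pages.
        type PlayerState = "stopped" | "loading" | "playing" | "paused" | "crashed";

        // The page-specific part of a state is injected, so the machine never changes.
        interface CrashPresenter {
          showError(message: string): void;
        }

        class VideoPlayer {
          private state: PlayerState = "stopped";

          constructor(private presenter: CrashPresenter) {}

          crash(message: string): void {
            this.state = "crashed";
            // Delegate what "crashed" looks like to the injected strategy.
            this.presenter.showError(message);
          }
        }

        // Page 1: error message below the player.
        const player1 = new VideoPlayer({
          showError: (m) => console.log(`below player: ${m}`),
        });

        // Page 2: pulsating message centered on the video.
        const player2 = new VideoPlayer({
          showError: (m) => console.log(`centered, pulsating: ${m}`),
        });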

    Read the article

  • Redirect error in Google Webmaster Tools report

    - by Aurelio De Rosa
    I built a CMS and used it to create the website http://www.tkdmontecatini.com. After a few days, Google Webmaster Tools started giving me several "Redirect error" reports on pages like the following: http://www.tkdmontecatini.com/it/photogallery, http://www.tkdmontecatini.com/it/pagina/9/Informazioni/Corsi/Chi-Siamo, http://www.tkdmontecatini.com/it/pagina/2/Informazioni/Eventi/Eventi. The funny things are: if I access those links from a browser, everything is fine and there are no redirect loops or similar issues; and if I use the "Fetch as Googlebot" function, I get a "Success" result. Question: any idea why this happens and how I can fix it?

    Read the article

  • Clean SOAP Calls from iOS - SudzC

    - by Richard Jones
    This is worth another mention. If you need to call SOAP web services from iOS or JavaScript (and let's face it, who doesn't?), http://SudzC.com really delivers. You give it the URL of your WSDL file (or upload a file) and it just spits out a ready-to-go Xcode project. I would point out that to get it to work 100% I changed line 204 in Soap.m (the commented-out line is the old version; mine is below it):

        // if ([child respondsToSelector:@selector(name)] && [[child name] isEqual: name]) {
        if ([child respondsToSelector:@selector(name)] && [[child name] hasSuffix: name]) {

    I consumed a Microsoft Dynamics NAV set of web-service pages with no problem (and those tend to be fairly complex WSDL definitions).

    Read the article

  • LibreOffice prints everything half size

    - by oldmankit
    Ever since I printed a document from Evince in landscape orientation, LibreOffice Calc / Writer refuses to print anything correctly. It is scaling everything by 0.5x and rotating it. To get the same effect, go to Print > Page Layout tab > Pages per sheet = 2. However, I don't want 2; I want 1. I have whizzed through all of the printing options a few times but can't get to the bottom of this. I've restarted my printer and computer several times. I have done sudo apt-get purge libreoffice* and reinstalled again. Still it will not print full size. (Of course I have checked that the page settings are set to A4 and portrait.) Update: this problem is specific to one printer only, and to printing from within LibreOffice (Calc, Writer). With other printers / programs I have no problems.

    Read the article

  • Which points should you consider before going mobile? The ASP.NET CMS DotNetNuke answers in a white paper

    Discover the six points to consider before going mobile, according to DotNetNuke, in a white paper published by the ASP.NET CMS. The mobile sector is booming: by 2014, the number of pages viewed by people using mobile devices will exceed the number of visits made from desktop and laptop computers, according to an estimate by Morgan Stanley Research. To maintain their presence on the Internet, companies should therefore start thinking now about providing streamlined, easy access to information for customers using smartphones and tablets. How to prepare your mobile presence...

    Read the article

  • Why does Google ignore my links page?

    - by Yaniv
    I have a website where I'm loading all the data via AJAX. Since Google doesn't work with AJAX, and the ways to make a site AJAX-friendly are a bit odd, I thought that creating a links page that links, from the server side, to all the URLs I load via AJAX would solve the problem. Unfortunately, that doesn't seem to work: Google Webmaster Tools shows that even though my links page was discovered, its content (the links) is totally ignored. I can only assume that Google tends to ignore links on such pages. My question is: why? And furthermore, how can I overcome this? Thanks.

    Read the article

  • Clarity around Advanced Segment definition

    - by Btibert3
    I am hoping to get some clarity around an advanced segment I created. For context, our website spans multiple domains. For reasons I won't get into, I created an advanced segment that looks for pages containing my subdomain of interest (subdomain.site.com). I want to ensure that my interpretation of this advanced segment is accurate: simply put, does it flag all visits to our entire domain that viewed at least one page on my subdomain of interest? If I am off, what does this advanced segment represent? Many thanks in advance!

    Read the article

  • 20 Years of Solaris - 25 Years of SPARC!

    - by Stefan Hinker
    I don't usually duplicate what can be found elsewhere, but this is worth an exception.

        20 Years of Solaris - Guess who got all those innovation awards!
        25 Years of SPARC - And the future has just begun :-)

    Check out those pages for some links pointing to the past and, more interesting, to the future... There are also some nice videos: 20 Years of Solaris - 25 Years of SPARC. (Come to think of it - I got to be part of all but the first 4 years of Solaris. I must be getting older...)

    Read the article
