Search Results

Search found 10402 results on 417 pages for 'macbook pro'.


  • How to set up an offline manifest for a web app to run in Safari on iOS?

    - by ahmd1
    I'm currently trying to set up an offline.manifest file so my web app can be used offline on an iOS device. For testing purposes I have a very simple HTML page that I'm trying to add to the home screen. I'm testing it on a live iPhone 4, but after the page is added to the home screen and I put the iPhone in airplane mode and try to start my web app I get this error: "Turn Off Airplane Mode or Use Wi-Fi to Access Data", and if I click OK I get: "Cannot Open Web App Name" / "Web App Name could not be opened because it is not connected to the Internet". The following is added to the HTML file:

        <!DOCTYPE html>
        <html lang="en" manifest="scrts/offline.manifest">

    and the offline.manifest is composed as follows:

        CACHE MANIFEST
        ../pics/bkgnd_iphn_settings.png
        ../pics/mbl_btn_fb.png
        ../pics/mbl_btn_twt.png
        ../pics/icon_57_57_bg.png
        ../pics/icon_72_72_bg.png
        ../pics/icon_114_114_bg.png
        ../pics/icon_144_144_bg.png
        ../pics/splash_320_460_bg.png
        ../pics/splash_768_1004_bg.png
        ../pics/splash_1004_768_bg.png

    I got all instructions on composing it from here. I also adjusted the .htaccess file to add this line:

        AddType text/cache-manifest .manifest

    Any idea what I'm not doing right?
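
    A common failure mode with cache manifests is the manifest being served with the wrong Content-Type, or the ../ paths not resolving the same way from the manifest's location as from the page. Below is a minimal sketch, assuming the host allows PHP, of serving the manifest through a script so the MIME type is guaranteed even if the .htaccess AddType line is ignored; the file name offline.php and the shortened asset list are illustrative, not the asker's actual setup:

        <?php
        // offline.php -- serves an HTML5 cache manifest with the correct MIME type.
        // Referenced from the page as <html manifest="scrts/offline.php">.
        header('Content-Type: text/cache-manifest');

        // Root-relative paths avoid ambiguity about what ../ resolves to.
        $assets = array(
            '/pics/bkgnd_iphn_settings.png',
            '/pics/icon_57_57_bg.png',
            '/pics/splash_320_460_bg.png',
        );

        echo "CACHE MANIFEST\n";
        echo "# version 1 -- change this comment to force clients to refresh the cache\n";
        foreach ($assets as $asset) {
            echo $asset . "\n";
        }
        echo "NETWORK:\n*\n";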

    Read the article

  • Do Not Track feature of IE10

    - by Pete Herbert Penito
    One of our clients is getting a bit worried about the new "Do Not Track" feature of Internet Explorer 10. Her site is heavily dependent on PHP sessions (as I imagine many other sites are). This is what she was reading: http://www.bbc.co.uk/news/technology-18288710 I need some clarification: will this affect how sessions (or cookies) work on normal websites that use the PHP $_SESSION array? Or does it only concern how advertising works (Engadget's article seems to insinuate this)? Can anyone provide a more technical overview of the ramifications for PHP-powered websites?
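
    For what it's worth, "Do Not Track" is just an HTTP request header (DNT: 1) expressing a preference; the browser still accepts cookies, so $_SESSION keeps working unless the site itself decides to honour the flag. A minimal sketch, purely illustrative and not from the question, of reading the header in PHP:

        <?php
        session_start();   // the session cookie is still set and sent as usual

        // IE10's Do Not Track preference arrives as the DNT request header.
        $doNotTrack = isset($_SERVER['HTTP_DNT']) && $_SERVER['HTTP_DNT'] === '1';

        if ($doNotTrack) {
            // Honouring it is a policy decision, e.g. skip analytics or ad beacons.
            // Session handling itself is unaffected.
        }

        $_SESSION['visits'] = isset($_SESSION['visits']) ? $_SESSION['visits'] + 1 : 1;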

    Read the article

  • Parse text file on click and display

    - by John R
    I am thinking of a methodology for rapid retrieval of code snippets. I imagine an HTML table set up like this:

                 one        two        ...   ten
        one                 oneTwo()         oneTen()
        two      twoOne()                    twoTen()
        ...
        ten      tenOne()   tenTwo()

    When a user clicks a function in this HTML table, a snippet of code is shown in another div tag or perhaps a popup window (I'm open to different solutions). I want to maintain only one PHP file named utilities.php that contains a class called 'util'. This file & class will hold all the functions referenced in the above table (it is also used on various projects and is functional code). A key idea is that I do not want to update the HTML documentation every time I write or update a function in utilities.php. I should be able to click a function in the table and have PHP open the utilities file, parse out the appropriate function and display it in an HTML window. Questions: 1) I will be coding this in PHP and JavaScript but am wondering if similar scripts are available (for all or part) so I don't reinvent the wheel. 2) Quick & easy Ajax suggestions are appreciated too (I'll probably use jQuery, but am rusty). 3) What methodology should I use for parsing the functions out of the utilities.php file (I'm not too good with regex)?
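
    One possible answer to question 3 that avoids regex entirely: use PHP's Reflection API to find where a method of the util class starts and ends, then read exactly those lines from utilities.php. A hedged sketch under the naming from the question; the snippet.php endpoint and its ?fn= parameter are assumptions:

        <?php
        // snippet.php?fn=oneTwo -- returns the source of one method of class util.
        require_once 'utilities.php';

        function getSnippet($methodName)
        {
            $method = new ReflectionMethod('util', $methodName);
            $lines  = file($method->getFileName());   // source file as an array of lines
            $start  = $method->getStartLine() - 1;    // convert to 0-based index
            $length = $method->getEndLine() - $start;
            return implode('', array_slice($lines, $start, $length));
        }

        $fn = isset($_GET['fn']) ? $_GET['fn'] : '';
        if (preg_match('/^[A-Za-z0-9_]+$/', $fn) && method_exists('util', $fn)) {
            header('Content-Type: text/plain; charset=utf-8');
            echo getSnippet($fn);   // the Ajax caller drops this into a div or popup
        }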

    Read the article

  • Where can I safely search domain whois without worrying about the lookup service parking on the domain immediately after the search?

    - by Evan Plaice
    There are a lot of companies that provide domain whois lookups, but I've heard of a lot of people who had bad experiences where the domain was bought soon after the whois search and the price was increased dramatically. Where can I get access to a domain whois service without having to worry about that happening? Update: Apparently, the official name for this practice is Domain Front Running, and some sites go as far as to create explicit policies stating that they don't do it. This is where a domain registrar or an intermediary (like a domain lookup site) mines the searches for possibly attractive domains and then either sells the data to a third party, or goes ahead and registers the name themselves ahead of you. In one case a registrar took advantage of what's known as the "grace period" and registered every single domain users looked up through them, holding on to them for 5 days before releasing them back into the pool at no cost to themselves. Source: domainwarning.com. And apparently, after ICANN was notified of the practice, they wrote it off as a coincidence of random 'domain tasting'. Source: See for yourself

    Read the article

  • Getting SSL certificate for a sub-domain

    - by Hemant
    Our company owns a domain, say www.mycompany.com. I understand that it is trivial to get an SSL certificate for the above domain since we do have a website running at that address. We want a certificate for a subdomain, say sub.mycompany.com. We intend to use this subdomain on our organisation's network only and have no plans to publish a public website on it. So the question is: is it necessary to have a DNS entry for the subdomain, resolving to our IP address, and to host some page at that address? I hope that by proving the main domain is under our control, we can get an SSL certificate for the subdomain as well. Is that possible?

    Read the article

  • Is the Facebook Like JavaScript related to an increase in "Time spent downloading a page" in GWT?

    - by donaldthe
    I installed the Facebook Like button (the JavaScript/XFBML version) on my website on December 15th. Take a look at the crawl stats report from Google Webmaster Central (Googlebot activity in the last 90 days). The crawl stats come from Googlebot, which as far as I know doesn't execute JavaScript. Could the Facebook Like JavaScript code, "the XFBML version", be related to the large spike in "Time spent downloading a page"? (By the way, the huge spike in November was caused by a mistake where every image request was getting a 301.) I'm not sure what caused the spike to drop by half somewhere in December; it may have been related to a faulty setting in web.config. I'm at a loss as to what I can do about this, or even how to tell whether this is my problem or Googlebot's crawl problem. Here is the Facebook code I am using to create the Like button. It is right after the opening body tag:

        <div id="fb-root"></div>
        <script>
          window.fbAsyncInit = function() {
            FB.init({appId: 'xxxxx', status: true, cookie: true, xfbml: true});
          };
          (function() {
            var e = document.createElement('script');
            e.async = true;
            e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
            document.getElementById('fb-root').appendChild(e);
          }());
        </script>

    and this creates the Like box:

        <fb:like show_faces="false"></fb:like>

    If the JavaScript can't be the problem, any ideas on where to start looking would be appreciated.

    Read the article

  • How to fix this navigation issue in my site?

    - by David
    First off, I use webs.com for the creation of my site. I have a very basic layout: a list of links on the left and content on the right, with a heading up top. Every link in that list is an article that I wrote; I have about 25 links going down the left-hand side of my site. The problem is that when I try out new themes that support horizontal navigation as opposed to vertical navigation, I get either a messy overflow of links or a link called "more" which lists the rest of the articles in a drop-down list across my site. What I wish I had was a simple horizontal navigation like "home, about, articles", where clicking "articles" brings the user to a page containing all my articles. I would prefer that page to be a table-like display, so it's not one long list. Anyway, any ideas on how I can fix this issue I'm having? Please let me know if you need more information.

    Read the article

  • Question about headings for professionals, <h1> ... <h9>: SEO and browser-compatibility differences

    - by Sam
    We all know the importance and significance of headings for professional webmasters. These have been known to professional developers as <h1>Heading 1</h1>, h2 ... h6. As a daring web developer I lately needed more short headings for a complex structured document, so I thought "what the hell", went ahead, and used in CSS: h1,h2,h3,h4,h5,h6{ } h7{ } h8{ } h9{ }. My experiment turned out to pay off, but only in Firefox, Safari, Chrome etc., not in Internet Explorer 8. Q1. Who (and when) decided that headings should go up to h6, and not h4 or h7? Q2. Why do h7 to h9 work perfectly in all major browsers except IE8? Q3. What is the significance for Bing, Yahoo and Google in terms of recognition of headings h1 to h9? Obviously h1 is more important than h2, but do they differentiate between h5 and h6, or not any more after h3?

    Read the article

  • Is there a way I can filter traffic by page type based upon URL structure in Google Analytics or Google Webmaster Tools?

    - by Felix
    I have a local business directory site. I'm trying to segment my incoming traffic by page type so that I can find out what percentage of traffic is going to zip code pages exclusively and what percentage is going to city/state level pages. I basically want to filter by URL structure to find out what percentage of total traffic the zip code pages account for. I'm also wondering whether Google Tag Manager can help with this. Here are the two URL paths: http://www.example.com/ny/new-york/10011/ and http://www.example.com/ny/new-york
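
    One way to get such a segment is to classify each page server-side from its URL and expose the page type to analytics, for example as a dataLayer variable that Google Tag Manager can pick up. A hedged sketch in PHP; the /state/city/zip/ pattern follows the example URLs above, and everything else (including the dataLayer variable name) is an assumption:

        <?php
        // Classify the current request by URL structure.
        $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

        if (preg_match('#^/[a-z]{2}/[^/]+/\d{5}/?$#', $path)) {
            $pageType = 'zip';    // e.g. /ny/new-york/10011/
        } elseif (preg_match('#^/[a-z]{2}/[^/]+/?$#', $path)) {
            $pageType = 'city';   // e.g. /ny/new-york
        } else {
            $pageType = 'other';
        }

        // Emitted in the page <head> so Google Tag Manager can read it.
        echo "<script>window.dataLayer = window.dataLayer || [];";
        echo "dataLayer.push({'pageType': '" . $pageType . "'});</script>\n";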

    Read the article

  • Using paypal to process credit cards in Sweden through an API [on hold]

    - by Mastikator
    I'm looking for a PayPal API that lets me process credit cards and make payments without being redirected to a PayPal site and without forcing consumers to use a PayPal account, and it needs to work in Sweden. The ones I've looked at (DoDirectPayment, Express Checkout, the PayPal Pro gateway) have not let me process credit cards in Sweden via an API that doesn't force the user to visit the PayPal login site. I have a form on my web page where the user types their credit card number, CVV2, expiration, name, address, etc. I need an API that works in Sweden and simply processes the request, without the step of being redirected to a PayPal website. The ones that I have found only worked in a select few countries; is there an international solution? I've already spent over 12 working hours just looking for an API that meets my requirements.

    Read the article

  • How to make Google recognize language for a multilingual website?

    - by Julien Fouilhé
    A few weeks ago I implemented translation functionality for the website of my company. The website is now available in French and English, and I looked for the best way to do this without losing any ranking and while keeping our pages indexed by Google. Here is what I did: I set a response header, Content-Language:en or Content-Language:fr; my URLs are formatted as http://www.website.com/en/... and http://www.website.com/fr/...; my html tag is set with a lang attribute, <html lang="en"> or <html lang="fr">; and there is a <link rel="alternate" hreflang="en" href="EnglishPageUrl"> on French pages and a <link rel="alternate" hreflang="en" href="frenchPageUrl"> on English pages. But Google keeps returning some English pages when I search on the French version of Google, knowing that the website was at first only available in English. Is that normal? Do I just have to wait longer? It has been almost a month now; I thought it would be okay by now. Thank you.
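
    As an aside, note that in the markup quoted above both alternate links carry hreflang="en"; the link pointing at the French URL would normally carry hreflang="fr". A hedged sketch of generating the pair (plus an optional x-default) in PHP; the printHreflangLinks helper and the slug-based URL scheme are assumptions, not the site's actual code:

        <?php
        // Emit alternate links for every language version of the current page.
        // $slug is the language-independent path, e.g. 'about'.
        function printHreflangLinks($slug)
        {
            $languages = array('en', 'fr');
            foreach ($languages as $lang) {
                $url = 'http://www.website.com/' . $lang . '/' . $slug;
                echo '<link rel="alternate" hreflang="' . $lang . '" href="' . $url . '">' . "\n";
            }
            // Optional: the version to show for any other, unmatched language.
            echo '<link rel="alternate" hreflang="x-default" href="http://www.website.com/en/' . $slug . '">' . "\n";
        }

        printHreflangLinks('about');   // called inside <head> of both the /en/ and /fr/ page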

    Read the article

  • Googlebot can't access my site when crawling the root domain

    - by PéCé
    I can't explain why I get this message for my root domain result in Google: trocmalin.com/ "A description for this result is not available because of this site's robots.txt – learn more." Here are my site specifics: vide-greniers.trocmalin.com is the site address; www.trocmalin.com redirects (301) to vide-greniers.trocmalin.com; trocmalin.com redirects (301) to vide-greniers.trocmalin.com too. My robots.txt is:

        User-agent: *
        Disallow: /orga/

        User-agent: *
        Disallow: /sitemap-update

    Google results for vide-greniers.trocmalin.com are rendered well, as are the sub-pages allowed for bots. But the result for my root domain (trocmalin.com) gives this message. Can you help me?

    Read the article

  • Password-protect an aliased virtual directory

    - by Jason
    I have a main domain being hosted through CPanel. I also have a sub-domain that I would like to appear as a path under the main domain instead of as a sub-domain. So I have:

        http://example.com/       pointing to the main hosted file
        http://example.com/mydir  pointing to the subdomain files

    This is achieved by a httpd.conf include from the main domain section to set an alias:

        alias /mydir /path/to/subdomain/files/

    Now, that works fine so far. The problem is that if a .htaccess file under /path/to/the/subdomain/files/ contains an error, the alias is completely skipped, and /mydir goes instead to the main host files. That is kind of surprising to me - I would expect an error to return an error instead. Now the killer: if I try to password protect /path/to/subdomain/files/, then trying to access http://example.com/mydir will again attempt to deliver from under the main hosted files and not from /path/to/subdomain/files/. I am not seeing any errors reported on the .htaccess file in the apache error log, so I am assuming the .htaccess is valid:

        AuthUserFile /path/to/valid/readable/.htpasswd
        AuthName "Secure Access"
        AuthType Basic
        Require valid-user

    This kind of behaviour does not seem right to me. Is there something obvious that could be causing it? Or is this just the way it works? Perhaps using an alias is the wrong way to go?

    Read the article

  • Should I set NOINDEX header for my JS, CSS and image files?

    - by Yoga
    Is there any harm if my site sends NOINDEX headers for all my static assets? For image files, I mean the valueless ones, e.g. background images, button images, etc. Update, for more background: my concern comes from Google recently saying that they also execute JS and might fetch content via Ajax. So, for example, if I send noindex for my jQuery script, Google would not be able to use it to load the Ajax content; I suppose that is not good for my site's SEO, right?
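
    For non-HTML assets the noindex signal would have to be the X-Robots-Tag response header, since there is nowhere to put a meta tag in a JS, CSS or image file. A hedged sketch of sending it from PHP for an asset that happens to be served through a script; for purely static files this would normally be configured in the web server instead, and the asset.php name is an assumption:

        <?php
        // asset.php?f=bg_button.png -- illustrative only.
        $file = basename(isset($_GET['f']) ? $_GET['f'] : '');
        $path = __DIR__ . '/static/' . $file;

        if (is_file($path)) {
            // Keep the asset out of the index without blocking crawlers from fetching it.
            header('X-Robots-Tag: noindex');
            header('Content-Type: image/png');   // assumed; match the real file type
            readfile($path);
        }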

    Read the article

  • SEO tool is telling me title, description and keywords don't exist, but they do. Where is the problem?

    - by DaveDev
    I'm using the following tool to analyse how 'optimal' a site that I'm working on is for search engines: http://tools.seobook.com/general/spider-test/ I enter the URL for the site - http://ftmsuat.moneymate.com - into the search bar, and it returns a breakdown of the contents of the page. I'm a little confused by what I see though. According to the results, the page doesn't have a title, description or keywords. But if you check the source of the page, those elements are definitely there. So I'm wondering now, which is wrong? seobook.com or my page?

    Read the article

  • Finding/hiring help with server administration

    - by letseatfood
    I need to install the fileinfo extension for PHP on my server. My hosting service does not offer help with this. I know I could learn it and do it myself, but since I am an independent contractor and I know that server administration is the weakest part of my abilities, a DIY approach is going to cost me a lot more time than I can afford. What is the best way to go about hiring a trustworthy contractor to either install the fileinfo extension for me or train me in how to do this? Thank you.
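
    As a quick sanity check before hiring anyone, it's easy to test from PHP itself whether fileinfo is already available on the host; a minimal sketch:

        <?php
        // Check whether the fileinfo extension is loaded on this server.
        if (extension_loaded('fileinfo')) {
            // It is available: for example, detect a file's MIME type.
            $finfo = finfo_open(FILEINFO_MIME_TYPE);
            echo finfo_file($finfo, __FILE__);   // e.g. "text/x-php"
            finfo_close($finfo);
        } else {
            echo 'fileinfo is NOT loaded; it has to be enabled in php.ini or compiled in.';
        }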

    Read the article

  • SEO strategy for h1, h2, h3 tags for list of items

    - by Theo G
    On one page of my website I have a list of ALL the products on my site. This list is growing rapidly and I am wondering how to manage it from an SEO point of view. I am shortly adding a title to this section and giving it an H1 tag. Currently the name of each product in this list is not an h1/h2/h3/h4, it's just styled text; however, I was looking to make these h2/h3/h4. Questions: Is the use of h2/h3/h4 on these list items bad form, since they should be used for content rather than a list of links? I am thinking of limiting this main list to only 8 items and using h2 tags for each name; do you think this will have a negative or positive effect overall? I may create a piece of script which counts the first 8 items on the list, so that those 8 get the h2 and any after that get h3 (all styled the same). If I do add h tags, should I put them just on the name of the product or outside the a tag, so they capture all the info? Has anyone been in a similar situation to this, and if so, did they really see any significant difference?
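
    A minimal sketch of the counting script described above, assuming a PHP template and a $products array of name => URL pairs (both assumptions, purely illustrative):

        <?php
        // First 8 products get an h2 heading, the rest get h3.
        $products = array(
            'Product One' => '/products/one',
            'Product Two' => '/products/two',
            // ... loaded from the database in practice
        );

        $i = 0;
        foreach ($products as $name => $url) {
            $i++;
            $tag = ($i <= 8) ? 'h2' : 'h3';
            // The heading wraps only the product name; the link sits inside it.
            echo "<$tag class=\"product-name\"><a href=\"$url\">"
               . htmlspecialchars($name) . "</a></$tag>\n";
        }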

    Read the article

  • Should I use noindex, follow or rel canonical?

    - by webmasters
    I have a site that lists offers and promotions from other websites. Since the offers expire rather quickly, I don't save them in my database; I see no point in having a page from 2010 about a 30% discount on a certain brand of shoes which isn't available any more. A visitor enters my website; he clicks on the "shoes" category, http://www.mysite.com/shoes/. Here he sees 20 available promotions from different online stores. He clicks on a promotion and gets to a page like this: http://www.mysite.com/shoes/promotions/prada Questions: I use the template promotions.php and list all the promotions (/promotions/prada/, /promotions/otherbrand/, ...). What I do is use "noindex, follow" for the links. Is that a good idea? Or should I use rel="canonical" for the promotion page? How would you advise me to handle this from an SEO point of view?
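
    For what it's worth, if the goal is just to keep the short-lived promotion pages out of the index while still letting crawlers follow the outgoing links, the usual mechanism is a robots meta tag emitted by the promotion template; rel="canonical" instead says "this URL is a duplicate of that one", which only fits when several URLs show the same offer. A hedged sketch for promotions.php, not the site's actual code:

        <?php
        // Emitted in the <head> of each promotion page, e.g. /shoes/promotions/prada.
        // Keeps the expiring page out of the index but lets crawlers follow its links.
        echo '<meta name="robots" content="noindex, follow">' . "\n";

        // The rel=canonical alternative (only if several URLs showed the *same* offer):
        // echo '<link rel="canonical" href="http://www.mysite.com/shoes/promotions/prada">' . "\n";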

    Read the article

  • Multiple TOC with MediaWiki using section headings in single page

    - by user1704043
    I'm running my own installation of MediaWiki, which has been great! I haven't been able to find the answer to this small problem in any post, how-to, etc. Here's the setup:

        Article
        TOC (limited to showing only H1 and H2)
        ==H1==
        ===H2===
        ====H3====
        ====H3====

    I don't want the H3 headings to show up on the main table of contents, because they would make the list very long. Instead, under the H2, I would like to display another TOC with all the H3's under that listing. From my understanding, you cannot have multiple tables of contents on a single page. I've thought about making a template for each H2 that has the H3 links, but that seems like it duplicates a lot of work and creates loads of pages. I'd love a template that sucks in all the subsection names and spits them out, but I don't see how to do that. Alternatively, is there a way to enable multiple TOCs in a custom install of MediaWiki that I'm missing? Even that would get closer to what I'm trying to do.

    Read the article

  • Other link relationships and their impact on SEO

    - by haha
    Example of other link relationships:

        <head>
        <link rel='index' title='Main Title' href='http://domain.com/' />
        <link rel='start' title='Part Three' href='http://domain.com/part-3/' />
        <link rel='prev' title='Part Two' href='http://domain.com/part-2/' />
        <link rel='next' title='Part Four' href='http://domain.com/part-4/' />
        </head>

    Question: do these have a big impact on helping my site get a good rank in search engines?

    Read the article

  • SEO for images: can I use a different (cookieless) domain?

    - by Oliver
    We want to increase the value of some of our important images by means of SEO, and we want to start serving them from a different, i.e. cookieless, domain. We want to go from http://www.example.com/images/1234.jpg to http://www.example.com/germany/bavaria/landscape.jpg, which can easily be done via URL rewriting. On the other hand, we would like to serve the image from a completely different domain, let's say http://www.examplestatic.com/germany/bavaria/landscape.jpg, to save the overhead of sending the cookie from www.example.com. Somehow I feel that this is not a good idea, because I move the image away from the content by putting it on a different domain. Can anyone shed some light on this problem? Naturally, I would just use a different subdomain, e.g. img.example.com, but we already use subdomains for languages and our cookies are valid for all subdomains of example.com, so this won't help. I'd really appreciate any hints.

    Read the article

  • Ideas to tackle unwanted bad press/review on Google's SERP?

    - by Rob
    After Googling our company name, to our horror we found that someone on Yelp.co.uk has reviewed our company. On the SERP your eye is immediately drawn to the 2-star review some complete stranger has written, which to be honest is pure slander! The most infuriating thing is that the person who reviewed our company has never even been a client/customer. It's a bit like me reviewing a restaurant having never eaten or even been in there! We've sent her a private message on Yelp asking her to remove the review and have also sent a complaint to Yelp themselves, but have yet to get a reply. We've resisted going mad at the reviewer and have also requested that she re-review us having just relaunched our new website (it still riles us that she's not even a client, though!). We've had genuine customers/clients review us on Yelp, yet this 2-star review remains on Google's SERP. Roughly how long would it take for our new reviews to overtake this review? Does anyone have any suggestions as to how we can push the review off the first page of Google's SERP, or any creative ways in which we can tackle this issue?

    Read the article

  • Google Webmaster Tools Index dropped to Zero [closed]

    - by Brian Anderson
    Earlier this year I rebuilt my website using Zen Cart. Immediately I saw a drop in index status from 59 to 0. I then signed up for Google Webmaster Tools and noticed the index status took a dramatic drop and has never recovered. I have worked to add content, and I know I am not done, but I have not seen any recovery of this index since. What confuses me is that when I look at the sitemap status under Optimization, it shows 1239 pages submitted and 1127 pages indexed. Most of my pages have fallen off page one for relevant search terms, and some are as far back as page 7 or 8 where they used to be on the first page. I have made some changes in the past week to robots.txt and sitemap.xml, but have not seen any improvements. Can anyone tell me what might be going on here? My website is andersonpens.net. Thanks! Brian

    Read the article

  • Will this sitemap get me de-indexed from Google?

    - by heavy rocker dude
    My site's URL (web address) is http://quantlabs.net/private/sitemap.xml. Will this sitemap get me de-indexed from Google? My new sitemap just got spidered by Google for some reason. It is located at http://quantlabs.net/private/sitemap.xml; is this in danger of getting me de-indexed from Google's index? Does it look like spam even though it is not meant to be? I am trying to figure out the limit, in terms of Google's threshold, before they deem it a spammy sitemap. The sitemap contains automated postings which differ only in the stock symbol provided, and there are quite a few of them within a small amount of time.

    Read the article
