Search Results

Search found 14053 results on 563 pages for 'upk pro (knowledge pathwa'.


  • How does Google Web Starter Kit serve adaptive images for mobile?

    - by 5argon
    My website weirdly (in a good way) serves smaller images when viewed on mobile, and I want to know what causes this. As far as I know this is not the default behaviour, so I think it must be Google Web Starter Kit's doing. Here is the debug information when debugging on a device: all images become 231 B in size no matter how large they actually are (when debugging on desktop the sizes vary). I started using Google Web Starter Kit (https://github.com/google/web-starter-kit) recently. Its tooling is built on Ruby, Node.js, SASS and Gulp to help you 'build' the website. Pre-build you get automatic reload, because the Gulp script watches all files for you. When you build, it runs various tools to minify the HTML and CSS and compress the images. According to https://developers.google.com/web/fundamentals/tools/build/build_site, gulp-imagemin is used. So I guess imagemin is doing the mobile optimization for me? What kind of compression can serve an automatically resized image on mobile? And why is the size 231 B? Is this related to my screen size?
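
    For reference, a minimal sketch of the kind of Gulp image task Web Starter Kit wires up (the glob paths are my assumption, not the kit's exact file). Note that gulp-imagemin only recompresses images; it does not resize them or serve different files per device:

        var gulp = require('gulp');
        var imagemin = require('gulp-imagemin');

        // Compress every source image and write it to the build directory.
        // imagemin optimizes the encoding (losslessly by default); it never
        // generates per-device sizes, so by itself it cannot explain a
        // different image being served on mobile.
        gulp.task('images', function () {
          return gulp.src('app/images/**/*')
            .pipe(imagemin())
            .pipe(gulp.dest('dist/images'));
        });

    (The uniform 231 B figure looks more like a cache artifact than a resize: when DevTools reports the same tiny transfer size for every image, the responses are usually 304 Not Modified replies served from cache.)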

    Read the article

  • Analytics Tracking and SEO

    - by Mahesh
    I'm using Piwik on some of my websites, having recently switched from Google Analytics. I find most of the features are the same in both. But I've always had this question in mind: what am I supposed to track other than these? Bounce rate; referral sites; keywords; geolocation; periodic data (month, year, week) for the above factors. Are there any other SEO factors to consider while tracking with any analytics software?
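
    For instance, custom events are one thing I could add beyond the defaults; a minimal Piwik sketch (the category/action/name strings here are made-up examples):

        var _paq = _paq || [];
        // record an explicit interaction (e.g. a newsletter signup) as an
        // event, so engagement beyond pageviews shows up in the reports
        _paq.push(['trackEvent', 'Engagement', 'Signup', 'Footer form']);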

    Read the article

  • Is it better to have AWS EC2 and RDS in the same Availability Zone?

    - by Dan
    I run a web app on an AWS EC2 instance and the database for the app on an RDS instance, both in the AWS East-1 region. However, one of them is in Availability Zone 1a and the other is in 1d. Am I getting all the speed benefits of having both instances in the same "data center" (East-1) even though they are in different Availability Zones, or can I optimize by moving them to the same Availability Zone?

    Read the article

  • Removing date from Google SERP

    - by Tom Gullen
    We are going through our site analysing the SEO. I find that sometimes on a Google results page some results show a date and others don't, even for the same query. I would prefer the result not to have the date in it, as "20 Oct 2009" probably has an adverse effect on the clickability of the result. Is this Google putting it in? Or the page itself? Or a combination of both (i.e., if the page is over a certain age, Google includes the date)? The two URLs are: http://www.scirra.com/forum/perlin-noise-plugin_topic38498.html http://www.scirra.com/forum/dungeon-maze-generator_topic40611.html Is there any way to remove the dates? I'm thinking that if a thread is more than 4 months old we could stop displaying the date on the page, so that Google might not find a date reference for it?

    Read the article

  • Apache Rewrites not working due to Akamai

    - by nuttyket
    I have a website which is set up with Akamai. My domain and subdomains are mapped onto an Akamai IP. I have written an Apache rewrite which rewrites an internal URL X to another internal URL Y. This rewrite works fine as long as I am testing in my local setup, or for those subdomains which are not mapped onto Akamai but directly onto my public IP. My suspicion is that while rewriting the request, Apache is not able to resolve the IP of the app server correctly. When I add entries to my /etc/hosts file pointing the domain/subdomain to my internal IP, the rewrites work. Now, I have a huge list of subdomains and it can grow as well. Is there another way to fix this problem without having to make entries in the /etc/hosts file? I would much appreciate your thoughts.
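
    One direction I am considering (the hostnames, paths and addresses below are placeholders, not my real config): proxy the rewritten request straight to the origin's internal address instead of to the public hostname, so the lookup never goes through Akamai:

        # keep the original Host header so the origin can still tell
        # the many subdomains apart
        ProxyPreserveHost On
        RewriteEngine On
        # [P] proxies the request; targeting the internal address avoids
        # resolving the public (Akamai-mapped) hostname for each subdomain
        RewriteRule ^/foo/(.*)$ http://10.0.1.20/bar/$1 [P,L]
        ProxyPassReverse /foo/ http://10.0.1.20/bar/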

    Read the article

  • SSL Certificate Works in Monit - But Not in Keystore

    - by Bart Silverstrim
    I have a situation where there's a keystore file with the various root/intermediate certificates stored in it in a way that seems to work for most browsers. The problem is that when mobile browsers hit the site, there's a break in the chain and they complain. I used the SSL checker at http://www.sslshopper.com/ssl-checker.html and it states that "The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate." So I'm assuming the desktop browsers already have the intermediate certs and can make the chain connections, while the mobile browsers can't. The thing is that I had used Portecle to export certificates from the keystore and cobble them together into a .PEM certificate to run the Monit utility. When I check that application with the SSL checker, it works fine! The person who originally created the keystore said he couldn't follow the SSL provider's directions for creating it, because he had created the CSR using openssl, so the cert and private key had to be converted to DER format and imported with importkey; following the directions he found online, importkey seemed to write only to a fixed keystore file, and it would erase anything already in that file if it existed. So is there a way to take the certificate I created for Monit and create a working keystore for the Tomcat website? What would be causing the chain to be broken in the current keystore but work for Monit? I have the SSL provider's intermediate and cross certificates, and the website's certificate, but what else would I need to create a working chain of certs for a keystore?
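
    The route I would expect to work (file names and aliases below are placeholders): bundle the same PEM material Monit is using into a PKCS#12 file, keeping the private key and the full chain together, and then import that into a fresh keystore:

        # combine the site cert, its key and the intermediate/cross certs
        # (the material already proven good by the Monit .pem) into PKCS#12
        openssl pkcs12 -export -in site.crt -inkey site.key \
            -certfile intermediates.pem -name tomcat -out site.p12

        # import the whole bundle, chain included, into a new Java keystore
        keytool -importkeystore -srckeystore site.p12 -srcstoretype PKCS12 \
            -srcalias tomcat -destkeystore keystore.jks

    Importing the intermediates as separate trusted entries is often not enough: Tomcat serves the chain attached to the key entry itself, which the PKCS#12 route preserves, and that would explain a keystore that desktop browsers tolerate but mobile ones reject.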

    Read the article

  • Root Domain Redirects Incorrectly To Https instead of to WWW

    - by Ari
    TL;DR: why do visits to my website's homepage work without "www", but not visits to specific pages on it? I recently moved my website (Zappable.com) to a new web host, Red Hat's OpenShift (a PaaS). It requires CNAME records to set up custom domains, something my domain registrar (1&1) does not support without a hosting plan. So instead I set up CloudFlare in between my domain and web host, and created a CNAME record there. I then pointed a 1&1 "www" subdomain to CloudFlare, and pointed my 1&1 root to the "www" subdomain. This works fine for visits to my homepage, but for some reason it does not work when visiting a specific page without "www": instead of adding "www", it goes to HTTPS, which is strange.
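
    If it comes down to forcing the redirect myself at the HTTP level, this is the sort of rule I have in mind (a sketch that assumes the app is fronted by Apache and honours .htaccess; adjust to the actual stack):

        RewriteEngine On
        # send any bare-domain request to the www host, preserving the path
        RewriteCond %{HTTP_HOST} ^zappable\.com$ [NC]
        RewriteRule ^(.*)$ http://www.zappable.com/$1 [R=301,L]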

    Read the article

  • Is a RapidSSL wildcard cert supported by major browsers?

    - by Jorre
    I'm thinking of buying a wildcard SSL cert from ClickSSL: http://www.clickssl.com/rapidssl/rapidsslwildcard.aspx That would be a RapidSSL certificate, and I was looking through my Firefox options to see if RapidSSL is in the list of recognized authorities. My certificate manager doesn't mention RapidSSL anywhere. Am I looking for the wrong name, i.e. is RapidSSL recognized by browsers under a different name? I want to be sure that this certificate will work in all major browsers (including IE6).
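
    One empirical check (the hostname below is a stand-in for any site already serving a RapidSSL cert): inspect the chain a live server presents and see which root it ends at. RapidSSL certificates have historically chained up to a GeoTrust root rather than appearing under their own name, which would explain the missing entry in the authorities list:

        # -showcerts prints the full chain the server presents; the issuer
        # of the last certificate is the root the browser must already trust
        openssl s_client -connect some-rapidssl-site.example:443 -showcerts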

    Read the article

  • Webmaster Tools, www and no-www, duplicate content and subdomains

    - by Jay
    I have not come to any conclusive answers after many hours of research on many websites into the specific issue I am trying to figure out. My company has two websites: the main one at www.example.com, and our self-hosted blog at subdomain.example.com, a subdomain of the first. Google sees the www and naked (no-www, as I'll call it from now on) versions of a domain as different sites. I completely understand this. It is also advised that both should be set up in Google Webmaster Tools, which I have done; correct me if I am wrong about having both set up. Now, it appears that we can set a preferred domain in Webmaster Tools only at the root domain level. The subdomain cannot have this, and the tool actually says "Restricted to root level domains only". So it appears the subdomain should follow what the root domain says, which for our preferred version is to display www.example.com and not the naked version. That is one issue: one site displays one way and the other displays another. Is it that we have the wrong redirects in place for the subdomain? And does this have any effect on SEO with regard to duplicate content, given how we have set this up?
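
    Since Webmaster Tools won't take a preferred-domain setting for the subdomain, the substitute I've seen recommended is to pin each blog page's indexed form explicitly with rel=canonical (the URL below is illustrative):

        <!-- on each blog page: declare the exact URL you want indexed,
             regardless of the root domain's preferred-domain setting -->
        <link rel="canonical" href="http://subdomain.example.com/some-post/" />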

    Read the article

  • Make my website's dynamically loaded data available to the Facebook Open Graph scraper

    - by fvaliquette
    Here is the design of my web site: the user enters myWebsite.com/a/1; .htaccess rules redirect to myWebsite.com/b; the ExtJS JavaScript library then loads, extracts the value from the URL (in this case "1"), loads ./xml/1.xml, sets the Open Graph data (title, type, image, etc.) from 1.xml, and loads the data shown to the user from 1.xml into the website. My question is: how can I make the Open Graph data available to Facebook? Facebook does not load my ExtJS JavaScript library before extracting the Open Graph object values from the HTML. Is there an easy solution to this problem? The only solutions I have found are to make static web pages or render the pages dynamically on the server side, but I would like to avoid these since my web page implementation is already finished and I would like to avoid reworking it.
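
    One workaround I am considering (the rule and paths are illustrative, not my real .htaccess): detect Facebook's scraper by its user agent and hand it a pre-rendered snapshot containing just the Open Graph tags, while normal visitors still get the ExtJS app:

        RewriteEngine On
        # facebookexternalhit is the user agent of Facebook's OG scraper;
        # it gets a static snapshot with the meta tags already filled in
        RewriteCond %{HTTP_USER_AGENT} facebookexternalhit [NC]
        RewriteRule ^a/([0-9]+)$ /og-snapshots/$1.html [L]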

    Read the article

  • Google Custom Search gives different result counts for the same query

    - by santiagozky
    We are using Google Custom Search and have found that totalResults often alternates between two values, even for the same query. The two values can be slightly different, or one can be more than double the other. The parameters I am using look like this:

        https://www.googleapis.com/customsearch/v1?
          q=something
          cx=XXXXXXXXXX
          lr=lang_en
          siteSearch=www.mydomain.com
          start=1
          fields=context%2Citems%28fileFormat%2CformattedUrl%2Clink%2Cpagemap%2Csnippet%2Ctitle%29%2Cqueries%2CsearchInformation%28searchTime%2CtotalResults%29%2Cspelling%2FcorrectedQuery
          key=YYYYYYYYYYYYYYY
          filter=0

    This is a problem because we calculate the number of result pages from it. How can I get the same result count for the same query?
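
    A defensive pattern, if totalResults is only an estimate (sketch only, error handling omitted; the key/cx values are placeholders as above): page on the presence of queries.nextPage in each response instead of precomputing a page count:

        // fetch one page of results; the response's queries.nextPage field,
        // not the totalResults estimate, says whether more pages exist
        function fetchPage(start, callback) {
          var url = 'https://www.googleapis.com/customsearch/v1' +
            '?key=YYYY&cx=XXXX&q=something&lr=lang_en' +
            '&siteSearch=www.mydomain.com&start=' + start;
          var xhr = new XMLHttpRequest();
          xhr.onload = function () {
            var data = JSON.parse(xhr.responseText);
            callback(data.items || [], !!(data.queries && data.queries.nextPage));
          };
          xhr.open('GET', url);
          xhr.send();
        }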

    Read the article

  • What is the best way to construct a "remove multiple items" area (ASP.NET VB) [on hold]

    - by Darkcat Studios
    Let's say, for example, I have a (variable length) 2-dimensional array of product names and their unique product codes. I can display this list in a DataGrid, table, etc. (imagine a standard shopping-basket scenario). What I need is to be able to tick multiple items, then, on clicking a submit button, fire an action. The bits I'm struggling with are how to: A: programmatically display asp:CheckBox controls for each item (and give them unique IDs); B: know which are ticked when the final action fires. (Not sure if this question is better suited to the main Stack, but there's so much activity on there that questions just get lost now!)
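
    A minimal sketch of the direction I mean, using a CheckBoxList so the framework handles both the ID generation and the read-back (the control and helper names here, chkProducts, products and RemoveFromBasket, are invented for illustration):

        ' bind the product list to a CheckBoxList; each item's Value carries
        ' the unique product code, so no manual ID bookkeeping is needed
        Protected Sub Page_Load(sender As Object, e As EventArgs)
            If Not IsPostBack Then
                For Each p In products ' (Name, Code) pairs from the array
                    chkProducts.Items.Add(New ListItem(p.Name, p.Code))
                Next
            End If
        End Sub

        ' on submit, collect the code of every ticked item and act on it
        Protected Sub btnRemove_Click(sender As Object, e As EventArgs)
            For Each item As ListItem In chkProducts.Items
                If item.Selected Then RemoveFromBasket(item.Value)
            Next
        End Sub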

    Read the article

  • How does keyword order in the domain name affect SEO?

    - by Sushil
    From the Google keyword research tool, I see "chuck norris jokes" has 246,000 global searches. Searching "jokes chuck norris" shows the same count. But as we see, the order of keywords in search terms matters: "hello how are you" and "how are you hello" have clearly different results. Now, instead of the search term (assuming "chuck norris jokes"), I was wondering: if I registered chucknorrisjokes.com and jokeschucknorris.com, would it affect the ranking in the search results, or would it be the same? As we see here, both domains have the same keywords, just in a different order. What effect would that have? I hope what I am trying to say is clear.

    Read the article

  • Facebook page design is not working in IE8 [closed]

    - by PrateekSaluja
    Hello experts, we have designed a Facebook page. It works fine in all browsers including IE7, but it does not work in IE8. We checked, and found that if we run our code outside the Facebook page it works in IE8, but when we put our code into the Facebook page it does not. Here is the CSS we are using for IE8. (Note that as originally written the conditional comment was [if lt IE 8], which targets only IE7 and older, so IE8 never received these rules at all; it needs lte to include IE8.)

        <!--[if lte IE 8]>
        <style>
        .nv_a  { width: 90px; height: 27px; float: left; text-align: center; padding-top: 8px; }
        .nvt_a { width: 66px; height: 27px; float: left; text-align: center; padding-top: 8px; }
        .nv_a a { width: 90px; height: 27px; float: left; padding-top: 8px; text-align: center;
                  color: #000; display: inline-block; text-decoration: none;
                  background-color: #e0e0e0; border: solid 1px #999; }
        .nv_a a:hover { width: 90px; height: 27px; padding-top: 8px; float: left;
                        color: #000; text-align: center; background-color: #ccc; }
        .nvt_a a { width: 66px; height: 27px; float: left; padding-top: 8px; text-align: center;
                   color: #000; display: inline-block; text-decoration: none;
                   background-color: #e0e0e0; border: solid 1px #999;
                   border: 1px solid red; /* overrides the #999 border above */ }
        </style>
        <![endif]-->

    Please help us solve the issue.

    Read the article

  • DNS configuration to force root domain to www

    - by kolosy
    We have an app running on Heroku. The DNS setup is like this: an A record for domain.com pointing at Heroku's front-end IP addresses, and a CNAME for www.domain.com pointing at the app-specific host name provided by Heroku. We also have an SSL cert for www.domain.com. The issue is that if someone goes to https://domain.com/secure_stuff, they will get Heroku's SSL cert instead of ours, causing lots of fear. We can do things on our end to make sure that all of our URLs point to https://www.domain.com, but it still won't solve this specific issue. Is there a way to configure the DNS record to redirect all root-domain traffic to the www subdomain?
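
    As far as I can tell DNS itself cannot issue redirects, so the fallback would be a 301 at the HTTP layer; a sketch of what I mean, assuming an Express-style Node app (the middleware below is illustrative):

        var express = require('express');
        var app = express();

        // 301 any request that arrives on the bare domain over to www,
        // before any other route handling runs
        app.use(function (req, res, next) {
          if (req.headers.host === 'domain.com') {
            return res.redirect(301, 'https://www.domain.com' + req.url);
          }
          next();
        });

    (Note this only takes effect after the TLS handshake, so it fixes plain-HTTP visits; a direct https://domain.com hit would still see the wrong certificate first.)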

    Read the article

  • Geotargeted subfolder questions (Portugal/Brazil and Switzerland)

    - by Lucy
    We are at the beginning of the process of getting multilingual versions of a website. We will be using subfolders working off the core domain (e.g. mydomain.com/fr/), setting the geotargeting in Webmaster Tools, and setting the hreflang attribute. I would really appreciate your help with a couple of questions. 1/ Portuguese: we will have a Portuguese-language version of the site. Our intention is to use this to cover users in both Portugal AND Brazil, i.e. we are not going to do separate folders mydomain.com/pt/ and mydomain.com/br/. Can I use two hreflang attributes for this language version to tell Google it covers Brazil AND Portugal? What country code should I use for this subfolder? 2/ Switzerland: does anyone have best-practice advice on how to do this? On one hand the subfolder should be mydomain.com/ch/, but Switzerland covers two language possibilities (French AND German), so what should we do? thanks
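
    To make question 1 concrete, this is the sort of annotation I have in mind (the URLs and folder names are illustrative): a generic pt value covers Portuguese speakers everywhere, while language-region values like fr-CH and de-CH would handle the Swiss case without a single /ch/ folder:

        <!-- one Portuguese folder serving both Portugal and Brazil -->
        <link rel="alternate" hreflang="pt" href="http://mydomain.com/pt/" />
        <!-- Switzerland split by language rather than by country folder -->
        <link rel="alternate" hreflang="fr-CH" href="http://mydomain.com/fr-ch/" />
        <link rel="alternate" hreflang="de-CH" href="http://mydomain.com/de-ch/" />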

    Read the article

  • Set Up Google Analytics to Track Domain Alias

    - by Brian Boatright
    I found this article from Google: http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55523 However, I'm not sure what happens to the data. Will I be able to determine which domain forwarded to the primary domain using their technique? Or will it simply transfer all the relevant keyword and other factors to the primary domain, without recording which domain was originally landed on before the 302 redirect? What I need to do is track which domain aliases are being used.
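
    The approach I have seen suggested (the parameter values below are examples): have each alias forward to the primary domain with campaign parameters appended, so the alias is recorded as the traffic source:

        http://www.primarydomain.com/?utm_source=alias-domain.com&utm_medium=referral&utm_campaign=domain-alias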

    Read the article

  • How to remove HTML code from search result page content

    - by Jack Torris
    I have a music website. There are 46 album pages, and each page has a different player and files. I entered one of the album URLs in a search engine and found that Google displays the player code in the search result snippet: each result shows a .mp3 file in the content section. I see this: "This page contains a demo of and documentation for the new jPlayer Playlist add-on, ... mp3:"http://www.jplayer.org/audio/mp3/Miaow-01-Tempered-song.mp3", ..." I don't want Google to show the player code and mp3 files in search results. How can I hide the audio files and player code from search engines? What would be the best solution?
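
    The usual suggestion I have found (the paths below are placeholders): move the inline player configuration out into an external script and keep crawlers away from it with robots.txt, so the snippet falls back to the page's real text:

        <!-- in the page: reference the playlist setup instead of inlining it -->
        <script src="/js/player-setup.js"></script>

        # in robots.txt: keep the player scripts out of the index
        User-agent: *
        Disallow: /js/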

    Read the article

  • Highly SEO optimised forum posts

    - by Tom Gullen
    Given the following forum post, titled "Basics of how internals of Construct work": "I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much into it. So I know my way around programming etc... My question is, how does Construct work internally? I know it allows Python scripting, which itself is "technically" interpreted, though Python is pretty fast as far as interpreted languages go. But what about the rest? Is the executable that gets cre..." The forum software will take the first 150 chars of the first post as the page meta description, and the title will be the thread title. All OK. So in Google it will appear as: "Basics of how internals of Construct work / I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much... http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html" Now the problem (not so much with this thread, but with other ones) is that the first 150 chars don't always make the best meta description. Is it worth my time to cherry-pick threads and manually set their description/title tags so they read like: "Internal workings of Construct 2 / Events aren't converted to any other language. The runtime is a standalone compiled EXE application, which is optimised and actually very fast. Your events... http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html" The H1 on the page is still the original title, but we have overridden the title and description to look more friendly in search results. Is this advantageous, disregarding the obvious time cost?

    Read the article

  • How to fix "[Errno 13] Permission denied" in Mailman mailing lists

    - by Michael
    After migrating domains from one Plesk server to another, I get several of these mails every day (the target mailbox does not exist, so I receive them as undeliverable-mail bounces):

        Return-Path: <[email protected]>
        Received: (qmail 26460 invoked by uid 38); 26 May 2012 12:00:02 +0200
        Date: 26 May 2012 12:00:02 +0200
        Message-ID: <20120526100002.xyzxx.qmail@lvpsxxx-xx-xx-xx.dedicated.hosteurope.de>
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <list@lvpsxxx-xx-xx-xx> [ -x /usr/lib/mailman/cron/senddigests ] && /usr/lib/mailman/cron/senddigests
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/var/list>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=list>
        List: xyzxyz: problem processing /var/lib/mailman/lists/xyzxyz/digest.mbox: [Errno 13] Permission denied: '/var/lib/mailman/archives/private/xyzxyz'

    I tried to fix the permissions myself, but the problem still exists.
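
    For the record, the direction I would try next (the path is the usual Debian/Plesk location; adjust if Mailman lives elsewhere): Mailman ships its own permission checker, which can repair the whole tree in place:

        # report permission problems across Mailman's directories...
        /usr/lib/mailman/bin/check_perms
        # ...and run again with -f to actually fix them
        /usr/lib/mailman/bin/check_perms -f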

    Read the article

  • When reversing a Google Analytics e-commerce transaction is the per-unit price positive or negative?

    - by Michael Glenn
    Google's own instructions for reversing an e-commerce transaction seem to contradict themselves regarding the unit price. The instructions state: "The item field has a positive per-unit price and a negative quantity." Yet the code sample has a negative per-unit price and a negative quantity:

        _gaq.push(['_addItem',
          '1234',         // order ID - necessary to associate item with transaction
          'DD44',         // SKU/code - required
          'T-Shirt',      // product name
          'Olive Medium', // category or variation
          '-11.99',       // unit price - required
          '-1'            // quantity - required
        ]);

    Which is correct?
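
    For what it's worth, the written instructions are the reading that makes arithmetic sense: item revenue is unit price times quantity, and two negatives would multiply to a positive amount instead of a refund. Under that reading the item line would be (a sketch, not an official sample):

        _gaq.push(['_addItem',
          '1234',         // order ID - must match the transaction being reversed
          'DD44',         // SKU/code
          'T-Shirt',      // product name
          'Olive Medium', // category or variation
          '11.99',        // unit price - positive, per the written instructions
          '-1'            // quantity - negative, so price times quantity is negative
        ]);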

    Read the article

  • Why do some bad websites rank well?

    - by BradB
    Consider the following scenario: you are pitching SEO/website optimisation to a prospective client. You explain the importance of great copy and content, how acquiring links (ethically) can improve ranking, why the quality of the HTML build matters (H1/H2 tags, W3C validation, etc.), and why keyword research is beneficial; you may drop in a few Google Webmaster Guidelines or Matt Cutts references to back up your claims, and rubbish the "black hat" approach as no longer effective for good measure. Your advice is ethical and, in the eyes of best practices, spot on. Then the client points out some of their long-established competitors on Google, and you see those competitor websites ranking in the top spots (1 to 3) for medium-to-highly competitive search phrases that your client wants to compete for. These websites totally contradict your ethical approach and pretty much violate every best practice previously noted. They even outperform other "white hat" competitors who are in accordance with the above guidelines. I experienced this today. One of these well-ranking websites had: about six microsites with more or less the same copy and a slightly varied layout; little or no textual content (I would almost say duplicate content across the sites, but there was so little of it that it could barely qualify as duplicate); all the content in Flash (with a music track that kicked in on each page load; not so much an SEO issue, but it helps paint the picture); keyword stuffing behind the Flash file, with a bunch of black text on a black background in the style of "keyword 1 keyword 2,keyword1,keyword 2,keyword 2 keyword 3" and so on, with the exact same keyword-stuffed combination present on every page of the website; a bunch of clearly self-made links from poor-quality forums and directories with little or no PageRank; and links exchanged across the microsites. How do you explain your way out of this when this hard evidence is sat in front of you, undermining your great pitch?

    Read the article

  • My blog not even ranking for exact title match [on hold]

    - by Akshay Hallur
    I have original, in-depth blog posts related to blogging and SEO. The domain had been dropped (expired) twice before my acquisition; I am the third owner, and have had it for 143 days. Blog posts are not ranking even for exact title matches; Google+ or LinkedIn shares show up instead of my content, and some posts are not even indexed. I am getting hardly 7 organic visits a day. Example 1: http://www.infoflame.com/offer-pdf-of-blog-posts-for-likes-and-shares/ (title: "Offer Readers PDF of Blog Posts for Their Likes and Shares") is not indexed at all. Example 2: http://www.infoflame.com/anchor-text-for-seo/ is indexed but does not come up for its exact title. My suspicion: the dropped domain, though it seems unlikely it was used for spam (the Wayback Machine shows 3 captures since 2004 across the 2 drops; I don't know whether there was email spam), and there are no manual actions in WMT, so no reconsideration request is possible. What's the reason for this? Should I wait? How can I tell Google that ownership has changed and the domain is now spam-free? Or should I de-index it and start a new blog? Thank you for any advice.

    Read the article

  • How do you promote your blog or website?

    - by zcourts
    I tend to get what I think are good ideas, and I go out and either build software/websites from scratch or use existing software/tools such as WordPress. But when I'm done, even though a few users say they really like it, I can't seem to get my apps out there, or rather get a large set of eyes on them. So I'm interested in knowing how others do it. I read people's stories of how they did this amazing thing and within 2-3 months were getting thousands or hundreds of thousands of users per month; it just seems to be all smoke and mirrors. So how have you done it? Or anyone you know who has? Does everyone throw lots of money into promotion, or is it something else?

    Read the article
