Search Results

Search found 35007 results on 1401 pages for 'google font api'.

  • Two different domains as one user session

    - by Mathew Foscarini
    I have two websites that run as one service. Each domain offers articles from a different market, and at the top of each page the two domains are shown as menu options, so a user can switch from one to the other. See here: http://www.cgtag.com Each domain has a different Google Analytics account, and when a user switches domains Google counts it as a new session, listing the other domain as the "referral" for that new session. When the user switches back to the first domain, Google counts this as a returning visitor. This is messing up my reports: it shows returning-visitor numbers that are higher than reality, inflates hits on landing pages whenever a user switches, and lists the other domain as a referral site. I've found tips on how to list two domains as one website, but that results in merging the data. I want to keep the two domains separate so that I can track each one's performance, but I don't want to count domain changes as new sessions. Maybe something like treating the two domains as subdomains.
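
    A sketch of the classic (ga.js) cross-domain pattern that avoids the new-session count, assuming a tracker shared by both domains (the named-tracker syntax lets it coexist with each domain's own account; the property ID is a placeholder):

      var _gaq = _gaq || [];
      _gaq.push(['x._setAccount', 'UA-XXXXXXX-1']);   // shared tracker, named 'x'
      _gaq.push(['x._setDomainName', 'none']);
      _gaq.push(['x._setAllowLinker', true]);
      _gaq.push(['x._trackPageview']);

      // the domain-switch menu link passes the session cookies across:
      <a href="http://www.cgtag.com/"
         onclick="_gaq.push(['x._link', this.href]); return false;">cgtag</a>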

    Read the article

  • Help installing some Java and Flash

    - by william
    OK, so I have Git now and some other stuff, but I've looked around and I don't get very helpful info. The instructions I found say:

      1. Download the latest version of Flash Player from the Adobe website. Click on "Download the Latest Player," and choose ".deb for Ubuntu" from the drop-down list.
      2. Choose "Save File" from the pop-up window.
      3. Open a terminal window (found under Applications - Utilities).
      4. Change to the directory where your downloaded file was saved: cd Download
      5. Install the libcurl3 library: sudo apt-get install libcurl3
      6. Install the Flash Player: sudo dpkg --install install_flash_player__linux.deb (replace the blank with the latest version number)
      7. Restart Firefox.
      8. Check that Flash Player was installed: type about:plugins in a browser window, and look for the Flash Player MIME type.

    (Source: http://www.ehow.com/how_5068467_install-flash-player-ubuntu.html) What I don't get is how the heck I change the directory and all that stuff, which they say everyone knows.
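
    On the directory question specifically, a minimal terminal sketch (assuming the browser saved the file to ~/Downloads; check the real filename with ls before installing):

      cd ~/Downloads                       # change into the download folder
      ls install_flash_player*             # see the exact .deb filename
      sudo apt-get install libcurl3        # dependency from the guide
      sudo dpkg --install install_flash_player_11.2_linux.deb   # use the name ls printed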

    Read the article

  • White box testing with Google Test

    - by Daemin
    I've been trying out GoogleTest for my C++ hobby project, and I need to test the internals of a component (hence white-box testing). At my previous work we just made the test classes friends of the class being tested. But with Google Test that doesn't work, as each test is given its own unique class, derived from the fixture class if specified, and friendship doesn't transfer to derived classes. Initially I created a test proxy class that is friends with the tested class. It contains a pointer to an instance of the tested class and provides methods for the required, but hidden, members. This worked for a simple class, but now I'm up to testing a tree class with an internal private node class, which I need to access and mess with. I'm just wondering if anyone using the GoogleTest library has done any white-box testing and whether they have any hints or helpful constructs that would make this easier. OK, I've found the FRIEND_TEST macro defined in the documentation, as well as some hints on how to test private code in the advanced guide. But apart from having a huge number of friend declarations (i.e. one FRIEND_TEST for each test), is there an easier idiom to use, or should I abandon GoogleTest and move to a different test framework?
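
    For reference, a minimal sketch of the FRIEND_TEST route (the Tree and Node names are illustrative):

      #include "gtest/gtest_prod.h"   // defines FRIEND_TEST

      class Tree {
       public:
        void Insert(int value);
       private:
        struct Node { int value; Node* left; Node* right; };
        Node* root_;
        FRIEND_TEST(TreeTest, InsertCreatesRoot);  // one declaration per test
      };

      // in the test file:
      #include "gtest/gtest.h"
      TEST(TreeTest, InsertCreatesRoot) {
        Tree t;
        t.Insert(42);
        EXPECT_TRUE(t.root_ != NULL);  // legal: this generated test class is a friend
      }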

    Read the article

  • Here Is Official GMail App For iPhones & iPads

    - by Gopinath
    It's a great day for GMail users! A few hours ago Google pushed a new GMail web user interface to all users, and now they have released the GMail iOS app [iTunes link]. After several years' delay, Google has at last released a native GMail application for iPhone, iPad & iPod touch. In a blog post, Google says: We've combined your favorite features from the Gmail mobile web app and iOS into one app so you can be more productive on the go. It's designed to be fast, efficient and take full advantage of the touchscreen and notification capabilities of your device. The iOS app includes almost all the features found in the Android version of the GMail app - users can star, label, archive, access the Priority Inbox, and get push notifications for new mail alerts. It also includes standard touchscreen commands like pull down to refresh and swipe to scroll emails. Go and grab the GMail app from iTunes. This article, titled Here Is Official GMail App For iPhones & iPads, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Barcodes and Bugs

    - by Tim Dexter
    A great mail from Mike at Browning last week. He has been through the wringer getting his BIP barcoding sorted out, but he's now out of the woods. Here's the final result. By way of explanation, an excerpt from Mike's email:

      This is an example of the GS1-128 carton shipping labels we are now producing with BIP in our web application for our vendors who drop-ship products to our dealers. It produces 4 labels per printed page, in PDF format, on peel & stick label paper. Each label has a unique carton number, and a unique carton serial number in the SSCC-18 barcode. This example is for Cabelas (each customer has slightly different GS1-128 label format requirements - a custom template for each - a pain!). I am using custom Java encoders I wrote for the UPC and SSCC-18 barcodes, and a standard encoder (code128b) for the ShipTo zip barcode. Is there any way yet to get around that SUPER ANNOYING bug when opening the RTF template in MS Word, where it replaces my XSL code text in the barcode fields with gibberish??? Every time I open it I have to re-enter all the XSL code. Not only to be able to read & edit it, but also to get it to work in BIP (BIP doesn't like the gibberish if I upload the template that has it).

    Mike's last point, regarding the annoying bug in the template builder, is one that I have experienced occasionally. The development team have looked at it and found it to be an issue with MS Word and not a plugin problem. That's all well and good, but how can you get around it? Well, you can take advantage of the font mapping that BIP offers to get the barcodes into the PDF output. As many of you know, to get a barcode font to appear in the PDF output you need to employ the xdo.cfg file in the template builder config directory. You would normally have an entry such as this to map a barcode font so that it renders in the PDF output when testing from the template builder plugin:

      <font family="Code 128" style="normal" weight="normal">
        <truetype path="C:\windows\fonts\128R00.TTF" />
      </font>

    Mike's issue is only present when the form field is highlighted with a barcode font; the other fields in the template are fine. What you can do to get around the issue is bend the config entry so that you avoid using the barcode font in the template at all. Change the entry to something like:

      <font family="Calibri" style="normal" weight="normal">
        <truetype path="C:\windows\fonts\128R00.TTF" />
      </font>

    Note that we are mapping Calibri, a human-readable and non-'erroring' font in the template, to the Code 128 barcode font. Where you used to highlight the field with the barcode font in MS Word, you now use the Calibri font instead. At run time, BIP will look up the Calibri font mapping and drop in the Code 128 font. Of course, Calibri is just an example; pick a font that you are not going to use anywhere else in the layout.

    Read the article

  • Daily Blog Archives and Duplicate Content

    - by nemmy
    A few weeks back I realised that my blog software was creating daily post archives, which basically resulted in duplicate content, especially if I only had one post a day. The situation is something like this:

      www.sitename.com/blog/archives/2013/06/01 - daily archive for 1 June 2013
      www.sitename.com/blog/archives/2013/06/my-post-name.html - the post itself

    So here we have two pages that are basically identical, except the daily archive has some meaningless title like "Daily Archive for 1 June 2013". And I have no control over which page Google decides is the primary content. It's quite possible (and likely) that the daily archive could be the "primary" content and the actual post itself the "duplicate". Once I realised it was doing this, I modified the daily archive template to include <meta name="robots" content="noindex">. Here we are a few weeks later and I still see some daily archives coming up in Google search results. I realise some of those deep pages might not have been crawled yet, but I am worried that the original post (which should be the PRIMARY content) has been marked as duplicate content by Google. Now that I've noindexed the daily archives, I might end up with noindexed content AND the original articles still flagged as duplicates, and nothing will show up in search at all. Have I screwed myself here, or is there a way out?
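
    For what it's worth, the usual belt-and-braces markup looks something like this (a sketch using the URLs from the example above):

      <!-- on the daily archive template: stay out of the index, but let links be followed -->
      <meta name="robots" content="noindex,follow">

      <!-- on the post page: declare itself the canonical version -->
      <link rel="canonical" href="http://www.sitename.com/blog/archives/2013/06/my-post-name.html">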

    Read the article

  • Double vs Single Quotes in Chrome

    - by Rodrigo
    So when you want to embed Google Docs on a site, you are given this chunk of code:

      <iframe width='500' height='300' frameborder='0' src='https://docs.google.com/spreadsheet/pub?hl=en_US&hl=en_US&key=0AiV6Vq32hBZIdHZRN3EwWERLZHVUT25ST01LTGxubWc&output=html&widget=true'></iframe>

    This works fine on my site. If you edit the page, we run the new content through some filters to escape out stuff and make sure it is valid HTML. After that process, the snippet above gets converted to this:

      <iframe frameborder="0" height="300" src="https://docs.google.com/spreadsheet/pub?hl=en_US&amp;hl=en_US&amp;key=0AiV6Vq32hBZIdHZRN3EwWERLZHVUT25ST01LTGxubWc&amp;output=html&amp;widget=true" width="500"></iframe>

    This works in every browser except Chrome, which thinks I am running JS in the src. I narrowed it down to a combination of double quotes and escaped '&' symbols: if I revert either back to its original state, the iframe works. I work in Ruby, where ' and " have different behaviors. Is Chrome doing the same thing? Is there a way to turn that off?

    Read the article

  • Naming a predicate: "precondition" or "precondition_is_met"?

    - by RexE
    In my web app framework, each page can have a precondition that needs to be satisfied before it can be displayed to the user. For example, if user 1 and user 2 are playing a back-and-forth role-playing game, user 2 needs to wait for user 1 to finish his turn before he can take his own. Otherwise, the user is shown a waiting page. This is implemented with a predicate:

      def precondition(self):
          return user_1.completed_turn

    The simplest name for this API is precondition, but this leads to code like if precondition(): ..., which is not really obvious. It seems to me more accurate to call it precondition_is_met(), but I'm not sure about that either. Is there a best practice for naming methods like this?
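
    Reading both candidates at the call site makes the trade-off concrete (a sketch; render and render_waiting_page are illustrative):

      if page.precondition():          # a noun phrase: doesn't read as a question
          render(page)

      if page.precondition_is_met():   # reads naturally as yes/no
          render(page)
      else:
          render_waiting_page()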

    Read the article

  • Googlebot visit but no cache update - why?

    - by Mick
    I have made a new plain-vanilla HTML website and have been making modifications to it on an almost daily basis. The site is hosted by Hostmonster, and as part of their service they offer "awstats" to let you know assorted details of visitors to the site. One thing is puzzling me. According to awstats, a robot/spider calling itself "Googlebot" visited my site as recently as today (28th June 2011), but when I find my site on Google (e.g. by searching for "full reserve banking") the cache is dated only 5th June. I always thought that a visit from the Google robot was synonymous with a cache update. Am I wrong? Or have I accidentally put something in the site telling Google that nothing has been updated? EDIT: It seems a moderator has removed the name of my website, so there is now no chance that anyone could check whether I had made some error on my site :-( ... but anyway, in answer to paulmorriss' question, here is what awstats was telling me:

    Read the article

  • iOS: Versioned static frameworks vs Git Submodules and included code

    - by drekka
    For the last couple of years I've been building static frameworks of common APIs for my iOS projects. I can build a universal binary containing all the architectures (i386, armv6, armv7) and wrap it up in a .framework directory structure. I then store this in a directory based on the version of the framework, for example ..../myAPI/v0.1.0/myAPI.framework. Once I have this framework I can easily add it to a project, and if I want to advance the version, I merely change the framework search paths to the later version. This works, but the approach is very similar to what I would use in the Java world. Recently I've been reading about using Git submodules and static framework subprojects in Xcode 4. I'm wondering if my current approach is something I should consider retiring, and what the pros/cons of the new approach are. I'm wary of just including code, because I've already had issues in a work project which had (effectively) multiple versions of a third-party API. Any opinions?

    Read the article

  • Amazon Affiliate search using a movie title

    - by Matt Walker
    I am currently working on a movie trailer site. I have over 300 movies and I do not want to add an Amazon affiliate link to each one individually. Does Amazon offer any sort of API that will allow me to use a movie title to search for a DVD on Amazon? For example, for the movie Skyfall, the affiliate link would be something like amazon.com/search/dvd/skyfall/affiliateid (I just made that link up, as I don't know how their system works; I just want it to do a search on the movie title). Thanks in advance for any help you can give me!
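
    Amazon's Product Advertising API can do this: its ItemSearch operation takes a search index and a title. A sketch of the request shape (credentials are placeholders; a real call must also carry a Timestamp and an HMAC-SHA256 Signature):

      http://webservices.amazon.com/onca/xml
        ?Service=AWSECommerceService
        &Operation=ItemSearch
        &SearchIndex=DVD
        &Title=Skyfall
        &AssociateTag=YOUR-ASSOCIATE-TAG
        &AWSAccessKeyId=YOUR-ACCESS-KEY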

    Read the article

  • asp.net website development component / APIs

    - by Haseeb Asif
    I have been assigned a new website project in my organization, where my role demands that I finalize all the tools/technologies/controls/APIs etc. The website will be something like an online store, where every user has his own store as a subdomain, e.g. user1.myprojectdomain.com. I have been researching a number of things to use and need your suggestions on the following:

      - ASP.NET Web Forms vs ASP.NET MVC: preferring Web Forms with an N-tier architecture, due to rapid application development, the large set of toolbox controls, and mainly our team's skill set
      - Error logging: ELMAH seems to be a nice library
      - Forums: Forums or YetAnotherForums
      - Online live chat: still looking for something (working on SignalR)
      - Signups with social media: Engage by Janrain

    And I need help with how to manage the subdomains. Do we create a virtual directory/application for every user in IIS at runtime, or can we do something else?
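
    On the subdomain point, the common pattern is a wildcard DNS record (*.myprojectdomain.com) plus a single IIS site bound to it, with the application resolving the store from the host header at runtime instead of creating a virtual directory per user. A minimal sketch (ASP.NET; names are illustrative):

      using System.Web;

      // e.g. host = "user1.myprojectdomain.com"
      string host  = HttpContext.Current.Request.Url.Host;
      string store = host.Split('.')[0];   // "user1"
      // look up this store's data and render it; no per-user IIS application needed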

    Read the article

  • Track those visitors who come through a particular link

    - by busybee235
    I want to track visitors who come to my site through a particular link. For example, for those visitors coming from http://www.domain.com/abc123, I want their pageviews, time on site, bounce rate, referrer, pages per visit, etc. I'd then store that info in my database on a daily basis. Can anyone suggest a service, API, or piece of software for this? I have used Google Analytics utm tags, which work well for my requirement, but I don't know how many links I can track with them. I have around 80-100 links to track a day, and the number of links will be increasing. I couldn't find any documentation regarding a limit on campaigns in GA. If there's no such limit, I can start this project. Thanks
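
    For illustration, a campaign-tagged version of such a link only needs distinct utm values per link (the values here are placeholders):

      http://www.domain.com/abc123?utm_source=partner-abc123&utm_medium=referral&utm_campaign=tracked-links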

    Read the article

  • Is YQL still used?

    - by Andrea
    A few years ago, following the explosion of custom web APIs from various services around the world, Yahoo! launched the YQL service, which lets you query data from a variety of different providers with a unified Yahoo Query Language. Having worked with the APIs of Twitter, Instagram, Facebook, Google Maps, YouTube and more, I very much like the idea. However, I do not see it mentioned often, and of course this is one of those efforts that makes sense only when enough people follow it and expose their APIs through this layer. Are there any statistics on usage, or declarations from Yahoo about the destiny of YQL? I would also be interested in hearing your experience if you have tried it directly.
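
    For anyone who hasn't seen it, a typical statement in the YQL console looks like this (flickr.photos.search is one of the stock tables):

      select * from flickr.photos.search where text = "penguin" limit 10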

    Read the article

  • Should functions of a C library always expect a string's length?

    - by Benjamin Kloster
    I'm currently working on a library written in C. Many functions of this library expect a string as char* or const char* in their arguments. I started out with those functions always expecting the string's length as a size_t so that null-termination wasn't required. However, when writing tests, this resulted in frequent use of strlen(), like so:

      const char* string = "Ugh, strlen is tedious";
      libFunction(string, strlen(string));

    Trusting the user to pass properly terminated strings would lead to less safe, but more concise and (in my opinion) more readable code:

      libFunction("I hope there's a null-terminator there!");

    So, what's the sensible practice here? Make the API more complicated to use but force the user to think about their input, or document the requirement for a null-terminated string and trust the caller?
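
    One compromise is to keep the explicit-length function as the primitive and layer a null-terminated convenience wrapper on top, letting the caller choose per call site. A minimal sketch (libFunction_n is a hypothetical name for the length-taking variant):

      #include <stddef.h>
      #include <string.h>

      void libFunction_n(const char *s, size_t len);   /* primitive: no terminator required */

      static inline void libFunction(const char *s)    /* convenience: terminator required */
      {
          libFunction_n(s, strlen(s));
      }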

    Read the article

  • page rank 0 penalty

    - by mark
    I have had a WordPress blog and a www website on the same domain for about one year; together they come to about 170 pages. The PageRank is still 0. I understand that PageRank 0 is a penalty for duplicate content. The pages are indexed in Google, but there is still no PageRank. In Google Webmaster Tools there is no indication of any problem. I asked for reconsideration of both the blog and the website a month ago; Google accepted the reconsideration request, but it did not change anything. Other pages of similar size and similar audience earn PR 4-6. Is there something I can do in order to get a fair PageRank? A coworker told me that it might be the case that a link farm is using the content and I can do nothing about it. Is there a reliable way to check for something like that? I do not like to give up so quickly. Is there a chance to fix this by, for example, moving to another domain?

    Read the article

  • How to generate Visa checkout token? [on hold]

    - by Muhammad Junaid
    I am in the process of creating a Visa Checkout plugin but am stuck on generating the token. Here are the token requirements:

      Format: alphanumeric; maximum 100 characters, in the form x:UNIX_UTC_Timestamp:SHA256_hash, where UNIX_UTC_Timestamp is a UNIX Epoch timestamp and SHA256_hash is an SHA-256 hash of the following unseparated items:
        - your shared secret
        - the timestamp from the transaction, exactly the same as UNIX_UTC_Timestamp
        - the resource path (API name)
        - this HTTPS request's query string

      Note: The query string includes one or more parameters in name-value pair format, whose names are separated from values by equal signs (=); an empty value may be omitted, but the name and equal sign must be present. The initial question mark (?) is not included.
      Note: All parameters must be present. The parameters must be in lexicographic sort order (UTF-8, uppercase hex characters), separated from each other by an ampersand (&).
      Note: The query string must be URL-encoded (excepting the following characters, per RFC 3986: hyphen (-), period (.), underscore (_), and tilde (~)).

    You can find more by searching Google for "visa checkout developer updating 1 px image".
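
    Putting the listed pieces together, a minimal sketch (Python; assumes the shared secret, resource path, and the already-sorted, URL-encoded query string are in hand):

      import hashlib
      import time

      def build_token(shared_secret, resource_path, query_string):
          ts = str(int(time.time()))  # UNIX UTC timestamp
          # hash the unseparated items in the documented order
          payload = shared_secret + ts + resource_path + query_string
          digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
          return "x:%s:%s" % (ts, digest)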

    Read the article

  • Tracking subdomains in the same profile as the main domain

    - by Osvaldo
    I have a site, let's call it http://www.example.com, with a non-Universal Google Analytics account. Now we have to add new functionality on a subdomain, https://subdomain.example.com, as a micro-site. On that subdomain the URLs will be something like https://subdomain.example.com?param1=foo&param2=bar. We can't change the requirements, as the main site and the mini-site use different CMSes/applications; this is strictly a Google Analytics question. We need to count pageviews and events that happen on that subdomain (with URLs like https://subdomain.example.com?param1=foo&param2=bar) as belonging to the main domain. So pageviews and events on https://subdomain.example.com?param1=foo&param2=bar need to be recorded as if they happened on http://www.example.com/path/to/whatever/I/want. Fortunately we have full control over JavaScript on both the main domain site and the subdomain site. How can we make this work? Do we need to change the tracking code on both the main domain and the subdomain? Do we need to reconfigure Google Analytics? Please note again that we do not want to create a new view for the subdomain: both the mini-site and the main site should be in the same account, property and view.
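
    With classic (non-Universal) tracking, the usual recipe is to share the cookie across subdomains and have the subdomain report virtual pageviews under the main site's paths. A sketch for the subdomain's pages (the property ID and path are placeholders):

      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);                 // same property as www
      _gaq.push(['_setDomainName', '.example.com']);              // one cookie for all subdomains
      _gaq.push(['_trackPageview', '/path/to/whatever/I/want']);  // virtual path, not the real URL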

    Read the article

  • 301 redirect: Is this good or bad for 2 domains?

    - by Tim
    Since I couldn't find an appropriate answer to my specific question, I wanted to ask you. I've read a lot about the 301 redirect for moving pages and so on. A customer of mine registered a new domain last year for better search results (he included his main keyword in the domain; before, he had only a domain with his business name, which said nothing about what he does). I told him that he should do a 301 redirect so that he doesn't lose his position in Google, and so that all customers coming from the old domain are redirected to the new domain. After about one year, in which his site had a good amount of traffic, Google's results for his keywords are getting worse. Since he didn't maintain his website (no new content, bad content on all pages and so on), I assumed this was the problem. He gave his website to another company which also makes websites. They told him that this 301 redirect is very bad for his website. They removed it and also updated his content and the template, so now he has the same meta keywords on every page (instead of the specific ones I put there before). He also removed the canonical tag which I had placed there to ensure no duplicate content. What I am now afraid of is that without this redirect Google will now find duplicate content and kick him out of the index, which would be a nightmare, since most of his customers come through his website. I need verification of the fact that the 301 isn't bad but is in fact the correct way of working with 2 domains, if possible with good sources I can point out to him, since he doesn't want to hear anything about this. If someone also has a few words about the keywords and the canonical tag, I would really appreciate it! Thank you very much!
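
    For reference, the redirect described here is normally a one-time server-side rule, e.g. in an Apache .htaccess (a sketch; the domain names are placeholders):

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?old-business-name\.com$ [NC]
      RewriteRule ^(.*)$ http://www.new-keyword-domain.com/$1 [R=301,L]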

    Read the article

  • When should code favour optimization over readability and ease-of-use?

    - by jmlane
    I am in the process of designing a small library, where one of my design goals is that the API should be as close to the domain language as possible. While working on the design, I've noticed that there are some cases in the code where a more intuitive, readable attribute/method call requires some functionally unnecessary encapsulation. Since the final product will not necessarily require high performance, I am unconcerned about making the decision to favour ease-of-use in my current project over the most efficient implementation of the code in question. I know not to assume readability and ease-of-use are paramount in all expected use-cases, such as when performance is required. I would like to know if there are more general reasons that argue for a design preferring more efficient implementations—even if only marginally so?

    Read the article

  • Tracking form abandonment

    - by Alec Sanger
    I'm looking for a decent way to track form abandonment. Ideally, I would like to see how many people start filling out a form but do not complete it, as well as the last field that was filled out. The website is a fairly large WordPress site with quite a few forms. Some of these forms register people for events, some take donations, some are information requests. My first attempt at this was a generic jQuery handler that bound functions to all forms on the site: when a form element was blurred, I triggered a Google Analytics event with the name of the form, the name of the field, and whether or not the field was filled. I expected to be able to go to the Event Flow section in Google Analytics and see the flow of these form events, but since there are so many forms and other events occurring on the website, Google wouldn't let me break them out very well. The other issue is that Quform doesn't name its fields anything relevant, and it doesn't look like we can name them ourselves. This results in a lot of ugly field names that don't mean anything without cross-referencing the actual form. Does anybody have any suggestions on how I can achieve more usable form-abandonment metrics in a scenario like this?
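
    For comparison, the generic handler described above looks roughly like this (a sketch with classic GA events; the category/label scheme is illustrative):

      $('form :input').on('blur', function () {
        var form  = $(this).closest('form').attr('id') || 'unnamed-form';
        var field = this.name || this.id || 'unnamed-field';
        var state = $(this).val() ? 'filled' : 'empty';
        _gaq.push(['_trackEvent', 'Form: ' + form, field, state]);
      });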

    Read the article

  • Credit Card Payment Processing which APIs do you use?

    - by user3330840
    It's for a point-of-sale terminal where the customer will bring a physical credit card and it will be swiped through the terminal. The business has merchant accounts with some banks. So, how do I start accepting credit cards in my app? The cards that need to be accepted include Visa, MasterCard, Amex and Discover. Which APIs do I need to use? The programming language doesn't matter; it can be Java, C#, C++, Python or anything else. Will there be a single API, or multiple APIs that need to be integrated? (I know a little about PCI compliance and security encryption.)

    Read the article

  • SEO Blog Indexing: Dot WordPress Versus a Registered Domain?

    - by rumspringa00
    I've used WordPress for a few of my clients' sites, mostly small businesses and e-commerce sites. I have found through Google Analytics, as well as the All in One Webmaster plugin, that when it comes to social media, using WordPress is a surefire way of getting your site indexed by Google and occasionally Bing and Yahoo. Since I am a heavy WP user, I'd like to contribute by registering a dot-WordPress domain for my portfolio. When using a WP installation together with a WP domain, e.g. myportfolio.wordpress.com, will the site be more or less likely to be indexed than a generic myportfolio.com domain? I've seen mixed opinions: some people seem to favor a WP domain for URL output, while others say it's a moot point and that Google will not favor a WP domain over a dot-com domain as long as your meta tags are updated and your content is keyword-optimized. I tend to disagree, and believe a WP domain would be more likely to be indexed and to output more URLs than an individual, laconic domain like myportfolio.com. Am I wrong? Thanks in advance!

    Read the article

  • Where is the source of domain search? [closed]

    - by All
    There are several websites providing a search service for free domains (websites, not registrars). I wonder what the source of these searches is. Such a search cannot be based on a local database, as it needs live data on which domains are available. The only way I can see is to fetch every query from the original NIC (e.g. nic.com), but I was unable to find an API for this service. How do I find a source against which to write a script for domain searching?
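
    The live source most checkers ultimately fall back on is the WHOIS protocol itself (RFC 3912): a plain-text query over TCP port 43 to the registry's server. A minimal sketch (whois.verisign-grs.com serves .com; other TLDs use other servers):

      import socket

      def whois(domain, server="whois.verisign-grs.com"):
          with socket.create_connection((server, 43), timeout=10) as s:
              s.sendall((domain + "\r\n").encode("ascii"))
              chunks = []
              while True:
                  data = s.recv(4096)
                  if not data:
                      break
                  chunks.append(data)
          return b"".join(chunks).decode("utf-8", "replace")

      print(whois("example.com"))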

    Read the article
