Search Results

Search found 103627 results on 4146 pages for 'google code'.


  • Is it possible to have an app in the Play Store that will modify/change the behavior of the Gmail for Android application?

    - by Benjamin Bakhshi
    For example, is it possible for Rapportive (which works on Gmail for web) to work on Gmail for Android? I understand that the UI would be vastly different, but the question remains: is it possible at all to overlay content or change the behavior of the Gmail app for Android? I have done some research but cannot find resources that would tell me whether this is possible on Android devices. This is a trivial question for Gmail for web, i.e. there are plenty of resources on making a Chrome extension, and services like Rapportive manipulate Gmail with apparent ease.

    Read the article

  • Custom trackpad mapping doesn't work for all applications

    - by picheto
    I found out I could invert my trackpad scrolling so that it works more like OS X "natural scrolling", which I liked better. To do that, I run the following command on startup: xinput set-button-map 11 1 2 3 5 4 7 6, where 11 is the touchpad id (found with xinput list and xinput test 11). This inverts the vertical and horizontal two-finger scrolling and works fine in Terminal, Chrome, Document Viewer, etc. However, it doesn't work in Nautilus and some applications such as the Update Manager, which keep the usual mapping. I'm running Ubuntu 12.04 x64. Why does this mapping work for some applications but not for others? I know there is software I can download to do the same, but this method seemed "cleaner". Thanks

    Read the article

  • Synchronise graphics and logic code

    - by Skeith
    I have a procedural approach to the game loop that runs various classes. It looks like this: continue any in-progress animations; check for user input; apply AI; move things; resolve events such as collisions; draw it all to screen. I have seen a lot of posts about how drawing should run separately, as fast as it can, possibly in another thread. My problem is that if the drawing runs as fast as it can, what happens if it tries to draw while I'm still applying the AI or resolving a collision? It could draw the wrong thing on screen. This seems to be a well established idea, so there must be an explanation to this problem, as I just can't get my head around it. The only solution I have is to update the screen so fast that any errors like that get refreshed before we see them, but that sounds hacky. So how does this work / how would you implement it so that they are in sync but running at different speeds?
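
    A common answer is a fixed-timestep loop: the simulation advances only in whole, fixed increments, and the renderer runs as often as it likes but only ever draws states that a completed update produced, interpolating between the last two of them. A minimal single-threaded sketch is below (C#; Update and Render are placeholders standing in for the steps listed in the question, so this is an illustration rather than a drop-in implementation):

      // Fixed-timestep sketch: logic runs at a fixed rate, rendering runs freely.
      using System.Diagnostics;

      class GameLoop
      {
          const double Step = 1.0 / 60.0;      // simulation advances at 60 Hz
          bool running = true;

          public void Run()
          {
              var clock = Stopwatch.StartNew();
              double previous = clock.Elapsed.TotalSeconds;
              double accumulator = 0.0;

              while (running)
              {
                  double now = clock.Elapsed.TotalSeconds;
                  accumulator += now - previous;
                  previous = now;

                  // Advance the simulation in whole steps only, so the renderer
                  // never observes a half-applied update.
                  while (accumulator >= Step)
                  {
                      Update(Step);            // input, AI, movement, collisions
                      accumulator -= Step;
                  }

                  // Draw the last completed state, blended towards the next one
                  // by the leftover fraction of a step.
                  Render(accumulator / Step);
              }
          }

          void Update(double dt) { /* advance game logic by dt seconds */ }
          void Render(double alpha) { /* draw state interpolated by alpha */ }
      }

    If rendering lives on its own thread, the same idea applies, but the update step publishes an immutable snapshot of the game state for the renderer to read instead of sharing mutable objects.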

    Read the article

  • "Network is unreachable" When pinging google, can connect to internal computers on debian VM

    - by musher
    Similar to this SU question: "Network is unreachable" when attempting to ping google, but internal addresses work. Actually, it's pretty much the same base issue. I went through that thread trying to find a solution and changed my resolv.conf.

    Before:

      domain [my work domain]
      search [my work domain]
      nameserver [my gateway]
      nameserver [my gateway2]

    After:

      domain [my work domain]
      search [my work domain]
      nameserver 8.8.8.8
      nameserver 8.8.4.4

    However, any time I reboot the computer the resolv.conf gets overwritten to the previous version (the 'before' above). The issues began after I installed virtualbox additions, X server and (specifically) LXDE. Cat of apt history.log:

      Start-Date: 2014-08-21 10:03:42
      Commandline: apt-get install virtualbox-guest-utils virtualbox-guest-dkms
      Install: x11-xkb-utils:amd64 (7.7+1, automatic), libxaw7:amd64 (1.0.12-2, automatic), xfonts-utils:$
      End-Date: 2014-08-21 10:03:56

      Start-Date: 2014-08-21 10:18:39
      Commandline: apt-get install lxde
      Install: desktop-base:amd64 (7.0.3, automatic), libgoa-1.0-0b:amd64 (3.12.4-1, automatic), lxmenu-d$
      End-Date: 2014-08-21 10:21:52

      Start-Date: 2014-08-21 10:26:40
      Commandline: apt-get upgrade
      Upgrade: libio-socket-ssl-perl:am

    ifconfig on the guest:

      root@Peridot:~# ifconfig
      eth0      Link encap:Ethernet  HWaddr 08:00:27:89:c9:20
                inet addr:172.31.2.102  Bcast:172.31.2.255  Mask:255.255.255.0
                inet6 addr: fe80::a00:27ff:fe89:c920/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:2281 errors:0 dropped:1 overruns:0 frame:0
                TX packets:463 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:266507 (260.2 KiB)  TX bytes:120554 (117.7 KiB)

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING  MTU:65536  Metric:1
                RX packets:4 errors:0 dropped:0 overruns:0 frame:0
                TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:240 (240.0 B)  TX bytes:240 (240.0 B)

    The adapter in VBox is a bridged adapter directly onto my ethernet connection, as are my other 2 VMs (which work). Other SU questions I've tried: "connect: Network is unreachable" in VirtualBox VM

    Read the article

  • Auto-scaling EC2 Servers and Updating Code

    - by jstats
    We've come to the point where we need to set up autoscaling for our web server, and I'm unsure how to go about the process of scaling servers and updating the existing code without remaking a new AMI and changing the autoscale config to use it. I've read a bit about people bundling the new code, uploading it to S3 and having new servers grab the bundle on boot, but that doesn't seem all that pleasant either. Currently the web app's files live in a git repo, and when we update the code, we push it to GitHub, ssh into the web app server and run a hook to bring down the latest code. So I was thinking that another option could be to just run that hook on an hourly or daily cron task. Unfortunately that doesn't cover every type of update (for example new blog posts' images and such, which aren't included in the git repo), but it's something. Could anyone provide some advice on what a common solution is, or anything as to why my proposed solution is a bad idea? Thanks all

    Read the article

  • How to conduct A/B split testing with AdSense?

    - by None
    Ok, so I have decided to A/B test my AdSense ads. I have run a few tests, but I don't know what conclusion to draw and how to keep track of things. Some specific questions: If I have 2 test units and 1 wins, I test the winner against a new unit, and so on. How do I find out if, say, the fifth one did better than the first one? How do I keep track of things? Do I treat the variables as independent of each other? Because they certainly are not: in real life, font size can affect CTR even if the colors are different. I can test blue color against red color, and then test Arial against Georgia, but how do I know which combination is the best? This would result in way too many test units. I tried Googling a lot, but I could not find answers to these questions.
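
    One way to decide whether a later unit genuinely beat an earlier one, rather than just fluctuating, is a two-proportion z-test on the click-through rates of the two units over the same period. A small sketch is below (C#; the impression and click counts are invented for illustration):

      // Compare two ad units' CTRs with a two-proportion z-test (example numbers).
      using System;

      class AbTest
      {
          static void Main()
          {
              double clicksA = 120, viewsA = 10000;   // unit A
              double clicksB = 150, viewsB = 10000;   // unit B

              double pA = clicksA / viewsA;
              double pB = clicksB / viewsB;
              double pooled = (clicksA + clicksB) / (viewsA + viewsB);
              double se = Math.Sqrt(pooled * (1 - pooled) * (1 / viewsA + 1 / viewsB));
              double z = (pB - pA) / se;

              // |z| > 1.96 is roughly 95% confidence that the two CTRs really differ.
              Console.WriteLine($"CTR A = {pA:P2}, CTR B = {pB:P2}, z = {z:F2}");
          }
      }

    For the interaction problem (font size, colour and font family influencing each other), chained pairwise tests cannot untangle the combinations; that is what multivariate testing is for, at the cost of needing far more impressions per combination.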

    Read the article

  • What should NOT be included in comments? (opinion on a dictum by the inventor of Forth)

    - by AKE
    The often provocative Chuck Moore (inventor of the Forth language) gave the following advice (paraphrasing): "Use comments sparingly. Programs are self-documenting, with a modicum of help from mnemonics. Comments should say WHAT the program is doing, not HOW." My question: Should comments say WHY the program is doing what it is doing? Update: In addition to the answers below, these two provide additional insight. 1: Beginner's guide to writing comments? 2: http://programmers.stackexchange.com/a/98609/62203
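
    A small made-up illustration of the distinction (C#): a WHAT/HOW comment restates what the code already says, while a WHY comment records intent or a constraint the code cannot express on its own, which is usually the part worth keeping.

      // Illustration only: the order type and the 30-day rule are invented.
      using System;
      using System.Linq;

      record Order(decimal Total, DateTime Placed);

      static class Billing
      {
          public static decimal BillableTotal(Order[] orders, DateTime now)
          {
              // HOW-style comment (restates the code, adds little):
              //   filter the orders and sum their totals.
              //
              // WHY-style comment (records what the code cannot say by itself):
              //   orders older than 30 days are excluded because they were already
              //   invoiced in the previous billing cycle.
              return orders
                  .Where(o => (now - o.Placed).TotalDays <= 30)
                  .Sum(o => o.Total);
          }
      }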

    Read the article

  • Is this the most effective, simple way to display a moving image? SDL2

    - by user36324
    I've looked around for tutorials on SDL2, but there aren't many, so I am curious: I was messing around, and is this an effective way to move an image? One problem is that it drags the image along the path it moves.

      #include "SDL.h"
      #include "SDL_image.h"

      int main(int argc, char* argv[])
      {
          bool exit = false;
          SDL_Init(SDL_INIT_EVERYTHING);
          SDL_Window *win = SDL_CreateWindow("Hello World!", 100, 100, 640, 480, SDL_WINDOW_SHOWN);
          SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
          SDL_Surface *png = IMG_Load("character.png");

          SDL_Rect src;
          src.x = 0;  src.y = 0;  src.w = 161;  src.h = 159;

          SDL_Rect dest;
          dest.x = 50;  dest.y = 50;  dest.w = 161;  dest.h = 159;

          SDL_Texture *tex = SDL_CreateTextureFromSurface(ren, png);
          SDL_FreeSurface(png);

          while (exit == false) {
              dest.x++;
              SDL_RenderClear(ren);
              SDL_RenderCopy(ren, tex, &src, &dest);
              SDL_RenderPresent(ren);
          }

          SDL_Delay(5000);
          SDL_DestroyTexture(tex);
          SDL_DestroyRenderer(ren);
          SDL_DestroyWindow(win);
          SDL_Quit();
      }

    Read the article

  • Google beats expectations in its third quarter; its stock tops $1,000 for the first time in its history

    Google beats expectations in its third quarter; its stock tops $1,000 for the first time in its history. Google has reported third-quarter results above expectations. For the first time in its history, the Google share price rose above $1,000 on Friday 18 October. As a reminder, the share was worth $100 in 2004. In addition, net profit rose by 36%, with the company recording $2.97 billion versus $2.18 billion...

    Read the article

  • Looking for a code Plugin !!

    - by GrumpyOldDBA
    SET ANSI_NULLS ON
    SET QUOTED_IDENTIFIER ON
    GO

    IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.TABLES
                   WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'MSPaymentForExtraction')
    BEGIN
        CREATE TABLE [dbo].[MSPaymentForExtraction]
        (
            [MSPaymentID] [int] NOT NULL IDENTITY(1, 1) NOT FOR REPLICATION
        ) ON [PRIMARY]
    END
    GO
    ...(read more)

    Read the article

  • Chrome not accepting international dead keys 14.04

    - by D3L
    Every other application on 14.04 accepts that I have selected US international with dead keys as my keyboard layout option, and accepts text input as it should. Chrome, however, fails to recognise what keyboard I have set in system settings and blindly uses "US keyboard". I'm looking for a solution to force Chrome to accept dead key input. AFAIK it used to work, but something has gone wrong recently with updates to Chrome.

    Read the article

  • Google is still crawling and indexing my old, dummy, test pages which now are 404 not found

    - by Ace
    I have set up my site with sample pages and data (lorem ipsum, etc.) and Google has crawled these pages. I deleted all these pages and added real content, but in Webmaster Tools I still get a lot of 404 errors from Google trying to crawl these pages. I have set them to "mark as resolved", but some pages still come back as 404. Furthermore, a lot of these sample pages are still listed when I do a search of my site on Google. How do I remove them? I think these irrelevant pages are hurting my rating. I actually wanted to erase all these pages and have my site indexed as a new one, but I read that's not possible? (I have submitted a sitemap and used "Fetch as Google.")

    Read the article

  • A dive into the internals of Google's Person Finder tool, an open-source API written in Python

    A dive into the internals of Google's Person Finder tool, an open-source API written in Python. Update of 14.03.2011, by Katleen. As mentioned in the previous news item, Google has launched its Person Finder tool for the people affected by the disaster that struck Japan on 11.03.2011 (victims and those close to them). The service has already been used before, for example during the Haiti and Christchurch disasters. It was in fact born as a company initiative, as a project on Google.org, within the Google Crisis Response programme launched there in January 2010 (Haiti earthquake), in response ...

    Read the article

  • Hopping/Tumbling Windows Could Introduce Latency.

    This is a pre-article to one I am going to be writing on adjusting an event's time and duration to satisfy business process requirements, but it is one that I think is really useful for understanding the way that Hopping/Tumbling windows work within StreamInsight. A Tumbling window is just a special shortcut version of a Hopping window where the width of the window is equal to the size of the hop. Here is the simplest and most often used definition for a Hopping window (you can find them all here):

      public static CepWindowStream<CepWindow<TPayload>> HoppingWindow<TPayload>(
          this CepStream<TPayload> source,
          TimeSpan windowSize,
          TimeSpan hopSize,
          WindowInputPolicy inputPolicy,
          HoppingWindowOutputPolicy outputPolicy
      )

    And here is the definition for a Tumbling window:

      public static CepWindowStream<CepWindow<TPayload>> TumblingWindow<TPayload>(
          this CepStream<TPayload> source,
          TimeSpan windowSize,
          WindowInputPolicy inputPolicy,
          HoppingWindowOutputPolicy outputPolicy
      )

    These methods allow you to group events into windows of a temporal size. It is a really useful and simple feature in StreamInsight. One of the downsides, though, is that a window cannot be flushed until an event in a following window occurs. This means that you will potentially never see some events, or see them with a delay. Let me explain. Remember that a stream is a potentially unbounded sequence of events. Events in StreamInsight are given a StartTime, and it is this StartTime that is used to calculate into which temporal window an event falls. It is best practice to assign a timestamp from the source system and not one from the system clock on the processing server. StreamInsight cannot know when a window is over: it cannot tell whether you have received all events in the window or whether some events have been delayed, which means that StreamInsight cannot flush the stream for you.

    Imagine you have events with the following timestamps:

      12:10:10 PM
      12:10:20 PM
      12:10:35 PM
      12:10:45 PM
      11:59:59 PM

    And imagine that you have defined a 1 minute Tumbling window over this stream using the following syntax:

      var HoppingStream = from shift in inputStream.TumblingWindow(TimeSpan.FromMinutes(1), HoppingWindowOutputPolicy.ClipToWindowEnd)
                          select new WindowCountPayload { CountInWindow = (Int32)shift.Count() };

    The events between 12:10:10 PM and 12:10:45 PM will not be seen until the event at 11:59:59 PM arrives. This could be a real problem if you need to react to windows promptly. This can always be worked around by using a different design pattern, but a lot of the examples I see assume there is a constant, very frequent stream of events, resulting in windows always being flushed. Further examples of using windowing in StreamInsight can be found here.

    Read the article

  • What future for the GWT graphical toolkit as Google progressively disengages? Come and share your views

    At the last Google IO (https://docs.google.com/presentation...1#slide=id.p18), GWT technical lead Ray Cromwell announced the start of Google's progressive disengagement from the development of the graphical toolkit. This disengagement is not surprising, since many shortcomings of the toolkit force developers to supplement their code with community projects (cf. the survey on graphical component libraries). It is surely...

    Read the article

  • How do you plan your asynchronous code?

    - by NullOrEmpty
    I created a library that is an invoker for a web service somewhere else. The library exposes asynchronous methods, since web service calls are a good candidate for that. At the beginning everything was just fine; I had methods with easy-to-understand operations in a CRUD fashion, since the library is a kind of repository. But then the business logic started to become complex, and some of the procedures involve chaining many of these asynchronous operations, sometimes with different paths depending on the result value, etc. Suddenly everything is very messy: stopping the execution at a breakpoint is not very helpful, and finding out what is going on, or where in the process timeline you have stopped, becomes a pain... Development becomes less quick, less agile, and catching those bugs that happen once in 1000 times becomes hell. From the technical point of view, a repository that exposes asynchronous methods looked like a good idea, because some persistence layers could have delays, and you can use the async approach to make the most of your hardware. But from the functional point of view, things became very complex, and considering those procedures where a dozen different calls were needed... I don't know the real value of the improvement. After reading about the TPL for a while, it looked like a good idea for managing tasks, but the moment you have to combine them and start to reuse existing functionality, things become very messy. I have had a good experience using it for very concrete scenarios, but a bad experience using it broadly. How do you work asynchronously? Do you use it always? Or just for long-running processes? Thanks.
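
    One pattern that keeps long chains of asynchronous calls debuggable is to write the whole business procedure as a single async method and let await do the composition, so branching reads top to bottom and a breakpoint on any line tells you where in the timeline you are. A sketch is below (C#; OrderRepository and its methods are hypothetical stand-ins, not part of the library described in the question):

      // Sketch only: the repository surface is invented so the example compiles.
      using System.Threading.Tasks;

      class Order { public bool RequiresApproval { get; set; } }

      class OrderRepository
      {
          public Task<Order> GetOrderAsync(int id) => Task.FromResult(new Order());
          public Task<bool> RequestApprovalAsync(Order o) => Task.FromResult(true);
          public Task SaveAsync(Order o) => Task.CompletedTask;
      }

      class OrderWorkflow
      {
          private readonly OrderRepository repo;

          public OrderWorkflow(OrderRepository repo) { this.repo = repo; }

          // The multi-step procedure, branches included, reads sequentially even
          // though every repository call is asynchronous under the hood.
          public async Task<string> ProcessAsync(int orderId)
          {
              Order order = await repo.GetOrderAsync(orderId);

              if (order.RequiresApproval)
              {
                  bool approved = await repo.RequestApprovalAsync(order);
                  if (!approved)
                      return "rejected";
              }

              await repo.SaveAsync(order);
              return "completed";
          }
      }

    Whether this is available depends on the codebase: async/await needs C# 5 and .NET 4.5 (or the async targeting pack), whereas raw Task.ContinueWith chains work on the original TPL but are exactly what tends to become unreadable.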

    Read the article

  • What happened to this type of naming convention?

    - by Smith
    I have read so many docs about naming conventions, most of which recommend both Pascal and camel casing. Well, I agree with this; it's OK. This might not be pleasing to some, but I am just trying to get your opinion on why you name your objects and classes in a certain way. What happened to this type of naming convention, and/or why is it bad? I want to name a structure, and I prefix it with "struct". My reason is that, with IntelliSense, I see all structures in one place, and anywhere I see the struct prefix, I know it's a struct: structPerson, structPosition. Another example is the enum, although I may not prefix it with "enum", but maybe with "enm": enmFruits, enmSex. Again, my reason is that in IntelliSense I see all my enumerations in one place. Because .NET has so many built-in data structures, I think this helps me do less searching. Note that I used .NET in this example, but I welcome language-agnostic answers.
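
    For comparison, here is a tiny illustration of the two styles side by side (the types are invented for the example). The prefixed names do cluster together in IntelliSense, while the usual .NET guideline is plain PascalCase with no kind prefix, on the grounds that the editor already indicates the kind:

      // Invented types, shown only to contrast the two naming styles.

      // Prefixed style from the question: kinds group together in completion lists.
      struct structPerson { public string Name; }
      enum enmFruits { Apple, Banana }

      // Framework-guideline style: PascalCase, the kind is not encoded in the name.
      struct Person { public string Name; }
      enum Fruit { Apple, Banana }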

    Read the article

  • What are some best practices for minimizing code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven to be very useful to more than one part of our applications. Most of these modules come with two different files, a normal source file and a min file. Seeing that the performance and speed of a page can be increased by minifying the files, we're looking into doing that to our pages as well. The problem we run into is that a lot of our normal pages (written in classic ASP) are a mix of HTML, ASP, Javascript, CSS, and include files. We have some pages that have their JS both in include files and in the page, depending on whether the function is only really used in that page or is used in many other pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them. There are also some functions in a page that it doesn't really make sense to move into one master file. What I have seen a few other people do online is use one master JS file, place all of their javascript into it, minify it, gzip it, and only use that on their production server. Again, this would be great, but I don't know if that fully works for our purposes. What I'm looking for is some direction to go with on this. I'm in favor of taking all of our JS, putting it in one include file, and just having it included in every page that is hit. However, not every page we have needs every bit of JS. So would it be worth combining and minifying the files into one master file and including it everywhere, or would it be better to minify all the other files and still include them on a need-to-use basis?

    Read the article

  • Scroll with Middle Click

    - by Anast
    Can I use the middle mouse button to scroll down faster, like in Windows? I mean, in Windows, when I press the middle button somewhere, a 4-way pointer appears on the screen, and when I move the mouse in any direction the window scrolls in that direction, even without holding the middle button. Is there anything like this in Ubuntu? How can I do this, especially in Chrome?

    Read the article

  • SEO: Google deploys a new algorithm that penalises sites using keyword domain names

    SEO: Google deploys a new algorithm that penalises sites using keyword domain names. In a message on his Twitter account, Matt Cutts of Google's anti-webspam team announced the deployment of a new algorithm on the Google search engine. This new algorithm aims to refine Google's search results by penalising sites with low-quality content even though the site's domain name matches the search terms. The company has indeed found that some websites resort to so-called "exact-match domains" that reuse search terms likely to...

    Read the article

  • Google opens the doors of its data centers to the public for the first time, via a virtual tour

    Google opens the doors of its data centers to the public for the first time, via a virtual tour. Google is opening up its large data centers, until now classified top secret, revealing the scale of the infrastructure that powers the constellation of services Google offers. A virtual tour, through photos of Google Inc.'s data centers (located in the United States, Finland and Belgium), is available on a site launched for the occasion. A more captivating tour of the North Carolina data center is provided by the "Street View" tool.

    Read the article

  • Can the use of another country's domain name influence search engine results?

    - by DontVoteMeDown
    I'm studying a way to create my company's domain based on its name. Consider that my company's name is Another Store and I want to register a domain like anothersto.re - this is just an example. That domain is strictly chosen by marketing. The thing is that my company is established in Brazil and our domain here is .br. The .re domain belongs to an island near France, so it has nothing to do with my country. If that domain is chosen, what does it imply for SEO? Will it have any influence on search engine results, considering that search engines take the user's region into account? This kind of domain use has become common among modern companies - and marketing strategies - and that is why I'm considering it.

    Read the article

  • What ever happened to the Defense Software Reuse System (DSRS)?

    - by emddudley
    I've been reading some papers from the early 90s about a US Department of Defense software reuse initiative called the Defense Software Reuse System (DSRS). The most recent mention of it I could find was in a paper from 2000 - A Survey of Software Reuse Repositories Defense Software Repository System (DSRS) The DSRS is an automated repository for storing and retrieving Reusable Software Assets (RSAs) [14]. The DSRS software now manages inventories of reusable assets at seven software reuse support centers (SRSCs). The DSRS serves as a central collection point for quality RSAs, and facilitates software reuse by offering developers the opportunity to match their requirements with existing software products. DSRS accounts are available for Government employees and contractor personnel currently supporting Government projects... ...The DoD software community is trying to change its software engineering model from its current software cycle to a process-driven, domain-specific, architecture-based, repository-assisted way of constructing software [15]. In this changing environment, the DSRS has the highest potential to become the DoD standard reuse repository because it is the only existing deployed, operational repository with multiple interoperable locations across DoD. Seven DSRS locations support nearly 1,000 users and list nearly 9,000 reusable assets. The DISA DSRS alone lists 3,880 reusable assets and has 400 user accounts... The far-term strategy of the DSRS is to support a virtual repository. These interconnected repositories will provide the ability to locate and share reusable components across domains and among the services. An effective and evolving DSRS is a central requirement to the success of the DoD software reuse initiative. Evolving DoD repository requirements demand that DISA continue to have an operational DSRS site to support testing in an actual repository operation and to support DoD users. The classification process for the DSRS is a basic technology for providing customer support [16]. This process is the first step in making reusable assets available for implementing the functional and technical migration strategies. ... [14] DSRS - Defense Technology for Adaptable, Reliable Systems URL: http://ssed1.ims.disa.mil/srp/dsrspage.html [15] STARS - Software Technology for Adaptable, Reliable Systems URL: http://www.stars.ballston.paramax.com/index.html [16] D. E. Perry and S. S. Popovitch, “Inquire: Predicate-based use and reuse,'' in Proceedings of the 8th Knowledge-Based Software Engineering Conference, pp. 144-151, September 1993. ... Is DSRS dead, and were there any post-mortem reports on it? Are there other more-recent US government initiatives or reports on software reuse?

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [code]

    - by Your DisplayName here!
    You can download the full solution here. The relevant parts in the sample are:

    Configuration
    I use the standard WIF configuration with passive redirect. This kicks in automatically whenever authorization fails in the application (e.g. when the user tries to get to an area that requires authentication or needs registration).

    Checking and transforming incoming claims
    In the claims authentication manager we have to deal with two situations: users that are authenticated but not registered, and registered (and authenticated) users. Registered users will have claims that come from the application domain; the claims of unregistered users come directly from ACS and get passed through. In both cases a claim for the unique user identifier will be generated. The high-level logic is as follows:

      public override IClaimsPrincipal Authenticate(
          string resourceName, IClaimsPrincipal incomingPrincipal)
      {
          // do nothing if anonymous request
          if (!incomingPrincipal.Identity.IsAuthenticated)
          {
              return base.Authenticate(resourceName, incomingPrincipal);
          }

          string uniqueId = GetUniqueId(incomingPrincipal);

          // check if user is registered
          RegisterModel data;
          if (Repository.TryGetRegisteredUser(uniqueId, out data))
          {
              return CreateRegisteredUserPrincipal(uniqueId, data);
          }

          // authenticated by ACS, but not registered
          // create unique id claim
          incomingPrincipal.Identities[0].Claims.Add(
              new Claim(Constants.ClaimTypes.Id, uniqueId));

          return incomingPrincipal;
      }

    User Registration
    The registration page is handled by a controller with the [Authorize] attribute. That means you need to authenticate before you can register (crazy, eh? ;). The controller then fetches some claims from the identity provider (if available) to pre-fill form fields. After successful registration, the user is stored in the local data store and a new session token gets issued. This effectively replaces the ACS claims with application-defined claims without requiring the user to sign in again.

    Authorization
    All pages that should be reachable only by registered users check for a special application-defined claim that only registered users have. You can wrap that nicely in a custom attribute in MVC:

      [RegisteredUsersOnly]
      public ActionResult Registered()
      {
          return View();
      }

    HTH

    Read the article
