Search Results

Search found 10556 results on 423 pages for 'practical approach'.

  • Best option for using the GData APIs on Android?

    - by nyenyec
    What's the least painful and most size-efficient way to use the Google Data APIs in an Android application? After a few quick searches it seems that there is an android-gdata project on Google Code, which appears to be the work of a single author. I didn't find any documentation for it and don't even know whether it's production-ready yet. An older option, the com.google.wireless.gdata package, seems to have been removed from the SDK; it's still available in the Git repository. Before I invest too much time in either approach, I'd like to know which is the best supported and least painful.

    Read the article

  • Persisting a trie to a file - C

    - by Appu
    I have a trie which I am using to do some string processing, and a simple compiler which generates the trie from some data. Once generated, my trie won't change at run time. I am looking for an approach where I can persist the trie in a file and load it back efficiently. I have looked at SQLite to understand how it persists B-trees, but its file format looks a bit advanced and I may not need all of that. It'd be helpful if someone could provide some ideas on how to persist and read back the trie. I am programming in C.
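
    A minimal sketch of one way to do this in C, not from the original question: since the trie is read-only at run time, write the nodes out in preorder, storing for each node a terminal flag and a bitmap of which children exist, then rebuild in the same order on load. The 26-way child array and node layout below are assumptions, and error handling is omitted.

        #include <stdio.h>
        #include <stdlib.h>

        struct Trie {
            int is_word;               /* terminal flag */
            struct Trie *child[26];    /* 'a'..'z' */
        };

        /* Write one node, then its existing children, in preorder. */
        static void trie_save(const struct Trie *n, FILE *fp)
        {
            unsigned char flag = (unsigned char)n->is_word;
            unsigned int bitmap = 0;
            for (int i = 0; i < 26; i++)
                if (n->child[i]) bitmap |= 1u << i;
            fwrite(&flag, sizeof flag, 1, fp);
            fwrite(&bitmap, sizeof bitmap, 1, fp);
            for (int i = 0; i < 26; i++)
                if (n->child[i]) trie_save(n->child[i], fp);
        }

        /* Read nodes back in the same preorder. */
        static struct Trie *trie_load(FILE *fp)
        {
            unsigned char flag;
            unsigned int bitmap;
            if (fread(&flag, sizeof flag, 1, fp) != 1) return NULL;
            if (fread(&bitmap, sizeof bitmap, 1, fp) != 1) return NULL;
            struct Trie *n = calloc(1, sizeof *n);
            n->is_word = flag;
            for (int i = 0; i < 26; i++)
                if (bitmap & (1u << i)) n->child[i] = trie_load(fp);
            return n;
        }

    The format is not endian-portable; if that matters, write the bitmap byte by byte. Since the trie never changes, another option is to flatten the nodes into one array with integer child indices, so saving is a single fwrite and loading is a single fread (or an mmap).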

    Read the article

  • What is the best practice for writing bookmarklets

    - by Ritesh M Nayak
    I am writing some bookmarklets for a project that I am currently working on and I was wondering what the best practice for writing a bookmarklet is. I did some looking around and this is what I came up with: javascript:void((function() { var%20e=document.createElement('script'); e.setAttribute('type','text/javascript'); e.setAttribute('src','http://someserver.com/bookmarkletcode.js'); document.body.appendChild(e) })()) I felt this is nice because the code can always be changed (since it's requested every time) and it still acts like a bookmarklet. Are there any problems with this approach? Browser incompatibility, etc.? What is the best practice for this?

    Read the article

  • How to imitate focus on a div element?

    - by Alexis Abril
    The scenario involves imitating a drop-down list. Instead of the standard select element, we're using a set of custom images in combination with a few CSS styles and jQuery behaviors. All in all, we have a drop-down list that works just fine. The question revolves around the drop-down list losing focus. A normal select element losing focus would automatically close the list; however, since we are using a set of divs, it's not possible to set or lose focus (and subsequently hide the list). We've been able to imitate normal drop-down behavior by hiding a text input behind the div and setting the focus to it dynamically. For example: the user clicks the drop-down list, the content list is displayed and the hidden text input is given focus, and an event bound to the text input losing focus hides the content list. I'm curious if the community has a better approach to hiding a div when a user clicks off, presses Escape, etc.
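
    One commonly suggested alternative, sketched here with jQuery and hypothetical element IDs: make the div itself focusable by giving it a tabindex, then hide the list on blur or on Escape, so the hidden text input is no longer needed.

        // Assumes markup like: <div id="dropdown" tabindex="-1"> ... </div>
        $('#trigger').click(function () {
            $('#dropdown').show().focus();   // an element with a tabindex can take focus
        });

        $('#dropdown').blur(function () {
            $(this).hide();                  // clicking anywhere else blurs the div
        }).keydown(function (e) {
            if (e.keyCode === 27) {          // Escape
                $(this).hide();
            }
        });

    One caveat to test for: in some browsers, clicking a child element inside the div can momentarily blur it, so the blur handler may need a short delay or a check of which element received focus.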

    Read the article

  • github url style

    - by Alex Le
    Hi all, I want users within my website to have their own URL like http://mysite.com/username (similar to GitHub, e.g. my account is http://github.com/sr3d). This would help with SEO since every profile is under the same domain, as opposed to the sub-domain approach. My site is running on Rails and Nginx/Passenger. Currently I have a solution using a bunch of rewrites in the nginx.conf file and hard-coded controller names (with namespace support as well). I can share the nginx.conf here if you want to take a look. I wanted to know if there's a better way of making the URL pretty like that. (If you can suggest a better place to post this question then please let me know.) Cheers, Alex
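
    A sketch of how this is often handled at the Rails routing layer instead of in nginx, assuming Rails 2.x route syntax and a hypothetical UsersController; usernames that collide with existing top-level paths still have to be reserved at signup:

        # config/routes.rb
        ActionController::Routing::Routes.draw do |map|
          # ... all the regular, specific routes first ...

          # Catch-all vanity route, kept last so it only matches what nothing else claimed.
          map.profile ':username',
                      :controller => 'users', :action => 'show',
                      :requirements => { :username => /[a-z0-9\-]+/i }
        end

        # app/controllers/users_controller.rb
        class UsersController < ApplicationController
          def show
            @user = User.find_by_login!(params[:username])
          end
        end

    With this in place nginx only needs to proxy to the app as usual, and no per-user rewrites are required.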

    Read the article

  • How to deploy and register a VSPackage supporting multiple versions of Visual Studio (2005, 2008, 2010)?

    - by Steve Cadwallader
    I have an open source VSPackage that I would like to release with support for Visual Studio 2005, Visual Studio 2008, and Visual Studio 2010. I'm trying to figure out how to create the installer and how to perform the package registration with each version of Visual Studio. The deployment research I've done indicates my best bet for an installer is a VSIX inside an MSI. The registration research I've done is a lot less clear: VSPackage registration seems to differ for every version (VS2005 uses regpkg, VS2008 uses pkgdef, VS2010 uses VSIX). Can anyone share their experiences and/or point me towards any information about the best approach for targeting multiple versions of Visual Studio? I'm looking for the easiest implementation, preferably kept in a single installer if reasonably feasible. Any help would be greatly appreciated!

    Read the article

  • Using RouteExistingFiles to block access to existing files even if no route exists

    - by Michael Stum
    In ASP.NET MVC 2, I can use routes.RouteExistingFiles = true; to send all requests through the routing system, even if they exist on the file system. Usually, this ends up hitting the "{controller}/{action}/{id}" route and throws an exception because the controller cannot be found. I do not want to use that route though (I have only a few URLs and they are specifically mapped), yet I would still like to prevent access to the file system. Basically I want to whitelist pages using IgnoreRoute. Is there a built-in way to do this? My current approach is to still have the "{controller}/{action}/{id}" route and generate a 404 when it is hit, but I'm wondering whether something like this is already built in.
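
    For reference, a sketch of the whitelist-plus-catch-all arrangement the question describes, assuming ASP.NET MVC 2 and hypothetical Home and Errors controllers; it is spelled out here only to show where the pieces go, not as a built-in feature:

        using System.Web.Mvc;
        using System.Web.Routing;

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.RouteExistingFiles = true;            // run routing even for files that exist on disk
                routes.IgnoreRoute("content/{*path}");       // whitelist anything that must stay directly reachable

                routes.MapRoute("Home", "", new { controller = "Home", action = "Index" });
                // ... the few specifically mapped routes ...

                // Everything else, including requests for existing files, falls through to a 404.
                routes.MapRoute("CatchAll", "{*anything}",
                                new { controller = "Errors", action = "NotFound" });
            }
        }

        public class ErrorsController : Controller
        {
            public ActionResult NotFound()
            {
                Response.StatusCode = 404;                   // plain 404 without touching the file system
                return Content("Not found");
            }
        }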

    Read the article

  • Best solution for a comment table for multiple content types

    - by KRTac
    I'm currently designing a comments table for a site I'm building. Users will be able to upload images, link videos and add audio files to their profile. Each of these types of content must be commentable. Now I'm wondering what the best approach to this is. My current options are: 1. have one big comments table and a link table for every content type (comments_videos, ...) holding comment_id and the content item's id; or 2. have comments separated by the type of content they're for, so each type of content would have its own comments table holding the comments for that type.

    Read the article

  • Like Html.RenderAction() but without reinstantiating the controller object

    - by Jørn Schou-Rode
    I like to use the RenderAction extension method on the HtmlHelper object to render sidebars and the like in pages, as it allows me to keep the data access code for each such part in separate methods on the controller. Using an abstract controller base, I can define a default "sidebar strategy", which can then be refined by overriding the method in a concrete controller when needed. The only "problem" I have with this approach is that RenderAction is built in a way where it always creates a new instance of the controller class, even when rendering actions from the controller already in action. Some of my controllers do some data lookup in their Initialize method, and using the RenderAction method in the view causes this to occur several times in the same request. Is there some alternative to RenderAction which will reuse the controller object if the action method to be invoked is on the same controller class as the "parent" action?

    Read the article

  • Wisdom of merging 100s of Oracle instances into one instance

    - by hoytster
    Our application runs on the web, is mostly an inquiry tool, and does some transactions. We host the Oracle database. The app has always had a different instance of Oracle for each customer. A customer is a company which pays us to provide our service to the company's employees, typically 10,000-25,000 employees per customer. We do a major release every few years, and migrating to that new release is challenging: we might have a team at the customer site for a couple of weeks, explaining new functionality and setting up the driving data to suit that customer. We're considering going multi-client, putting all our customers into a single shared Oracle 11g instance on a big honkin' Windows Server 2008 server, in order to reduce costs. I'm wondering if that's advisable.

    There are some advantages to having separate instances for each customer. Tell me if these are bogus, please. In my rough guess about decreasing importance:

    1. Our customers MyCorp and YourCo can be migrated separately when breaking changes are made to the schema. (With multi-client, we'd be migrating 300+ customers overnight!?!)
    2. MyCorp's data can be easily backed up and (!!!) restored, without affecting other customers.
    3. MyCorp's data is securely separated from their competitor YourCo's data, without depending on developers to get the code right and/or DBAs getting the configuration right.
    4. Performance is better because the database is smaller (5,000 vs 2,000,000 rows in ~50 tables).
    5. If MyCorp's offices are (mostly) in just one region, then MyCorp's instance can be geographically co-located there, so network lag doesn't hurt performance. We can provide better service to global clients, for the same reason.
    6. If MyCorp wants to take their database in-house, then we can easily export their instance to get MyCorp their data.
    7. Load-balancing is easier because instances can be placed on different servers (this is with a web farm).
    8. When a DEV or QA instance is needed, it's easier to clone the real instance and anonymize the data, because there's much less data.
    9. Because they're small enough, developers can have their own instance running locally, so they can work on code while waiting at the airport and while in-flight, without fighting VPN hassles.

    Q1: What are other advantages of separate instances?

    We are contemplating changing the database schema and merging all of our customers into one Oracle instance, running on one hefty server. Here are the advantages of the multi-client instance approach, most important first (my WAG). Please snipe if these are bogus:

    1. Less work for the DBAs, since they only need to maintain one instance instead of hundreds. Less DBA work translates to cheaper, our main motive for this change.
    2. With just one instance, the DBAs can do a better job of optimizing performance. They'll have time to add appropriate indexes and review our SQL.
    3. It will be easier for developers to debug and enhance the application, because there is only one schema and one app (there might be dozens of schema versions if there are hundreds of instances, with a different version of the app for each version of the schema). This reduces costs too. The alternative is having to start every debug session with (1) what version is this customer running and (2) let's struggle to recreate the corresponding development environment, code and database. (We need a virtual machine that includes the code AND database instance for each patch and release!)
    4. Licensing Oracle is cheaper because it's priced per server irrespective of heft (or something -- I don't know anything about the subject).
    5. The database becomes a viable persistent store for web session data, because there is just one instance.
    6. Some database operations are easier with one multi-client instance, like finding a participant when they're hazy about which customer they (or their spouse, maybe) work for: all the names are in one table.
    7. Reporting across customers is straightforward.

    Q2: What are other advantages of having multiple clients in one instance?

    Q3: Which approach do you think is better (why)? Instance per customer, or all customers in one instance?

    I'm concerned that having one multi-client instance makes migration near-impossible, and that's a deal killer... unless there is a compromise solution like having two multi-client instances, the old and the new. In that case, we would design cross-instance solutions for finding participants, reporting, etc., so customers could go from one multi-client instance to the next without anything breaking.

    THANKS SO MUCH for your collective advice! This issue is beyond me -- but not beyond the collective you. :) Hoytster

    Read the article

  • Pre-generating GUIDs for use in python?

    - by rjuiaa1
    I have a Python program that needs to generate several GUIDs and hand them back with some other data to a client over the network. It may be hit with a lot of requests in a short time period and I would like the latency to be as low as reasonably possible. Ideally, rather than generating new GUIDs on the fly as the client waits for a response, I would rather be bulk-generating a list of GUIDs in the background that is continually replenished, so that I always have pre-generated ones ready to hand out. I am using the uuid module in Python on Linux. I understand that this is using the uuidd daemon to get UUIDs. Does uuidd already take care of pre-generating UUIDs so that it always has some ready? From the documentation it appears that it does not. Is there some setting in Python or with uuidd to get it to do this automatically? Is there a more elegant approach than manually creating a background thread in my program that maintains a list of UUIDs?
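
    For comparison, a minimal sketch of the hand-rolled background-thread version, using only the standard library; the pool size is an arbitrary assumption, and uuid4() can be swapped for uuid1() if the time-based/uuidd path is required:

        import threading
        import uuid
        try:
            import queue            # Python 3
        except ImportError:
            import Queue as queue   # Python 2

        POOL_SIZE = 1000
        _pool = queue.Queue(maxsize=POOL_SIZE)

        def _replenish():
            # put() blocks once the queue is full, so this thread only works
            # while the pool is being drained by requests.
            while True:
                _pool.put(uuid.uuid4())

        _worker = threading.Thread(target=_replenish)
        _worker.daemon = True       # don't keep the process alive just for the pool
        _worker.start()

        def next_uuid():
            try:
                return _pool.get_nowait()   # pre-generated, effectively zero latency
            except queue.Empty:
                return uuid.uuid4()         # pool exhausted; generate inline as a fallback

    Whether this is worth it depends on measurement; uuid generation is normally fast enough that the queue mainly helps smooth out bursts.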

    Read the article

  • Machine Learning Algorithm for Parallel Nodes

    - by FreshCode
    I want to apply machine learning to a classification problem in a parallel environment. Several independent nodes, each with multiple on/off sensors, can communicate their sensor data with the goal of classifying an event defined by a heuristic, training data or both. Each peer will be measuring the same data from their unique perspective and will attempt to classify the result while taking into account that any neighbouring node (or its sensors or just the connection to the node) could be faulty. Nodes should function as equal peers and determine the most likely classification by communicating their results. Ultimately each node should make a decision based on their own sensor data and their peers' data. If it matters, false positives are OK (albeit undesirable) but false negatives are totally unacceptable. Given that each final classification will receive good or bad feedback, what would be an appropriate machine learning algorithm to approach this problem with if the nodes could communicate with each other to determine the most likely classification?

    Read the article

  • Using Parallel Extensions with ThreadStatic attribute. Could it leak memory?

    - by the-locster
    I'm using Parallel Extensions fairly heavily and I've just now encountered a case where using thread-local storage might be sensible to allow re-use of objects by worker threads. As such I was looking at the ThreadStatic attribute, which marks a static field/variable as having a unique value per thread. It seems to me that it would be unwise to use PE with the ThreadStatic attribute without any guarantee of thread re-use by PE. That is, if threads are created and destroyed to some degree, would the variables (and thus the objects they point to) remain in thread-local storage for some indeterminate amount of time, thus causing a memory leak? Or perhaps the thread storage is tied to the threads and disposed of when the threads are disposed? But then you still potentially have threads in a pool that are long-lived and that accumulate thread-local storage from the various pieces of code the threads are used for. Is there a better approach to obtaining thread-local storage with PE? Thank you.
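
    A sketch of the .NET 4 alternative that is often suggested here, assuming the goal is a reusable per-thread buffer: ThreadLocal<T> ties the values to an object you control and can dispose, rather than to a [ThreadStatic] field that lives as long as the pool threads do.

        using System;
        using System.Text;
        using System.Threading;
        using System.Threading.Tasks;

        class Worker
        {
            // One StringBuilder per thread, created lazily the first time each thread asks.
            private static readonly ThreadLocal<StringBuilder> buffer =
                new ThreadLocal<StringBuilder>(() => new StringBuilder(1024));

            static void Main()
            {
                Parallel.For(0, 1000, i =>
                {
                    var sb = buffer.Value;   // reused by iterations that land on the same thread
                    sb.Length = 0;
                    sb.Append("item ").Append(i);
                    // ... do work with sb ...
                });

                buffer.Dispose();            // releases the per-thread values deterministically
            }
        }

    Parallel.For and Parallel.ForEach also have overloads that thread per-task local state through localInit/localFinally delegates, which avoids static state entirely.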

    Read the article

  • Drupal Background Image as Block or Node

    - by Marcy Sutton
    Hi There. I am having trouble wrapping my head around how to make editable background images in my custom Drupal 6 theme. My client wants to have a different background image for each main section (which use multiple content types: page, blog, image gallery)... I am wondering, is there a way to make a custom content type into a dynamic background image block? I am using the Image, Image Cache, Views, CCK, Image Assist, Panels, Filefield, and Imagefield modules. What I'd like to do is add a background image field to various content types that allows a user to reference an image from the content library or upload a new image (similar to ImageAssist), and have it apply to a region in my template. Any suggestions on the best approach for this? Thank you!
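
    One possible direction, sketched for Drupal 6 and assuming a CCK imagefield named field_background_image has been added to the relevant content types (the theme function and variable names here are hypothetical):

        <?php
        // In the theme's template.php
        function mytheme_preprocess_page(&$vars) {
          $vars['section_background'] = '';
          $node = menu_get_object();
          if ($node && !empty($node->field_background_image[0]['filepath'])) {
            $url = file_create_url($node->field_background_image[0]['filepath']);
            $vars['section_background'] =
              ' style="background-image: url(' . $url . ');"';
          }
        }

    page.tpl.php can then print $section_background inside the body tag. For listing pages that are not single nodes, the same idea can key off the path or section instead of menu_get_object().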

    Read the article

  • How can I combine sequential expression trees into a fast method?

    - by chillitom
    Suppose I have the following expressions:

        Expression<Action<T, StringBuilder>> expr1 = (t, sb) => sb.Append(t.Name);
        Expression<Action<T, StringBuilder>> expr2 = (t, sb) => sb.Append(", ");
        Expression<Action<T, StringBuilder>> expr3 = (t, sb) => sb.Append(t.Description);

    I'd like to be able to compile these into a method/delegate equivalent to the following:

        void Method(T t, StringBuilder sb)
        {
            sb.Append(t.Name);
            sb.Append(", ");
            sb.Append(t.Description);
        }

    What is the best way to approach this? I'd like it to perform well, ideally with performance equivalent to the above method.
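
    A sketch of one way to do this on .NET 4, where Expression.Block and a public ExpressionVisitor are available: rewrite each lambda's parameters onto a shared pair, splice the bodies into a single block, and compile once, so the result is one method rather than a chain of delegate calls.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;
        using System.Text;

        static class ExpressionCombiner
        {
            public static Action<T, StringBuilder> Combine<T>(
                params Expression<Action<T, StringBuilder>>[] parts)
            {
                var t  = Expression.Parameter(typeof(T), "t");
                var sb = Expression.Parameter(typeof(StringBuilder), "sb");

                // Rebase every body onto the shared parameters, then run them in sequence.
                var bodies = parts.Select(p =>
                    new ParameterReplacer(p.Parameters[0], t, p.Parameters[1], sb).Visit(p.Body));

                var block = Expression.Block(bodies);
                return Expression.Lambda<Action<T, StringBuilder>>(block, t, sb).Compile();
            }

            private class ParameterReplacer : ExpressionVisitor
            {
                private readonly Dictionary<ParameterExpression, ParameterExpression> map;

                public ParameterReplacer(ParameterExpression from1, ParameterExpression to1,
                                         ParameterExpression from2, ParameterExpression to2)
                {
                    map = new Dictionary<ParameterExpression, ParameterExpression>
                    {
                        { from1, to1 }, { from2, to2 }
                    };
                }

                protected override Expression VisitParameter(ParameterExpression node)
                {
                    ParameterExpression replacement;
                    return map.TryGetValue(node, out replacement) ? replacement : node;
                }
            }
        }

    Usage would be Combine(expr1, expr2, expr3); the compiled delegate should then perform essentially like the hand-written Method. On .NET 3.5, where Expression.Block does not exist, the simpler fallback is to Compile() each expression and invoke the delegates in a loop, at the cost of one delegate call per step.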

    Read the article

  • How to access the jQuery event object in a Seaside callback

    - by Mef
    Basically, I want to translate the following into Seaside Smalltalk: $(".myDiv").bind('click', function(e) { console.log(e); }); Except that I don't want to console.log the event; I want to access it in my Ajax callback. The most promising approach seemed to be something like html div onClick: (html jQuery ajax callback: [:v | self halt] value: (???); with: 'Foo'. But I couldn't find any way to access the event that caused the callback. Intuitively, I would try html jQuery this event for the ??? part, but the Seaside jQuery wrapper doesn't know any message that comes close to event. Any help is appreciated. There has to be a way to access the event data...

    Read the article

  • Preload a lot of tiny pics

    - by clorz
    I'm thinking about how to approach the problem at hand. There's a Flash movie that requires a lot of relatively small images and I'm trying to optimize the time it takes for them to be preloaded. One thing I've considered is turning on KeepAlive in Apache on the server side. That works. But my mind still wonders if there's anything else ;-) So, what other approaches might I try? Is there a way to compress all those images and then unpack them on the client side? I have full control over both the server and client side. I can even try installing something other than Apache. Cache is not an option because it already works, and it's the first-time load that bothers me here.

    Read the article

  • Collision detection, alternatives to "push out"

    - by LaZe
    I'm moving a character (ellipsoid) around in my physics engine. The movement must be constrained by the static geometry, but should slide along the edges, so it won't get stuck. My current approach is to move it a little and then push it back out of the geometry. It seems to work, but I think it's mostly because of luck. I fear there must be some corner cases where this method will go haywire. For example, a sharp corner where two walls keep pushing the character into each other. How would a "state of the art" game engine solve this?

    Read the article

  • Overriding as_json has no effect?

    - by Ola Tuvesson
    I'm trying to override as_json in one of my models, partly to include data from another model, partly to strip out some unnecessary fields. From what I've read this is the preferred approach in Rails 3. To keep it simple, let's say I've got something like:

        class Country < ActiveRecord::Base
          def as_json(options={})
            super( :only => [:id, :name] )
          end
        end

    and in my controller simply:

        def show
          respond_to do |format|
            format.json { render :json => @country }
          end
        end

    Yet whatever I try, the output always contains the full data; the fields are not filtered by the :only clause. Basically, my override doesn't seem to kick in, though if I change it to, say...

        class Country < ActiveRecord::Base
          def as_json(options={})
            {foo: "bar"}
          end
        end

    ...I do indeed get the expected JSON output. Have I simply got the syntax wrong?

    Read the article

  • Tips on creating a "build a bracelet" app

    - by Felipe Caldas
    I have an idea for an app/game which is basically "build your own bracelet". I will give the user, say, 5 different types of beads, and the user can drag those beads onto a line that is drawn on the screen to create his/her bracelet. I am now thinking about the best way to approach this. Would using Android's OpenGL libraries (and therefore creating the beads and loading their textures) be the best and easiest way? Ultimately, I would like to push new beads into the application whenever I have a new bead texture. Thanks, Felipe

    Read the article

  • How to Obtain Data to Pre-Populate Forms.

    - by Stan
    The objective is to have a form reflect the user's defined constraints on a search. At first, I relied entirely upon server-side scripting to achieve this; recently I tried to shift the functionality to JavaScript. On the server side, the search parameters are stored in a ColdFusion struct, which makes it particularly convenient to have the data JSON'ed and sent to the client. Then it's just a matter of separately iterating over 'checkable' and text fields to reflect the user's search parameters; jQuery proved to be exceptionally effective in simplifying the workload. One observable difference lies in performance: the second method appeared to be somewhat slower and didn't work in IE8. Evidently, the returned JSON'ed struct was seen as an empty object. I'm sure it can be fixed, though before spending any more time with it, I'm curious to hear how others would approach the task. I'd greatly appreciate any suggestions. --Stan

    Read the article

  • How to get elastic table next to a image?

    - by Pavel Chuchuva
    This is what I want (see the image in the original question). This is the best I could come up with:

    CSS:

        img {
          background: red;
          float: left;
        }
        table {
          background: yellow;
          width: 90%;
        }

    HTML:

        <img src="image.jpg" width="40" height="40" />
        <table>
          <tr><td>Table</td></tr>
        </table>

    There is a problem with this approach: if you resize the browser window, at some point the table jumps below the image (click to view demo). What is a better way of achieving this layout?
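
    One alternative sketch among several possibilities: wrap both pieces in a container laid out with CSS table display, so the two cells always stay side by side and the right cell takes whatever width remains (note that display: table-cell is not supported by IE6/7).

        <div style="display: table; width: 100%;">
          <div style="display: table-cell; width: 40px; vertical-align: top;">
            <img src="image.jpg" width="40" height="40" />
          </div>
          <div style="display: table-cell; background: yellow; vertical-align: top;">
            <table width="100%"><tr><td>Table</td></tr></table>
          </div>
        </div>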

    Read the article

  • Does Team Foundation Server supports Checkpoints?

    - by marco.ragogna
    My dev team used MKS Source Integrity for source control in the past, and we are now evaluating a migration to TFS 2010. Some concepts and meanings are a bit different, and we need some time to learn how to do in TFS the same things we did before, or how to change our approach. First of all, we used to create Checkpoints for each software release. MKS in this case takes a snapshot of all source code files. You can later compare different checkpoints to see the code differences, or extract a whole checkpoint as a build. Does TFS have a similar feature? Do you know where I can read something about it? Thanks in advance, Marco

    Read the article

  • Piping SoX in Python - subprocess alternative?

    - by Cochise Ruhulessin
    I use SoX in an application. The application uses it to apply various operations on audio files, such as trimming. This works fine:

        from subprocess import Popen, PIPE

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        pipe = Popen(['sox', '-t', 'mp3', '-', 'test.mp3', 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate(input=open('test.mp3', 'rb').read())
        if errors: raise RuntimeError(errors)

    This will cause problems on large files however, since read() loads the complete file into memory, which is slow and may cause the pipe's buffer to overflow. A workaround exists:

        from subprocess import Popen, PIPE
        import tempfile
        import uuid
        import shutil
        import os

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        tmp = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex + '.mp3')
        pipe = Popen(['sox', 'test.mp3', tmp, 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate()
        if errors: raise RuntimeError(errors)
        shutil.copy2(tmp, 'test.mp3')
        os.remove(tmp)

    So the question stands as follows: are there any alternatives to this approach, aside from writing a Python extension to the SoX C API?
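
    One possibility, sketched under the assumption that the local SoX build can read and write MP3 on stdin/stdout: keep piping, but stream the file through in fixed-size chunks from a helper thread instead of loading it with read(), so memory use stays flat and neither pipe can fill up and deadlock.

        import shutil
        import subprocess
        import threading

        def sox_trim(src_path, dst_path, start='0', length='15'):
            cmd = ['sox', '-t', 'mp3', '-', '-t', 'mp3', '-', 'trim', start, length]
            proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

            def feed():
                # Copy the source into sox in 64 KiB chunks, then close stdin to signal EOF.
                with open(src_path, 'rb') as src:
                    shutil.copyfileobj(src, proc.stdin, 64 * 1024)
                proc.stdin.close()

            writer = threading.Thread(target=feed)
            writer.start()
            with open(dst_path, 'wb') as dst:
                shutil.copyfileobj(proc.stdout, dst, 64 * 1024)   # drain output as it is produced
            writer.join()
            if proc.wait() != 0:
                raise RuntimeError('sox exited with status %d' % proc.returncode)

    The temporary-file workaround from the question remains the simplest fallback when the output format cannot be written to a pipe.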

    Read the article

  • How do I synchronize access to shared memory in LynxOS/POSIX?

    - by GrahamS
    I am implementing two processes on a LynxOS SE (POSIX conformant) system that will communicate via shared memory. One process will act as a "producer" and the other a "consumer". In a multi-threaded system my approach to this would be to use a mutex and condvar (condition variable) pair, with the consumer waiting on the condvar (with pthread_cond_wait) and the producer signalling it (with pthread_cond_signal) when the shared memory is updated. How do I achieve this in a multi-process, rather than multi-threaded, architecture? Is there a LynxOS/POSIX way to create a condvar/mutex pair that can be used between processes? Or is some other synchronization mechanism more appropriate in this scenario?
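
    A sketch of the process-shared pthread route, assuming the platform implements _POSIX_THREAD_PROCESS_SHARED (worth verifying on LynxOS); the mutex and condition variable live inside the shared segment itself, and error handling is omitted for brevity:

        #include <fcntl.h>
        #include <pthread.h>
        #include <sys/mman.h>
        #include <unistd.h>

        struct shared_region {
            pthread_mutex_t lock;
            pthread_cond_t  updated;
            int             data_ready;
            char            payload[4096];
        };

        /* Whichever process creates the segment runs this once. */
        static struct shared_region *create_region(void)
        {
            int fd = shm_open("/prodcons", O_CREAT | O_RDWR, 0600);
            ftruncate(fd, sizeof(struct shared_region));
            struct shared_region *r = mmap(NULL, sizeof *r,
                                           PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

            pthread_mutexattr_t ma;
            pthread_mutexattr_init(&ma);
            pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
            pthread_mutex_init(&r->lock, &ma);

            pthread_condattr_t ca;
            pthread_condattr_init(&ca);
            pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
            pthread_cond_init(&r->updated, &ca);

            r->data_ready = 0;
            return r;
        }

        /* Consumer: shm_open + mmap the same name, then wait. */
        static void wait_for_update(struct shared_region *r)
        {
            pthread_mutex_lock(&r->lock);
            while (!r->data_ready)
                pthread_cond_wait(&r->updated, &r->lock);
            r->data_ready = 0;
            /* ... read r->payload ... */
            pthread_mutex_unlock(&r->lock);
        }

        /* Producer: signal after writing. */
        static void publish(struct shared_region *r)
        {
            pthread_mutex_lock(&r->lock);
            /* ... write r->payload ... */
            r->data_ready = 1;
            pthread_cond_signal(&r->updated);
            pthread_mutex_unlock(&r->lock);
        }

    If process-shared mutexes turn out not to be supported on the target, POSIX named semaphores (sem_open/sem_wait/sem_post) are the usual fallback for cross-process signalling.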

    Read the article
