Search Results

Search found 15637 results on 626 pages for 'memory efficient'.

Page 392/626

  • Put an object in Handler message

    - by Tsimmi
    Hi! I need to download an image from the internet on a background thread and then send that image object to the UI thread in a Handler message. I already have this:

        ...
        Message msg = Message.obtain();
        Bundle b = new Bundle();
        b.putParcelable("MyObject", (Parcelable) object);
        msg.setData(b);
        handler.sendMessage(msg);

    And when I receive the message, I want to extract the object:

        ...
        public void handleMessage(Message msg) {
            super.handleMessage(msg);
            MyObject objectRcvd = (MyObject) msg.getData().getParcelable("IpTile");
            addToCache(ipTile);
            mapView.invalidate();
        }

    But this is giving me ...java.lang.ClassCastException... Can anyone help? And by the way, is this the most efficient way to pass an object to the UI thread? Thank you all!

    Read the article

  • Efficiency of Java code with primitive types

    - by super89
    Hello! I want to ask which piece of code is more efficient in Java. Code 1:

        void f() {
            for (int i = 0; i < 99999; i++) {
                for (int j = 0; j < 99999; j++) {
                    // Some operations
                }
            }
        }

    Code 2:

        void f() {
            int i, j;
            for (i = 0; i < 99999; i++) {
                for (j = 0; j < 99999; j++) {
                    // Some operations
                }
            }
        }

    My teacher said the second is better, but I can't agree with that opinion.

    Read the article

  • Code behind methods vs. Jquery AJAX calls

    - by punkouter
    There's a war brewing, I can feel it! Old-school coders are used to having every server control create events in the .cs files... for example, getting the initial load of data, saving data, deleting data... and then binding data sources to the server control. New-school coders want to do it with jQuery + AJAX calls to .svc files. That gives you no postbacks automatically, so that is an advantage, and I think it's a different way of thinking: all of a sudden the UI-related events are all being done in jQuery. What is the most modern and efficient way to go? How can I convince the old-school coders to let us use this new paradigm? (Assuming it is the better way.)

    Read the article

  • Best data-structure to use for two ended sorted list

    - by fmark
    I need a collection data structure that can do the following:
    - Be sorted
    - Allow me to quickly pop values off the front and back of the list
    - Remain sorted after I insert a new value
    - Allow a user-specified comparison function, as I will be storing tuples and want to sort on a particular value
    - Thread-safety is not required
    - Optionally allow efficient haskey() lookups (I'm happy to maintain a separate hash table for this, though)
    My thoughts at this stage are that I need a priority queue and a hash table, although I don't know if I can quickly pop values off both ends of a priority queue. I'm interested in performance for a moderate number of items (I would estimate less than 200,000). Another possibility is simply maintaining an OrderedDictionary and doing an insertion sort every time I add more data to it. Furthermore, are there any particular implementations in Python? I would really like to avoid writing this code myself.
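
    One way to meet these requirements with only the standard library is to keep a plain Python list sorted with the bisect module; a minimal sketch (the class and method names here are illustrative, not an existing package):

        # Sketch: a list of (key, item) pairs kept sorted with bisect.insort.
        # Popping from the back is O(1); popping from the front is O(n) for a
        # plain list, which is usually fine for fewer than ~200,000 items.
        import bisect

        class TwoEndedSortedList:
            def __init__(self, key=lambda item: item):
                self.key = key
                self._items = []                    # (key(item), item), kept sorted

            def insert(self, item):
                bisect.insort(self._items, (self.key(item), item))

            def pop_front(self):
                return self._items.pop(0)[1]        # smallest key

            def pop_back(self):
                return self._items.pop()[1]         # largest key

        # Usage: store tuples and sort on their second field.
        q = TwoEndedSortedList(key=lambda t: t[1])
        q.insert(("a", 5)); q.insert(("b", 1)); q.insert(("c", 3))
        print(q.pop_front())   # ('b', 1)
        print(q.pop_back())    # ('a', 5)

    If an external dependency is acceptable, the third-party sortedcontainers package provides a ready-made SortedList with similar behaviour.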

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also useful when linked to error log databases and performance counter databases to compare usage with errors, and so on. Having implemented this for just one system, and for only the last 2-3 weeks, we already have a 5GB database with around 10 million records. This is making any queries against the database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • Perform Grouping of Resultset in Code

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:

        Category  Column2  Column3
        A         2        3.50
        A         3        2
        B         3        2
        B         1        5
        ...

    I need to group the resultset based on the Category column and sum the values for Column2 and Column3. I have to do the grouping in code because I cannot perform it in the SQL query that gets the data, due to the complexity of the query (long story). This grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that handles any possible values that appear in the Category column. I know there has to be a straightforward, efficient way to do it, but I cannot wrap my head around it right now. How would you accomplish it?
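
    The question is about C# code, but the grouping idea itself is language-agnostic: accumulate running sums in a dictionary keyed by the Category value, so every category that happens to appear is handled. A minimal sketch in Python, with made-up sample rows:

        # Sketch: group rows by Category and sum Column2 and Column3.
        from collections import defaultdict

        rows = [("A", 2, 3.50), ("A", 3, 2), ("B", 3, 2), ("B", 1, 5)]

        totals = defaultdict(lambda: [0, 0.0])    # category -> [sum(Column2), sum(Column3)]
        for category, col2, col3 in rows:
            totals[category][0] += col2
            totals[category][1] += col3

        for category, (sum2, sum3) in sorted(totals.items()):
            print(category, sum2, sum3)           # A 5 5.5 / B 4 7.0

    In C#, the same shape falls out of a LINQ GroupBy over the rows, summing the two columns inside each group.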

    Read the article

  • Update tableview instantly as data is pushed into Core Data (iPhone)

    - by user336685
    I need to update the table view as soon as content is pushed into the Core Data database. For this, AppDelegate.m contains the following code:

        NSManagedObjectContext *moc = [self managedObjectContext];
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        [request setEntity:[NSEntityDescription entityForName:@"FeedItem" inManagedObjectContext:moc]];
        // for loop
        // push data into Core Data & then save context
        [moc save:&error];
        ZAssert(error == nil, @"Error saving context: %@", [error localizedDescription]);
        // for loop ends

    This code triggers the following code in RootviewController.m:

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
            [[self tableView] beginUpdates];
        }

    But this updates the table view only at the end of the for loop; the table does not get updated immediately after each push into the database. I tried the following code, but that didn't work:

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            // In the simplest, most efficient, case, reload the table view.
            [self.tableView reloadData];
        }

    I have been stuck with this problem for several days. Please help. Thanks in advance for a solution.

    Read the article

  • How do I get the position of a result in the list after an order_by?

    - by Bob Bob
    I'm trying to find an efficient way to find the rank of an object in the database according to its score. My naive solution looks like this:

        rank = 0
        for q in Model.objects.all().order_by('score'):
            if q.name == 'searching_for_this':
                return rank
            rank += 1

    It should be possible to get the database to do the filtering, using order_by:

        Model.objects.all().order_by('score').filter(name='searching_for_this')

    But there doesn't seem to be a way to retrieve the index for the order_by step after the filter. Is there a better way to do this? (Using Python/Django and/or raw SQL.) My next thought is to pre-compute ranks on insert, but that seems messy.
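
    One common approach is to let the database count how many rows sort ahead of the row in question, rather than walking the whole ordered queryset; a sketch using the Django ORM (Model and name are taken from the question, and ties on score are ignored here):

        # Sketch: under ORDER BY score (ascending), an object's 0-based rank is
        # the number of rows with a strictly smaller score.
        def rank_of(name):
            obj = Model.objects.get(name=name)
            return Model.objects.filter(score__lt=obj.score).count()

    This issues two cheap queries instead of iterating the full table; the equivalent raw SQL is a single SELECT COUNT(*) with a WHERE score < ... clause.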

    Read the article

  • Sequence Generators in T-SQL

    - by PaoloFCantoni
    We have an Oracle application that uses a standard pattern to populate surrogate keys. We have a series of extrinsic rows (that have specific values for the surrogate keys) and other rows that have intrinsic values. We use the following Oracle trigger snippet to determine what to do with the surrogate key on insert:

        IF :NEW.SurrogateKey IS NULL THEN
            SELECT SurrogateKey_SEQ.NEXTVAL INTO :NEW.SurrogateKey FROM DUAL;
        END IF;

    If the supplied surrogate key is null, then get a value from the nominated sequence; otherwise pass the supplied surrogate key through to the row. I can't seem to find an easy way to do this in T-SQL. There are all sorts of approaches, but none of them use the notion of a sequence generator the way Oracle and other SQL-92 compliant DBs do. Anybody know of a really efficient way to do this in SQL Server T-SQL? BTW, we're using SQL Server 2008 if that's any help. TIA, Paolo

    Read the article

  • python interactive web data/forms/interface communicating with remote server

    - by decipher
    What's an efficient method (preferably simple as well) for communicating with a remote server and allowing the user to 'interact' with it (i.e. submit commands through a user interface) via the web browser (i.e. a text box to input commands and a text area for output, or various command-less abstracted interfaces)? I have the 'standalone' Python code finished for communicating, and it works (terminal/console based right now). My primary concern is with refactoring the code to suit the web, which involves establishing a connection (Python sockets) and maintaining the connection while the user is logged on. Some further details: currently using the Django framework for the basic back end/templates.

    Read the article

  • C# Getting Just Date From Timestamp

    - by Soo
    If I have a timestamp in the form yyyy-mm-dd hh:mm:ss:mmm, how can I extract just the date from it? For instance, if a timestamp reads "2010-05-18 08:36:52:236", what is the best way to get just 2010-05-18 from it? What I'm trying to do is isolate the date portion of the timestamp and define a custom time for it to create a new timestamp. Is there a more efficient way to define the time of the timestamp without first taking out the date and then adding a new time?

    Read the article

  • ideas for algorithm? sorting a list randomly with emphasis on variety

    - by Steve Eisner
    I have a table of items with [ID, ATTR1, ATTR2, ATTR3]. I'd like to select about half of the items, but try to get a random result set that is NOT clustered. In other words, I want a fairly even spread of ATTR1 values, ATTR2 values, and ATTR3 values. This does NOT necessarily represent the data as a whole; in other words, the total table may be generally concentrated on certain attribute values, but I'd like to select a subset with more variety. The attributes are not inter-related, so there's not really a correlation between ATTR1 and ATTR2. Any ideas for an efficient algorithm? Thanks! I don't really even know how to search for this :)
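
    One simple heuristic (not from the question, just a sketch): shuffle the items, then greedily pick the candidate whose attribute values have been used least so far, so over-represented values stop being chosen once their counts grow. A minimal Python sketch:

        # Sketch: greedily select ~half the items, scoring each candidate by how
        # often its attribute values already appear in the selection.
        import random
        from collections import Counter

        def varied_sample(items, fraction=0.5):
            # items: list of (id, attr1, attr2, attr3) tuples
            target = int(len(items) * fraction)
            counts = [Counter(), Counter(), Counter()]   # one counter per attribute
            remaining = list(items)
            random.shuffle(remaining)                    # randomized tie-breaking
            chosen = []
            for _ in range(target):
                best = min(remaining,
                           key=lambda it: sum(counts[i][it[i + 1]] for i in range(3)))
                remaining.remove(best)
                chosen.append(best)
                for i in range(3):
                    counts[i][best[i + 1]] += 1
            return chosen

    As written this is quadratic in the number of items; bucketing items by attribute value and drawing from the least-used buckets would scale better if the table is large.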

    Read the article

  • Task management algorithm in C#

    - by silverwizz
    Hi guys, I am looking for efficient task management in C#. What I mean by task management is executing tasks at pre-defined intervals. Example:
    - task a needs to be run every 1 minute
    - task b needs to be run every 3 minutes
    - task c needs to be run every 5 minutes
    These tasks can be added and removed at arbitrary times, there can be 100,000 tasks or more, and a task is executed forever until it is removed. Are you familiar with this kind of algorithm? I am thinking of implementing it in either C# or PHP. Thanks
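
    With that many recurring tasks, a common structure is a priority queue (min-heap) keyed by each task's next run time: only the top of the heap ever needs to be checked, and each execution is an O(log n) pop and re-push. The question asks about C# or PHP, but here is a minimal Python sketch of the idea (names are illustrative):

        # Sketch: a min-heap of (next_run, task_id, interval, fn) entries.
        # Removal is done lazily: removed ids are skipped when they surface.
        import heapq
        import time

        class Scheduler:
            def __init__(self):
                self._heap = []
                self._removed = set()

            def add(self, task_id, interval_seconds, fn):
                heapq.heappush(self._heap,
                               (time.time() + interval_seconds, task_id, interval_seconds, fn))

            def remove(self, task_id):
                self._removed.add(task_id)

            def run_pending(self):
                now = time.time()
                while self._heap and self._heap[0][0] <= now:
                    next_run, task_id, interval, fn = heapq.heappop(self._heap)
                    if task_id in self._removed:
                        continue
                    fn()                      # execute the task
                    heapq.heappush(self._heap, (next_run + interval, task_id, interval, fn))

    In C#, the same design maps onto a sorted structure or priority queue keyed by next run time, with a single timer that sleeps until the earliest entry is due.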

    Read the article

  • Code understanding, reverse engineering, best concepts and tools. Java.

    - by core07
    One of the most demanding tasks for any programmer or architect is understanding someone else's code. E.g., I am a contractor hired to rescue a project very quickly: fix bugs, plan a global refactoring, and therefore I need the most efficient way to understand the code. What is the list of concepts, their priority, and the best tools for this? What I know of: reverse engineering the code to create object models (creating a diagram per package is not so convenient); creating sequence diagrams (the tool connects to the system in debug mode and generates diagrams from the runtime); various visualization techniques; using tools that work not just with .java files but also with, e.g., JPA implementors like Hibernate; generating a diagram not for the whole codebase, but adding one class and then the classes it uses. Is Sparx Enterprise Architect the state of the art in reverse engineering, or far from it? Any other, better tools? Ideally the tool would make me understand the code as if I had written it myself :)

    Read the article

  • posmax: like argmax but gives the position(s) of the element x for which f[x] is maximal

    - by dreeves
    Mathematica has a built-in function ArgMax for functions over infinite domains, based on the standard mathematical definition. The analog for finite domains is a handy utility function. Given a function and a list (call it the domain of the function), return the element(s) of the list that maximize the function. Here's an example of finite argmax in action: http://stackoverflow.com/questions/471029/canonicalize-nfl-team-names/472213#472213 And here's my implementation of it (along with argmin for good measure):

        (* argmax[f, domain] returns the element of domain for which f of that
           element is maximal -- breaks ties in favor of first occurrence. *)
        SetAttributes[{argmax, argmin}, HoldFirst];
        argmax[f_, dom_List] := Fold[If[f[#1] >= f[#2], #1, #2]&, First[dom], Rest[dom]]
        argmin[f_, dom_List] := argmax[-f[#]&, dom]

    First, is that the most efficient way to implement argmax? What if you want the list of all maximal elements instead of just the first one? Second, how about the related function posmax that, instead of returning the maximal element(s), returns the position(s) of the maximal elements?
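
    For readers outside Mathematica, here is what the finite argmax/posmax semantics look like as a plain Python sketch (1-based positions, to match Mathematica's convention):

        # Sketch: posmax returns every position at which f attains its maximum
        # over the list; argmax_all returns the corresponding elements.
        def posmax(f, domain):
            values = [f(x) for x in domain]
            best = max(values)
            return [i + 1 for i, v in enumerate(values) if v == best]

        def argmax_all(f, domain):
            return [domain[i - 1] for i in posmax(f, domain)]

        print(posmax(len, ["to", "be", "or", "not", "to", "be"]))   # [4]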

    Read the article

  • How to do a range query

    - by Walter H
    I have a bunch of numeric timestamps that I want to check against a range to see if they match a particular range of dates, basically like a BETWEEN .. AND .. match in SQL. The obvious data structure would be a B-tree, but while there are a number of B-tree implementations on CPAN, they only seem to implement exact matching. Berkeley DB has the same problem; there are B-tree indices, but no range matching. What would be the simplest way to do this? I don't want to use an SQL database unless I have to. Clarification: I have a lot of these, so I'm looking for an efficient method, not just grep over an array.
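
    If the timestamps can be kept in a sorted array, a BETWEEN .. AND .. lookup is just two binary searches, one for each end of the range; this is a Perl/CPAN question, but the technique is the same anywhere, as in this Python sketch:

        # Sketch: sorted list + two binary searches = O(log n) range matching.
        import bisect

        def in_range(sorted_timestamps, low, high):
            lo = bisect.bisect_left(sorted_timestamps, low)
            hi = bisect.bisect_right(sorted_timestamps, high)
            return sorted_timestamps[lo:hi]      # every t with low <= t <= high

        timestamps = sorted([1273912, 1273950, 1274100, 1275000])
        print(in_range(timestamps, 1273900, 1274200))   # [1273912, 1273950, 1274100]

    In Perl the same idea works with a sorted array and a hand-written binary search, or any module that exposes ordered-key range scans.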

    Read the article

  • All Permutations of a string when corresponding characters are not in the same place

    - by r20rock
    I need all possible permutations of a given string such that no character remains in the same place as in the input string. E.g., for input "ask", output all permutations like "ksa", "kas"... such that 'a' is not in the 1st position, 's' is not in the 2nd position, and so on, in any permutation. I only need the count of such permutations. I can do this by generating all permutations and filtering them, but I need a very efficient way of doing it. All characters in the string are UNIQUE. Preferred language: C++.
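
    Because every character is unique, the count is exactly the number of derangements of n elements, which can be computed from the recurrence D(n) = (n - 1) * (D(n - 1) + D(n - 2)) with D(0) = 1 and D(1) = 0, with no permutation generation at all. The question prefers C++, but the arithmetic is the same; a short Python sketch:

        # Sketch: count permutations with no fixed points (derangements).
        def derangements(n):
            if n == 0:
                return 1
            prev2, prev1 = 1, 0              # D(0), D(1)
            for k in range(2, n + 1):
                prev2, prev1 = prev1, (k - 1) * (prev2 + prev1)
            return prev1

        print(derangements(3))   # 2 -- for "ask" the valid orders are "ska" and "kas"

    Note this only applies while all characters are distinct, as the question guarantees; repeated characters would need inclusion-exclusion over the duplicate positions.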

    Read the article

  • VS2010's "Public Property <PropertyName> As <DataType>" vs. Public var

    - by Velika2
    In VS2008, I used to type Public Property <PropName> As <dataType>, hit the Enter key, and the IDE editor would automatically expand it out to a full-blown property block. Now, from what I understand, a new feature of 2010 is that the compiler automatically "expands" the short syntax above into the same IL code that you would get with the full property Get and Set sub-methods that we are accustomed to seeing in the editor. But functionally, how the heck is this any different from just having a public class-level variable? If the only difference is what it compiles to, and otherwise there is no functional difference, isn't the new way less efficient than the old, since it involves more code than just having a class-level memory variable? I thought that if you weren't going to have code behind your properties, they were essentially the same. I guess the difference is that they just added the keyword "Property", but functionally there is no difference, eh?

    Read the article

  • Is closing/disposing an SqlDataReader needed if you are already closing the sqlconnection?

    - by Brian
    I noticed this question, but my question is a bit more specific. Is there any advantage to using

        using (SqlConnection conn = new SqlConnection(conStr))
        {
            using (SqlCommand command = new SqlCommand())
            {
                // do stuff
            }
        }

    instead of

        using (SqlConnection conn = new SqlConnection(conStr))
        {
            SqlCommand command = new SqlCommand();
            // do stuff
        }

    Obviously it does matter if you run more than one command with the same connection, since closing an SqlDataReader is more efficient than closing and reopening a connection (calling conn.Close(); conn.Open(); will also free up the connection). I see many people insist that failure to close the DataReader means leaving open connection resources around, but doesn't that only apply if you don't close the connection?

    Read the article

  • Delete all records that have no foreign key constraints

    - by Rodney Burton
    I have a SQL 2005 table with millions of rows in it that is being hit by users all day and night. This table is referenced by 20 or so other tables that have foreign key constraints. What I need to do on a regular basis is delete all records from this table where the "Active" field is set to false AND there are no records in any of the child tables that reference the parent record. What is the most efficient way of doing this, short of trying to delete each one at a time and letting it cause SQL errors on the ones that violate constraints? Also, it is not an option to disable the constraints, and I cannot cause locks on the parent table for any significant amount of time.

    Read the article

  • Should XML be used server-side, and JSON client-side?

    - by Michel Carroll
    As a personal project, I'm making an AJAX chatroom application using XML as server-side storage and JSON for client-side processing. Here's how it works:
    1. An AJAX request is sent to PHP using GET (chat messages/logins/logouts)
    2. PHP fetches/modifies the XML file on the server
    3. PHP encodes the XML into JSON and sends back a JSON response
    4. JavaScript handles the JSON information (chat messages/logins/logouts)
    I want to eventually make this a larger-scale chatroom application, so I want to make sure it's fast and efficient. Was this a bad design choice? In this case, is switching between XML and JSON OK, or is there a better way?

    Read the article

  • Can Goldberg algorithm in ocamlgraph be used to find Minimum Cost Flow graph?

    - by Tautrimas
    I'm looking for an implementation of the minimum-cost flow graph problem in OCaml. The OCaml library ocamlgraph has a Goldberg algorithm implementation. The paper "Efficient implementation of the Goldberg-Tarjan minimum-cost flow algorithm" notes that the Goldberg-Tarjan algorithm can find a minimum-cost flow. The question is: does the ocamlgraph implementation also find the minimum cost? The library documentation only states that it's suitable at least for the maximum flow problem. If not, does anybody have a good link to code for a nice minimum-cost optimization algorithm? I will manually translate it into OCaml then. Forgive me if I missed it on Wikipedia: there are too many algorithms on flow networks for the first day!

    Read the article

  • Using PHP, can I put variables inside of variables?

    - by Rob
    For example, take this code:

        $ch = curl_init($resultSet['url']."?get0=get0&get1=".$get1."&get2=".$get2."&get3=".$get3);

    This, of course, looks very ugly and is kind of a pain in the ass to read. So my question is, would I be able to use something like this:

        $allgets = "?act=phptools&host=".$host."&time=".$duration."&port=".$port;
        $ch = curl_init($resultSet['url'] . $allgets);

    Very simple question, I suppose, but my server is undergoing maintenance, so I can't upload it and test it myself. I suppose a yes or no answer will suffice, but if you have a more efficient way of doing this, that would be even better. :)

    Read the article

  • Delphi: how to efficiently read a big binary file, converting it to hexadecimal for passing it as a varbinary(max) parameter

    - by user193655
    I need to convert a binary file (a zip file) into a hexadecimal representation, to then send it to SQL Server as a varbinary(max) function parameter. A full example (using a very small file!) is:
    1) my file contains the following bits: 0000111100001111
    2) I need a procedure to QUICKLY convert it to 0F0F
    3) I will call a SQL Server function passing 0x0F0F as the parameter
    The problem is that I have large files (up to 100MB is possible, even if the average file size is 100KB), so I need the fastest way to do this. Otherwise stated: I need to create the string '0x' + BinaryDataInHexadecimalRepresentation in the most efficient way. Related question: passing hexadecimal data to sql server

    Read the article

  • mvc contrib pager question - AsPagination

    - by csetzkorn
    Hi, I might be wrong, but isn't the AsPagination method very inefficient, as it sucks all the data out of the repository first to initialise TotalItems etc.? So paging is not used to make the data access more efficient. I have not found any examples which use a repository and 'true' paging (i.e. using TOP etc. in the actual SQL). How do I use the Pager if I have a repository method with this signature:

        IList GetData(int? page, out int TotalItems)

    Any feedback would be very much appreciated. Thanks. Christian

    Read the article
