Search Results

Search found 15637 results on 626 pages for 'memory efficient'.


  • Help with a query

    - by stackoverflowuser
    Hi. Based on the following table:

        ID  Effort  Name
        ----------------
        1   1       A
        2   1       A
        3   8       A
        4   10      B
        5   4       B
        6   1       B
        7   10      C
        8   3       C
        9   30      C

    I want to check whether the total effort against a name is less than 40; if it is, add a row with effort = 40 - (total effort) for that name. The ID of the new row can be anything. If the total effort is greater than 40, truncate the effort of one of the rows so that the total becomes 40. So after applying the logic, the table above becomes:

        ID  Effort  Name
        ----------------
        1   1       A
        2   1       A
        3   8       A
        10  30      A
        4   10      B
        5   4       B
        6   1       B
        11  25      B
        7   10      C
        8   3       C
        9   27      C

    I was thinking of opening a cursor, keeping a counter of the total effort, and, based on the logic, inserting the existing and new rows into another temporary table. I am not sure this is an efficient way to deal with it; I would like to learn if there is a better way.
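    A set-based pair of statements can replace the cursor entirely. The sketch below uses SQL Server-style UPDATE ... FROM syntax with MyTable as a placeholder name, assumes ID is auto-generated, and trims the highest-ID row per name (which, as in the sample data, must carry enough effort to absorb the excess):

        -- Pad: one new row per name whose total effort is under 40.
        INSERT INTO MyTable (Effort, Name)
        SELECT 40 - SUM(Effort), Name
        FROM MyTable
        GROUP BY Name
        HAVING SUM(Effort) < 40;

        -- Trim: reduce the highest-ID row per name whose total exceeds 40.
        UPDATE t
        SET t.Effort = t.Effort - (x.Total - 40)
        FROM MyTable t
        JOIN (SELECT Name, SUM(Effort) AS Total, MAX(ID) AS TrimId
              FROM MyTable
              GROUP BY Name
              HAVING SUM(Effort) > 40) x
          ON t.ID = x.TrimId;

    For the sample data this inserts (30, A) and (25, B) and rewrites row 9 from 30 to 27, matching the desired output.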


  • CSV parser in C++

    - by User1
    All I need is a good CSV file parser for C++. At this point it can really just be a comma-delimited parser (i.e. don't worry about escaping newlines and commas). The main need is a line-by-line parser that will return a vector for the next line each time the method is called. I found this article, which looks quite promising: http://www.boost.org/doc/libs/1_35_0/libs/spirit/example/fundamental/list_parser.cpp I've never used Boost's Spirit, but I am willing to try it. Is it overkill/bloated, or is it fast and efficient? Does anyone have faster algorithms using the STL or anything else? Thanks!
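    For the restricted comma-delimited case described here, the standard library alone is enough; a minimal sketch (the function name is invented, and quoting/escaping is deliberately not handled):

        #include <istream>
        #include <sstream>
        #include <string>
        #include <vector>

        // Reads the next line of `in` and splits it on commas.
        // Returns false at end of input. No quoting or escaping is handled.
        bool getNextLineFields(std::istream& in, std::vector<std::string>& fields)
        {
            std::string line;
            if (!std::getline(in, line)) return false;
            fields.clear();
            std::istringstream ss(line);
            std::string field;
            while (std::getline(ss, field, ',')) fields.push_back(field);
            return true;
        }

    One caveat of the inner getline loop: a trailing empty field ("a,b,") is dropped, which may or may not matter for your data.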


  • Hudson: where to download a file, and how to stop specific running builds?

    - by Kim Jong Woo
    I have a file that is generated on the Hudson server at /var/lib/hudson/jobs/jobtitle/1/out.txt. I need to fetch this file, but a GET request for http://myhudson:8090/job/jobtitle/1/out.txt doesn't actually locate it. Basically, I have another box that will grab this file from the Hudson server and make out.txt available for download. Another challenge is the build-number directories. How can I use the Hudson API to stop or delete specific running builds? I am forced to iterate through all build numbers, sending a STOP or DELETE API call from PHP using wget. This is not very efficient:

        for ($i = 0; $i < 3000; $i++) {
            exec("wget -O /dev/null \"http://myhudson:8090/job/jobtitle/$i/stop\"");
        }
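    On the download side, Hudson only serves files it knows about over HTTP: the live workspace (http://myhudson:8090/job/jobtitle/ws/out.txt) or per-build archived artifacts (http://myhudson:8090/job/jobtitle/1/artifact/out.txt, once the job archives out.txt), not arbitrary paths inside the build directory. For stopping builds, assuming the standard remote JSON API is enabled on this Hudson version, you can ask the job which builds are actually running instead of probing all 3000 numbers; a sketch:

        <?php
        // List the job's builds, then stop only the ones still running.
        $base = 'http://myhudson:8090/job/jobtitle';
        $job  = json_decode(file_get_contents("$base/api/json"), true);
        foreach ($job['builds'] as $b) {
            $build = json_decode(file_get_contents("$base/{$b['number']}/api/json"), true);
            if (!empty($build['building'])) {
                file_get_contents("$base/{$b['number']}/stop");
            }
        }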


  • Is it faster to count down than it is to count up?

    - by Bob
    Our computer science teacher once said that for some reason it is more efficient to count down than to count up. For example, if you need to use a FOR loop and the loop index is not used somewhere (like printing a line of N '*' characters to the screen), I mean that code like this:

        for (i = N; i > 0; i--)
            putchar('*');

    is better than:

        for (i = 0; i < N; i++)
            putchar('*');

    Is it really true? And if so, does anyone know why?


  • Adding new "columns" to csv data file in Tcl

    - by George
    Hi all, I am dealing with "large" measurement data, approximately 30K key-value pairs. The measurement has a number of iterations; after each iteration a (non-CSV) data file with the 30K key-value pairs is created. I want to somehow create a CSV file of the form:

        Key1,value of iteration1,value of iteration2,...
        Key2,value of iteration1,value of iteration2,...
        ...

    Now, I was wondering about an efficient way of adding each iteration's measurement data as a column to the CSV file in Tcl. So far it seems that either way I will need to load the whole CSV file into some variable (array/list) and work on each element by appending the new measurement data, which seems somewhat inefficient. Is there another way, perhaps?
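    One way to avoid rewriting the CSV after every iteration is to accumulate all iterations in a dict keyed by measurement key and write the CSV once at the end; with 30K keys this stays small in memory even for many iterations. A sketch, assuming each iteration file holds one simple "key value" pair per line:

        # Call once per iteration file: appends this iteration's value per key.
        proc addIteration {resultsVar datafile} {
            upvar $resultsVar results
            set fh [open $datafile r]
            while {[gets $fh line] >= 0} {
                lassign $line key value
                dict lappend results $key $value
            }
            close $fh
        }

        # Call once, after the last iteration, to write the full CSV.
        proc writeCsv {results outfile} {
            set out [open $outfile w]
            dict for {key values} $results {
                puts $out [join [concat [list $key] $values] ,]
            }
            close $out
        }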


  • Stringification of a macro value

    - by SF.
    I faced a problem: I need to use a macro value both as a string and as an integer.

        #define RECORDS_PER_PAGE 10
        /* ... */
        #define REQUEST_RECORDS \
            "SELECT Fields FROM Table WHERE Conditions" \
            " OFFSET %d * " #RECORDS_PER_PAGE \
            " LIMIT " #RECORDS_PER_PAGE ";"

        char result_buffer[RECORDS_PER_PAGE][MAX_RECORD_LEN];
        /* ...and some more uses of RECORDS_PER_PAGE, elsewhere... */

    This fails with a message about a "stray #", and even if it worked, I guess I'd get the macro name stringified, not the value. Of course I can feed the value to the final method ("LIMIT %d ", page * RECORDS_PER_PAGE), but it's neither pretty nor efficient. It's times like this when I wish the preprocessor didn't treat strings in a special way and would process their content just like normal code. For now, I kludged it with

        #define RECORDS_PER_PAGE_TXT "10"

    but understandably, I'm not happy about it. How do I get it right?
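    The standard fix is two-level stringification: # may only be applied to a macro parameter, and the extra expansion layer makes the parameter's value, not its name, get stringified:

        #define STR_(x) #x
        #define STR(x)  STR_(x)   /* expands x first, then stringifies it */

        #define RECORDS_PER_PAGE 10
        #define REQUEST_RECORDS \
            "SELECT Fields FROM Table WHERE Conditions" \
            " OFFSET %d * " STR(RECORDS_PER_PAGE) \
            " LIMIT " STR(RECORDS_PER_PAGE) ";"
        /* REQUEST_RECORDS now ends in: ... LIMIT 10; */

    STR(RECORDS_PER_PAGE) first expands to STR_(10), which then becomes "10", and adjacent string literals are concatenated as usual.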


  • Effective way of string splitting in C#

    - by openidsujoy
    I have a combined string like this:

        N:Pay in Cash++RGI:40++R:200++T:Purchase++IP:N++IS:N++PD:PC++UCP:598.80++UPP:0.00++TCP:598.80++TPP:0.00++QE:1++QS:1++CPC:USD++PPC:Points++D:Y++E:Y++IFE:Y++AD:Y++IR:++MV:++CP:~ ~N:ERedemption++RGI:42++R:200++T:Purchase++IP:N++IS:N++PD:PC++UCP:598.80++UPP:0.00++TCP:598.80++TPP:0.00++QE:1++QS:1++CPC:USD++PPC:Points++D:Y++E:Y++IFE:Y++AD:Y++IR:++MV:++CP:

    The string is a list of POs (payment options) separated by "~ ~"; the list may contain one or more POs. Each PO contains only key-value pairs: the key and value are separated by ":" and the pairs by "++". I need to extract the values for the keys "RGI" and "N". I can do it via a for loop, but I would like a more efficient way to do this. Any help on this?
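    A sketch of doing this with Split rather than manual index loops, assuming (as the sample suggests) "~ ~" between POs, "++" between pairs, and ":" between key and value:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class PoParser
        {
            // Parses each PO into a key -> value dictionary.
            static IEnumerable<Dictionary<string, string>> Parse(string input)
            {
                return input
                    .Split(new[] { "~ ~" }, StringSplitOptions.RemoveEmptyEntries)
                    .Select(po => po
                        .Split(new[] { "++" }, StringSplitOptions.RemoveEmptyEntries)
                        .Select(pair => pair.Split(new[] { ':' }, 2))
                        .ToDictionary(kv => kv[0], kv => kv.Length > 1 ? kv[1] : ""));
            }

            static void Main()
            {
                string data = "N:Pay in Cash++RGI:40++R:200~ ~N:ERedemption++RGI:42++R:200";
                foreach (var po in Parse(data))
                    Console.WriteLine("N={0}, RGI={1}", po["N"], po["RGI"]);
            }
        }

    ToDictionary assumes keys are unique within one PO, which holds for the sample data.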


  • JS Framework that doesn't use CSS selectors?

    - by RoToRa
    A thing that I noticed about most JavaScript frameworks is that the most common way to find/access DOM elements is to use CSS selectors. However, this usually requires the framework to include a CSS selector parser, because it needs to support selectors that the browser doesn't support natively, foremost the framework's own proprietary extensions. I would think that these parsers are large and slow. Wouldn't it be more efficient to have something that doesn't require a parser, such as chained method calls? Something like:

        id("example").children().class("test").hasAttribute("href")

    instead of:

        $("#example > .test[href]")

    Are there any frameworks around that do something like this? And how do they compare with jQuery and friends in regard to performance and size? EDIT: You can consider this a theoretical discussion topic. I don't plan to use anything other than jQuery in any practical projects in the near future. I was just wondering why there aren't any other, possibly better approaches.
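    For what it's worth, the chained style is easy to prototype directly on the native DOM; a minimal sketch (all wrapper and method names are invented for illustration, and "class" is renamed since it's a reserved word):

        // Minimal chainable wrapper around an array of elements.
        function Q(els) { this.els = els; }
        Q.prototype.children = function () {
            var out = [];
            this.els.forEach(function (el) {
                for (var c = el.firstElementChild; c; c = c.nextElementSibling) out.push(c);
            });
            return new Q(out);
        };
        Q.prototype.ofClass = function (name) {
            return new Q(this.els.filter(function (el) {
                return (' ' + el.className + ' ').indexOf(' ' + name + ' ') !== -1;
            }));
        };
        Q.prototype.hasAttribute = function (name) {
            return new Q(this.els.filter(function (el) { return el.hasAttribute(name); }));
        };
        function id(x) { var el = document.getElementById(x); return new Q(el ? [el] : []); }

        // id("example").children().ofClass("test").hasAttribute("href")

    Each call is a plain filter over the current element set, so there is no parsing step at all; the trade-off is verbosity and giving up the browser's optimized native selector path.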


  • C# GroupJoin effectiveness

    - by bsnote
    Without using GroupJoin:

        var playersDictionary = players.ToDictionary(
            player => player.Id,
            element => new PlayerDto { Rounds = new List<RoundDto>() });

        foreach (var round in rounds)
        {
            PlayerDto playerDto;
            playersDictionary.TryGetValue(round.PlayerId, out playerDto);
            if (playerDto != null)
            {
                playerDto.Rounds.Add(new RoundDto { });
            }
        }

        var playerDtoItems = playersDictionary.Values;

    Using GroupJoin:

        var playerDtoItems =
            from player in players
            join round in rounds on player.Id equals round.PlayerId into playerRounds
            select new PlayerDto
            {
                Rounds = playerRounds.Select(playerRound => new RoundDto { })
            };

    Which of these two pieces is more efficient?


  • How to very efficiently assign a lat/long to a city boundary described by a shapefile?

    - by watcherFR
    I have a huge shapefile of 36,000 non-overlapping polygons (city boundaries). I want to easily determine the polygon into which a given lat/long falls. What would be the best way, given that it must be extremely computationally efficient? I was thinking of creating a lookup table (tilex, tiley, polygon_id) where tilex and tiley are tile identifiers at zoom level 21 or 22. Yes, the lack of precision from using tile numbers and a planar projection is acceptable in my application. I would rather not use Postgres's GIS extension, and I am fine with a program that will run for two days to generate all the INSERT statements.
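    The lat/long-to-tile mapping that the lookup table needs is the standard Web Mercator ("slippy map") formula; a sketch in Python:

        import math

        def latlon_to_tile(lat_deg, lon_deg, zoom):
            """Web Mercator tile indices for a lat/long at the given zoom."""
            n = 2 ** zoom
            x = int((lon_deg + 180.0) / 360.0 * n)
            lat = math.radians(lat_deg)
            y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
            return x, y

        # Look the result up in the precomputed (tilex, tiley, polygon_id) table.
        print(latlon_to_tile(48.8584, 2.2945, 21))

    At zoom 21 a tile is roughly 19 m across at the equator, which gives a feel for the precision being traded away.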


  • Is there a single query that can update a "sequence number" across multiple groups?

    - by Drarok
    Given a table like below, is there a single-query way to update the table from this:

        | id | type_id | created_at | sequence |
        |----|---------|------------|----------|
        |  1 |       1 | 2010-04-26 |     NULL |
        |  2 |       1 | 2010-04-27 |     NULL |
        |  3 |       2 | 2010-04-28 |     NULL |
        |  4 |       3 | 2010-04-28 |     NULL |

    to this (note that created_at is used for ordering, and sequence is "grouped" by type_id):

        | id | type_id | created_at | sequence |
        |----|---------|------------|----------|
        |  1 |       1 | 2010-04-26 |        1 |
        |  2 |       1 | 2010-04-27 |        2 |
        |  3 |       2 | 2010-04-28 |        1 |
        |  4 |       3 | 2010-04-28 |        1 |

    I've seen some code before that used an @ variable like the following, which I thought might work:

        SET @seq = 0;
        UPDATE `log` SET `sequence` = @seq := @seq + 1 ORDER BY `created_at`;

    But that obviously doesn't reset the sequence to 1 for each type_id. If there's no single-query way to do this, what's the most efficient way? Data in this table may be deleted, so I'm planning to run a stored procedure after the user is done editing to re-sequence the table.
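    A common MySQL pattern extends the @ variable trick with a second variable that remembers the previous type_id and resets the counter when it changes. A sketch; note it relies on MySQL evaluating the SET assignments left to right per row, which works in practice but is not formally guaranteed:

        SET @seq := 0, @prev_type := NULL;

        UPDATE `log`
        SET `sequence` = IF(@prev_type = type_id, @seq := @seq + 1, @seq := 1),
            `type_id`  = (@prev_type := type_id)  -- self-assignment that records the group
        ORDER BY type_id, created_at;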


  • MySQL search design

    - by neil
    I'm designing a MySQL database, and I'd like some input on an efficient way to store blog/article data for searching. Right now, I've made a separate column that stores the content to be searched: no duplicate words, no words shorter than four letters, and no words that are too common. So, essentially, it's a list of keywords from the original article. Also searched would be a list of tags, and the title field. I'm not quite sure how MySQL indexes fulltext columns, so would storing the data like that be ineffective, or redundant somehow? A lot of the articles are on the same topic, so would the score be hurt by so many of the rows having similar keywords? Also, for this project, solutions like Sphinx, Lucene, or Google Custom Search can't be used -- only PHP & MySQL. Thanks!
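    On the indexing question: a MySQL FULLTEXT index already tokenizes the text, ignores words shorter than ft_min_word_len (4 by default) and common stopwords, and de-duplicates terms internally, so the hand-built keyword column largely re-implements what the index does anyway. A sketch of relying on it directly (FULLTEXT required the MyISAM engine in MySQL of this era):

        CREATE TABLE article (
            id    INT PRIMARY KEY AUTO_INCREMENT,
            title VARCHAR(255),
            tags  VARCHAR(255),
            body  TEXT,
            FULLTEXT KEY ft_search (title, tags, body)
        ) ENGINE=MyISAM;

        -- Relevance-ranked natural-language search over title, tags, and body.
        SELECT id, title,
               MATCH(title, tags, body) AGAINST ('some topic') AS score
        FROM article
        WHERE MATCH(title, tags, body) AGAINST ('some topic')
        ORDER BY score DESC;

    Natural-language relevance weights down words that appear in many rows automatically; that is the usual answer to the "similar keywords" worry.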


  • Efficiently Reshaping/Reordering Numpy Array to Properly Ordered Tiles (Image)

    - by Phelix
    I would like to be able to somehow reorder a numpy array for efficient processing of tiles. What I've got:

        >>> A = np.array([[1,2],[3,4]]).repeat(2,0).repeat(2,1)
        >>> A  # image-like array
        array([[1, 1, 2, 2],
               [1, 1, 2, 2],
               [3, 3, 4, 4],
               [3, 3, 4, 4]])
        >>> A.reshape(2,2,4)
        array([[[1, 1, 2, 2],
                [1, 1, 2, 2]],
               [[3, 3, 4, 4],
                [3, 3, 4, 4]]])

    What I want is X:

        >>> X
        array([[[1, 1, 1, 1],
                [2, 2, 2, 2]],
               [[3, 3, 3, 3],
                [4, 4, 4, 4]]])

    to be able to do something like:

        >>> X[X.sum(2) > 12] -= 1
        >>> X
        array([[[1, 1, 1, 1],
                [2, 2, 2, 2]],
               [[3, 3, 3, 3],
                [3, 3, 3, 3]]])

    Is this possible without a slow Python loop? Bonus: conversion back from X to A. Edit: How can I get X from A?
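    A reshape/swapaxes pair does exactly this rearrangement with no Python loop; a sketch for the 2x2-tile case (generalizable by replacing the 2s with the tile size):

        import numpy as np

        A = np.array([[1, 2], [3, 4]]).repeat(2, 0).repeat(2, 1)

        # Split rows and cols into (block, within-block) axes, bring the two
        # block axes to the front, then flatten each 2x2 tile to a row of four.
        X = A.reshape(2, 2, 2, 2).swapaxes(1, 2).reshape(2, 2, 4)

        # Bonus: the inverse, from X back to A, is the same dance in reverse.
        A2 = X.reshape(2, 2, 2, 2).swapaxes(1, 2).reshape(4, 4)
        assert (A2 == A).all()

    The swapaxes step returns a view, but the final reshape has to copy because the data is no longer contiguous; that one copy is still far cheaper than an element loop.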


  • How can an SQL query return data from multiple tables?

    - by Fluffeh
    I would like to know how to get data from multiple tables in my database, what methods exist to do this, what joins and unions are, and how they differ from one another. When should I use each one compared to the others? I am planning to use this in my (for example, PHP) application, but I don't want to run multiple queries against the database; what options do I have to get data from multiple tables in a single query? Note: I am writing this because I would like to link to a well-written guide on the numerous questions that I constantly come across in the PHP queue, so I can point to it for further detail when I post an answer. The answers cover the following:

        Part 1 - Joins and Unions
        Part 2 - Subqueries
        Part 3 - Tricks and Efficient Code
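    As a one-line taste of the distinction: a JOIN combines columns from related rows of different tables side by side, while a UNION stacks the rows of compatible queries on top of each other. A minimal sketch with hypothetical tables:

        -- JOIN: one result row per matching user/order pair, columns combined.
        SELECT u.name, o.total
        FROM users u
        INNER JOIN orders o ON o.user_id = u.id;

        -- UNION: rows of two queries appended (same column count and types).
        SELECT email FROM customers
        UNION
        SELECT email FROM suppliers;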


  • 2D Engine scrolling on OpenGL via hardware?

    - by drudru
    Hi, I'm using OpenGL as the bottom end for a 2D tiling engine. When everything is 2D, it is simple to optimize certain issues. For example, scrolling: if I know a certain section of the screen needs to scroll off the bottom, then I can just blit over that portion, and I'm even moving more than one pixel at a time. Without explicit hardware support (think old Nintendo hardware), this requires a lot of pixel writes; an on-chip bitblt would be the next best thing. Essentially, I'm looking at how I can optimize my GL calls to use VRAM texture renders as efficient hardware blits. Is it possible to have GL scroll the framebuffer, or should I just resign myself to double-buffering and re-rendering the entire scene for each frame? Thx
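    One pattern that maps scrolling onto the hardware well: keep the scene in a texture and draw it as a single quad whose texture coordinates are offset by the scroll amount; with GL_REPEAT wrapping the image wraps around, so only the strip of newly exposed tiles has to be redrawn into the texture (e.g. with glCopyTexSubImage2D or an FBO). A fixed-function-era sketch:

        #include <GL/gl.h>

        /* Scene lives in a power-of-two texture with GL_REPEAT wrapping;
           scrolling is just a texture-coordinate offset on one quad. */
        void draw_scrolled(GLuint tex, int w, int h, float sx, float sy)
        {
            float u = sx / w, v = sy / h;
            glBindTexture(GL_TEXTURE_2D, tex);
            glBegin(GL_QUADS);
                glTexCoord2f(u,        v);        glVertex2i(0, 0);
                glTexCoord2f(u + 1.0f, v);        glVertex2i(w, 0);
                glTexCoord2f(u + 1.0f, v + 1.0f); glVertex2i(w, h);
                glTexCoord2f(u,        v + 1.0f); glVertex2i(0, h);
            glEnd();
        }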


  • Efficiently compare two BitArrays of the same length

    - by BobTurbo
    How would I do this? I am trying to count the positions where both arrays have the value TRUE/1 at the same index. As you can see, my code has multiple BitArrays and loops through each one, comparing it with comparisonArray in an inner loop. It doesn't seem to be very efficient, and I need it to be:

        foreach (var bitArrayTuple in bitArrayList)
        {
            for (int i = 0; i < arrayLength; i++)
                if (bitArrayTuple.Item2[i] && comparisonArray[i])
                    bitArrayTuple.Item1++;
        }

    where Item1 is the count and Item2 is a BitArray.
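    BitArray can do the intersection 32 bits at a time with And, which avoids testing bits one by one; the surviving bits can then be counted over the packed ints. A sketch (And mutates its receiver, hence the copy; it also assumes the unused padding bits of the arrays are clear, which holds for arrays built in the usual ways):

        using System.Collections;

        static class BitArrayOps
        {
            // Number of positions where both arrays are true.
            public static int CountCommonBits(BitArray a, BitArray b)
            {
                BitArray and = new BitArray(a).And(b);   // copy, then bulk intersect
                int[] words = new int[(and.Length + 31) / 32];
                and.CopyTo(words, 0);
                int count = 0;
                foreach (int w in words)
                {
                    // Kernighan's trick: clear the lowest set bit until none remain.
                    for (int v = w; v != 0; v &= v - 1) count++;
                }
                return count;
            }
        }

    Each tuple's count then becomes a single call: bitArrayTuple.Item1 += BitArrayOps.CountCommonBits(bitArrayTuple.Item2, comparisonArray).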


  • Overhead of serving pages - JSPs vs. PHP vs. ASPXs vs. C

    - by John Shedletsky
    I am interested in writing my own internet ad server. I want to serve billions of impressions with as little hardware as possible. Which server-side technologies are best suited for this task? I am asking about the relative overhead of serving my ad pages as pages rendered by PHP, or Java, or .NET, or coding HTTP responses directly in C and writing some multi-socket IO monster to serve requests (I assume this one wins, but if my assumption is wrong, that would actually be most interesting). Obviously all the most effective optimizations are done at the algorithm level, but I figure there have got to be some speed differences at the end of the day that make one method of serving ads better than another. How much overhead does something like Apache or IIS introduce? There's got to be a ton of extra junk in there I don't need. At some point I guess this is more a question of which platform/language combo is best suited - please excuse the inadroitly posed question; hopefully you understand what I am trying to get at.


  • How can I call an executable to run on a separate machine within a program on my own machine (win xp

    - by Mr. H.
    My objective is to write a program that will call another executable on a separate computer (all running Windows XP) with parameters determined at run time, repeat this for several more computers, and then collect the results. In short, I'm working on a grid-computing project. The algorithm itself is already coded in FORTRAN, but we are looking for an efficient way to run it on many computers at once. I suppose one way to accomplish this would be to upload a script to each computer and then run that script on each machine, all automatically and driven by my own parameters. But how can I write a program that will write to, upload, and run a script on a separate computer? I had considered GridGain, but the algorithm is already coded and in a different language, so that is ruled out. My current guess at accomplishing this task is using Expect (wiki/Expect), but I have no knowledge of the tool. Any advice appreciated.


  • Custom deleters for std::shared_ptrs

    - by Kristian D'Amato
    Is it possible to use a custom deleter after creating a std::shared_ptr without using new? My problem is that object creation is handled by a factory class whose constructors & destructors are protected, which gives a compile error, and I don't want to use new because of its drawbacks. To elaborate: I prefer to create shared pointers like this, which (I think) doesn't let you set a custom deleter:

        auto sp1 = make_shared<Song>(L"The Beatles", L"Im Happy Just to Dance With You");

    Or I can create them like this, which does let me set a deleter through an argument:

        shared_ptr<Song> sp2(new Song, MyDeleterFunc);

    But the second one uses new, which AFAIK isn't as efficient as the first sort of allocation. Maybe this is clearer: is it possible to get the benefits of make_shared<> as well as a custom deleter? Would that mean having to write an allocator?
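    To the direct question: a custom deleter can only be attached through the shared_ptr constructor (or reset), never through make_shared, whose whole point is to allocate the object itself. The factory scenario composes fine with the constructor form, though; a sketch with an invented factory API:

        #include <memory>

        class Song;              // constructed and destroyed only by the factory

        class Factory {          // hypothetical factory owning Song's lifetime
        public:
            Song* create();
            void  destroy(Song* s);
        };

        std::shared_ptr<Song> makeSong(Factory& f)
        {
            // The deleter routes destruction back through the factory, so the
            // protected destructor is never touched by shared_ptr itself.
            // Note: the factory must outlive every pointer created this way.
            return std::shared_ptr<Song>(f.create(),
                                         [&f](Song* s) { f.destroy(s); });
        }

    The single-allocation benefit of make_shared is genuinely lost here; that trade-off is inherent whenever a custom deleter is supplied.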


  • GQL Query with __key__ in List of KEYs

    - by bossylobster
    In the GQL reference [1], use of the IN keyword with a list of values is encouraged, and constructing a key by hand works: the GQL query

        SELECT * FROM MyModel WHERE __key__ = KEY('MyModel', 'my_model_key')

    will succeed. However, using the code you would expect to work:

        SELECT * FROM MyModel WHERE __key__ IN (KEY('MyModel', 'my_model_key1'), KEY('MyModel', 'my_model_key2'))

    the Datastore Viewer complains of an "Invalid GQL query string." What is the correct way to format such a query?

    [1] http://code.google.com/appengine/docs/python/datastore/gqlreference.html

    PS: I know there are more efficient ways to do this in Python (without constructing a GQL query) and using the remote_api, but each call to the remote_api counts against quota. In an environment where quota is not (necessarily) free, quick and dirty queries are very helpful.


  • Any tips on how to handle hierarchical trees in a relational model?

    - by George
    Hello all. I have a tree structure that can be n levels deep, without restriction; that means each node can have another n nodes. What is the best way to retrieve a tree like that without issuing thousands of queries to the database? I looked at a few other models, like the flat table model, the preorder tree traversal algorithm, and so on. Do you have any tips or suggestions for implementing an efficient tree model? My objective, in the end, is to have one or two queries that would spit out the whole tree for me. With enough processing I can display the tree in .NET, but that would be on the client machine, so not much of a big deal. Thanks for the attention.
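    If the database supports recursive common table expressions (SQL Server 2005+, PostgreSQL 8.4+; SQL Server omits the RECURSIVE keyword), a plain adjacency list can be fetched with one query; a sketch over an assumed node table:

        -- Assumed schema: node(id INT PRIMARY KEY, parent_id INT NULL, name ...)
        WITH RECURSIVE tree AS (
            SELECT id, parent_id, name, 0 AS depth
            FROM node
            WHERE parent_id IS NULL              -- the roots
            UNION ALL
            SELECT n.id, n.parent_id, n.name, t.depth + 1
            FROM node n
            JOIN tree t ON n.parent_id = t.id
        )
        SELECT * FROM tree;

    The client then reassembles the parent/child links in memory, which matches the "one query, process on the client" goal.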


  • Image expire time

    - by Jens
    The Google Page Speed tool recommends that I set 'Expires' headers for images etc. But what is the most efficient way to set an Expires header for an image? I now redirect all image requests to an imagehandler.php using .htaccess:

        /* HTTP/1.1 404 Not Found, HTTP/1.1 400 Bad Request
           and content type detection stuff ... */
        header("Content-Type: " . $content_type);
        header("Cache-Control: public");
        header("Last-Modified: " . gmdate("D, d M Y H:i:s", filemtime($path)) . " GMT");
        header("Expires: " . date("r", time() + (60 * 60 * 24 * 30)));
        readfile($path);

    But of course this adds extra loading time for my images on the first request, and I was wondering if there is a better solution for this.
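    One way to avoid the PHP handler entirely is to let Apache stamp the header itself with mod_expires, so the images stay static files; a sketch for the .htaccess already in use (requires mod_expires to be enabled):

        # Far-future Expires for images, no PHP handler involved.
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/png  "access plus 30 days"
            ExpiresByType image/jpeg "access plus 30 days"
            ExpiresByType image/gif  "access plus 30 days"
        </IfModule>

    Apache then also handles Last-Modified and conditional GETs for free, which the script would otherwise have to reimplement.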


  • Store data in an inconvenient table or create a derived table?

    - by user1705685
    I have a certain predefined database structure that I am stuck with. The question is whether this structure is OK for ORM, or whether I should add a processing layer that creates a more convenient structure every time something is inserted into the original DB. To simplify, here's what it looks like. I have a person table:

        PersonId
        Name

    And I have a properties table:

        PersonId
        PropertyType
        PropertyValue

    So, for person John Doe...

        (1, 'John Doe')

    ...I could have three properties:

        (1, 'phone', '555-55-55')
        (1, 'email', '[email protected]')
        (1, 'type', 'employee')

    Using ORM, I would like to get a "person" object with the properties "name", "phone", "email", and "type". Can Propel do that? How efficient is it? Is it a better idea to create a table with columns "phone", "email", "type" and fill it automatically as new rows are inserted into the properties table?
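    For comparison, the convenient shape can also be produced on the fly by pivoting the properties table, which avoids maintaining a derived table at all; a sketch using the property names from the example:

        SELECT p.PersonId,
               p.Name,
               MAX(CASE WHEN pr.PropertyType = 'phone' THEN pr.PropertyValue END) AS phone,
               MAX(CASE WHEN pr.PropertyType = 'email' THEN pr.PropertyValue END) AS email,
               MAX(CASE WHEN pr.PropertyType = 'type'  THEN pr.PropertyValue END) AS type
        FROM person p
        LEFT JOIN properties pr ON pr.PersonId = p.PersonId
        GROUP BY p.PersonId, p.Name;

    An ORM can be mapped onto a view defined this way; whether that beats a materialized table depends on how read-heavy the application is.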


  • Best strategy for HTML partial rendering based on multiple dropdown values

    - by pv2008
    I have a View that renders something like this: "Item 1" and "Item 2" are <tr> elements of a table. After the user changes "Value 1" or "Value 2", I would like to call a controller and put the result (some HTML snippet) in the div marked as "Result of...". I have some vague notions of jQuery: I know how to bind to the onchange event of the select element and call the $.ajax() function, for example. But I wonder if this can be achieved in a more efficient way in ASP.NET MVC2.
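    The usual MVC2 shape of this is exactly what's described: bind to the change event of both selects, call a controller action that returns a PartialView, and inject the returned HTML into the result div. A sketch (the element IDs, URL, and parameter names are invented):

        // Re-render the result area whenever either dropdown changes.
        $('#value1, #value2').change(function () {
            $('#result').load('/Items/ResultPartial', {
                v1: $('#value1').val(),
                v2: $('#value2').val()
            });
        });

    On the server, the action just returns PartialView("ResultRows", model), so only the table fragment crosses the wire, not the whole page.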

