Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

Page 46/361

  • Efficient Clojure workflow?

    - by Alex B
    I am developing a pet project with Clojure, but wonder if I can speed up my workflow a bit. My current workflow (with Compojure) is:
    1. Start Swank with lein swank.
    2. Go to Emacs and connect with M-x slime-connect.
    3. Load all existing source files one by one. This also starts a Jetty server and the application.
    4. Write some code in the REPL.
    5. When satisfied with the experiments, write a full version of the construct I had in mind.
    6. Eval it (C-c C-c).
    7. Switch the REPL to the namespace where this construct resides and test it.
    8. Switch to the browser and reload the browser tab with the affected page.
    9. Tweak the code, eval it, check it in the browser.
    10. Repeat any of the above.
    There are a number of annoyances with it:
    - I have to switch between Emacs and the browser (or browsers, if I am testing things like templating in multiple browsers) all the time. Is there a common idiom to automate this? I used to have a JavaScript bit that reloaded the page continuously, but it's of limited utility, obviously, when I have to interact with the page for more than a few seconds.
    - My JVM instance becomes "dirty" when I experiment and write test functions. Basically, namespaces become polluted, especially if I'm refactoring and moving functions between namespaces. This can lead to symbol collisions, and then I need to restart Swank. Can I undef a symbol?
    - I load all source files one by one (C-c C-k) upon restarting Swank. I suspect I'm doing it all wrong.
    - Switching between the REPL and the file editor can be a bit irritating, especially when I have a lot of Emacs tabs open, alongside the browser(s).
    I'm looking for ways to improve the above points and the entire workflow in general, so I'd appreciate it if you'd share yours. P.S. I have also used VimClojure before, so VimClojure-based workflows are welcome too.


  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: it has many Contributions and has many Payments. In my result set, I need the following aggregate values:
    - Number of unique contributors (DonorID on the Contribution table)
    - Total contributed (SUM of Amount on the Contribution table)
    - Total paid (SUM of PaymentAmount on the Payment table)
    Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions and the GROUP BY clause. I also need the ability to sort and filter on these fields. So I've come up with two options.
    Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
          (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
          (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount,
          (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;
        CREATE TEMPORARY TABLE Project_Temp (
          project_id INT NOT NULL,
          total_payments INT,
          total_donors INT,
          total_received INT,
          PRIMARY KEY(project_id)
        ) ENGINE=MEMORY;
        INSERT INTO Project_Temp (project_id,total_payments)
          SELECT `Project`.ID, IFNULL(SUM(PaymentAmount),0)
          FROM `Project` LEFT JOIN `Payment` ON ProjectID = `Project`.ID
          GROUP BY 1;
        INSERT INTO Project_Temp (project_id,total_donors,total_received)
          SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID),0), IFNULL(SUM(Amount),0)
          FROM `Project` LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
          GROUP BY 1
          ON DUPLICATE KEY UPDATE total_donors = VALUES(total_donors), total_received = VALUES(total_received);
        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7-0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?


  • More efficient method for grabbing all child units

    - by Hazior
    I have a table in SQL that links to itself through a parentID column. I want to find the children, and their children, and so on, until I have found all of the descendant rows. I have a recursive function that does this, but it seems very inefficient. Is there a way to get SQL to find all of the child objects? If so, how?

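    One common alternative (not from the question): where the database supports recursive common table expressions, the whole walk can be done in a single query; otherwise, fetching the id/parentID pairs once and walking them in application code avoids issuing one query per level. A minimal sketch of the latter in Python; the table shape and the assumption that rows arrive as (id, parent_id) tuples are for illustration only:

        from collections import defaultdict, deque

        def descendant_ids(rows, root_id):
            """rows: iterable of (id, parent_id) tuples, e.g. the result of
            'SELECT id, parentID FROM units'; parent_id is None for top-level rows."""
            children = defaultdict(list)
            for node_id, parent_id in rows:
                children[parent_id].append(node_id)

            found = []
            queue = deque(children[root_id])      # start with the direct children
            while queue:
                current = queue.popleft()
                found.append(current)
                queue.extend(children[current])   # then their children, and so on
            return found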

  • Collision detections and how efficient they are

    - by Shadow
    How exactly do you implement collision detection? What are the costs involved? Do different platforms (C/C++, Java, Cocoa/iPhone, Flash, DirectX) have different optimizations for calculating collisions? And lastly, are there libraries available to do this for me, or some that I can just adapt for my platform of choice? As I understand it, you would need to loop through the collision map, find the area in question, and then compare the input thing (e.g. a sprite) to the type of pixel that is in the questioned area. I understand the very basic idea, but I don't understand the underlying implementation, or even a higher-level one for that matter. It would seem that this type of detection, or any for that matter, is very costly. Tile map? Bit array? How are these created from an image (I would guess by looping and doing stuff)? The reason I ask this question is to get a better understanding of the efficiency behind the scenes and to understand exactly what is going on. Links, references, or examples would be very helpful. I know this question is a bit long-winded, so any help or references would be very welcome. Thanks SO!

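    For a rough sense of what common implementations look like (an illustration, not from the question): most 2D engines first run a cheap "broad phase" that finds pairs of objects whose bounding boxes might touch, and only then run the expensive per-pixel or per-tile check on those pairs. A minimal sketch in Python; the cell size and the (x, y, width, height) box format are arbitrary choices for illustration:

        from collections import defaultdict

        def aabb_overlap(a, b):
            """Axis-aligned bounding-box test; each box is (x, y, width, height)."""
            ax, ay, aw, ah = a
            bx, by, bw, bh = b
            return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

        def candidate_pairs(boxes, cell=64):
            """Broad phase: bucket boxes into a coarse grid so only boxes sharing a
            cell are compared, instead of testing every pair of objects."""
            cells = defaultdict(list)
            for i, (x, y, w, h) in enumerate(boxes):
                for cx in range(int(x) // cell, int(x + w) // cell + 1):
                    for cy in range(int(y) // cell, int(y + h) // cell + 1):
                        cells[(cx, cy)].append(i)
            pairs = set()
            for members in cells.values():
                for i in range(len(members)):
                    for j in range(i + 1, len(members)):
                        a, b = sorted((members[i], members[j]))
                        if aabb_overlap(boxes[a], boxes[b]):
                            pairs.add((a, b))
            return pairs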

  • Ruby efficient way of building an array from an array of arrays

    - by randombits
    I have an array of ActiveRecord objects, each of which has its own respective errors array. I want to flatten it all out and get only the unique values into one array. So the top-level array might look like:

        foo0 = Foo.new
        foo1 = Foo.new
        foo2 = Foo.new
        foo3 = Foo.new
        arr = [foo0, foo1, foo2, foo3]

    Each one of those objects could potentially have an array of errors, and I'd like to get just the unique messages out of them and put them into another array, say called error_arr. How would you do it the "Ruby" way?


  • What's the most efficient way to set up a multi-lingual website?

    - by Jasper De Bruijn
    Hi, I'm developing a website that will be available in different languages. It is a LAMP (Linux, Apache, MySQL, PHP) setup, and it makes use of Smarty, mostly as the template engine. The way we currently translate is with a self-written Smarty plugin, which recognizes certain tags in the HTML files and looks up the corresponding tag in an earlier-defined language file. The HTML could look as follows:

        <p>Hi, welcome to $#gamedesc;!</p>

    And the language file could look like this:

        gamedesc:Poing 2009$;
        welcome:this is another tag$;

    Which would then output:

        <p>Hi, welcome to Poing 2009!</p>

    This system is very basic, but it is pretty hard to control if, for example, I would like to keep track of what has been translated so far, or give certain users the rights to translate only certain tags. I've been looking at some alternative ways to approach this, by either replacing the text file with XML files which could store some extra metadata, or by perhaps storing all the texts in the database and retrieving them from there. My question is: what would be the best way to make this system both maintainable and able to perform well under high user traffic? Are there perhaps any (lightweight) plugins I could take a look at?

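    As an illustration of the database-backed direction the question mentions (a sketch, not a recommendation of specific tools): each tag becomes a row carrying its text per language plus the metadata the current text files cannot hold, such as who translated it and whether it has been approved; the whole catalog for a language is loaded once and cached, so serving a page is a dictionary lookup rather than a query per tag. Sketched in Python rather than PHP/Smarty purely to show the shape; the table and column names are invented:

        import re
        import sqlite3   # stand-in for MySQL; only the schema idea matters here

        def load_catalog(conn, lang):
            """Return {tag: text} for one language. Metadata columns such as
            translated_by and approved stay in the table for the admin side."""
            rows = conn.execute(
                "SELECT tag, text FROM translations WHERE lang = ? AND approved = 1",
                (lang,),
            )
            return dict(rows)

        def render(template, catalog):
            # Replace $#tag; placeholders; fall back to the tag name itself so
            # missing translations are visible instead of silently empty.
            return re.sub(r"\$#(\w+);",
                          lambda m: catalog.get(m.group(1), m.group(1)),
                          template)

        # usage sketch:
        #   catalog = load_catalog(conn, "en")
        #   html = render('<p>Hi, welcome to $#gamedesc;!</p>', catalog)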

  • Most Efficient Alternative Method of Storing Settings for iPhone Apps

    - by JPK
    I am not using the Settings bundle to store the settings for my app, as I prefer to let the user access the settings within the app (they may be changed fairly often). I do realize that there is the option to do both, but for now I am trying to find the most optimal place to store the settings within the app. I have a good number of settings (from what I have read, probably too many for NSUserDefaults), and the two main options I am considering are:
    1) storing the settings in a dictionary in the plist, loading the settings into an NSDictionary property in the app delegate, and accessing them via the sharedDelegate;
    2) storing the settings in a Core Data entity (one row on a Settings entity), loading the settings into a Settings object in the app delegate, and accessing them via the sharedDelegate.
    Of these two, which would be the optimal method, performance-wise?


  • How to shift pixels of a pixmap efficiently in Qt4

    - by stanleyxu2005
    Hello, I have implemented a marquee text widget using Qt4. I painted the text content onto a pixmap first, and then paint a portion of this pixmap onto a paint device by calling:

        painter.drawTiledPixmap(offsetX, offsetY, myPixmap);

    My understanding is that Qt will fill the whole marquee text rectangle with content from myPixmap. Is there an even faster way, i.e. to shift all the existing content to the left by 1px and then fill only the newly exposed, 1px-wide and N-px-high area with content from myPixmap?


  • Storing a bucket of numbers in an efficient data structure

    - by BlitzKrieg
    I have buckets of numbers, e.g. 1 to 4, 5 to 15, 16 to 21, 22 to 34, and so on. I have roughly 600,000 such buckets. The range of numbers that falls in each bucket varies. I need to store these buckets in a suitable data structure so that lookups for a number are as fast as possible. So my question is: what is a suitable data structure and sorting mechanism for this type of problem? Thanks in advance.

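    One standard answer for contiguous, non-overlapping ranges like these (an answer sketch, not from the question): sort the buckets by their start value and binary-search the start points; with roughly 600,000 buckets that is at most about 20 comparisons per lookup. A minimal sketch in Python using the standard bisect module; the payload field is just a placeholder for whatever each bucket maps to:

        import bisect

        class BucketIndex:
            def __init__(self, buckets):
                """buckets: list of (low, high, payload) with non-overlapping ranges."""
                self._buckets = sorted(buckets)
                self._starts = [low for low, _, _ in self._buckets]

            def find(self, n):
                """Return the payload of the bucket containing n, or None."""
                i = bisect.bisect_right(self._starts, n) - 1   # last bucket starting <= n
                if i >= 0:
                    low, high, payload = self._buckets[i]
                    if low <= n <= high:
                        return payload
                return None

        # usage sketch
        idx = BucketIndex([(1, 4, "A"), (5, 15, "B"), (16, 21, "C"), (22, 34, "D")])
        assert idx.find(17) == "C"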

  • Python: Most efficient way to concatenate and rearrange files

    - by user300890
    Hi, I am reading from several files, each of which is divided into two pieces: first a header section of a few thousand lines, followed by a body of a few thousand lines. My problem is that I need to concatenate these files into one file where all the headers are at the top, followed by the bodies. Currently I am using two loops: one to pull out all the headers and write them, and a second to write the body of each file (I also include a tmp_count variable to limit the number of lines loaded into memory before dumping to file). This is pretty slow - about 6 minutes for a 13 GB file. Can anyone tell me how to optimize this, or whether there is a faster way to do this in Python? Thanks! Here is my code:

        import os

        def cat_files_sam(final_file_name, work_directory_master, file_count):
            final_file = open(final_file_name, "w")
            if len(file_count) > 1:
                file_count = sort_output_files(file_count)
            # only for @ headers
            for bowtie_file in file_count:
                #print bowtie_file
                tmp_list = []
                tmp_count = 0
                for line in open(os.path.join(work_directory_master, bowtie_file)):
                    if line.startswith("@"):
                        if tmp_count == 1000000:
                            final_file.writelines(tmp_list)
                            tmp_list = []
                            tmp_count = 0
                        tmp_list.append(line)
                        tmp_count += 1
                    else:
                        final_file.writelines(tmp_list)
                        break
            for bowtie_file in file_count:
                #print bowtie_file
                tmp_list = []
                tmp_count = 0
                for line in open(os.path.join(work_directory_master, bowtie_file)):
                    if line.startswith("@"):
                        continue
                    if tmp_count == 1000000:
                        final_file.writelines(tmp_list)
                        tmp_list = []
                        tmp_count = 0
                    tmp_list.append(line)
                    tmp_count += 1
                final_file.writelines(tmp_list)
            final_file.close()

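    One direction to try (an assumption about where the time goes, not a measurement): read each input file only once instead of twice, writing header lines straight to the output and spooling body lines to a temporary file, then appending the spooled bodies at the end. A sketch, keeping the question's assumption that the '@' header lines form a contiguous prefix of each file; the sort_output_files step from the original is omitted:

        import os
        import shutil
        import tempfile

        def cat_files_sam(final_file_name, work_directory_master, file_count):
            with open(final_file_name, "w") as final_file, \
                 tempfile.TemporaryFile(mode="w+") as body_spool:
                for bowtie_file in file_count:
                    path = os.path.join(work_directory_master, bowtie_file)
                    with open(path) as handle:
                        in_header = True
                        for line in handle:
                            if in_header and line.startswith("@"):
                                final_file.write(line)        # headers go straight out
                            else:
                                in_header = False
                                body_spool.write(line)        # bodies wait in the spool
                body_spool.seek(0)
                shutil.copyfileobj(body_spool, final_file)    # append all bodies at once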

  • Good data structure for efficient insert/querying on arbitrary properties

    - by Juliet
    I'm working on a project where arrays are the default data structure for everything, and every query is a linear search in the form of:
    - Need a customer with a particular name? customer.Find(x => x.Name == name)
    - Need a customer with a particular unique id? customer.Find(x => x.Id == id)
    - Need a customer of a particular type and age? customer.Find(x => x is PreferredCustomer && x.Age >= age)
    - Need a customer of a particular name and age? customer.Find(x => x.Name == name && x.Age == age)
    In almost all instances, the criteria for lookups are well-defined. For example, we only search for customers by one or more of the properties Id, Type, Name, or Age; we rarely search by anything else. Is there a good data structure to support arbitrary queries of these types with lookups better than O(n)? Any out-of-the-box implementations for .NET?

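    One common shape for this (a sketch, not a specific .NET library): keep a hash index per frequently-queried property and use it to narrow the candidate set before applying the rest of the predicate; equality lookups become O(1) on average, while range conditions such as Age >= x would want a sorted index instead. Sketched in Python for brevity; in .NET the same idea maps onto a Dictionary<TKey, List<Customer>> per indexed property:

        from collections import defaultdict

        class IndexedCollection:
            """Exact-match hash indexes on chosen fields; everything else scans."""

            def __init__(self, items, indexed_fields):
                self._items = list(items)
                self._indexes = {f: defaultdict(list) for f in indexed_fields}
                for item in self._items:
                    for field, index in self._indexes.items():
                        index[getattr(item, field)].append(item)

            def find(self, predicate, **hints):
                # Narrow with the smallest matching index bucket, then let the
                # predicate re-check every condition on the survivors.
                candidates = self._items
                for field, value in hints.items():
                    if field in self._indexes:
                        bucket = self._indexes[field][value]
                        if len(bucket) < len(candidates):
                            candidates = bucket
                return [x for x in candidates if predicate(x)]

        # usage sketch:
        #   adults = coll.find(lambda c: c.name == "Ada" and c.age >= 21, name="Ada")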

  • What's faster and more efficient: loading a JS file with arrays, or populating arrays from tables?

    - by Leigh
    I am rebuilding an e-commerce site where the product data is stored in a multidimensional JS array that gets loaded on page load. This data is constantly being accessed with JS, due to the nature of the site, to update prices based on user selections. There are many options that affect the final price. From a programming standpoint, a DB table is much easier to maintain and update than JS arrays are, and since I am porting the site over to PHP and MySQL, I have been considering moving these arrays into tables. So, would it be better to populate an array from the DB on load so that the pricing data is always available to the JS, or to stay with hard-coded JS files? I considered getting data via AJAX as needed, but since this site has to constantly update pricing with user interaction, I have pretty much ruled that out. How would you handle it?


  • Efficient algorithm for Next button on a MySQL result set

    - by David Grayson
    I have a website that lets people view rows in a table (each row is a picture). There are more than 100,000 rows. You can view different subsets of the rows, and you can view them with different sort orders. While you are viewing one of the rows, you can click the "Next" or "Previous" buttons to go to the next/previous row in the list. How would you implement the "Next" and "Previous" features of the website? More specifically, if you have an arbitrary query that returns a list of up to 100,000+ rows, and you know some information about the current row someone is viewing, how do you determine the NEXT row efficiently? Here is the pseudo-code of the solution I came up with when the website was young; it worked well when there were only 1,000 rows, but now that there are 100,000 rows I think it is eating up too much memory:

        int nextRowId(string query, int currentRowId) {
            array allRowIds = mysql_query(query);                      // Takes up a lot of memory!
            int currentIndex = (index of currentRowId in allRowIds);   // Takes time!
            return allRowIds[currentIndex+1];
        }

    While you are thinking about this problem, remember that the website can store more information about the current row than just its ID (for example, the position of the current row in the result set), and this information can be used as a hint to help determine the ID of the next row. Edit: Sorry for not mentioning this earlier, but this isn't just a static website: rows can often be added to the list, and rows can be re-ordered in the list. (Much more rarely, rows can be removed from the list.) I think that I should worry about that kind of thing, but maybe you can convince me otherwise.

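    The usual answer to this (an answer sketch, not from the question) is keyset pagination: rather than materializing the whole result set, ask the database for the single row that comes immediately after the current one in the active sort order, using the current row's sort value plus its ID as a tie-breaker, which a composite index can serve directly. A sketch in Python with a parameterized MySQL query; the table and column names are invented, and whatever filters define the current subset would be added to the WHERE clause:

        def next_row_id(cursor, current_sort_value, current_row_id):
            """ID of the row right after the current one when the listing is
            sorted by (sort_value, id); assumes an index on those two columns."""
            cursor.execute(
                "SELECT id FROM pictures"
                " WHERE (sort_value, id) > (%s, %s)"
                " ORDER BY sort_value, id"
                " LIMIT 1",
                (current_sort_value, current_row_id),
            )
            row = cursor.fetchone()
            return row[0] if row else None   # None means this is the last row

        # "Previous" is the mirror image: (sort_value, id) < (...) ORDER BY ... DESC.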

  • Efficient persistent storage for simple id to table of values map for java

    - by wds
    I need to store some data that follows the simple pattern of mapping an "id" to a full table (with multiple rows) of several columns (i.e. some integer values [u, v, w]). The size of one of these tables would be a couple of KB. Basically what I need is to store a persistent cache of some intermediate results. This could quite easily be implemented with simple SQL, but there are a couple of problems. Namely, I need to compress the size of this structure on disk as much as possible (because of the number of values I'm storing). Also, it's not transactional; I just need to write once and simply read the contents of the entire table, so a relational DB isn't actually a very good fit. I was wondering if anyone had any good suggestions? For some reason I can't seem to come up with something decent at the moment. Something with an API in Java would be especially nice.

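    One way to get very compact, write-once storage for this (a sketch of the general idea, not a specific library): pack each table's integer triples into fixed-width binary and compress the result, then keep the compressed blobs in one file behind a small id-to-offset index. Shown in Python for brevity; in Java the same shape maps onto DataOutputStream plus DeflaterOutputStream (java.util.zip), and the record layout below is an invented example:

        import struct
        import zlib

        def pack_table(rows):
            """rows: list of (u, v, w) integer triples -> compressed bytes."""
            raw = b"".join(struct.pack("<3i", u, v, w) for u, v, w in rows)
            return zlib.compress(raw, 9)

        def unpack_table(blob):
            raw = zlib.decompress(blob)
            return [struct.unpack_from("<3i", raw, off) for off in range(0, len(raw), 12)]

        # One simple on-disk layout: a flat file of records
        #     [id: 4 bytes][length: 4 bytes][compressed payload]
        # scanned once at startup to build an in-memory {id: offset} index.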

  • Most efficient method of turning multiple 1D arrays into columns of a 2D array

    - by Ty W
    As I was writing a for loop earlier today, I thought that there must be a neater way of doing this... so I figured I'd ask. I looked briefly for a duplicate question but didn't see anything obvious.
    The Problem: Given N arrays of length M, turn them into an M-row by N-column 2D array.
    Example:

        $id = [1,5,2,8,6]
        $name = [a,b,c,d,e]
        $result = [[1,a], [5,b], [2,c], [8,d], [6,e]]

    My Solution: Pretty straightforward and probably not optimal, but it does work:

        <?php
        // $row is returned from a DB query
        // $row['<var>'] is a comma separated string of values
        $categories = array();
        $ids = explode(",", $row['ids']);
        $names = explode(",", $row['names']);
        $titles = explode(",", $row['titles']);
        for ($i = 0; $i < count($ids); $i++) {
            $categories[] = array("id" => $ids[$i], "name" => $names[$i], "title" => $titles[$i]);
        }
        ?>

    Note: I didn't put the name = value bit in the spec, but it'd be awesome if there was some way to keep that as well.

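    For what it's worth (not part of the question): the operation being described is a "zip" of parallel arrays, and in PHP array_map(null, $ids, $names, $titles) produces the positional rows directly, though not the keyed form the loop above builds. The same idea in Python, where it is built in, just to show the pattern:

        ids = [1, 5, 2, 8, 6]
        names = ["a", "b", "c", "d", "e"]

        # positional rows, like the $result example in the question
        rows = [list(pair) for pair in zip(ids, names)]           # [[1, 'a'], [5, 'b'], ...]

        # keyed rows, like the $categories array the PHP loop builds
        keyed = [{"id": i, "name": n} for i, n in zip(ids, names)]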

  • Memory efficient collection class

    - by Joe
    I'm building an array of dictionaries (called keys) in my iPhone application to hold the section names and row counts for a table view. The code looks like this:

        [self.results removeAllObjects];
        [self.keys removeAllObjects];
        NSUInteger i,j = 0;
        NSString *key = [NSString string];
        NSString *prevKey = [NSString string];
        if ([self.allResults count] > 0) {
            prevKey = [NSString stringWithString:[[[self.allResults objectAtIndex:0] valueForKey:@"name"] substringToIndex:1]];
            for (NSDictionary *theDict in self.allResults) {
                key = [NSString stringWithString:[[theDict valueForKey:@"name"] substringToIndex:1]];
                if (![key isEqualToString:prevKey]) {
                    NSDictionary *newDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                        [NSNumber numberWithInt:i], @"count",
                        prevKey, @"section",
                        [NSNumber numberWithInt:j], @"total", nil];
                    [self.keys addObject:newDictionary];
                    prevKey = [NSString stringWithString:key];
                    i = 1;
                } else {
                    i++;
                }
                j++;
            }
            NSDictionary *newDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                [NSNumber numberWithInt:i], @"count",
                prevKey, @"section",
                [NSNumber numberWithInt:j], @"total", nil];
            [self.keys addObject:newDictionary];
        }
        [self.tableview reloadData];

    The code works fine the first time through, but I sometimes have to rebuild the entire table, so I redo this code. That works fine on the simulator, but on my device the program bombs when I execute the reloadData line:

        malloc: *** mmap(size=3772944384) failed (error code=12)
        *** error: can't allocate region
        *** set a breakpoint in malloc_error_break to debug
        malloc: *** mmap(size=3772944384) failed (error code=12)
        *** error: can't allocate region
        *** set a breakpoint in malloc_error_break to debug
        Program received signal: "EXC_BAD_ACCESS".

    If I remove the reloadData line, the code works on the device. I'm wondering if this is something to do with the way I've built the keys array (i.e. using autoreleased strings and dictionaries).


  • Is there a more efficient way to retrieve tables from websites using WPF + WebClient?

    - by Jordan Brooker
    New to the community, but here is my first question that I am stuck on. I am new to WPF and WebClient using C#, and I am attempting to make a program that accesses www.nba.com to populate a combobox I have with team names, and then, when a user selects a team from the combobox, I want to populate a portion of the main window with the roster from the team's home site, same style and everything. I was able to populate the combobox using WebClient.OpenRead and reading in the markup to extract the team names. Now I am on the more difficult part. I was planning on using the same method to grab all the markup and then somehow display the table in a content panel, but I feel that this is a very tedious thing to do. Can anyone give me any tips for completing this action, or is there a method in the WebClient class that allows me to search a webpage for a table or object other than text? Thanks.


  • Most efficient way to write over file after reading

    - by Ryan McClure
    I'm reading in some data from a file, manipulating it, and then overwriting it to the same file. Until now, I've been doing it like so:

        open (my $inFile, $file) or die "Could not open $file: $!";
        $retString .= join ('', <$inFile>);
        ...
        close ($inFile);
        open (my $outFile, '>', $file) or die "Could not open $file: $!";
        print $outFile $retString;
        close ($outFile);

    However, I realized I can just use the truncate function and open the file for read/write:

        open (my $inFile, '+<', $file) or die "Could not open $file: $!";
        $retString .= join ('', <$inFile>);
        ...
        truncate $inFile, 0;
        print $inFile $retString;
        close ($inFile);

    I don't see any examples of this anywhere. It seems to work well, but am I doing it correctly? Is there a better way to do this?


  • Simple and efficient distribution of C++/Boost source code (amalgamation)

    - by Arrieta
    Hello: My job mostly consists of engineering analysis, but I find myself distributing code more and more frequently among my colleagues. A big pain is that not every user is proficient in the intricacies of compiling source code, and I cannot distribute executables. I've been working with C++ using Boost, and the problem is that I cannot ask the sysadmin of every network to install the libraries. Instead, I want to distribute a single source file (or as few as possible) so that the user can just run g++ source.c -o program. So the question is: can you pack the Boost libraries with your code and end up with a single file? I am talking about the Boost libraries which are "headers only" or "templates only". As inspiration, please look at the distribution of SQLite or the Lemon Parser Generator; the author amalgamates everything into a single source file which is trivial to compile. Thank you.

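    The general technique behind such single-file distributions (background plus a sketch, not a ready-made tool): a preprocessing script recursively replaces the project's own #include "..." lines with the contents of the included files, emitting one flat translation unit, while headers pulled in with angle brackets are left alone. For header-only Boost code, Boost's bcp utility can first copy just the needed subset of Boost headers into the project tree so they too can be inlined. A deliberately simplistic sketch in Python; it ignores include guards and conditional compilation, so treat it as a starting point rather than a working amalgamator:

        import os
        import re
        import sys

        INCLUDE_RE = re.compile(r'^\s*#include\s+"([^"]+)"')

        def amalgamate(path, search_dirs, seen=None, out=None):
            """Recursively inline quoted #include directives into one stream,
            emitting each file at most once (a crude stand-in for include guards)."""
            seen = set() if seen is None else seen
            out = sys.stdout if out is None else out
            real = os.path.realpath(path)
            if real in seen:
                return
            seen.add(real)
            with open(path) as src:
                for line in src:
                    match = INCLUDE_RE.match(line)
                    target = None
                    if match:
                        for d in [os.path.dirname(path)] + list(search_dirs):
                            candidate = os.path.join(d, match.group(1))
                            if os.path.exists(candidate):
                                target = candidate
                                break
                    if target:
                        amalgamate(target, search_dirs, seen, out)
                    else:
                        out.write(line)   # system and <boost/...> includes pass through

        # usage sketch:  python amalgamate.py main.cpp include_dir > single_file.cpp
        if __name__ == "__main__":
            amalgamate(sys.argv[1], sys.argv[2:])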
