Search Results

Search found 9017 results on 361 pages for 'efficient storage'.


  • Is shortening properties names worth it?

    - by raam86
    In the How To Node article "Blog rolling with node.js and MongoDB", the author mentions that it's a good idea to shorten property names: "...oft-reported issue with MongoDB is the size of the data on the disk... each and every record stores all the field-names... This means that it can often be more space-efficient to have properties such as 't' or 'b' rather than 'title' or 'body', however for fear of confusion I would avoid this unless truly required!" I am aware of solutions for how to do it; I am more interested in when it is truly required.
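    When it is required, a common middle ground is to keep the short names on disk but map them back to readable names at the application boundary, so only the storage layer ever sees 't' and 'b'. A minimal sketch with pymongo, assuming a hypothetical blog.posts collection and a hand-maintained field map:

        from pymongo import MongoClient

        # Hypothetical mapping between readable names and stored short names.
        FIELD_MAP = {'title': 't', 'body': 'b'}
        REVERSE_MAP = {v: k for k, v in FIELD_MAP.items()}

        posts = MongoClient().blog.posts

        def save_post(doc):
            # Store under short names to keep per-document key overhead down.
            posts.insert_one({FIELD_MAP.get(k, k): v for k, v in doc.items()})

        def load_post(query):
            raw = posts.find_one({FIELD_MAP.get(k, k): v for k, v in query.items()})
            return {REVERSE_MAP.get(k, k): v for k, v in raw.items()} if raw else None

        save_post({'title': 'Hello', 'body': 'World'})
        print(load_post({'title': 'Hello'}))  # {'_id': ..., 'title': 'Hello', 'body': 'World'}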

    Read the article

  • How to scale thumbnail to fit depending on tall or wide photo - Attachment_fu

    - by adamwstl
    I'm using attachment_fu and RMagick to create thumbnails after upload. The thumbnail is a fixed 135 x 135 px and I'm currently forcing the width to 135px on all photos. The problem is that if it's a wide photo, it has to stretch the height awkwardly. Current attachment_fu setup:

        class PhotoImage < Image
          belongs_to :photo
          has_attachment :content_type => :image,
                         :size => 0..5.megabytes,
                         :storage => :s3,
                         :resize_to => '650x>',
                         :thumbnails => { :thumbnail => '135x>' } # :geometry => 'x50'
          validates_as_attachment
        end

    Here's what I'm trying to do: Thanks
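    With ImageMagick-style geometry strings, a spec of the form '135x135>' scales the image to fit within a 135x135 box while preserving the aspect ratio (the trailing '>' means "only shrink images larger than the box"), so nothing gets stretched. The same fit-within behaviour, sketched in Python with PIL rather than the Ruby stack above, just to show the technique:

        from PIL import Image

        def make_thumbnail(src_path, dest_path, box=(135, 135)):
            img = Image.open(src_path)
            # thumbnail() scales down in place to fit inside `box`,
            # preserving aspect ratio -- it never stretches or enlarges.
            img.thumbnail(box)
            img.save(dest_path)

        make_thumbnail('photo.jpg', 'photo_thumb.jpg')  # hypothetical file names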

    Read the article

  • Pre loaded database on iPhone?

    - by Julian
    Hi, I have recently developed an app using Core Data as the storage DB. That app allowed the user to read and write to the DB. I am now developing a new app in which the user doesn't need to write anything to the DB; the app just needs to read the data. The data has relationships etc., so I cannot just use a plist or something similar. My question is: should I use Core Data for such a requirement, and if so, how would I go about entering the data and then releasing the app? Would I have to code the data entry to populate the DB, then remove all this code (as I don't want the database to repopulate every time the user opens the app)? Is there a way to create a Core Data store using SQL commands as with SQLite, i.e. INSERT INTO... etc.? Any ideas/thoughts would be very helpful. Many thanks, Jules

    Read the article

  • Adding new "columns" to csv data file in Tcl

    - by George
    Hi All, I am dealing with "large" measurement data, approximately 30K key-value pairs. The measurements run for a number of iterations, and after each iteration a datafile (non-CSV) with 30K key-value pairs is created. I want to somehow create a CSV file of the form:

        Key1,value of iteration1,value of iteration2,...
        Key2,value of iteration1,value of iteration2,...
        ...

    Now, I was wondering about an efficient way of adding each iteration's measurement data as a column to the CSV file in Tcl. So far it seems that either way I will need to load the whole CSV file into some variable (array/list) and add the new measurement data to each element. This seems somewhat inefficient. Is there another way, perhaps?
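    If each iteration file is cheap to parse, one alternative is to never rewrite the CSV at all: accumulate the columns in memory, keyed by measurement key, and write the file once after the last iteration. A sketch of that pattern, shown in Python rather than Tcl (a Tcl version would use an array of lists the same way), with a hypothetical parse_datafile parser:

        import csv
        from collections import defaultdict

        def parse_datafile(path):
            # Hypothetical parser: yields (key, value) pairs from one iteration file.
            with open(path) as f:
                for line in f:
                    key, value = line.split(None, 1)
                    yield key, value.strip()

        columns = defaultdict(list)
        for path in ['iter1.dat', 'iter2.dat', 'iter3.dat']:  # hypothetical names
            for key, value in parse_datafile(path):
                columns[key].append(value)  # one new column entry per iteration

        with open('measurements.csv', 'w', newline='') as out:
            writer = csv.writer(out)
            for key, values in columns.items():
                writer.writerow([key] + values)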

    Read the article

  • How to very efficiently assign lat/long to city boundary described by shape?

    - by watcherFR
    I have a huge shapefile of 36,000 non-overlapping polygons (city boundaries). I want to easily determine the polygon into which a given lat/long falls. What would be the best way, given that it must be extremely computationally efficient? I was thinking of creating a lookup table (tilex, tiley, polygon_id) where tilex and tiley are tile identifiers at zoom level 21 or 22. Yes, the lack of precision from using tile numbers and a planar projection is acceptable in my application. I would rather not use Postgres's GIS extension and am fine with a program that will run for 2 days to generate all the INSERT statements.
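    For the lookup-table approach, the tile identifiers can come from the standard Web Mercator "slippy map" tiling scheme, so the query-time work is one coordinate conversion plus one indexed read on (tilex, tiley). A sketch of the conversion in Python, assuming that scheme:

        import math

        def latlon_to_tile(lat_deg, lon_deg, zoom=22):
            # Standard Web Mercator tile numbering, as used by OSM-style maps.
            lat_rad = math.radians(lat_deg)
            n = 2 ** zoom
            xtile = int((lon_deg + 180.0) / 360.0 * n)
            ytile = int((1.0 - math.log(math.tan(lat_rad)
                                        + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
            return xtile, ytile

        print(latlon_to_tile(48.8566, 2.3522))  # which tile contains central Paris?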

    Read the article

  • trouble setting up TreeViews in pygtk

    - by Chris H
    I've got some code in a class that extends gtk.TreeView, and this is the __init__ method. I want to create a tree view that has 3 columns: a toggle button, a label, and a drop-down box that the user can type stuff into. The code below works, except that the toggle button doesn't react to mouse clicks and the label and the ComboEntry aren't drawn (so I guess you can say it doesn't work). I can add rows just fine, however.

        # make storage: enable/disable, label, user entry
        self.tv_store = gtk.TreeStore(gtk.ToggleButton, str, gtk.ComboBoxEntry)
        # make widget
        gtk.TreeView.__init__(self, self.tv_store)
        # make renderers
        self.buttonRenderer = gtk.CellRendererToggle()
        self.labelRenderer = gtk.CellRendererText()
        self.entryRenderer = gtk.CellRendererCombo()
        # make columns
        self.columnButton = gtk.TreeViewColumn('Enabled')
        self.columnButton.pack_start(self.buttonRenderer, False)
        self.columnLabel = gtk.TreeViewColumn('Label')
        self.columnLabel.pack_start(self.labelRenderer, False)
        self.columnEntry = gtk.TreeViewColumn('Data')
        self.columnEntry.pack_start(self.entryRenderer, True)
        self.append_column(self.columnButton)
        self.append_column(self.columnLabel)
        self.append_column(self.columnEntry)
        self.tmpButton = gtk.ToggleButton('example')
        self.tmpCombo = gtk.ComboBoxEntry(None)
        self.tv_store.insert(None, 0, [self.tmpButton, 'example label', self.tmpCombo])

    thanks.
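    For reference, the usual PyGTK pattern is the likely fix here: the model stores plain values rather than widgets, and each renderer maps one of its properties ('active', 'text') to a model column. A minimal sketch of that approach:

        import gtk

        def on_toggled(cell, path, store):
            # Flip the boolean in the model; the toggle renderer redraws from it.
            store[path][0] = not store[path][0]

        store = gtk.TreeStore(bool, str, str)
        view = gtk.TreeView(store)

        toggle = gtk.CellRendererToggle()
        toggle.connect('toggled', on_toggled, store)
        view.append_column(gtk.TreeViewColumn('Enabled', toggle, active=0))

        label = gtk.CellRendererText()
        view.append_column(gtk.TreeViewColumn('Label', label, text=1))

        combo = gtk.CellRendererCombo()
        combo.set_property('editable', True)
        combo.set_property('has-entry', True)  # allow free-typed text
        # To offer drop-down choices, also set the 'model' and 'text-column'
        # properties, and connect 'edited' to write the text back to column 2.
        view.append_column(gtk.TreeViewColumn('Data', combo, text=2))

        store.insert(None, 0, [True, 'example label', 'example data'])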

    Read the article

  • How do I know if my PHP application is using too much memory?

    - by John
    I'm working on a PHP web application that lets users network with each other, book events, message, etc. I launched it a few months ago and at the moment there are only about 100 users. I set up the application on a VPS with Ubuntu 9.10, Apache 2, MySQL 5 and PHP 5. I had 360 MB of RAM, but upgraded to 720 MB a few minutes ago. Lately, my web application has been experiencing outages due to excessive memory usage. From what I can tell in the error logs, it seems the server automatically kills Apache processes that consume too much memory. As a result, I upgraded memory from 360 MB to 720 MB as a stop-gap measure. So my question is, how do I go about resolving these outage issues? How do I know if my website's need for more memory is due to poor code or if it's part of the website's natural growth? What's the most efficient way to determine which PHP scripts consume the most memory?

    Read the article

  • Stringification of a macro value

    - by SF.
    I faced a problem: I need to use a macro value both as a string and as an integer.

        #define RECORDS_PER_PAGE 10
        /* ... */
        #define REQUEST_RECORDS \
            "SELECT Fields FROM Table WHERE Conditions" \
            " OFFSET %d * " #RECORDS_PER_PAGE \
            " LIMIT " #RECORDS_PER_PAGE ";"

        char result_buffer[RECORDS_PER_PAGE][MAX_RECORD_LEN];
        /* ...and some more uses of RECORDS_PER_PAGE, elsewhere... */

    This fails with a message about a "stray #", and even if it worked, I guess I'd get the macro name stringified, not the value. Of course I can feed the values to the final method ("LIMIT %d", page * RECORDS_PER_PAGE), but it's neither pretty nor efficient. It's times like this when I wish the preprocessor didn't treat strings in a special way and would process their content just like normal code. For now, I've kludged it with

        #define RECORDS_PER_PAGE_TXT "10"

    but understandably, I'm not happy about it. How do I get it right?

    Read the article

  • JS Framework that doesn't use CSS selectors?

    - by RoToRa
    A thing that I noticed about most JavaScript frameworks is that the most common way to find/access DOM elements is to use CSS selectors. However, this usually requires the framework to include a CSS selector parser, because they need to support selectors that the browser doesn't support natively, foremost the framework's own proprietary extensions. I would think that these parsers are large and slow. Wouldn't it be more efficient to have something that doesn't require a parser, such as chained method calls? Something like:

        id("example").children().class("test").hasAttribute("href")

    instead of

        $("#example > .test[href]")

    Are there any frameworks around that do something like this? And how do they compare with jQuery and friends in regard to performance and size? EDIT: You can consider this a theoretical discussion topic. I don't plan to use anything other than jQuery in any practical projects in the near future. I was just wondering why there aren't any other, possibly better approaches.

    Read the article

  • C# GroupJoin effectiveness

    - by bsnote
    without using GroupJoin:

        var playersDictionary = players.ToDictionary(
            player => player.Id,
            element => new PlayerDto { Rounds = new List<RoundDto>() });

        foreach (var round in rounds)
        {
            PlayerDto playerDto;
            playersDictionary.TryGetValue(round.PlayerId, out playerDto);
            if (playerDto != null)
            {
                playerDto.Rounds.Add(new RoundDto { });
            }
        }

        var playerDtoItems = playersDictionary.Values;

    using GroupJoin:

        var playerDtoItems =
            from player in players
            join round in rounds on player.Id equals round.PlayerId into playerRounds
            select new PlayerDto
            {
                Rounds = playerRounds.Select(playerRound => new RoundDto { })
            };

    Which of these two pieces is more efficient?

    Read the article

  • Is there a single query that can update a "sequence number" across multiple groups?

    - by Drarok
    Given a table like below, is there a single-query way to update the table from this:

        | id | type_id | created_at | sequence |
        |----|---------|------------|----------|
        | 1  | 1       | 2010-04-26 | NULL     |
        | 2  | 1       | 2010-04-27 | NULL     |
        | 3  | 2       | 2010-04-28 | NULL     |
        | 4  | 3       | 2010-04-28 | NULL     |

    To this (note that created_at is used for ordering, and sequence is "grouped" by type_id):

        | id | type_id | created_at | sequence |
        |----|---------|------------|----------|
        | 1  | 1       | 2010-04-26 | 1        |
        | 2  | 1       | 2010-04-27 | 2        |
        | 3  | 2       | 2010-04-28 | 1        |
        | 4  | 3       | 2010-04-28 | 1        |

    I've seen some code before that used an @ variable like the following, which I thought might work:

        SET @seq = 0;
        UPDATE `log` SET `sequence` = @seq := @seq + 1 ORDER BY `created_at`;

    But that obviously doesn't reset the sequence to 1 for each type_id. If there's no single-query way to do this, what's the most efficient way? Data in this table may be deleted, so I'm planning to run a stored procedure after the user is done editing to re-sequence the table.

    Read the article

  • Mysql search design

    - by neil
    I'm designing a MySQL database, and I'd like some input on an efficient way to store blog/article data for searching. Right now, I've made a separate column that stores the content to be searched: no duplicate words, no words shorter than four letters, and no words that are too common. So, essentially, it's a list of keywords from the original article. Also searched would be a list of tags, and the title field. I'm not quite sure how MySQL indexes fulltext columns, so would storing the data like that be ineffective, or redundant somehow? A lot of the articles are on the same topic, so would the relevance score be hurt by so many of the rows having similar keywords? Also, for this project, solutions like Sphinx, Lucene or Google Custom Search can't be used -- only PHP & MySQL. Thanks!

    Read the article

  • efficiently compare two BitArrays of the same length

    - by BobTurbo
    How would I do this? I am trying to count the positions at which both arrays have the value TRUE/1. As you can see, my code has multiple BitArrays and loops through each one, comparing it with comparisonArray in an inner loop. It doesn't seem to be very efficient, and I need it to be.

        foreach (var bitArrayTuple in bitArrayList)
        {
            for (int i = 0; i < arrayLength; i++)
                if (bitArrayTuple.Item2[i] && comparisonArray[i])
                    bitArrayTuple.Item1++;
        }

    where Item1 is the count and Item2 is a BitArray.
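    The usual trick is to AND the two bit sets one machine word at a time and then popcount the result (with C#'s BitArray that would be And() on a copy, then counting the set bits). A sketch of the idea using Python integers standing in for the BitArrays:

        def count_common_bits(a, b):
            # AND the two bit sets, then count the surviving 1-bits (popcount).
            return bin(a & b).count('1')

        comparison = 0b10110
        bit_arrays = [0b10010, 0b01110, 0b11111]
        print([count_common_bits(x, comparison) for x in bit_arrays])  # [2, 2, 3]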

    Read the article

  • Options for storing large text blobs in/with an SQL database?

    - by kdt
    Hi, I have some large volumes of text (log files) which may be very large (up to gigabytes). They are associated with entities which I'm storing in a database, and I'm trying to figure out whether I should store them within the SQL database or in external files. It seems like in-database storage may be limited to 4GB for LONGTEXT fields in MySQL, and presumably other DBs have similar limits. Also, storing in the database presumably precludes any kind of seeking when viewing the data -- I'd have to load the full length of the data to render any part of it, right? So it seems like I'm leaning towards storing this data out of the DB: are my misgivings about storing large blobs in the database valid, and if I'm going to store them out of the database, are there any frameworks/libraries to help with that? (I'm working in Python but am interested in technologies in other languages too.)
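    On the seeking point: with out-of-DB storage the row only needs to hold a path (plus whatever metadata you index), and any window of the log can be read on demand. A minimal sketch in Python, assuming a hypothetical path column:

        import os

        def read_window(path, offset, length=64 * 1024):
            # Read `length` bytes starting at `offset`, without loading the file.
            with open(path, 'rb') as f:
                f.seek(offset)
                return f.read(length)

        # e.g. show the last 64 KiB of a multi-gigabyte log in a viewer
        path = '/var/log/app/entity-1234.log'  # hypothetical path stored in the DB
        tail = read_window(path, max(0, os.path.getsize(path) - 64 * 1024))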

    Read the article

  • Collection type generated by for with yield

    - by Jesper
    When I evaluate a for in Scala, I get an immutable IndexedSeq (a collection with array-like performance characteristics, such as efficient random access):

        scala> val s = for (i <- 0 to 9) yield math.random + i
        s: scala.collection.immutable.IndexedSeq[Double] = Vector(0.6127056766832756, 1.7137598183155291, ...

    Does a for with a yield always return an IndexedSeq, or can it also return some other type of collection class (a LinearSeq, for example)? If it can also return something else, then what determines the return type, and how can I influence it? I'm using Scala 2.8.0.RC3.

    Read the article

  • Best strategy for HTML partial rendering based on multiple dropdown values

    - by pv2008
    I have a View that renders something like this: "Item 1" and "Item 2" are <tr> elements of a table. After the user changes "Value 1" or "Value 2", I would like to call a Controller and put the result (some HTML snippet) in the div marked as "Result of...". I have some vague notions of jQuery: I know how to bind to the onchange event of the select element and call the $.ajax() function, for example. But I wonder if this can be achieved in a more efficient way in ASP.NET MVC2.

    Read the article

  • GQL Query with __key__ in List of KEYs

    - by bossylobster
    In the GQL reference [1], using the IN keyword with a list of values is encouraged, and constructing a KEY by hand works; the GQL query

        SELECT * FROM MyModel WHERE __key__ = KEY('MyModel', 'my_model_key')

    will succeed. However, using the code you would expect to work:

        SELECT * FROM MyModel
        WHERE __key__ IN (KEY('MyModel', 'my_model_key1'), KEY('MyModel', 'my_model_key2'))

    in the Datastore Viewer, there is a complaint of "Invalid GQL query string." What is the correct way to format such a query?

    [1] http://code.google.com/appengine/docs/python/datastore/gqlreference.html

    PS: I know there are more efficient ways to do this in Python (without constructing a GQL query) and using the remote_api, but each call to the remote_api counts against quota. In an environment where quota is not (necessarily) free, quick and dirty queries are very helpful.
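    For completeness, in application code (as opposed to the Datastore Viewer) a multi-key fetch is usually done as one batch get rather than a GQL IN query; a sketch with the Python db API that the reference above documents:

        from google.appengine.ext import db

        keys = [db.Key.from_path('MyModel', 'my_model_key1'),
                db.Key.from_path('MyModel', 'my_model_key2')]

        # One batch round trip; results come back in key order, None for misses.
        entities = db.get(keys)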

    Read the article

  • How can I call an executable to run on a separate machine within a program on my own machine (win xp

    - by Mr. H.
    My objective is to write a program which will call another executable on a separate computer (all with Win XP) with parameters determined at run-time, then repeat for several more computers, and then collect the results. In short, I'm working on a grid-computing project. The algorithm itself is already coded in FORTRAN, but we are looking for an efficient way to run it on many computers at once. I suppose one way to accomplish this would be to upload a script to each computer and then run that script on each computer, all automatically and driven by my own parameters. But how can I write a program which will write to, upload, and run a script on a separate computer? I had considered GridGain, but the algorithm is already coded and in a different language, so that is ruled out. My current guess at accomplishing this task is to use Expect (wiki/Expect), but I have no knowledge of the tool. Any advice appreciated.

    Read the article

  • Efficiently Reshaping/Reordering Numpy Array to Properly Ordered Tiles (Image)

    - by Phelix
    I would like to be able to somehow reorder a numpy array for efficient processing of tiles. What I've got:

        >>> A = np.array([[1,2],[3,4]]).repeat(2,0).repeat(2,1)
        >>> A  # image-like array
        array([[1, 1, 2, 2],
               [1, 1, 2, 2],
               [3, 3, 4, 4],
               [3, 3, 4, 4]])
        >>> A.reshape(2,2,4)
        array([[[1, 1, 2, 2],
                [1, 1, 2, 2]],
               [[3, 3, 4, 4],
                [3, 3, 4, 4]]])

    What I want is X:

        >>> X
        array([[[1, 1, 1, 1],
                [2, 2, 2, 2]],
               [[3, 3, 3, 3],
                [4, 4, 4, 4]]])

    so that I'm able to do something like:

        >>> X[X.sum(2) > 12] -= 1
        >>> X
        array([[[1, 1, 1, 1],
                [2, 2, 2, 2]],
               [[3, 3, 3, 3],
                [3, 3, 3, 3]]])

    Is this possible without a slow Python loop? Bonus: conversion back from X to A. Edit: How can I get X from A?
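    A reshape alone can't do it because the elements of each 2x2 tile are not contiguous in A; inserting a swapaxes between two reshapes makes them so. A sketch of that standard approach (and its inverse) for this 4x4 / 2x2-tile case:

        import numpy as np

        A = np.array([[1, 2], [3, 4]]).repeat(2, 0).repeat(2, 1)  # 4x4 image-like

        # (rows, tile_h, cols, tile_w) -> swap the middle axes so each tile is
        # contiguous -> flatten each 2x2 tile to length 4.
        X = A.reshape(2, 2, 2, 2).swapaxes(1, 2).reshape(2, 2, 4)

        # Back from X to A: reverse the same steps.
        A2 = X.reshape(2, 2, 2, 2).swapaxes(1, 2).reshape(4, 4)
        assert (A2 == A).all()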

    Read the article

  • what good orm api will work well with scala or erlang

    - by Emotu Balogun
    I'm considering taking up Scala programming, but I'm really concerned about what will become of my ORM-based applications. I currently use Hibernate as my ORM and I find it a really reliable tool. I'd like to know if there's any ORM tool as efficient but written in Scala, or whether Hibernate will work seamlessly with it. I don't want to have to start writing endless SQL queries again (like in the days of JDBC). I have the same question about Erlang: is there a good ORM out there for Erlang, and can I use Erlang with other DBMSs like Oracle and MySQL through an ORM?

    Read the article

  • Is there a way to shift the equation numbering one tab space from the right margin (shift towards le

    - by Murari
    I have been formatting my dissertation and one little problem is stumping me. I used the following code to typeset an equation:

        \begin{align}
        & R=\frac{P^2}{P+S'} \label{eqn:SCS}\\
        &\mbox{where} \quad \mbox{R} = \mbox{Watershed Runoff} \notag\\
        &\hspace{0.63in} \mbox{P} = \mbox{Rainfall} \notag\\
        &\hspace{0.63in} \mbox{S'} = \mbox{Storage in the watershed $=\frac{1000}{CN}-10$} \notag
        \end{align}

    My output requirement is such that: the equation should begin one tab space from the left margin; the equation number should end one tab space from the right margin. With the above code, I have the equation beginning at the right place, but not the numbering. Any help will be extremely appreciated. Thanks, MP

    Read the article

  • Image expire time

    - by Jens
    The Google Page Speed tool recommends that I set 'Expires' headers for images etc. But what is the most efficient way to set an Expires header for an image? I now redirect all image requests to an imagehandler.php using .htaccess:

        /* HTTP/1.1 404 Not Found, HTTP/1.1 400 Bad Request
           and content type detection stuff ... */
        header("Content-Type: " . $content_type);
        header("Cache-Control: public");
        header("Last-Modified: " . gmdate("D, d M Y H:i:s", filemtime($path)) . " GMT");
        header("Expires: " . date("r", time() + (60 * 60 * 24 * 30)));
        readfile($path);

    But of course this adds extra loading time for my images on first request, and I was wondering if there is a better solution for this.

    Read the article

  • Any tips of how to handle hierarchial trees in relational model?

    - by George
    Hello all. I have a tree structure that can be n levels deep, without restriction. That means that each node can have another n nodes. What is the best way to retrieve a tree like that without issuing thousands of queries to the database? I looked at a few other models, like the flat table model, the Preorder Tree Traversal algorithm, and so on. Do you guys have any tips or suggestions on how to implement an efficient tree model? My objective, in the end, is to have one or two queries that would spit out the whole tree for me. With enough processing I can display the tree in .NET, but that would be on the client machine, so not much of a big deal. Thanks for the attention.
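    With a plain adjacency list (a parent_id column), a single SELECT can return every node, and the client can stitch the tree together in one pass over the rows. A sketch of that assembly step in Python (the same idea ports directly to .NET), assuming rows of (id, parent_id, payload):

        def build_tree(rows):
            # Index every row by id, then attach each node to its parent.
            nodes = {rid: {'id': rid, 'payload': p, 'children': []}
                     for rid, parent, p in rows}
            roots = []
            for rid, parent, p in rows:
                if parent is None:
                    roots.append(nodes[rid])  # top-level node
                else:
                    nodes[parent]['children'].append(nodes[rid])
            return roots

        rows = [(1, None, 'root'), (2, 1, 'child'), (3, 1, 'child'), (4, 2, 'leaf')]
        print(build_tree(rows))  # one query's rows -> whole tree in memory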

    Read the article

  • 2D Engine scrolling on OpenGL via hardware?

    - by drudru
    Hi, I'm using OpenGL as the bottom end for a 2D tiling engine. When everything is 2D, it is simple to optimize certain issues. For example, scrolling: if I know a certain section of the screen needs to scroll off the bottom, then I can just blit over that portion. I'm even moving more than 1 pixel at a time. Without explicit hardware support (think old Nintendo hardware), this requires a lot of pixel writes. An on-chip bitblt would be the next best thing. Essentially, I'm looking at how I can optimize my GL calls to use VRAM texture renders as efficient hardware blits. Is it possible to have GL scroll the framebuffer, or should I just resign myself to double-buffering and re-rendering the entire scene for each frame? Thx

    Read the article
