Search Results

Search found 9017 results on 361 pages for 'efficient storage'.


  • Are MongoDB and CouchDB perfect substitutes?

    - by raoulsson
    I haven't got my hands dirty with either CouchDB or MongoDB yet, but I would like to do so soon... I have also read a bit about both systems, and it looks to me like they cover the same cases... or am I missing a key distinguishing feature? I would like to use a document-based store instead of a traditional RDBMS in my next project. I also need the datastore to
    - handle large binary objects (images and videos), and
    - automatically replicate itself to physically separate nodes,
    rendering an additional RDBMS superfluous. Are both equally well suited for these requirements? Thanks!


  • How can an SQL query return data from multiple tables

    - by Fluffeh
    I would like to know how to get data from multiple tables in my database, what methods exist for doing this, and what joins and unions are and how they differ from one another. When should I use each one compared to the others? I am planning to use this in my (for example, PHP) application, but I don't want to run multiple queries against the database, so what options do I have to get data from multiple tables in a single query? Note: I am writing this because I would like to be able to link to a well-written guide on the numerous questions that I constantly come across in the PHP queue, so I can link to it for further detail when I post an answer. The answers cover the following: Part 1 - Joins and Unions; Part 2 - Subqueries; Part 3 - Tricks and Efficient Code.
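
    For a quick feel of the difference, here is a minimal sketch using SQLite from Python with hypothetical users and orders tables: a JOIN widens rows by combining columns from related tables in one query, while a UNION stacks rows from queries that share a column shape.

        import sqlite3

        # Hypothetical schema, just to show the shape of a JOIN and a UNION.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
            INSERT INTO users  VALUES (1, 'alice'), (2, 'bob');
            INSERT INTO orders VALUES (10, 1, 9.99), (11, 1, 4.50), (12, 2, 20.00);
        """)

        # JOIN: one query, each row pairs an order with the name of the user who placed it.
        join_rows = conn.execute("""
            SELECT u.name, o.total
            FROM users u
            INNER JOIN orders o ON o.user_id = u.id
        """).fetchall()

        # UNION: one query, rows from two SELECTs with the same shape, duplicates removed.
        union_rows = conn.execute("""
            SELECT id FROM users
            UNION
            SELECT user_id FROM orders
        """).fetchall()

        print(join_rows)
        print(union_rows)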


  • Preloaded database on iPhone?

    - by Julian
    Hi, I have recently developed an app using Core Data as the storage DB. The app allowed the user to read and write to the DB. I am now developing a new app in which the user doesn't need to write anything to the DB; instead, the app just needs to read the data. The data has relationships etc., so I cannot just use a plist or something similar. My question is: should I use Core Data for such a requirement, and if so, how would I go about entering the data and then releasing the app? Would I have to code the data entry that populates the DB and then remove all this code (as I don't want the database to repopulate every time the user opens the app)? Is there a way to create a Core Data model using SQL commands as with SQLite, i.e. insert into... etc.? Any ideas/thoughts would be very helpful. Many thanks, Jules


  • efficiently compare two BitArrays of the same length

    - by BobTurbo
    How would I do this? I am trying to count the positions where both arrays have the value TRUE/1 at the same index. As you can see, my code has multiple BitArrays and loops through each one, comparing it against comparisonArray with an inner loop. It doesn't seem to be very efficient, and I need it to be.

        foreach (var bitArrayTuple in bitArrayList)
        {
            for (int i = 0; i < arrayLength; i++)
                if (bitArrayTuple.Item2[i] && comparisonArray[i])
                    bitArrayTuple.Item1++;
        }

    where Item1 is the count and Item2 is a BitArray.
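
    The usual trick for making this cheap is to AND the two bit sets and count the set bits of the result, rather than testing index by index (in .NET, BitArray.And plays the role of & below, though it mutates its receiver). A minimal sketch of the idea in Python:

        # Bit sets as plain integers: bit i set means "True at index i".
        a = 0b10110101
        b = 0b11010100

        common = a & b                  # positions where both arrays are True
        count = bin(common).count("1")  # population count of the intersection
        print(count)                    # -> 3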


  • Rails - Paperclip, getting width and height of image in model

    - by Corey Tenold
    Trying to get the width and height of the uploaded image while still in the model, on the initial save. Any way to do this? Here's the snippet of code I've been testing with from my model. Of course it fails on instance.photo_width.

        has_attached_file :photo,
          :styles => {
            :original => "634x471>",
            :thumb => Proc.new { |instance|
              ratio = instance.photo_width / instance.photo_height
              min_width  = 142
              min_height = 119
              if ratio > 1
                final_height = min_height
                final_width  = final_height * ratio
              else
                final_width  = min_width
                final_height = final_width * ratio
              end
              "#{final_width}x#{final_height}"
            }
          },
          :storage => :s3,
          :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
          :path => ":attachment/:id/:style.:extension",
          :bucket => 'foo_bucket'

    So I'm basically trying to get a custom thumbnail width and height based on the initial image dimensions. Any ideas?


  • Adding new "columns" to csv data file in Tcl

    - by George
    Hi All, I am dealing with "large" measurement data, approximately 30K key-value pairs. The measurement has a number of iterations. After each iteration, a data file (non-CSV) with 30K key-value pairs is created. I want to somehow create a CSV file of the form:

        Key1,value of iteration1,value of iteration2,...
        Key2,value of iteration1,value of iteration2,...
        Key3,value of iteration1,value of iteration2,...
        ...

    Now, I was wondering about an efficient way of adding each iteration's measurement data as a new column to the CSV file in Tcl. So far it seems that either way I will need to load the whole CSV file into some variable (array/list) and work on each element, adding the new measurement data. This seems somewhat inefficient. Is there another way, perhaps?
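
    The approach being asked about (accumulate every iteration in memory keyed by key, then write the CSV once at the end instead of rewriting it per iteration) looks the same in any language; here is a minimal sketch in Python rather than Tcl, with a hypothetical parse_iteration() and hypothetical file names:

        import csv
        from collections import defaultdict

        def parse_iteration(path):
            # Hypothetical reader for one non-CSV data file: yields (key, value) pairs.
            with open(path) as f:
                for line in f:
                    key, value = line.split()
                    yield key, value

        columns = defaultdict(list)      # key -> [value of iteration1, value of iteration2, ...]
        for path in ["iter1.dat", "iter2.dat", "iter3.dat"]:
            for key, value in parse_iteration(path):
                columns[key].append(value)

        # Write the whole table once, one row per key, one column per iteration.
        with open("results.csv", "w", newline="") as out:
            writer = csv.writer(out)
            for key, values in columns.items():
                writer.writerow([key] + values)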


  • Help with a query

    - by stackoverflowuser
    Hi, based on the following table:

        ID    Effort    Name
        -------------------------
        1     1         A
        2     1         A
        3     8         A
        4     10        B
        5     4         B
        6     1         B
        7     10        C
        8     3         C
        9     30        C

    I want to check whether the total effort against a name is less than 40; if so, add a row with effort = 40 - (total effort) for that name. The ID of the new row can be anything. If the total effort is greater than 40, then truncate the effort of one of the rows to bring the total down to 40. So after applying the logic above, the table will be:

        ID    Effort    Name
        -------------------------
        1     1         A
        2     1         A
        3     8         A
        10    30        A
        4     10        B
        5     4         B
        6     1         B
        11    25        B
        7     10        C
        8     3         C
        9     27        C

    I was thinking of opening a cursor, keeping a counter of the total effort, and based on the logic inserting existing and new rows into another temporary table. I am not sure this is an efficient way to deal with it. I would like to learn if there is a better way.
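
    A set-based alternative to the cursor, sketched here with SQLite driven from Python purely for illustration (T-SQL syntax differs slightly): one GROUP BY ... HAVING finds every name whose total is under 40 and inserts its filler row in a single statement. The over-40 case could be handled the same way, with a correlated UPDATE against whichever row is chosen to absorb the difference.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE effort (id INTEGER, effort INTEGER, name TEXT);
            INSERT INTO effort VALUES
                (1,1,'A'),(2,1,'A'),(3,8,'A'),
                (4,10,'B'),(5,4,'B'),(6,1,'B'),
                (7,10,'C'),(8,3,'C'),(9,30,'C');
        """)

        # One statement adds the filler row for every name whose total is under 40
        # (the question says the new row's ID can be anything, so NULL is used here).
        conn.execute("""
            INSERT INTO effort (id, effort, name)
            SELECT NULL, 40 - SUM(effort), name
            FROM effort
            GROUP BY name
            HAVING SUM(effort) < 40
        """)

        for row in conn.execute("SELECT id, effort, name FROM effort ORDER BY name, id"):
            print(row)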


  • Options for storing large text blobs in/with an SQL database?

    - by kdt
    Hi, I have some large volumes of text (log files) which may be very large (up to gigabytes). They are associated with entities which I'm storing in a database, and I'm trying to figure out whether I should store them within the SQL database, or in external files. It seems like in-database storage may be limited to 4GB for LONGTEXT fields in MySQL, and presumably other DBs have similar limits. Also, storing in the database presumably precludes any kind of seeking when viewing this data -- I'd have to load the full length of the data to render any part of it, right? So it seems like I'm leaning towards storing this data out-of-DB: are my misgivings about storing large blobs in the database valid, and if I'm going to store them out of the database then are there any frameworks/libraries to help with that? (I'm working in python but am interested in technologies in other languages too)
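
    A minimal sketch of the out-of-database option in Python, assuming a hypothetical logs table that stores only the file path and size for each entity; viewing a slice then uses seek() so a multi-gigabyte log never has to be loaded whole:

        import os
        import sqlite3

        db = sqlite3.connect("entities.db")
        db.execute("""CREATE TABLE IF NOT EXISTS logs
                      (entity_id INTEGER, path TEXT, size INTEGER)""")

        def attach_log(entity_id, path):
            # Record where the log lives and how big it is; the text itself stays on disk.
            db.execute("INSERT INTO logs VALUES (?, ?, ?)",
                       (entity_id, path, os.path.getsize(path)))
            db.commit()

        def read_slice(entity_id, offset, length):
            # Look up the path, then read only the requested byte range.
            path, = db.execute("SELECT path FROM logs WHERE entity_id = ?",
                               (entity_id,)).fetchone()
            with open(path, "rb") as f:
                f.seek(offset)
                return f.read(length)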


  • How do I know if my PHP application is using too much memory?

    - by John
    I'm working on a PHP web application that lets users network with each other, book events, message, etc. I launched it a few months ago and at the moment there are only about 100 users. I set up the application on a VPS with Ubuntu 9.10, Apache 2, MySQL 5 and PHP 5. I had 360 MB of RAM, but upgraded to 720 MB a few minutes ago. Lately, my web application has been experiencing outages due to excessive memory usage. From what I can tell in the error logs, it seems the server automatically kills Apache processes that consume too much memory. As a result, I upgraded memory from 360 MB to 720 MB as a stop-gap measure. So my question is, how do I go about resolving these outage issues? How do I know if my website's need for more memory is due to poor code or if it's part of the website's natural growth? What's the most efficient way to determine which PHP scripts consume the most memory?


  • Hudson: where to download a file, and how to stop specific running builds?

    - by Kim Jong Woo
    I have a file that is generated on the Hudson server at /var/lib/hudson/jobs/jobtitle/1/out.txt. I need to fetch this file, but doing a GET request for http://myhudson:8090/job/jobtitle/1/out.txt doesn't actually locate it. Basically, I have another box that will grab this file from the Hudson server and make out.txt available for download. Another challenge is the build-number directories. How would I be able to use the Hudson API to stop or delete specific running builds? I am forced to iterate through all build numbers and send a STOP or DELETE API call from PHP, using wget to make the REST call, which is not very efficient.

        for ($i = 0; $i < 3000; $i++) {
            exec('wget -O /dev/null "http://myhudson:8090/job/jobtitle/' . $i . '/stop"');
        }


  • Best Design Pattern to Implement while Mapping Actions in MVC

    - by FidEliO
    What are the best practices for writing the following case? We have a controller which, based on the path the user takes, takes different actions. For example: if the user chooses the path /path1/hello, it will say hello. If a user chooses /path1/bye?name="Philipp", it will invoke sayGoodBye(), and so on. I have written a switch statement inside the controller, which is simple but IMO not efficient. What is the best way to implement this, considering that the paths are generally Strings?

        private void takeAction() {
            switch (path[1]) {
                case "hello":
                    //sayHello();
                    break;
                case "bye":
                    //sayBye();
                    break;
                case "case3":
                    //Blah();
                    break;
                ...
            }
        }
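
    One common alternative to a growing switch is a dispatch table that maps the path segment to a handler (essentially the Command pattern); the same idea ports directly to a Java Map<String, Runnable> or similar. A minimal sketch in Python with hypothetical handler names:

        def say_hello(params):
            return "hello"

        def say_goodbye(params):
            return "bye, " + params.get("name", "")

        ACTIONS = {
            "hello": say_hello,
            "bye":   say_goodbye,
        }

        def take_action(segment, params):
            handler = ACTIONS.get(segment)
            if handler is None:
                return "404"            # unknown path segment
            return handler(params)

        print(take_action("bye", {"name": "Philipp"}))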


  • 2D Engine scrolling on OpenGL via hardware?

    - by drudru
    Hi, I'm using OpenGL as the bottom end for a 2D tiling engine. When everything is 2D, it is simple to optimize certain issues. For example, scrolling: if I know a certain section of the screen needs to scroll off the bottom, then I can just blit over that portion. I'm even moving more than 1 pixel at a time. Without explicit hardware support (think old Nintendo hardware), this requires a lot of pixel writes. An on-chip bitblt would be the next best thing. Essentially, I'm looking at how I can optimize my GL calls to use VRAM texture renders as efficient hardware blits. Is it possible to have GL scroll the framebuffer, or should I just resign myself to double-buffering and re-rendering the entire scene for each frame? Thanks


  • JS Framework that doesn't use CSS selectors?

    - by RoToRa
    A thing that I noticed about most JavaScript frameworks is that the most common way to find/access DOM elements is to use CSS selectors. However, this usually requires the framework to include a CSS selector parser, because they need to support selectors that the browser natively doesn't, foremost the framework's own proprietary extensions. I would think that these parsers are large and slow. Wouldn't it be more efficient to have something that doesn't require a parser, such as chained method calls? Something like:

        id("example").children().class("test").hasAttribute("href")

    instead of

        $("#example > .test[href]")

    Are there any frameworks around that do something like this? And how do they compare with jQuery and friends in regard to performance and size? EDIT: You can consider this a theoretical discussion topic. I don't plan to use anything other than jQuery in any practical projects in the near future. I was just wondering why there aren't any other, possibly better approaches.


  • GQL Query with __key__ in List of KEYs

    - by bossylobster
    In the GQL reference [1], it is encouraged to use the IN keyword with a list of values and to construct a Key by hand. The GQL query

        SELECT * FROM MyModel WHERE __key__ = KEY('MyModel', 'my_model_key')

    will succeed. However, using the form you would expect to work:

        SELECT * FROM MyModel WHERE __key__ IN (KEY('MyModel', 'my_model_key1'), KEY('MyModel', 'my_model_key2'))

    the Datastore Viewer complains "Invalid GQL query string." What is the correct way to format such a query? [1] http://code.google.com/appengine/docs/python/datastore/gqlreference.html PS: I know there are more efficient ways to do this in Python (without constructing a GQL query) and using the remote_api, but each call to the remote_api counts against quota. In an environment where quota is not (necessarily) free, quick and dirty queries are very helpful.
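
    For reference, the equivalent lookups from Python code (as opposed to the Datastore Viewer), sketched with the old google.appengine.ext.db API that the question is using; the model and key names are the hypothetical ones from the question:

        from google.appengine.ext import db

        keys = [db.Key.from_path('MyModel', 'my_model_key1'),
                db.Key.from_path('MyModel', 'my_model_key2')]

        # Bind the key list as a GQL parameter instead of spelling KEY(...) out in the string.
        q = db.GqlQuery("SELECT * FROM MyModel WHERE __key__ IN :1", keys)
        results = q.fetch(len(keys))

        # Or skip GQL entirely: a batch get by key is a single datastore call.
        results = db.get(keys)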


  • Sorting 2 arrays that have been added together

    - by tyler
    In my app, users can create galleries that their work may or may not be in. Users have and belong to many galleries, and each gallery has a 'creator' designated by the gallery's user_id field. So to get the 5 latest galleries a user is in, I can do something like:

        included_in = @user.galleries.order('created_at DESC').uniq.first(5)
        # SELECT DISTINCT "galleries".* FROM "galleries"
        #   INNER JOIN "galleries_users" ON "galleries"."id" = "galleries_users"."gallery_id"
        #   WHERE "galleries_users"."user_id" = 10 ORDER BY created_at DESC LIMIT 5

    and to get the 5 latest galleries they've created, I can do:

        created = Gallery.where(user_id: id).order('created_at DESC').uniq.first(5)
        # SELECT DISTINCT "galleries".* FROM "galleries"
        #   WHERE "galleries"."user_id" = 10 ORDER BY created_at DESC LIMIT 5

    I want to display these two together, so that it's the 5 latest galleries that they've created OR that they're in. Something like the equivalent of:

        (included_in + created).order('created_at DESC').uniq.first(5)

    Does anyone know how to construct an efficient query or post-query loop that does this?


  • How to scale thumbnail to fit depending on tall or wide photo - Attachment_fu

    - by adamwstl
    I'm using attachment_fu and RMagick to create thumbnails after upload. The thumbnail is a fixed 135 x 135 px, and I'm currently forcing the width to 135px on all photos. The problem is that if it's a wide, flat photo it has to stretch the height awkwardly. Current attachment_fu setup:

        class PhotoImage < Image
          belongs_to :photo
          has_attachment :content_type => :image,
                         :size => 0..5.megabytes,
                         :storage => :s3,
                         :resize_to => '650x>',
                         :thumbnails => { :thumbnail => '135x>' } #:geometry => 'x50'
          validates_as_attachment
        end

    How can I scale the thumbnail so it fits within 135 x 135 depending on whether the photo is tall or wide? Thanks
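
    Independent of attachment_fu, the arithmetic being aimed at is an aspect-preserving fit into the 135 x 135 box; a minimal sketch of it in Python (function name hypothetical):

        def fit_within(width, height, box_w=135, box_h=135):
            # Scale by the more constraining side so the image fits inside the box
            # without distortion; the other dimension shrinks proportionally.
            scale = min(box_w / float(width), box_h / float(height))
            return max(1, int(round(width * scale))), max(1, int(round(height * scale)))

        print(fit_within(1600, 900))   # wide photo -> (135, 76)
        print(fit_within(900, 1600))   # tall photo -> (76, 135)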


  • Any tips on how to handle hierarchical trees in a relational model?

    - by George
    Hello all. I have a tree structure that can be n levels deep, without restriction. That means that each node can have another n nodes. What is the best way to retrieve a tree like that without issuing thousands of queries to the database? I looked at a few other models, like the flat table model, the Preorder Tree Traversal algorithm, and so on. Do you guys have any tips or suggestions on how to implement an efficient tree model? My objective, in the end, is to have one or two queries that would spit out the whole tree for me. With enough processing I can display the tree in .NET, but that would be on the client machine, so not much of a big deal. Thanks for the attention.
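
    One common pattern with a plain adjacency list (each node storing its parent's id) is to fetch every row in a single query and assemble the tree in memory; a minimal sketch in Python with a hypothetical nodes(id, parent_id, name) table:

        import sqlite3
        from collections import defaultdict

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE nodes (id INTEGER, parent_id INTEGER, name TEXT);
            INSERT INTO nodes VALUES (1, NULL, 'root'), (2, 1, 'child a'),
                                     (3, 1, 'child b'), (4, 2, 'grandchild');
        """)

        # One query fetches the whole tree; everything after this is in-memory work.
        children = defaultdict(list)             # parent_id -> [(id, name), ...]
        for node_id, parent_id, name in conn.execute("SELECT id, parent_id, name FROM nodes"):
            children[parent_id].append((node_id, name))

        def build(parent_id=None):
            # Recursively nest dicts using the in-memory index; no further queries.
            return [{"id": nid, "name": name, "children": build(nid)}
                    for nid, name in children[parent_id]]

        tree = build()
        print(tree)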


  • How to solve recurrence relations in Mathematica efficiently?

    - by Qiang Li
    I have a recurrence to solve:

        f[m, n] = Sum[f[m - 1, n - 1 - i] + f[m - 3, n - 5 - i], {i, 2, n - 2*m + 2}]
                  + f[m - 1, n - 3] + f[m - 3, n - 7]
        f[0, n] = 1,  f[1, n] = n

    However, the following Mathematica code is very inefficient:

        f[m_, n_] := Module[{},
          If[m < 0, Return[0];];
          If[m == 0, Return[1];];
          If[m == 1, Return[n];];
          Return[Sum[f[m - 1, n - 1 - i] + f[m - 3, n - 5 - i], {i, 2, n - 2*m + 2}]
                 + f[m - 1, n - 3] + f[m - 3, n - 7]];]

    It takes unbearably long to compute f[40, 20]. Could anyone please suggest an efficient way of doing this? Many thanks!
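
    The cost comes from re-evaluating the same overlapping subproblems exponentially many times; memoizing the recurrence removes that. In Mathematica the usual idiom is f[m_, n_] := f[m, n] = ... so each value is stored after its first evaluation; here is the same recurrence with memoization, sketched in Python:

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def f(m, n):
            # Base cases as given in the question; negative m contributes nothing.
            if m < 0:
                return 0
            if m == 0:
                return 1
            if m == 1:
                return n
            total = sum(f(m - 1, n - 1 - i) + f(m - 3, n - 5 - i)
                        for i in range(2, n - 2 * m + 3))   # i = 2 .. n - 2m + 2 inclusive
            return total + f(m - 1, n - 3) + f(m - 3, n - 7)

        print(f(40, 20))   # returns quickly: each (m, n) pair is computed only once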


  • How to very efficiently assign a lat/long to a city boundary described by a shape?

    - by watcherFR
    I have a huge shapefile of 36,000 non-overlapping polygons (city boundaries). I want to easily determine the polygon into which a given lat/long falls. What would be the best way, given that it must be extremely computationally efficient? I was thinking of creating a lookup table (tilex, tiley, polygon_id) where tilex and tiley are tile identifiers at zoom level 21 or 22. Yes, the loss of precision from using tile numbers and a planar projection is acceptable in my application. I would rather not use Postgres's GIS extension and am fine with a program that runs for 2 days to generate all the INSERT statements.
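
    The tile indices themselves follow the standard Web Mercator ("slippy map") tiling math; a minimal sketch in Python of turning a lat/long into (tilex, tiley) at zoom 21 and probing a precomputed lookup, with a dict standing in for the SQL table:

        import math

        def tile_for(lat, lon, zoom=21):
            # Standard slippy-map / Web Mercator tile formula.
            n = 2 ** zoom
            x = int((lon + 180.0) / 360.0 * n)
            lat_rad = math.radians(lat)
            y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
            return x, y

        # Hypothetical precomputed table, filled offline: (tilex, tiley) -> polygon_id.
        lookup = {}

        polygon_id = lookup.get(tile_for(48.8566, 2.3522))   # None if the tile isn't mapped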


  • trouble setting up TreeViews in pygtk

    - by Chris H
    I've got some code in a class that extends gtk.TreeView, and this is the init method. I want to create a tree view that has 3 columns: a toggle button, a label, and a drop-down box that the user can type stuff into. The code below works, except that the toggle button doesn't react to mouse clicks, and the label and the ComboEntry aren't drawn (so I guess you can say it doesn't work). I can add rows just fine, however.

        # make storage: enable/disable, label, user entry
        self.tv_store = gtk.TreeStore(gtk.ToggleButton, str, gtk.ComboBoxEntry)
        # make widget
        gtk.TreeView.__init__(self, self.tv_store)
        # make renderers
        self.buttonRenderer = gtk.CellRendererToggle()
        self.labelRenderer = gtk.CellRendererText()
        self.entryRenderer = gtk.CellRendererCombo()
        # make columns
        self.columnButton = gtk.TreeViewColumn('Enabled')
        self.columnButton.pack_start(self.buttonRenderer, False)
        self.columnLabel = gtk.TreeViewColumn('Label')
        self.columnLabel.pack_start(self.labelRenderer, False)
        self.columnEntry = gtk.TreeViewColumn('Data')
        self.columnEntry.pack_start(self.entryRenderer, True)
        self.append_column(self.columnButton)
        self.append_column(self.columnLabel)
        self.append_column(self.columnEntry)

        self.tmpButton = gtk.ToggleButton('example')
        self.tmpCombo = gtk.ComboBoxEntry(None)
        self.tv_store.insert(None, 0, [self.tmpButton, 'example label', self.tmpCombo])

    Thanks.
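
    For reference, the conventional pygtk pattern keeps plain values in the model and lets cell renderers draw and edit them, binding each column's renderer attributes to model columns; a minimal, self-contained sketch (column layout and signal wiring chosen here for illustration):

        import gtk

        # The model holds plain data, not widgets: enabled flag, label text, entry text.
        store = gtk.ListStore(bool, str, str)
        store.append([True, 'example label', 'example data'])

        view = gtk.TreeView(store)

        # Toggle column: the renderer's 'active' property is bound to model column 0.
        toggle = gtk.CellRendererToggle()
        toggle.connect('toggled', lambda cell, path:
                       store.set_value(store.get_iter(path), 0, not store[path][0]))
        view.append_column(gtk.TreeViewColumn('Enabled', toggle, active=0))

        # Label column: plain text bound to model column 1.
        view.append_column(gtk.TreeViewColumn('Label', gtk.CellRendererText(), text=1))

        # Editable combo column: its text is bound to model column 2, choices in a tiny ListStore.
        choices = gtk.ListStore(str)
        for c in ('one', 'two', 'three'):
            choices.append([c])
        combo = gtk.CellRendererCombo()
        combo.set_property('model', choices)
        combo.set_property('text-column', 0)
        combo.set_property('editable', True)
        combo.connect('edited', lambda cell, path, text:
                      store.set_value(store.get_iter(path), 2, text))
        view.append_column(gtk.TreeViewColumn('Data', combo, text=2))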


  • Collection type generated by for with yield

    - by Jesper
    When I evaluate a for in Scala, I get an immutable IndexedSeq (a collection with array-like performance characteristics, such as efficient random access):

        scala> val s = for (i <- 0 to 9) yield math.random + i
        s: scala.collection.immutable.IndexedSeq[Double] = Vector(0.6127056766832756, 1.7137598183155291, ...

    Does a for with a yield always return an IndexedSeq, or can it also return some other type of collection class (a LinearSeq, for example)? If it can also return something else, then what determines the return type, and how can I influence it? I'm using Scala 2.8.0.RC3.


  • Image expire time

    - by Jens
    The Google Page Speed tool recommends that I set 'Expires' headers for images etc. But what is the most efficient way to set an Expires header for an image? I now redirect all image requests to an imagehandler.php using .htaccess:

        /* HTTP/1.1 404 Not Found, HTTP/1.1 400 Bad Request and content type detection stuff ... */
        header( "Content-Type: " . $content_type );
        header( "Cache-Control: public" );
        header( "Last-Modified: " . gmdate("D, d M Y H:i:s", filemtime($path)) . " GMT");
        header( "Expires: " . date("r", time() + (60*60*24*30)));
        readfile( $path );

    But of course this adds extra loading time for my images on the first request, and I was wondering if there is a better solution for this.


  • How to check if the sum of some records equals the difference between two other records in t-sql?

    - by Dan Appleyard
    I have a view that contains bank account activity:

        ACCOUNT   BALANCE_ROW   AMOUNT    SORT_ORDER
        111       1             0.00      1
        111       0             10.00     2
        111       0             -2.50     3
        111       1             7.50      4
        222       1             100.00    5
        222       0             25.00     6
        222       1             125.00    7

    ACCOUNT = account number; BALANCE_ROW = 1 for the starting or ending balance row, otherwise 0; AMOUNT = the amount; SORT_ORDER = a simple ordering that returns the records as start balance, activity, end balance. I need to figure out a way to check whether the sum of the non-balance rows equals the difference between the ending balance and the starting balance. The result for each account (1 for yes, 0 for no) would simply be added to the resulting result set. Example: account 111 had a starting balance of 0.00. There were two account activity records, of 10.00 and -2.50. That resulted in the ending balance of 7.50. I've been playing around with temp tables, but I was not sure if there is a more efficient way of accomplishing this. Thanks for any input you may have!
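
    A set-based way to get the per-account 1/0 flag without temp tables, sketched with SQLite driven from Python purely for illustration (T-SQL syntax differs slightly) and assuming exactly one starting and one ending balance row per account: sign the starting balance and the activity positive, the ending balance negative, and check that each account sums to zero.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE activity (account INT, balance_row INT, amount REAL, sort_order INT);
            INSERT INTO activity VALUES
                (111,1,0.00,1),(111,0,10.00,2),(111,0,-2.50,3),(111,1,7.50,4),
                (222,1,100.00,5),(222,0,25.00,6),(222,1,125.00,7);
        """)

        query = """
        SELECT a.account,
               CASE WHEN ABS(SUM(CASE WHEN a.balance_row = 0 THEN a.amount
                                      WHEN a.sort_order = b.first_order THEN a.amount
                                      ELSE -a.amount END)) < 0.005
                    THEN 1 ELSE 0 END AS balances
        FROM activity a
        JOIN (SELECT account, MIN(sort_order) AS first_order
              FROM activity WHERE balance_row = 1
              GROUP BY account) b ON b.account = a.account
        GROUP BY a.account
        """
        print(conn.execute(query).fetchall())   # [(111, 1), (222, 1)]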


  • MySQL search design

    - by neil
    I'm designing a MySQL database, and I'd like some input on an efficient way to store blog/article data for searching. Right now, I've made a separate column that stores the content to be searched: no duplicate words, no words shorter than four letters, and no words that are too common. So, essentially, it's a list of keywords from the original article. Also searched would be a list of tags and the title field. I'm not quite sure how MySQL indexes fulltext columns, so would storing the data like that be ineffective, or redundant somehow? A lot of the articles are on the same topic, so would the relevance score be hurt by so many of the rows having similar keywords? Also, for this project, solutions like Sphinx, Lucene or Google Custom Search can't be used -- only PHP & MySQL. Thanks!


  • Overhead of serving pages - JSPs vs. PHP vs. ASPXs vs. C

    - by John Shedletsky
    I am interested in writing my own internet ad server. I want to serve billions of impressions with as little hardware as possible. Which server-side technologies are best suited for this task? I am asking about the relative overhead of serving my ad pages as either pages rendered by PHP, or Java, or .NET, or coding HTTP responses directly in C and writing some multi-socket I/O monster to serve requests (I assume this one wins, but if my assumption is wrong, that would actually be most interesting). Obviously all the most efficient optimizations are done at the algorithm level, but I figure there have got to be some speed differences at the end of the day that make one method of serving ads better than another. How much overhead does something like Apache or IIS introduce? There's got to be a ton of extra junk in there I don't need. At some point I guess this is more a question of which platform/language combo is best suited, so please excuse the awkwardly posed question; hopefully you understand what I am trying to get at.

