Search Results

Search found 3758 results on 151 pages for 'efficient'.


  • Updating a Minimum spanning tree when a new edge is inserted

    - by Lynette
    Hello, I've been presented with the following problem at university: Let G = (V, E) be an (undirected) graph with costs c_e ≥ 0 on the edges e ∈ E. Assume you are given a minimum-cost spanning tree T in G. Now assume that a new edge is added to G, connecting two nodes v, w ∈ V with cost c. a) Give an efficient algorithm to test whether T remains the minimum-cost spanning tree with the new edge added to G (but not to the tree T). Make your algorithm run in time O(|E|). Can you do it in O(|V|) time? Please note any assumptions you make about what data structure is used to represent the tree T and the graph G. b) Suppose T is no longer the minimum-cost spanning tree. Give a linear-time algorithm (time O(|E|)) to update the tree T to the new minimum-cost spanning tree. This is the solution I found: let e1 = (a, b) be the new edge added; find the path from a to b in T (BFS); if e1 is the most expensive edge in the resulting cycle, then T remains the MST, otherwise it does not. It seems to work, but I can easily make this run in O(|V|) time, while the problem asks for O(|E|). Am I missing something? By the way, we are authorized to ask for help from anyone, so I'm not cheating :D Thanks in advance
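    A rough Python sketch of the check described above (an illustration under stated assumptions, not the official solution): the tree T is assumed to be stored as an adjacency list mapping each node to its (neighbour, edge cost) pairs, so the BFS walks at most |V| - 1 tree edges.
        from collections import deque

        def mst_still_optimal(tree_adj, a, b, new_cost):
            # tree_adj: adjacency list of the MST T: node -> list of (neighbour, cost).
            # Returns True if T stays a minimum spanning tree after adding edge (a, b)
            # with cost new_cost (the new edge is not inserted into T).
            parent = {a: None}                  # node -> (previous node, cost of tree edge used)
            queue = deque([a])
            while queue:                        # BFS over the tree from a towards b
                u = queue.popleft()
                if u == b:
                    break
                for v, cost in tree_adj[u]:
                    if v not in parent:
                        parent[v] = (u, cost)
                        queue.append(v)
            heaviest = 0                        # heaviest tree edge on the unique a-b path
            node = b
            while parent[node] is not None:
                node, cost = parent[node]
                heaviest = max(heaviest, cost)
            # Adding (a, b) closes exactly one cycle; T stays optimal iff the new edge
            # is at least as expensive as the heaviest tree edge on that path.
            return new_cost >= heaviest
    Stored this way the tree has only |V| - 1 edges, so the whole check is O(|V|), matching the observation in the question.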

    Read the article

  • C# - parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "Unformatted Sequential" format as described here (about half-way down the page in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm As you can see from the URL, the file is organized into "chunks" of 130 bytes or less and includes 2 length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So, I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That'll be the next exercise. My first thoughts are to slurp up the entire file into a byte array using File.ReadAllBytes. Then, just iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need. As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files could be fairly large (say 30MB), though most will be much smaller...
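    Language aside, the framing logic described above can be sketched as follows (Python used purely for illustration; it assumes, per the description, one length byte before and one after each chunk of at most 130 bytes, with no continuation flags, so the real layout should be checked against the linked compiler documentation):
        def strip_record_markers(raw):
            # raw: the whole file as bytes. Collects payload bytes, skipping the
            # leading and trailing length byte around each chunk (an assumption
            # based on the question's description of the format).
            payload = bytearray()
            i = 0
            while i < len(raw):
                length = raw[i]                  # leading length byte
                payload.extend(raw[i + 1:i + 1 + length])
                i += 1 + length + 1              # skip payload plus trailing length byte
            return bytes(payload)

        with open("data.bin", "rb") as f:        # hypothetical file name
            data = strip_record_markers(f.read())
    The same loop translates directly to C# over the byte[] returned by File.ReadAllBytes.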

    Read the article

  • Implementing parts of rfc4226 (HOTP) in mysql

    - by Moose Morals
    Like the title says, I'm trying to implement the programmatic parts of RFC4226 "HOTP: An HMAC-Based One-Time Password Algorithm" in SQL. I think I've got a version that works (in that for a small test sample, it produces the same result as the Java version in the code), but it contains a nested pair of hex(unhex()) calls, which I feel can be done better. I am constrained by a) needing to do this algorithm, and b) needing to do it in MySQL; otherwise I'm happy to look at other ways of doing this. What I've got so far:
        -- From the inside out...
        -- Concatenate the user's secret and the number of times it has been used
        -- find the SHA1 hash of that string
        -- Turn a 40 byte hex encoding into a 20 byte binary string
        -- keep the first 4 bytes
        -- turn those back into a hex representation
        -- convert that into an integer
        -- Throw away the most-significant bit (solves signed/unsigned problems)
        -- Truncate to 6 digits
        -- store into otp
        -- from the otpsecrets table
        select (conv(hex(substr(unhex(sha1(concat(secret, uses))), 1, 4)), 16, 10)
                & 0x7fffffff) % 1000000
        into otp
        from otpsecrets;
    Is there a better (more efficient) way of doing this?
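    For reference, the pipeline those comments describe, rendered in Python (note that this hashes the plain concatenation with SHA-1 and keeps the first 4 bytes, mirroring the query above; full RFC 4226 HOTP uses HMAC-SHA-1 and a dynamic offset):
        import hashlib

        def otp_from_secret(secret, uses):
            # sha1(secret || uses), first 4 bytes, clear the sign bit, keep 6 digits.
            digest = hashlib.sha1((secret + str(uses)).encode()).digest()
            value = int.from_bytes(digest[:4], "big") & 0x7FFFFFFF
            return value % 1000000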

    Read the article

  • searching XML documents using php

    - by dbomb101
    I am trying to make a search function using a combination of DOM, PHP and XML. I got something up and running, but the problem is that my search function will only accept exact terms; on top of this, I am wondering whether the method I picked is the most efficient.
        $searchTerm = "Lupe";
        $doc = new DOMDocument();
        foreach (file('musicInformation.xml') as $node) {
            $xmlString .= trim($node);
        }
        $doc->loadXML($xmlString);
        $records = $doc->documentElement->childNodes;
        $records = $doc->getElementsByTagName("musicdetails");
        foreach( $records as $record ) {
            $artistnames = $record->getElementsByTagName("artistname");
            $artistname = $artistnames->item(0)->nodeValue;
            $recordnames = $record->getElementsByTagName("recordname");
            $recordname = $recordnames->item(0)->nodeValue;
            $recordtypes = $record->getElementsByTagName("recrodtype");
            $recordtype = $recordtypes->item(0)->nodeValue;
            $formats = $record->getElementsByTagName("format");
            $format = $formats->item(0)->nodeValue;
            $prices = $record->getElementsByTagName("price");
            $price = $prices->item(0)->nodeValue;
            if ($searchTerm == $artistname || $searchTerm == $recordname || $searchTerm == $recordtype || $searchTerm == $format || $searchTerm == $price) {
                echo "$artistname - $recordname - $recordtype - $format - $price\n";
            }
        }
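    The exact-match limitation comes from the == comparisons; the usual fix is a case-insensitive substring test over each field. A sketch of that idea (Python for brevity; the field names mirror the XML above, and the same change applies to the PHP if statement, e.g. via stripos):
        import xml.etree.ElementTree as ET

        search_term = "lupe"
        fields = ("artistname", "recordname", "recrodtype", "format", "price")

        for record in ET.parse("musicInformation.xml").getroot().iter("musicdetails"):
            values = [record.findtext(field, default="") for field in fields]
            # Match if the term appears anywhere in any field, ignoring case.
            if any(search_term.lower() in value.lower() for value in values):
                print(" - ".join(values))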

    Read the article

  • Uncompress OpenOffice files for better storage in version control

    - by Craig McQueen
    I've heard discussion about how OpenOffice (ODF) files are compressed zip files of XML and other data. So making a tiny change to the file can potentially totally change the data, so delta compression doesn't work well in version control systems. I've done basic testing on an OpenOffice file, unzipping it and then rezipping it with zero compression. I used the Linux zip utility for my testing. OpenOffice will still happily open it. So I'm wondering if it's worth developing a small utility to run on ODF files each time just before I commit to version control. Any thoughts on this idea? Possible better alternatives? Secondly, what would be a good and robust way to implement this little utility? Bash shell that calls zip (probably Linux only)? Python? Any gotchas you can think of? Obviously I don't want to accidentally mangle a file, and there are several ways that could happen. Possible gotchas I can think of: Insufficient disk space Some other permissions issue that prevents writing the file or temporary files ODF document is encrypted (probably should just leave these alone; the encryption probably also causes large file changes and thus prevents efficient delta compression)
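    A minimal sketch of the repack step in Python, assuming the standard zipfile module is acceptable: every entry is rewritten with ZIP_STORED (no compression) into a temporary file, which only replaces the original once the rewrite succeeds, so a failure part-way through cannot mangle the document.
        import os
        import tempfile
        import zipfile

        def repack_uncompressed(path):
            # Rewrite an ODF (zip) file with all entries stored uncompressed.
            fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
            os.close(fd)
            try:
                with zipfile.ZipFile(path) as src, \
                     zipfile.ZipFile(tmp_path, "w", zipfile.ZIP_STORED) as dst:
                    for name in src.namelist():            # order preserved, so the
                        dst.writestr(name, src.read(name)) # 'mimetype' entry stays first
                os.replace(tmp_path, path)                 # swap in only after success
            except Exception:
                os.remove(tmp_path)
                raise
    As the question suggests, encrypted documents are probably best skipped; they can usually be recognised by the encryption entries in META-INF/manifest.xml.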

    Read the article

  • Good Learning Method for Objective-C?

    - by Josh Kahane
    Hi, I know this must be asked a million times and can't be easy to answer as there is no definitive method, but any help would be appreciated, thanks. I have been playing around with all sorts of things in Xcode and with Objective-C, but I can't seem to find an efficient way of learning. I have bought the book 'Programming in Objective-C 2.0' and it's great, but it seems to just lay down the basics. I want to learn in the 2D game development direction, then of course 3D after nailing that, if that's the right thing to do. I am 17, currently in year 13, the last year of school/A Levels, and am almost definitely taking a gap year. Any good, well-known, reputable courses online or offline (real world)? This is my first programming language, and I am absolutely serious about learning this. One last question: when learning things online, I have in the past started building a feature and learning a certain aspect of programming, only to find out after adding more that it slows down the app or is too inefficient. Is the key to use a certain method in a certain situation (there being so many ways to do the same thing), or to use any of those methods and refine it in your app to make it run smoothly? Sorry, it's hard for me to know with the little experience I have so far. Sorry for rambling on! I would appreciate any help, thank you!

    Read the article

  • MS Access: Why is ADODB.Recordset.BatchUpdate so much slower than Application.ImportXML?

    - by apenwarr
    I'm trying to run the code below to insert a whole lot of records (from a file with a weird file format) into my Access 2003 database from VBA. After many, many experiments, this code is the fastest I've been able to come up with: it does 10000 records in about 15 seconds on my machine. At least 14.5 of those seconds (i.e. almost all the time) is in the single call to UpdateBatch. I've read elsewhere that the JET engine doesn't support UpdateBatch. So maybe there's a better way to do it. Now, I would just think the JET engine is plain slow, but that can't be it. After generating the 'testy' table with the code below, I right-clicked it, picked Export, and saved it as XML. Then I right-clicked, picked Import, and reloaded the XML. Total time to import the XML file? Less than one second, i.e. at least 15x faster. Surely there's an efficient way to insert data into Access that doesn't require writing a temp file?
        Sub TestBatchUpdate()
            CurrentDb.Execute "create table testy (x int, y int)"
            Dim rs As New ADODB.Recordset
            rs.CursorLocation = adUseServer
            rs.Open "testy", CurrentProject.AccessConnection, _
                adOpenStatic, adLockBatchOptimistic, adCmdTableDirect
            Dim n, v
            n = Array(0, 1)
            v = Array(50, 55)
            Debug.Print "starting loop", Time
            For i = 1 To 10000
                rs.AddNew n, v
            Next i
            Debug.Print "done loop", Time
            rs.UpdateBatch
            Debug.Print "done update", Time
            CurrentDb.Execute "drop table testy"
        End Sub
    I would be willing to resort to C/C++ if there's some API that would let me do fast inserts that way. But I can't seem to find it. It can't be that Application.ImportXML is using undocumented APIs, can it?

    Read the article

  • Atomic Instructions and Variable Update visibility

    - by dsimcha
    On most common platforms (the most important being x86; I understand that some platforms have extremely difficult memory models that provide almost no guarantees useful for multithreading, but I don't care about rare counter-examples), is the following code safe?
        Thread 1:
            someVariable = doStuff();
            atomicSet(stuffDoneFlag, 1);
        Thread 2:
            while(!atomicRead(stuffDoneFlag)) {}  // Wait for stuffDoneFlag to be set.
            doMoreStuff(someVariable);
    Assuming standard, reasonable implementations of atomic ops: Is Thread 1's assignment to someVariable guaranteed to complete before atomicSet() is called? Is Thread 2 guaranteed to see the assignment to someVariable before calling doMoreStuff(), provided it reads stuffDoneFlag atomically? Edits: The implementation of atomic ops I'm using contains the x86 LOCK instruction in each operation, if that helps. Assume stuffDoneFlag is properly cleared somehow. How isn't important. This is a very simplified example. I created it this way so that you wouldn't have to understand the whole context of the problem to answer it. I know it's not efficient.
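    Not an answer to the x86-ordering question itself, but for comparison, a Python analogue of the same flag pattern in which the ordering guarantees come from the synchronization primitive (threading.Event) rather than from raw atomic operations:
        import threading

        stuff_done = threading.Event()   # plays the role of stuffDoneFlag
        some_variable = None

        def do_stuff():
            return 42                    # stand-in for the real work

        def thread1():
            global some_variable
            some_variable = do_stuff()
            stuff_done.set()             # "release": publishes the write above

        def thread2():
            stuff_done.wait()            # "acquire": blocks until the flag is set
            print("saw", some_variable)  # sees the assignment made before set()

        t1 = threading.Thread(target=thread1)
        t2 = threading.Thread(target=thread2)
        t2.start(); t1.start()
        t1.join(); t2.join()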

    Read the article

  • My First F# program

    - by sudaly
    Hi, I just finished writing my first F# program. Functionality-wise the code works the way I wanted, but I am not sure if the code is efficient. I would much appreciate it if someone could review the code for me and point out the areas where it can be improved. Thanks, Sudaly
        open System
        open System.IO
        open System.IO.Pipes
        open System.Text
        open System.Collections.Generic
        open System.Runtime.Serialization

        [<DataContract>]
        type Quote = {
            [<field: DataMember(Name="securityIdentifier") >] RicCode:string
            [<field: DataMember(Name="madeOn") >] MadeOn:DateTime
            [<field: DataMember(Name="closePrice") >] Price:float
        }

        let m_cache = new Dictionary<string, Quote>()

        let ParseQuoteString (quoteString:string) =
            let data = Encoding.Unicode.GetBytes(quoteString)
            let stream = new MemoryStream()
            stream.Write(data, 0, data.Length);
            stream.Position <- 0L
            let ser = Json.DataContractJsonSerializer(typeof<Quote array>)
            let results:Quote array = ser.ReadObject(stream) :?> Quote array
            results

        let RefreshCache quoteList =
            m_cache.Clear()
            quoteList |> Array.iter(fun result->m_cache.Add(result.RicCode, result))

        let EstablishConnection() =
            let pipeServer = new NamedPipeServerStream("testpipe", PipeDirection.InOut, 4)
            let mutable sr = null
            printfn "[F#] NamedPipeServerStream thread created, Wait for a client to connect"
            pipeServer.WaitForConnection()
            printfn "[F#] Client connected."
            try
                // Stream for the request.
                sr <- new StreamReader(pipeServer)
            with
            | _ as e -> printfn "[F#]ERROR: %s" e.Message
            sr

        while true do
            let sr = EstablishConnection()
            // Read request from the stream.
            printfn "[F#] Ready to Receive data"
            sr.ReadLine()
            |> ParseQuoteString
            |> RefreshCache
            printfn "[F#]Quot Size, %d" m_cache.Count
            let quot = m_cache.["MSFT.OQ"]
            printfn "[F#]RIC: %s" quot.RicCode
            printfn "[F#]MadeOn: %s" (String.Format("{0:T}",quot.MadeOn))
            printfn "[F#]Price: %f" quot.Price

    Read the article

  • Daily, Weekly and Monthly Page View Counter

    - by Jens Fahnenbruck
    I'm building a website with user-generated content. On the home page I want to show a list of all created items, and I want to be able to sort them by a view counter. That sounds easy, but I want multiple counters: I want to know which was the most visited item in the last day, the last week, the last month, or overall. My first idea was to create 4 counter columns in the item's DB table, one each for daily, weekly, monthly and overall, and then create a cron job that clears the daily counter every 24 hours, the weekly counter every 7 days, and so on. But my problem with this is: what happens if I want to know which was the most viewed item of the week just after the weekly counter got cleared? What I need is an efficient way to create a continuous counter, which is reduced for every page view that is too old and increased for every new page view. Right now I'm thinking of a solution with the Redis server, but I have no solution yet. I'm just looking for a general idea here, but FYI I'm developing this application in Ruby on Rails.
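    One way to get the "continuous counter" described above is a sorted set of view timestamps per item, so "views in the last day/week/month" becomes a range count and old entries can be trimmed lazily. A sketch with the Python Redis client (illustration only; the key names are made up, and a Rails version would use the equivalent calls in redis-rb):
        import time
        import uuid
        import redis

        r = redis.Redis()   # assumes a local Redis instance

        def record_view(item_id):
            # One sorted set per item; the score is the view timestamp,
            # the member just has to be unique.
            r.zadd("views:%s" % item_id, {str(uuid.uuid4()): time.time()})

        def views_in_window(item_id, seconds):
            # "Most viewed in the last day/week/month" is a range count over scores.
            cutoff = time.time() - seconds
            return r.zcount("views:%s" % item_id, cutoff, "+inf")

        def trim_old_views(item_id, max_age=30 * 24 * 3600):
            # Periodically drop entries older than the largest window you care about.
            r.zremrangebyscore("views:%s" % item_id, 0, time.time() - max_age)
    The overall counter can stay a plain integer column, since it never has to forget old views.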

    Read the article

  • Is it immoral to write crappy code even if readability and correctness is not a requirement?

    - by mafutrct
    There are cases when crappy (i.e. unreadable and buggy) code is not much of a problem. For instance, imagine you need to generate a big text file that mostly follows a simple pattern with a few very complex exceptions. What do you do? You quickly write a simple algorithm and insert the exceptional bits in the output manually to save 4 hours. The code is unreadable, and the output is flawed, but it's still the correct way since it is way faster. But let's get this straight: I hate bad code. I've had to read and work with code that caused my stomach to hurt. I care a lot about good code. And actually, I caught myself thinking that it is immoral to write bad code even though the dirty approach is sometimes superior. I was surprised by myself and found my idea to be very irrational. Did you ever experience this? Should I just get rid of this stupid idea and use the most efficient approach to coding?

    Read the article

  • mysql image disable print download

    - by Vish
    Hi, we use a Flex AIR client and a WAMP server. TIFF images are stored in MySQL. Currently, I can download the image from the AIR client and it prompts with a download dialog. Things are fine up to this point. We got a new requirement: only some users may print the image that gets downloaded; other users should not be able to print the TIFF image. I am wondering how to accomplish this. One idea, though I am not sure it's efficient, is to convert the requested image to PDF on the server side, disable the print option there (hopefully there are APIs available), and send back the PDF. Please let me know of better ideas. Also, is there a way to prevent the file download dialog from popping up every time the file is requested for download? Can we just get the file stream to the client and manipulate it to open with a particular viewer, or write it to PDF... Please help.

    Read the article

  • Drawing to the canvas

    - by Mattl
    I'm writing an Android application that draws directly to the canvas in the onDraw event of a View. I'm drawing something that involves drawing each pixel individually; for this I use something like:
        for (int x = 0; x < xMax; x++) {
            for (int y = 0; y < yMax; y++) {
                MyColour = CalculateMyPoint(x, y);
                canvas.drawPoint(x, y, MyColour);
            }
        }
    The problem here is that this takes a long time to paint, as CalculateMyPoint is quite an expensive method. Is there a more efficient way of painting to the canvas? For example, should I draw to a bitmap and then paint the whole bitmap to the canvas in the onDraw event? Or maybe evaluate my colours and fill in an array that the onDraw method can use to paint the canvas? Users of my application will be able to change parameters that affect the drawing on the canvas. This is incredibly slow at the moment.

    Read the article

  • jQuery: Inserting li items one by one?

    - by Legend
    I wrote the following (part of a jQuery plugin) to insert a set of items from a JSON object into a <ul> element.
        ...
        query: function() {
            ...
            $.ajax({
                url: fetchURL,
                type: 'GET',
                dataType: 'jsonp',
                timeout: 5000,
                error: function() {
                    self.html("Network Error");
                },
                success: function(json) {
                    // Process JSON
                    $.each(json.results, function(i, item) {
                        $("<li></li>")
                            .html(mHTML)
                            .attr('id', "div_li" + i)
                            .attr('class', "divliclass")
                            .prependTo("#" + "div_ul");
                        $(slotname + "div_li" + i).hide();
                        $(slotname + "div_li" + i).show("slow");
                    });
                }
            });
        },
        ...
    Doing this may be adding the <li> items one by one in theory, but when I load the page, everything shows up instantaneously. Instead, is there an efficient way to make them appear one by one, more slowly? I'll explain with a small example: if I have 3 items, this code makes all 3 items appear instantaneously (at least to my eyes). I want something like: 1 fades in, then 2 fades in, then 3 (something like a news ticker perhaps). Does anyone have a suggestion?

    Read the article

  • Polymorphic Numerics on .Net and In C#

    - by Bent Rasmussen
    It's a real shame that in .Net there is no polymorphism for numbers, i.e. no INumeric interface that unifies the different kinds of numerical types such as bool, byte, uint, int, etc. In the extreme, one would like a complete package of abstract algebra types. Joe Duffy has an article about the issue: http://www.bluebytesoftware.com/blog/CommentView,guid,14b37ade-3110-4596-9d6e-bacdcd75baa8.aspx How would you express this in C#, in order to retrofit it, without having influence over .Net or C#? I have one idea that involves first defining one or more abstract types (interfaces such as INumeric - or more abstract than that) and then defining structs that implement these and wrap types such as int, while providing operations that return the new type (e.g. Integer32 : INumeric, where addition would be defined as public Integer32 Add(Integer32 other) { return Return(Value + other.Value); }). I am somewhat afraid of the execution speed of this code, but at least it is abstract. No operator overloading goodness... Any other ideas? .Net doesn't look like a viable long-term platform if it cannot have this kind of abstraction, I think - and be efficient about it. Abstraction is reuse.
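    As a language-neutral illustration of the wrapper idea sketched above (Python standing in for the C# version, so none of this reflects the actual .NET type system or its performance characteristics):
        from abc import ABC, abstractmethod

        class Numeric(ABC):
            # Minimal stand-in for the INumeric idea: a type that knows how to add itself.
            @abstractmethod
            def add(self, other):
                ...

        class Integer32(Numeric):
            def __init__(self, value):
                self.value = int(value)

            def add(self, other):
                # Return a new wrapper, mirroring "return Return(Value + other.Value)".
                return Integer32(self.value + other.value)

        print(Integer32(2).add(Integer32(3)).value)   # prints 5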

    Read the article

  • Perform Grouping of Resultsets in Code, not on Database Level

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:
        Category  Column2  Column3
        A         2        3.50
        A         3        2
        B         3        2
        B         1        5
        ...
    I need to group the resultset based on the Category column and sum the values for Column2 and Column3. I have to do it in code because I cannot perform the grouping in the SQL query that gets the data, due to the complexity of the query (long story). This grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that would handle any possible values that appear in the Category column. I know there has to be a straightforward, efficient way to do it, but I cannot wrap my head around it right now. How would you accomplish it? EDIT I have attempted to group the result in SQL using the exact same grouping query suggested by Thomas Levesque, and both times our entire RDBMS crashed trying to process the query. I was under the impression that Linq was not available until .NET 3.5. This is a .NET 2.0 web application, so I did not think it was an option. Am I wrong in thinking that? EDIT Starting a bounty because I believe this would be a good technique to have in the toolbox, no matter where the different resultsets are coming from. I believe knowing the most concise way to group any 2 somewhat similar sets of data in code (without .NET LINQ) would be beneficial to more people than just me.
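    The pre-LINQ shape of this is a single pass that accumulates both sums in a dictionary keyed by category (in .NET 2.0 that would be a Dictionary over the rows); here is the idea as a Python sketch using the sample rows above:
        rows = [("A", 2, 3.50), ("A", 3, 2), ("B", 3, 2), ("B", 1, 5)]

        totals = {}                      # category -> [sum of Column2, sum of Column3]
        for category, col2, col3 in rows:
            sums = totals.setdefault(category, [0, 0])
            sums[0] += col2
            sums[1] += col3

        for category, (sum2, sum3) in sorted(totals.items()):
            print(category, sum2, sum3)  # A 5 5.5 / B 4 7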

    Read the article

  • client-server syncing methodology [theoretical]

    - by Kenneth Ballenegger
    I'm in the process of building a web app that syncs with an iOS client. I'm currently trying to figure out how to go about syncing. I've come up with the following two directions: I've got a fairly simple server web app with a list of items. They are ordered by date modified, and as such syncing the order does not matter. One direction I'm considering is to let the client deal with syncing. I've already got an API that lets the client get the data, as well as perform certain actions on it, such as updating, adding or removing single items. I was considering: 1) on each sync, asking the server for all items modified since the last successful sync and updating the local records based on what's returned by the server, and 2) building a persistent queue of create / remove / update requests on the client, and keeping them until confirmation by the server. The risk with this approach is that I'm basically asking each side to send changes to the other side, hoping it works smoothly, but risking a divergence at some point. This would probably be more bandwidth-efficient, though. The other direction I was considering was a more traditional model. I would have a "sync" process in which the client would send its whole list to the server (or a subset modified since the last sync), the server would update its data (resolving conflicts by keeping the last-modified item, and keeping deleted items with a deleted = 1 field), and the server would return an updated list of items (since the last successful sync) which the client would then replace its data with. Thoughts?
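    A minimal sketch of the second direction (last-modified-wins with tombstones); the field names are made up, and every item is assumed to carry an id, a modified timestamp and a deleted flag:
        def merge_into_server(server_items, client_items):
            # server_items: dict id -> item; client_items: list of items sent by the client.
            # Each item is a dict with 'id', 'modified' (timestamp) and 'deleted' (0/1).
            for item in client_items:
                current = server_items.get(item["id"])
                # Last modified wins; deletions are kept as tombstones rather than removed.
                if current is None or item["modified"] > current["modified"]:
                    server_items[item["id"]] = item
            return server_items

        def changes_since(server_items, last_sync):
            # What the server returns for the client to replace its copy with.
            return [item for item in server_items.values() if item["modified"] > last_sync]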

    Read the article

  • Opengl Iphone SDK: How to tell if you're touching an object on screen?

    - by TheGambler
    First is my touchesBegan function, and then the struct that stores the values for my object. I have an array of these objects, and I'm trying to figure out, when I touch the screen, whether I'm touching an object on the screen. I don't know if I need to do this by iterating through all my objects and figuring out if I'm touching one that way, or maybe there is an easier, more efficient way. How is this usually handled?
        -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
            [super touchesEnded:touches withEvent:event];
            UITouch* touch = ([touches count] == 1 ? [touches anyObject] : nil);
            CGRect bounds = [self bounds];
            CGPoint location = [touch locationInView:self];
            location.y = bounds.size.height - location.y;
            float xTouched = location.x/20 - 8 + ((int)location.x % 20)/20;
            float yTouched = location.y/20 - 12 + ((int)location.y % 20)/20;
        }
        typedef struct object_tag   // Create A Structure Called Object
        {
            int tex;        // Integer Used To Select Our Texture
            float x;        // X Position
            float y;        // Y Position
            float z;        // Z Position
            float yi;       // Y Increase Speed (Fall Speed)
            float spinz;    // Z Axis Spin
            float spinzi;   // Z Axis Spin Speed
            float flap;     // Flapping Triangles :)
            float fi;       // Flap Direction (Increase Value)
        } object;
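    For a small number of objects, the usual approach is exactly the loop the question suggests: convert the touch into the objects' coordinate space (as xTouched/yTouched above) and test each object's bounds. A sketch of that hit test (Python for brevity; the object fields and the half-size are assumptions for illustration):
        def find_touched_object(objects, touch_x, touch_y, half_size=1.0):
            # objects: anything with .x and .y in the same 2D space as the converted touch.
            # Returns the first object whose bounding square contains the touch, or None.
            for obj in objects:
                if (abs(touch_x - obj.x) <= half_size and
                        abs(touch_y - obj.y) <= half_size):
                    return obj
            return None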

    Read the article

  • Mysqli results memory usage

    - by Poe
    Why is the memory consumption in this query continuing to rise as the internal pointer progresses through loop? How to make this more efficient and lean? $link = mysqli_connect(...); $result = mysqli_query($link,$query); // 403,268 rows in result set while ($row = mysqli_fetch_row($result)) { // print time, (get memory usage), -- row number } mysqli_free_result(); mysqli_close($link); /* 06:55:25 (1240336) -- Run query 06:55:26 (39958736) -- Query finished 06:55:26 (39958784) -- Begin loop 06:55:26 (39960688) -- Row 0 06:55:26 (45240712) -- Row 10000 06:55:26 (50520712) -- Row 20000 06:55:26 (55800712) -- Row 30000 06:55:26 (61080712) -- Row 40000 06:55:26 (66360712) -- Row 50000 06:55:26 (71640712) -- Row 60000 06:55:26 (76920712) -- Row 70000 06:55:26 (82200712) -- Row 80000 06:55:26 (87480712) -- Row 90000 06:55:26 (92760712) -- Row 100000 06:55:26 (98040712) -- Row 110000 06:55:26 (103320712) -- Row 120000 06:55:26 (108600712) -- Row 130000 06:55:26 (113880712) -- Row 140000 06:55:26 (119160712) -- Row 150000 06:55:26 (124440712) -- Row 160000 06:55:26 (129720712) -- Row 170000 06:55:27 (135000712) -- Row 180000 06:55:27 (140280712) -- Row 190000 06:55:27 (145560712) -- Row 200000 06:55:27 (150840712) -- Row 210000 06:55:27 (156120712) -- Row 220000 06:55:27 (161400712) -- Row 230000 06:55:27 (166680712) -- Row 240000 06:55:27 (171960712) -- Row 250000 06:55:27 (177240712) -- Row 260000 06:55:27 (182520712) -- Row 270000 06:55:27 (187800712) -- Row 280000 06:55:27 (193080712) -- Row 290000 06:55:27 (198360712) -- Row 300000 06:55:27 (203640712) -- Row 310000 06:55:27 (208920712) -- Row 320000 06:55:27 (214200712) -- Row 330000 06:55:27 (219480712) -- Row 340000 06:55:27 (224760712) -- Row 350000 06:55:27 (230040712) -- Row 360000 06:55:27 (235320712) -- Row 370000 06:55:27 (240600712) -- Row 380000 06:55:27 (245880712) -- Row 390000 06:55:27 (251160712) -- Row 400000 06:55:27 (252884360) -- End loop 06:55:27 (1241264) -- Free */

    Read the article

  • mod_cgi , mod_fastcgi, mod_scgi , mod_wsgi, mod_python, FLUP. I don't know how many more. what is mo

    - by claws
    I recently learnt Python. I liked it. I just wanted to use it for web development. This thought caused all the trouble. But I like these troubles :) Coming from the PHP world, where there is only one standardized way, I expected the same and searched for Python & Apache. http://stackoverflow.com/questions/449055/setting-up-python-on-windows-apache says: stay away from mod_python; one common misleading idea is that mod_python is like mod_php, but for Python, and that is not true. So what is the equivalent of mod_php in Python? I need a little clarification on this one: http://stackoverflow.com/questions/219110/how-python-web-frameworks-wsgi-and-cgi-fit-together CGI, FastCGI and SCGI are language-agnostic. You can write CGI scripts in Perl, Python, C, bash, or even Assembly :). So, I guess mod_cgi, mod_fastcgi and mod_scgi are their corresponding Apache modules. Right? WSGI is, in short, an optimized/improved, efficient version designed specifically for the Python language only. In order to use this, mod_wsgi is the way to go, right? This leaves out mod_python. What is it then? Apache - mod_fastcgi - FLUP (via CGI protocol) - Django (via WSGI protocol). Flup is another way to run with WSGI for any webserver that can speak FCGI, SCGI or AJP. What is FLUP? What is AJP? How did Django come into the picture? These questions raise questions about PHP. How is it actually running? What technology is it using? mod_php & mod_python - what are the differences? In the future, if I want to use Perl or Java, will I again have to get confused? Can someone kindly explain things clearly and give a complete picture?
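    For concreteness, a minimal WSGI application; this callable is the interface that mod_wsgi (and flup, when it bridges FastCGI/SCGI/AJP to WSGI) expects a Python web app or framework such as Django to expose:
        def application(environ, start_response):
            # Smallest possible WSGI app: every request gets a plain-text reply.
            body = b"Hello from WSGI\n"
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]

        # Stand-alone test without Apache, using the reference server in the stdlib:
        if __name__ == "__main__":
            from wsgiref.simple_server import make_server
            make_server("localhost", 8000, application).serve_forever()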

    Read the article

  • MySQL ORDER BY DESC is fast but ASC is very slow

    - by Pepper
    Hello, I'm completely stumped on this one. For some reason when I sort this query by DESC it's super fast, but if sorted by ASC it's extremely slow. This takes about 150 milliseconds: SELECT posts.id FROM posts USE INDEX (published) WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 ) ORDER BY posts.published DESC LIMIT 0, 50; This takes about 32 seconds: SELECT posts.id FROM posts USE INDEX (published) WHERE posts.feed_id IN ( 4953,622,1,1852,4952,76,623,624,10 ) ORDER BY posts.published ASC LIMIT 0, 50; The EXPLAIN is the same for both queries. id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE posts index NULL published 5 NULL 50 Using where I've tracked it down to "USE INDEX (published)". If I take that out it's the same performance both ways. But the EXPLAIN shows the query is less efficient overall. id select_type table type possible_keys key key_len ref rows Extra 1 SIMPLE posts range feed_id feed_id 4 \N 759 Using where; Using filesort And here's the table. CREATE TABLE `posts` ( `id` int(20) NOT NULL AUTO_INCREMENT, `feed_id` int(11) NOT NULL, `post_url` varchar(255) NOT NULL, `title` varchar(255) NOT NULL, `content` blob, `author` varchar(255) DEFAULT NULL, `published` int(12) DEFAULT NULL, `updated` datetime NOT NULL, `created` datetime NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `post_url` (`post_url`,`feed_id`), KEY `feed_id` (`feed_id`), KEY `published` (`published`) ) ENGINE=InnoDB AUTO_INCREMENT=196530 DEFAULT CHARSET=latin1; Is there a fix for this? Thanks!

    Read the article

  • how to handle multiple profiles per user?

    - by Scott Willman
    I'm doing something that doesn't feel very efficient. From my code below, you can probably see that I'm trying to allow for multiple profiles of different types attached to my custom user object (Person). One of those profiles will be considered a default and should have an accessor from the Person class. Can this be done better?
        from django.db import models
        from django.contrib.auth.models import User, UserManager

        class Person(User):
            public_name = models.CharField(max_length=24, default="Mr. T")
            objects = UserManager()

            def save(self):
                self.set_password(self.password)
                super(Person, self).save()

            def _getDefaultProfile(self):
                def_teacher = self.teacher_set.filter(default=True)
                if def_teacher: return def_teacher[0]
                def_student = self.student_set.filter(default=True)
                if def_student: return def_student[0]
                def_parent = self.parent_set.filter(default=True)
                if def_parent: return def_parent[0]
                return False
            profile = property(_getDefaultProfile)

            def _getProfiles(self):
                # Inefficient use of QuerySet here. Tolerated because the QuerySets should be very small.
                profiles = []
                if self.teacher_set.count(): profiles.append(list(self.teacher_set.all()))
                if self.student_set.count(): profiles.append(list(self.student_set.all()))
                if self.parent_set.count(): profiles.append(list(self.parent_set.all()))
                return profiles
            profiles = property(_getProfiles)

        class BaseProfile(models.Model):
            person = models.ForeignKey(Person)
            is_default = models.BooleanField(default=False)

            class Meta:
                abstract = True

        class Teacher(BaseProfile):
            user_type = models.CharField(max_length=7, default="teacher")

        class Student(BaseProfile):
            user_type = models.CharField(max_length=7, default="student")

        class Parent(BaseProfile):
            user_type = models.CharField(max_length=7, default="parent")

    Read the article

  • How to read LARGE Sqlite file to be copied into Android emulator, or device from assets folder?

    - by Peter SHINe ???
    I guess many people have already read this article, "Using your own SQLite database in Android applications": http://www.reigndesign.com/blog/using-your-own-sqlite-database-in-android-applications/comment-page-2/#comment-12368 However, it keeps throwing an IOException at
        while ((length = myInput.read(buffer))>0){
            myOutput.write(buffer, 0, length);
        }
    I'm trying to use a large DB file. It's as big as 8MB. I built it using sqlite3 on Mac OS X, inserted UTF-8 encoded strings (for I am using Korean), and added an android_meta table with ko_KR as the locale, as instructed above. However, when I debug, it keeps showing an IOException at length = myInput.read(buffer). I suspect it's caused by trying to read a big file. If not, I have no clue why. I tested the same code using a much smaller text file, and it worked fine. Can anyone help me out on this? I've searched many places, but none gave me a clear answer or a good solution (good meaning efficient or easy). I will try using BufferedInput(Output)Stream, but if the simpler one cannot work, I don't think that will work either. Can anyone explain the fundamental limits on file input/output in Android, and possibly the right way around them? I will really appreciate anyone's considerate answer. Thank you. WITH MORE DETAIL:
        private void copyDataBase() throws IOException{
            //Open your local db as the input stream
            InputStream myInput = myContext.getAssets().open(DB_NAME);
            // Path to the just created empty db
            String outFileName = DB_PATH + DB_NAME;
            //Open the empty db as the output stream
            OutputStream myOutput = new FileOutputStream(outFileName);
            //transfer bytes from the inputfile to the outputfile
            byte[] buffer = new byte[1024];
            int length;
            while ((length = myInput.read(buffer))>0){
                myOutput.write(buffer, 0, length);
            }
            //Close the streams
            myOutput.flush();
            myOutput.close();
            myInput.close();
        }

    Read the article

  • Google App Engine - Caching generated HTML

    - by Alexander
    I have written a Google App Engine application that programmatically generates a bunch of HTML code that is really the same output for each user who logs into my system, and I know that this is going to be inefficient when the code goes into production. So, I am trying to figure out the best way to cache the generated pages. The most probable option is to generate the pages and write them into the database, and then check the time of the database put operation for a given page against the time that the code was last updated. Then, if the code is newer than the last put to the database (for a particular HTML request), new HTML will be generated and served, and cached to the database. If the code is older than the last put to the database, then I will just get the HTML directly from the database and serve it (therefore avoiding all the CPU wastage of generating the HTML). I am not only looking to minimize load times, but to minimize CPU usage. However, one issue that I am having is that I can't figure out how to programmatically check when the version of the code uploaded to App Engine was updated. I am open to any suggestions on this approach, or other approaches for caching generated HTML. Note that while memcache could help in this situation, I believe that it is not the final solution since I really only need to regenerate HTML when the code is updated (as opposed to every time the memcache expires). Kind regards, and thank you in advance for any suggestions you may be able to offer. -Alex
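    An in-memory sketch of the version-stamped cache described above (a sketch only: the dict stands in for the datastore or memcache entry, and how the deployment version is obtained is an assumption; the first-generation Python runtime exposed it as os.environ['CURRENT_VERSION_ID'], but any string that changes on each code upload would do):
        import os

        def current_code_version():
            # Assumption: this value changes whenever new code is deployed.
            return os.environ.get("CURRENT_VERSION_ID", "dev")

        _page_cache = {}   # (page_name, code_version) -> generated HTML

        def get_page(page_name, generate):
            # Return cached HTML for page_name, regenerating only after a new deploy.
            key = (page_name, current_code_version())
            if key not in _page_cache:
                _page_cache[key] = generate()      # the expensive HTML generation
            return _page_cache[key]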

    Read the article

  • "Priming" a whole database in MSSQL for first-hit speed

    - by David Spillett
    For a particular app I have a set of queries that I run each time the database has been restarted for any reason (usually a server reboot). These "prime" SQL Server's page cache with the common core working set of the data, so that the app is not unusually slow the first time a user logs in afterwards. One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4Gb in the machine, the DB is under 1.5Gb currently and unlikely to grow too much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM? It could be done the hard way by having a script scan sysobjects & sysindexes and running SELECT * FROM <table> WITH(INDEX(<index_name>)) ORDER BY <index_fields> for every key and index found, which should cause every used page to be read at least once and so be in RAM, but is there a cleaner or more efficient way? All planned instances where the database server is stopped are outside normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so it is not an issue if such a process, until it completes, slows users down more than an unprimed working set would.

    Read the article
