Search Results

Search found 9199 results on 368 pages for 'topological sort'.

Page 311/368 | < Previous Page | 307 308 309 310 311 312 313 314 315 316 317 318  | Next Page >

  • Best implementation of Java Queue?

    - by Georges Oates Larsen
    I am working (in Java) on an image processing algorithm that recursively traverses the pixels of the image, outward from a center point. Unfortunately, that causes stack overflows, so I have decided to switch to a queue-based algorithm. Now, this is all fine and dandy -- but considering that the queue will be handling THOUSANDS of pixels in a very short amount of time, while constantly popping and pushing, WITHOUT maintaining a predictable size (it could be anywhere between length 100 and 20000), the queue implementation needs to have significantly fast popping and pushing abilities. A linked list seems attractive due to its ability to push elements onto itself without rearranging anything else in the list, but in order for it to be fast enough, it would need easy access to both its head AND its tail (or the second-to-last node if it were not doubly linked). Sadly, though, I cannot find any information on the underlying implementation of linked lists in Java, so it's hard to say if a linked list is really the way to go... This brings me to my question: what would be the best implementation of the Queue interface in Java for what I intend to do? (I do not wish to edit or even access anything other than the head and tail of the queue -- I do not wish to do any sort of rearranging, or anything. On the flip side, I DO intend to do a lot of pushing and popping, and the queue will be changing size quite a bit, so preallocating would be inefficient.)

    Read the article

  • MySql "comments" parameter as descriptor?

    - by Nick
    So, I'm trying to learn a lot at once, and this place is really helpful! I'm making a little running log website for myself and maybe a few other people, and I have it so that the user can add workouts for each day. With each workout, I have a variety of information the user can fill out for the workout, such as running distance, time, quality of run, course, etc... I store this in a MySql database as a table with fields titled "distance", "time", "runquality", etc... Now, these field titles don't match up with what I want displayed on the running log, so I was thinking of using the "Comments" attribute for a field to store its human-readable title--thus the field "runquality" would have "Quality of run" as its comment, and then I would pull the comment with a SQL query and display it instead of the field name. Is this a good theoretical/practical way of going about it? And what sort of SQL would I use to pull the comment for the field anyway? Secondly, suppose I want to add the ability for the user to create their own workout descriptors. So say a user wants to add a "temperature" descriptor for their workout. Should I create a script that adds fields to my workout table, or should I create a separate table listing only workout descriptors and somehow link the descriptor table with the "contents" table? I haven't learned any theory about database design or anything so any help is appreciated!
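    For the "what SQL pulls the comment" part, one option is to read COLUMN_COMMENT from information_schema rather than parsing SHOW CREATE TABLE output. The following is only a rough, untested sketch -- shown with Python's MySQLdb driver purely for illustration (the same SELECT works from any client), and the schema, table, and credential names are placeholders:

        import MySQLdb  # a.k.a. mysqlclient; any DB-API driver works the same way

        conn = MySQLdb.connect(host="localhost", user="runlog", passwd="secret", db="running_log")
        cur = conn.cursor()
        # COLUMN_COMMENT holds whatever was given as COMMENT '...' when the column was defined
        cur.execute(
            "SELECT COLUMN_NAME, COLUMN_COMMENT "
            "FROM information_schema.COLUMNS "
            "WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s",
            ("running_log", "workouts"),
        )
        labels = {name: comment or name for name, comment in cur.fetchall()}
        print(labels.get("runquality"))  # e.g. "Quality of run", falling back to the raw field name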

    Read the article

  • Automatically restarting Erlang applications

    - by Nick
    I recently ran into a bug where an entire Erlang application died, yielding a log message that looked like this:

        =INFO REPORT==== 11-Jun-2010::11:07:25 ===
            application: myapp
            exited: shutdown
            type: temporary

    I have no idea what triggered this shutdown, but the real problem I have is that it didn't restart itself. Instead, the now-empty Erlang VM just sat there doing nothing. Now, from the research I've done, it looks like there are other "start types" you can give an application: 'transient' and 'permanent'. If I start a Supervisor within an application, I can tell it to make a particular process transient or permanent, and it will automatically restart it for me. However, according to the documentation, if I make an application transient or permanent, it doesn't restart it when it dies, but rather it kills all the other applications as well. What I really want to do is somehow tell the Erlang VM that a particular application should always be running, and if it goes down, restart it. Is this possible to do? (I'm not talking about implementing a supervisor on top of my application, because then it's a catch-22: what if my supervisor process crashes? I'm looking for some sort of API or setting that I can use to have Erlang monitor and restart my application for me.) Thanks!

    Read the article

  • PHP Outputting File Attachments with Headers

    - by OneNerd
    After reading a few posts here I formulated this function, which is sort of a mishmash of a bunch of others:

        function outputFile( $filePath, $fileName, $mimeType = '' ) {
            // Setup
            $mimeTypes = array(
                'pdf'  => 'application/pdf',
                'txt'  => 'text/plain',
                'html' => 'text/html',
                'exe'  => 'application/octet-stream',
                'zip'  => 'application/zip',
                'doc'  => 'application/msword',
                'xls'  => 'application/vnd.ms-excel',
                'ppt'  => 'application/vnd.ms-powerpoint',
                'gif'  => 'image/gif',
                'png'  => 'image/png',
                'jpeg' => 'image/jpg',
                'jpg'  => 'image/jpg',
                'php'  => 'text/plain'
            );

            // Send Headers
            //-- next line fixed as per suggestion --
            header('Content-Type: ' . $mimeTypes[$mimeType]);
            header('Content-Disposition: attachment; filename="' . $fileName . '"');
            header('Content-Transfer-Encoding: binary');
            header('Accept-Ranges: bytes');
            header('Cache-Control: private');
            header('Pragma: private');
            header('Expires: Mon, 26 Jul 1997 05:00:00 GMT');

            readfile($filePath);
        }

    I have a PHP page (file.php) which does something like this (lots of other code stripped out):

        // I run this thru a safe function not shown here
        $safe_filename = $_GET['filename'];

        outputFile("/the/file/path/{$safe_filename}", $safe_filename, substr($safe_filename, -3));

    It seems like it should work, and it almost does, but I am having the following issues:

    - When it's a text file, I am getting a strange symbol as the first letter in the text document.
    - When it's a Word doc, it is corrupt (presumably that same first bit or byte throwing things off).
    - I presume all other file types will be corrupt - have not even tried them.

    Any ideas on what I am doing wrong? Thanks.

    UPDATE: changed line of code as suggested - still same issue.

    Read the article

  • pyPDF - Retrieve page numbers from document

    - by SquidneyPoitier
    At the moment I'm looking into doing some PDF merging with pyPdf, but sometimes the inputs are not in the right order, so I'm looking into scraping each page for its page number to determine the order it should go in (e.g. if someone split up a book into 20 10-page PDFs and I want to put them back together). I have two questions:

    1.) I know that sometimes the page number is stored in the document data somewhere, as I've seen PDFs that render on Adobe as something like [1243] (10 of 150), but I've read documents of this sort into pyPDF and I can't find any information indicating the page number - where is this stored?

    2.) If avenue #1 isn't available, I think I could iterate through the objects on a given page to try to find a page number - likely it would be its own object that has a single number in it. However, I can't seem to find any clear way to determine the contents of objects. If I run:

        pdf.getPage(0).getContents()

    this usually either returns:

        {'/Filter': '/FlateDecode'}

    or it returns a list of IndirectObject(num, num) objects. I don't really know what to do with either of these and there's no real documentation on it as far as I can tell. Is anyone familiar with this kind of thing that could point me in the right direction?
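    On the first avenue, the "(10 of 150)"-style display usually comes from the /PageLabels tree in the document catalog rather than from anything stored on the individual pages, which may be why walking the pages turns up nothing. For the second avenue - scanning each page's text for a printed number - here is a rough, untested sketch with pyPdf; it assumes the printed folio actually survives extractText() (extraction is lossy, so this is best-effort), and the file name is just an example:

        import re
        from pyPdf import PdfFileReader

        reader = PdfFileReader(open("chunk_07.pdf", "rb"))
        for i in range(reader.getNumPages()):
            text = reader.getPage(i).extractText()
            # crude heuristic: take the last standalone number on the page as its printed page number
            numbers = re.findall(r"\b\d{1,4}\b", text)
            print(i, numbers[-1] if numbers else None)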

    Read the article

  • Core Data object into an NSDictionary with possible nil objects

    - by Chuck
    I have a Core Data object that has a bunch of optional values. I'm pushing a table view controller and passing it a reference to the object so I can display its contents in a table view. Because I want the table view displayed a specific way, I am storing the values from the Core Data object into an array of dictionaries, then using the array to populate the table view. This works great, and I got editing and saving working properly. (I'm not using a fetched results controller because I don't have anything to sort on.) The issue with my current code is that if one of the items in the object is missing, then I end up trying to put nil into the dictionary, which won't work. I'm looking for a clean way to handle this. I could do the following, but I can't help feeling like there's a better way. (passedEntry is the Core Data object handed to the view controller when it is pushed; let's say it contains firstName, lastName, and age, all optional.)

        if ([passedEntry firstName] != nil) {
            [dictionary setObject:[passedEntry firstName] forKey:@"firstName"];
        } else {
            [dictionary setObject:@"" forKey:@"firstName"];
        }

    And so on. This works, but it feels kludgy, especially if I end up adding more items to the Core Data object down the road.

    Read the article

  • How to get acts on taggable working

    - by Schipperius
    I am new to Ruby on Rails (and programming) and this is probably a really stupid question. I am using Rails 3.2 and trying to use acts_as_taggable_on to generate tags on articles and to have those tags show on article index and show pages as clickable links. I have tags clickable on both the article show and index pages, but the links just go back to the index page and don't sort according to the tag name. I have scoured the Internet and pieced together the code below from various sources, but I am clearly missing something. Any help is greatly appreciated, as I have exhausted my seemingly limited knowledge! Thanks. Here is what I have:

        class ArticlesController < ApplicationController
          def tagged
            @articles = Article.all(:order => 'created_at DESC')
            @tags = Article.tag_counts_on(:tags)
            @tagged_articles = Article.tagged_with(params[:tags])

            respond_to do |format|
              format.html # index.html.erb
              format.json { render :json => @articles }
            end
          end

          def index
            @article = Article.new
            @articles = Article.paginate :page => params[:page], :per_page => 3
            @tags = Article.tag_counts_on(:tags)

            respond_to do |format|
              format.html # index.html.erb
              format.json { render json: @articles }
            end
          end
        end

        module ArticlesHelper
          include ActsAsTaggableOn::TagsHelper
        end

        class Article < ActiveRecord::Base
          acts_as_ordered_taggable
          acts_as_ordered_taggable_on :tags, :location, :about
          attr_accessible :tag_list
          scope :by_join_date, order("created_at DESC")
        end

    article/index.html.erb:

        <% tag_cloud(@tags, %w(tag1 tag2 tag3 tag4)) do |tag| %>
          <%= link_to tag.name, articles_path(:id => tag.name) %>
        <% end %>

    article/show.html.erb:

        <%= raw @article.tags.map { |tag| link_to tag.name, articles_path(:tag_id => tag) }.join(" | ") %>

    Read the article

  • Randomly sorting an array

    - by Cam
    Does there exist an algorithm which, given an ordered list of symbols {a1, a2, a3, ..., ak}, produces in O(n) time a new list of the same symbols in a random order without bias? "Without bias" means the probability that any symbol s will end up in some position p in the list is 1/k. Assume it is possible to generate a non-biased integer from 1-k inclusive in O(1) time. Also assume that O(1) element access/mutation is possible, and that it is possible to create a new list of size k in O(k) time. In particular, I would be interested in a 'generative' algorithm. That is, I would be interested in an algorithm that has O(1) initial overhead, and then produces a new element for each slot in the list, taking O(1) time per slot. If no solution exists to the problem as described, I would still like to know about solutions that do not meet my constraints in one or more of the following ways (and/or in other ways if necessary):

    - the time complexity is worse than O(n).
    - the algorithm is biased with regards to the final positions of the symbols.
    - the algorithm is not generative.

    I should add that this problem appears to be the same as the problem of randomly sorting the integers from 1-k, since we can sort the list of integers from 1-k and then for each integer i in the new list, we can produce the symbol ai.
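    What the constraints describe is essentially the Fisher-Yates shuffle; the "inside-out" variant below builds the new list in O(k) with an unbiased result. This is only a sketch, assuming random.randint is the O(1) unbiased integer source, and it is not strictly "generative" in the sense asked for, since a slot filled earlier can still be overwritten later:

        import random

        def shuffled(symbols):
            """Inside-out Fisher-Yates: builds an unbiased permutation in O(k)."""
            out = []
            for i, s in enumerate(symbols):
                j = random.randint(0, i)   # unbiased integer in [0, i]
                if j == i:
                    out.append(s)
                else:
                    out.append(out[j])     # move the old occupant of slot j to the new slot
                    out[j] = s
            return out

        print(shuffled(["a1", "a2", "a3", "a4", "a5"]))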

    Read the article

  • In Perl, how can I wait for threads to end in parallel?

    - by Pmarcoen
    I have a Perl script that launches 2 threads, one for each processor. I need it to wait for a thread to end; if one thread ends, a new one is spawned. It seems that the join method blocks the rest of the program, therefore the second thread can't end until everything the first thread does is done, which sort of defeats its purpose. I tried the is_joinable method but that doesn't seem to do it either. Here is some of my code:

        use threads;
        use threads::shared;

        @file_list = @ARGV;          # Our file list
        $nofiles = $#file_list + 1;  # Real number of files
        $currfile = 1;               # Current number of file to process
        my %MSG : shared;            # shared hash

        $thr0 = threads->new(\&process, shift(@file_list));
        $currfile++;
        $thr1 = threads->new(\&process, shift(@file_list));
        $currfile++;

        while(1){
            if ($thr0->is_joinable()) {
                $thr0->join;
                # check if there are files left to process
                if($currfile <= $nofiles){
                    $thr0 = threads->new(\&process, shift(@file_list));
                    $currfile++;
                }
            }
            if ($thr1->is_joinable()) {
                $thr1->join;
                # check if there are files left to process
                if($currfile <= $nofiles){
                    $thr1 = threads->new(\&process, shift(@file_list));
                    $currfile++;
                }
            }
        }

        sub process{
            print "Opening $currfile of $nofiles\n";
            # do some stuff
            if(some condition){
                lock(%MSG);
                # write stuff to hash
            }
            print "Closing $currfile of $nofiles\n";
        }

    The output of this is:

        Opening 1 of 4
        Opening 2 of 4
        Closing 1 of 4
        Opening 3 of 4
        Closing 3 of 4
        Opening 4 of 4
        Closing 2 of 4
        Closing 4 of 4

    Read the article

  • encapsulation in python list (want to use " instead of ')

    - by Codehai
    I have a list of users users["pirates"] and they're stored in the format ['pirate1','pirate2']. If I hand the list over to a def and query for it in MongoDB, it returns data based on the first index (e.g. pirate1) only. If I hand over a list in the format ["pirate1","pirate"], it returns data based on all the elements in the list. So I think there's something wrong with the encapsulation of the elements in the list. My question: can I change the encapsulation from ' to " without replacing every ' on every element with a loop manually?

    Short example:

        aList = list()

        # get pirate stuff
        # users["pirates"] is a list returned by a former query,
        # so e.g. users["pirates"][0] may be peter without any quotes
        for pirate in users["pirates"]:
            aList.append(pirate)

        aVar = pirateDef(aList)
        print(aVar)

    The definition:

        def pirateDef(inputList = list()):
            # prepare query
            col = mongoConnect().MYCOL

            # query for pirates Arrrr
            pirates = col.find({ "_id" : {"$in" : inputList}} ).sort("_id",1).limit(50)

            # loop over users
            userList = list()
            for person in pirates:
                # do stuff that has nothing to do with the problem
                # append user to userlist
                userList.append(person)
            return userList

    If the given list has ' encapsulation it returns:

        'pirates': [{'pirate': 'Arrr', '_id': 'blabla'}]

    If encapsulated with " it returns:

        'pirates' : [{'_id': 'blabla', 'pirate' : 'Arrr'}, {'_id': 'blabla2', 'pirate' : 'cheers'}]

    EDIT: I tried figuring out that the problem has to be in the MongoDB query. The list is handed over to the def correctly, but after querying, pirates only consists of 1 element... Thanks for helping me, Codehai
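    For what it's worth, the ' vs " difference is only how Python displays strings -- both quote styles produce identical str objects -- so it can't by itself change the query; what matters is that $in receives a real list whose elements match the type of the stored _id values. A rough, untested sketch using pymongo's MongoClient (database/collection names are placeholders, and the values would need to be wrapped in bson's ObjectId if the _ids are ObjectIds rather than plain strings):

        from pymongo import MongoClient

        users = {"pirates": ["pirate1", "pirate2"]}   # stand-in for the earlier query's result

        col = MongoClient()["mydb"]["pirates"]        # database/collection names are placeholders

        ids = [str(p) for p in users["pirates"]]      # a plain list of str, matching the stored _id type
        for person in col.find({"_id": {"$in": ids}}).sort("_id", 1).limit(50):
            print(person)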

    Read the article

  • Removing a pattern from the beginning and end of a string in Ruby

    - by seaneshbaugh
    So I found myself needing to remove <br /> tags from the beginning and end of strings in a project I'm working on. I made a quick little method that does what I need it to do, but I'm not convinced it's the best way to go about doing this sort of thing. I suspect there's probably a handy regular expression I can use to do it in only a couple of lines. Here's what I've got:

        def remove_breaks(text)
          if text != nil and text != ""
            text.strip!

            index = text.rindex("<br />")
            while index != nil and index == text.length - 6
              text = text[0, text.length - 6]
              text.strip!
              index = text.rindex("<br />")
            end

            text.strip!

            index = text.index("<br />")
            while index != nil and index == 0
              text = text[6, text.length]
              text.strip!
              index = text.index("<br />")
            end
          end

          return text
        end

    Now the "<br />" could really be anything, and it'd probably be more useful to make a general-purpose function that takes as an argument the string that needs to be stripped from the beginning and end. I'm open to any suggestions on how to make this cleaner, because this just seems like it can be improved.
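    There is indeed a short regular-expression version of this: anchor the tag (plus any surrounding whitespace) to the very start and very end of the string and replace it with nothing. A small sketch, shown here in Python (an equivalent Ruby version would be a single gsub with \A/\z anchors); it also tolerates <br> without the trailing slash:

        import re

        _EDGE_BREAKS = re.compile(r"\A(?:\s*<br\s*/?>)+\s*|\s*(?:<br\s*/?>\s*)+\Z")

        def remove_breaks(text, pattern=_EDGE_BREAKS):
            """Strip <br /> tags (and surrounding whitespace) from both ends only."""
            return pattern.sub("", text or "").strip()

        print(remove_breaks("<br /> <br />hello<br />world <br />"))  # -> "hello<br />world"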

    Read the article

  • How to implement a collection (list, map?) of complicated strings in Java?

    - by Alex Cheng
    Hi all. I'm new here. Problem -- I have something like the following entries, 1000 of them:

        args1=msg args2=flow args3=content args4=depth args6=within ==> args5=content
        args1=msg args2=flow args3=content args4=depth args6=within args7=distance ==> args5=content
        args1=msg args2=flow args3=content args6=within ==> args5=content
        args1=msg args2=flow args3=content args6=within args7=distance ==> args5=content
        args1=msg args2=flow args3=flow ==> args4=flowbits
        args1=msg args2=flow args3=flow args5=content ==> args4=flowbits
        args1=msg args2=flow args3=flow args6=depth ==> args4=flowbits
        args1=msg args2=flow args3=flow args6=depth ==> args5=content
        args1=msg args2=flow args4=depth ==> args3=content
        args1=msg args2=flow args4=depth args5=content ==> args3=content
        args1=msg args2=flow args4=depth args5=content args6=within ==> args3=content
        args1=msg args2=flow args4=depth args5=content args6=within args7=distance ==> args3=content

    I'm doing some sort of suggestion method. Say, for

        args1=msg args2=flow args3=flow ==> args4=flowbits

    if the sentence contains msg, flow, and another flow, then I should return the suggestion of flowbits. How can I go about doing it? I know I should scan (whenever a character is pressed on the textarea) a list or array for a match and return the result, but with 1000 entries, how should I implement it? I'm thinking of a HashMap, but can I do something like this?

        <"msg,flow,flow", "flowbits">

    Also, in a sentence the arguments might not be in order, so if it's flow,flow,msg then I can't match anything in the HashMap, as the key is "msg,flow,flow". What should I do in this case? Please help. Thanks a million!
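    One way to make the lookup order-insensitive is to normalize every key by sorting its words before looking it up. Below is a tiny sketch of the idea, in Python for brevity; the same approach maps directly onto a Java HashMap<String, String> keyed by the sorted, comma-joined argument values. The two rules shown are taken from the list above; in practice the whole rule file would be parsed into this map once at startup:

        # Key each rule by its argument values, sorted so that word order doesn't matter.
        rules = {
            ("flow", "flow", "msg"): "flowbits",
            ("content", "depth", "flow", "msg", "within"): "content",
        }

        def normalize(words):
            return tuple(sorted(words))

        def suggest(words):
            return rules.get(normalize(words))   # None when nothing matches

        print(suggest(["flow", "msg", "flow"]))  # -> "flowbits", regardless of the order typed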

    Read the article

  • Appropriate uses of Monad `fail` vs. MonadPlus `mzero`

    - by jberryman
    This is a question that has come up several times for me when designing code, especially libraries. There seems to be some interest in it so I thought it might make a good community wiki. The fail method in Monad is considered by some to be a wart; a somewhat arbitrary addition to the class that does not come from the original category theory. But of course in the current state of things, many Monad types have logical and useful fail instances. The MonadPlus class is a sub-class of Monad that provides an mzero method which logically encapsulates the idea of failure in a monad. So a library designer who wants to write some monadic code that does some sort of failure handling can choose to make his code use the fail method in Monad or restrict his code to the MonadPlus class, just so that he can feel good about using mzero, even though he doesn't care about the monoidal combining mplus operation at all. Some discussions on this subject are in this wiki page about proposals to reform the MonadPlus class. So I guess I have one specific question: What monad instances, if any, have a natural fail method, but cannot be instances of MonadPlus because they have no logical implementation for mplus? But I'm mostly interested in a discussion about this subject. Thanks! EDIT: One final thought occurred to me. I recently learned (even though it's right there in the docs for fail) that monadic "do" notation is desugared in such a way that pattern-match failures, as in (x:xs) <- return [], call the monad's fail. It seems like the language designers must have been strongly influenced by the prospect of some automatic failure handling built into Haskell's syntax in their inclusion of fail in Monad.

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I am using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it creates a huge lag that lasts a couple of seconds. Then all goes back to normal until the next lag. The time period varies, but it looks like it happens periodically. I am using InnoDB. It is like hiccups in MySQL. What could be creating this sort of periodic problem? I do not have any cron jobs or processes running every time the X period happens. The X period could be anywhere between 30 minutes and 2 hours. So for example it could happen every 30 minutes for the next 12 hours, or it could happen every 2 hours for the next 8 hours.

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size= 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes
        # Try number of CPU's*2 for thread_concurrency
        thread_concurrency = 4
        max_connections=512
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are about 200+ tables divided among 3 databases. The most written-to table is in InnoDB. The other ones are read more than written. Several of the tables in InnoDB have more than 2 million records. The other databases top out at about 400 thousand records and do not change so often. The PC is a Core 2 Duo 8400 with 4GB RAM, 32-bit Ubuntu.

    Read the article

  • Should I be using Expression Blend to design really dynamic UIs?

    - by Robert Rossney
    My company's product is, at its core, a framework for developing metadata-driven UIs. I don't know how to characterize it less succinctly than that, and hope I won't need to for purposes of this question, but we'll see. I've been trying to come up to speed on WPF, and have been building UI prototypes here and there, and recently I decided to see if I could use Expression Blend to help with the design of these UIs. And I'm pretty mystified at this point. It appears to me as though Expression Blend is designed with the expectation that you already know all of the objects that are going to be present in the UI at design time. But our program generates these objects dynamically at runtime. For instance, a data row might be presented in a horizontal StackPanel containing alternating TextBlocks (for captions) and TextBoxes (for data fields). The number of these objects depends on metadata about the number of columns in the data row. I can, pretty readily, write code that runs through a metadata record and populates a StackPanel dynamically, setting up the binding of all of the controls to properties in either the data or metadata. (A TextBox's Width might be bound to metadata, while its Text is bound to data.) But I can't even begin to figure out how to do something like this in Expression Blend. I can manually create all these controls, so that I have a set of controls that I can apply styles to and work out the visual design of the app, but it's really a pain to do this. I can write code that goes through my data model and emits XAML for all these controls, I suppose, and then copy and paste it. But I'm going to feel really stupid if it turns out there's a way to do this sort of thing in Expression Blend and I've dropped back and punted because I'm too dim to figure out the right way to think of it. Is this enough information for someone to try formulating an answer?

    Read the article

  • Facebook-ios-sdk with embedded UIWebView

    - by Benchtop Creative
    I'm working with the new facebook-ios-sdk and have successfully integrated the api into my native app. I am able to authenticate a user and properly setup permissions using a popup dialog with the ios-sdk classes. For a portion of my app I need to use the facebook connection within a UIWebView, using javascript and html to process data within the webview. Given that the user is already logged in and authenticated via the above routine, I would have assumed that the UIWebView would share those credentials, or that there would at least be some way to pass or assign the credentials to the webview. Unfortunately, I found this earlier post which seems to suggest that this scheme doesn't quite work (iOS - being logged-in in a webView after logging in with the SDK). Has anyone else encountered this and/or found a work around? This seems like it would be a fairly straightforward use case given that I'm not trying to launch mobile safari or something like that - it's all within the same native app. It just seems like there must be some sort of easy trick or setting that I'm missing. Maybe somehow setting cookies in the new UIWebView? or something like this?

    Read the article

  • folder structure in a mercurial repo?

    - by ajsie
    I have just switched from SVN to Mercurial and have read some tutorials about it. I've still got some confusions that I hope you could help me to sort out. I wonder if I have understood the folder structure in a Mercurial repo right. In an SVN repo I usually have these folders:

        svn:
            branches (branches/chat, branches/new_login etc)
            tags (version1.0, version2.0 etc)
            sandbox
            trunk

    Should a branch actually be another clone of the original/central repo in Mercurial? It seemed like that when I read the manual. And a tag is just a named identifier, but you should clone the original/central repo whenever you want to create a tag? How about the sandbox? Should that be another clone too? So basically you just have in a repo all the folders/files that you would have in the trunk folder?

        mercurial:
            central repo: project folders/files (not in any parent folder)
            tag repo:     cloned from central repo at a given moment for release (version1.0, version2.0 etc)
            branch repo:  cloned from central repo for adding features (chat, new_login etc)
            sandbox repo: experimental repo (could be pushed to central repo, or just deleted)

    Is this correct?

    Read the article

  • Android Layout + Programming

    - by MD
    I'm trying to create a "scrollable" layout in Android. Even using developers.android.com, though, I feel a little bit lost at the moment. I'm somewhat new to Java, but not so much that I feel I should be having these issues--being new to Android is the bigger problem right now. The layout I'm trying to create should scroll in a sort of a "grid". I THINK what I'm looking for is the Gallery view, but I'm really lost as to how to implement it at the moment. I want it to "snap" to center the frame, like in the actual Gallery application. Essentially, if I had a photo gallery of 9 pictures, the idea is to scroll between them up/down AND side to side, in a 3x3 manner. Doesn't need to dynamically adjust, or anything like that, I just want a grid I can scroll through. I'm also not asking for anyone to give me explicit code for it--I'm trying to learn, more than anything. But pointing me in the right direction for helpful layout programming resources would be greatly appreciated, and confirming if it's a Gallery view I'm looking for would also be really helpful. Thanks in advance.

    Read the article

  • EF4 + STE: Reattaching via a WCF Service? Using a new objectcontext each and every time?

    - by Martin
    Hi there, I am planning to use WCF (not RIA) in conjunction with Entity Framework 4 and STEs (self-tracking entities). If I understand this correctly, my WCF service should return an entity or collection of entities (using List, for example, and not IQueryable) to the client (in my case Silverlight). The client then can change the entity or update it. At this point I believe it is self-tracking???? This is where I sort of get a bit confused, as there are a lot of reported problems with STEs not tracking. Anyway... Then to update, I just need to send the entity back to my WCF service on another method to do the update. I should be creating a new ObjectContext every time? In every method? If I am creating a new ObjectContext every time in every method on my WCF service, then don't I need to re-attach the STE to the ObjectContext? So basically this alone wouldn't work?

        using(var ctx = new MyContext())
        {
            ctx.Orders.ApplyChanges(order);
            ctx.SaveChanges();
        }

    Or should I be creating the ObjectContext once in the constructor of the WCF service, so that one call and every additional call using the same WCF instance uses the same ObjectContext? I could create and destroy the WCF service in each method call from the client - hence creating, in effect, a new ObjectContext each time. I understand that it isn't a good idea to keep the ObjectContext alive for very long. Any insight or information would be gratefully appreciated. Thanks.

    Read the article

  • Refining Search Results [PHP/MySQL]

    - by Dae
    I'm creating a set of search panes that allow users to tweak their results set after submitting a query. We pull commonly occurring values in certain fields from the results and display them in order of their popularity - you've all seen this sort of thing on eBay. So, if a lot of rows in our results were created in 2009, we'll be able to click "2009" and see only rows created in that year. What in your opinion is the most efficient way of applying these filters? My working solution was to discard entries from the results that didn't match the extra arguments, like:

        while($row = mysql_fetch_assoc($query)) {
            foreach($_GET as $key => $val) {
                if($val !== $row[$key]) {
                    continue 2;
                }
            }
            // Output...
        }

    This method should hopefully only query the database once in effect, as adding filters doesn't change the query - MySQL can cache and reuse one data set. On the downside it makes pagination a bit of a headache. The obvious alternative would be to build any additional criteria into the initial query, something like:

        $sql = "SELECT * FROM tbl WHERE MATCH (title, description) AGAINST ('$search_term')";

        foreach($_GET as $key => $var) {
            $sql .= " AND " . $key . " = " . $var;
        }

    Are there good reasons to do this instead? Or are there better options altogether? Maybe a temporary table? Any thoughts much appreciated!

    Read the article

  • SQLAlchemy, one to many vs many to one

    - by sadvaw
    Dear everyone, I have the following data:

        CREATE TABLE `groups` (
            `bookID` INT NOT NULL,
            `groupID` INT NOT NULL,
            PRIMARY KEY(`bookID`),
            KEY(`groupID`)
        );

    and a book table which basically has books( bookID, name, ... ), but WITHOUT groupID. There is no way for me to determine what the groupID is at the time of the insert for books. I want to do this in SQLAlchemy, hence I tried mapping Book to books joined with groups on book.bookID = groups.bookID. I made the following:

        tb_groups = Table( 'groups', metadata,
            Column('bookID', Integer, ForeignKey('books.bookID'), primary_key=True ),
            Column('groupID', Integer),
        )
        tb_books = Table( 'books', metadata,
            Column('bookID', Integer, primary_key=True),
        )
        tb_joinedBookGroup = sql.join( tb_books, tb_groups, \
            tb_books.c.bookID == tb_groups.c.bookID)

    and defined the following mappers:

        mapper( Group, tb_groups, properties={
            'books': relation(Book, backref='group')
        })
        mapper( Book, tb_joinedBookGroup )

    However, when I execute this piece of code, I realized that each book object has a field groups, which is a list, and each group object has a books field which is a singular assignment. I think my definition here must have been causing SQLAlchemy to be confused about the many-to-one vs. one-to-many relationship. Can someone help me sort this out? My desired goal is g.books = [b, b, b, ...] and book.group = g, where g is an instance of Group and b is an instance of Book.

    Read the article

  • Can one connection get details of another? Or, how can I get the most detailed pending transaction

    - by bob-the-destroyer
    Is there a Mysql statement which provides full details of any other open connection or user? For this particular case, on myisam tables specifically. Looking at Mysql's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose. For example: remote odbc connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. Tcp connection two, using PHP on the server's localhost, is running select queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check to make sure there's no pending inserts on any other connection on those specific tables so it can instead wait until all data is available. If the table is currently being written to, I'd like to spit back to the user of connection two an approximation of how much longer to wait based on the number of pending inserts. Ideally by table, I'd like to get back using a query the timestamp when connection one began the write, total inserts left to be done, and total inserts already completed. Instead of insert counts, even knowing number of bytes written and left to write would work just fine here. Obviously since connection two is a tcp connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() it if the only way is by a mysql command line option that outputs this info, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task which the PHP script can check, but hopefully there's already a built-in Mysql feature I can take advantage of.

    Read the article

  • Strongly typed dynamic Linq sorting

    - by David
    I'm trying to build some code for dynamically sorting a Linq IQueryable<T>. The obvious way is here, which sorts a list using a string for the field name: http://dvanderboom.wordpress.com/2008/12/19/dynamically-composing-linq-orderby-clauses/ However, I want one change - compile-time checking of field names, and the ability to use refactoring/Find All References to support later maintenance. That means I want to define the fields as f => f.Name, instead of as strings. For my specific use I want to encapsulate some code that would decide which of a list of named "OrderBy" expressions should be used based on user input, without writing different code every time. Here is the gist of what I've written:

        var list = from m in Movies select m;   // Get our list
        var sorter = list.GetSorter(...);       // Pass in some global user settings object
        sorter.AddSort("NAME", m => m.Name);
        sorter.AddSort("YEAR", m => m.Year).ThenBy(m => m.Year);
        list = sorter.GetSortedList();
        ...
        public class Sorter ...
        public static Sorter GetSorter(this IQueryable<T> source, ...)

    The GetSortedList function determines which of the named sorts to use, which results in a List<FieldData>, where each FieldData contains the MethodInfo and Type values of the fields passed in AddSort:

        public SorterItem AddSort(Func<T, TKey> field)
        {
            MethodInfo ... = field.Method;
            Type ... = typeof(TKey);
            // Create item, add item to dictionary, add fields to item's List
            // The item has the ThenBy method, which just adds another field to the List
        }

    I'm not sure if there is a way to store the entire field object in a way that would allow it to be returned later (it would be impossible to cast, since it is a generic type). Is there a way I could adapt the sample code, or come up with entirely new code, in order to sort using strongly typed field names after they have been stored in some container and retrieved (losing any generic type casting)?

    Read the article

  • Best Practice for Summary Footer (and the like) in MVC

    - by benpage
    Simple question on best practice. Say I have:

        public class Product
        {
            public string Name { get; set; }
            public string Price { get; set; }
            public int CategoryID { get; set; }
            public bool IsAvailable { get; set; }
        }

    and I have a view using IEnumerable<Product> as the model, and I iterate through the Products on the page and want to show the total of the prices at the end of the list. Should I use:

        <%= Model.Sum(x => x.Price) %>

    or should I use some other method? This may extend to include more involved things like:

        <%= Model.Where(x => x.CategoryID == 5 && x.IsAvailable).Sum(x => x.Price) %>

    and even:

        <% foreach (Product p in Model.Where(x => x.IsAvailable)) { %>
            -- insert html --
        <% } %>

        <% foreach (Product p in Model.Where(x => !x.IsAvailable)) { %>
            -- insert html --
        <% } %>

    I guess this comes down to: should I have that sort of code within my view, or should I be passing it to my view in ViewData? Or perhaps some other way?

    Read the article

  • How do I Order on common attribute of two models in the DB?

    - by Will
    If I have two tables, Books and CDs, with corresponding models, I want to display to the user a list of books and CDs. I also want to be able to sort this list on common attributes (release date, genre, price, etc.). I also have basic filtering on the common attributes. The list will be large, so I will be using pagination to manage the load.

        items = []
        items << CD.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        items << Book.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        re_sort(items, "genre ASC")

    Right now I am doing two queries, concatenating them, and then sorting them. This is very inefficient. Also, this breaks down when I use paging and filtering. If I am on page 2, how do I know what page of each individual table I am really on? There is no way to determine this information without getting all items from each table. I have thought that if I create a new class called Item that has a one-to-one relationship with either a Book or a CD, I could do something like:

        Item.all(:limit => 20, :page => params[:page], :include => [:books, :cds], :order => "genre ASC")

    However, this gives back an ambiguous error, so it can only be refined as:

        Item.all(:limit => 20, :page => params[:page], :include => [:books, :cds], :order => "books.genre ASC")

    and that does not interleave the books and CDs in the way that I want. Any suggestions?

    Read the article
