Search Results


  • Processing variable number of form fields

    - by a_m0d
    I am working on a form which displays information about orders. Each order has a unique id, but the ids are not necessarily sequential on the form, and the number of fields can vary (one field per row on the form). The input from the form will not be mapped straight into the database; it will be added to the current value in the database and then saved. (In the screenshot that accompanied the question, a callout on the right showed the id for each row.) I know how to generate a form like this, but I can't work out how to process each of these rows reliably. I also know how to give each of the fields a unique identifier, like name="row-23", but how can I translate that name back so that I can update the related record in the database?
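
    Since the question names no server-side language, here is a minimal sketch of the name-to-id translation in Python, assuming a Flask/Django-style form dictionary and a DB-API connection with sqlite3-style placeholders; the orders table and quantity column are hypothetical stand-ins:

        # Sketch: each field is named "row-<id>"; strip the prefix to recover
        # the order id, then add the submitted value to the stored one.
        def apply_form(form, conn):
            for name, value in form.items():
                if not name.startswith("row-"):
                    continue  # skip unrelated fields (submit buttons, CSRF tokens, ...)
                order_id = int(name[len("row-"):])  # "row-23" -> 23
                conn.execute(
                    "UPDATE orders SET quantity = quantity + ? WHERE id = ?",
                    (int(value), order_id),
                )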

    Read the article

  • Vim: Replacing a line with a previously yanked line

    - by duddle
    At least once per day I have the following situation:

        A: This line should also replace line X
        ...
        X: This line should be replaced

    I believe that I don't perform this task efficiently. What I do:

        Go to line A:          AG
        Yank line A:           yy
        Go to line X:          XG
        Paste line A:          P
        Move to the old line:  j
        Delete the old line:   dd

    This has the additional disadvantage that line X is now in the default register, which is annoying if I find another line that should be replaced with A. Yanking to and pasting from an additional register ("ayy, "aP) makes this simple task even less efficient. My questions: Did I miss a built-in Vim command to replace a line with one yanked before? If not, how can I bind my own command that leaves (or restores) the yanked line in the default register?

    Read the article

  • Displaying Nested Array Content with Time Delay in Flex

    - by MooCow
    I have a JSON array that looks like this: (array here) I'm trying to use Flex to display each Project and its Milestone elements, like a nested for-loop, showing each Milestone for 15 seconds before advancing to the next Project. I was shown a technique that works well for data without another array buried inside it:

        var key:int = 0;
        var timer:Timer = new Timer(10000, project.length);
        timer.addEventListener(TimerEvent.TIMER, function showStuff(event:TimerEvent):void {
            trace(project[key].projectName);
            key++;
        });
        timer.start();

    But that only replicates a single for-loop, not a nested one. Any suggestions?
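
    One way to emulate the nested loop is to keep two cursors and advance them from the single TIMER handler. Here is a language-neutral sketch of that bookkeeping, written in Python rather than ActionScript, assuming each parsed project carries a "milestones" list:

        def make_ticker(projects):
            p, m = 0, 0  # current project index, current milestone index

            def tick():
                nonlocal p, m
                project = projects[p]
                print(project["projectName"], project["milestones"][m])
                m += 1
                if m == len(project["milestones"]):  # milestones exhausted:
                    m = 0                            # rewind the milestone cursor
                    p = (p + 1) % len(projects)      # and advance to the next project
            return tick

        # Call tick() from the 15-second timer event in place of key++.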

    Read the article

  • Is there anything like memcached, but for sorted lists?

    - by depesz
    I have a situation where I could really benefit from a system like memcached, but with the ability to store (per key) a sorted list of elements, and to modify the list by adding values. For example:

        something.add_to_sorted_list('topics_list_sorted_by_title', 1234, 'some_title')
        something.add_to_sorted_list('topics_list_sorted_by_title', 5436, 'zzz')
        something.add_to_sorted_list('topics_list_sorted_by_title', 5623, 'aaa')

    Which I could then use like this:

        something.get_list_size('topics_list_sorted_by_title')            // returns 3
        something.get_list_elements('topics_list_sorted_by_title', 1, 10) // returns: 5623, 1234, 5436

    The required system would let me easily get the item count of every list, and fetch any range of values, with the values kept sorted by the attached sort value. I hope the description is clear. And the question is relatively simple: is there any such system?
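
    Redis sorted sets (ZADD to insert, ZCARD for the count, ZRANGE to fetch a slice) provide exactly this API shape, though their scores are numeric, so sorting by a string value needs extra care. The in-process version of the idea looks like this sketch in Python, where the method names deliberately mirror the question's hypothetical ones:

        import bisect
        from collections import defaultdict

        class SortedLists:
            def __init__(self):
                self._lists = defaultdict(list)  # key -> sorted [(sort_value, item_id)]

            def add_to_sorted_list(self, key, item_id, sort_value):
                bisect.insort(self._lists[key], (sort_value, item_id))

            def get_list_size(self, key):
                return len(self._lists[key])

            def get_list_elements(self, key, start, count):
                # start is 1-based, as in the question's example
                return [i for _, i in self._lists[key][start - 1:start - 1 + count]]

        s = SortedLists()
        s.add_to_sorted_list('topics_list_sorted_by_title', 1234, 'some_title')
        s.add_to_sorted_list('topics_list_sorted_by_title', 5436, 'zzz')
        s.add_to_sorted_list('topics_list_sorted_by_title', 5623, 'aaa')
        print(s.get_list_size('topics_list_sorted_by_title'))             # 3
        print(s.get_list_elements('topics_list_sorted_by_title', 1, 10))  # [5623, 1234, 5436]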

    Read the article

  • How does Ruby's Enumerator object iterate externally over an internal iterator?

    - by Salman Paracha
    As per Ruby's documentation, the Enumerator object uses the each method (to enumerate) if no target method is given to the to_enum or enum_for methods. Now take the following monkey patch and its enumerator as an example:

        o = Object.new
        def o.each
          yield 1
          yield 2
          yield 3
        end

        e = o.to_enum
        loop do
          puts e.next
        end

    Given that the Enumerator object uses the each method to answer when next is called, what do the calls to each look like every time next is called? Does the Enumerator class pre-load all the contents of o.each and create a local copy for enumeration? Or is there some sort of Ruby magic that suspends the operation at each yield statement until next is called on the enumerator? If an internal copy is made, is it a deep copy? What about I/O objects that could be used for external enumeration? I'm using Ruby 1.9.2.
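
    For what it's worth, MRI 1.9 implements external enumeration with a Fiber: the first call to next starts each on a coroutine, and every yield suspends it until the following next resumes it, so no copy of any kind is made (which is also why IO-backed enumerators work). Python generators give an analogous picture, sketched here:

        def each():           # stand-in for o.each
            yield 1
            yield 2
            yield 3

        it = each()           # nothing has run yet; no data is copied
        print(next(it))       # runs the body up to the first yield -> 1
        print(next(it))       # resumes right after that yield -> 2
        print(next(it))       # -> 3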

    Read the article

  • Space requirements of a merge-sort

    - by Arkaitz Jimenez
    I'm trying to understand the space requirement of merge sort, O(n). I see that the time requirement is basically: number of levels (log n) times merge work per level (n), which makes O(n log n). Now, we still allocate n per level, in two different arrays, left and right. I do understand that the key here is that the space gets deallocated as the recursive calls return, but I don't find it obvious. Besides, all the information I find just states that the space required is O(n) without explaining it. Any hint?

        function merge_sort(m)
            if length(m) <= 1
                return m
            var list left, right, result
            var integer middle = length(m) / 2
            for each x in m up to middle
                add x to left
            for each x in m after middle
                add x to right
            left = merge_sort(left)
            right = merge_sort(right)
            result = merge(left, right)
            return result
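
    To see the O(n) bound concretely, here is a sketch in Python that mirrors the pseudocode above but counts the elements alive in temporary lists, tracking the peak. The peak stays near a small constant times n (about 3 in this accounting) no matter how large n gets, because only the allocations along one root-to-leaf path of the recursion are alive at any moment:

        live = peak = 0

        def alloc(n):
            global live, peak
            live += n
            peak = max(peak, live)

        def free(n):
            global live
            live -= n

        def merge(a, b):
            out, i, j = [], 0, 0
            while i < len(a) and j < len(b):
                if a[i] <= b[j]:
                    out.append(a[i]); i += 1
                else:
                    out.append(b[j]); j += 1
            out.extend(a[i:]); out.extend(b[j:])
            return out

        def merge_sort(m):
            if len(m) <= 1:
                alloc(len(m))
                return list(m)
            mid = len(m) // 2
            left_raw, right_raw = m[:mid], m[mid:]
            alloc(len(m))                    # the two copied halves
            left = merge_sort(left_raw)
            right = merge_sort(right_raw)
            free(len(m))                     # raw halves die once both calls return
            out = merge(left, right)
            alloc(len(out))
            free(len(left) + len(right))     # sorted halves die after the merge
            return out

        for n in (1 << 10, 1 << 14, 1 << 16):
            live = peak = 0
            merge_sort(list(range(n, 0, -1)))
            print(n, peak / n)               # the ratio stays roughly constant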

    Read the article

  • CakePHP Routing Alias, no prefix

    - by Jason McCreary
    I have a dashboard with a series of widgets. Per the specification, the widgets need to live under a /widgets/ directory, so I added the following to my routes.php:

        Router::connect('/widget/:controller/:action/*', array());

    But I seem to be running into trouble on widgets/links/ and widgets/links/view/1. I am new to CakePHP, but this doesn't seem like it should be hard. I have yet to find anything in the Book or by searching, so any help is appreciated. Thanks.

    Read the article

  • An MP3 parser to extract numbered frames?

    - by Xepoch
    I am writing a streaming application for MP3 (CBR). It is all pass-through, meaning I don't have to decode or encode; I just pass the data on as I see it come through. I want to count the MP3 frames as they pass through (and do some other things, like throughput calculations). According to the MP3 frame header spec, the sync word is 11 bits of 1s; however, the frame payload, which I should safely assume to be arbitrary binary data, can naturally also contain 11 consecutive 1 bits. My questions:

        1. Is there a Unix/Linux MP3 parser utility (dd-style) that can pull numbered frames from an MP3 file or pipe? Any Perl wisdom here?
        2. How does one delineate an MP3 header block from any other binary payload data?
        3. Is a constant bitrate (CBR) MP3 defined by payload bytes alone, or are the header bytes included in the aggregate number of bytes/bits per any given timeslice?

    Thanks,
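
    On questions 2 and 3: there is no reliable way to tell a header from payload by pattern alone. Instead you lock onto the first header and hop frame to frame, because each header encodes the frame's total length, header bytes included (which also answers the CBR question: the bitrate accounting covers the 4 header bytes). A hedged sketch in Python rather than Perl, assuming MPEG-1 Layer III with no ID3 tags, and ignoring the free-format and invalid bitrate indices:

        # Bitrate (kbps) and sample-rate tables for MPEG-1 Layer III.
        BITRATES = [0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320]
        SAMPLE_RATES = [44100, 48000, 32000]

        def frames(data):
            """Yield (frame_number, offset, length) by walking header to header."""
            pos = n = 0
            while pos + 4 <= len(data):
                b1, b2, b3 = data[pos], data[pos + 1], data[pos + 2]
                if b1 != 0xFF or (b2 & 0xE0) != 0xE0:
                    raise ValueError("lost sync at offset %d" % pos)
                bitrate = BITRATES[(b3 >> 4) & 0xF] * 1000
                sample_rate = SAMPLE_RATES[(b3 >> 2) & 0x3]
                padding = (b3 >> 1) & 0x1
                length = 144 * bitrate // sample_rate + padding  # header included
                yield n, pos, length
                n, pos = n + 1, pos + length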

    Read the article

  • How Can I Join Two DB Tables and Return Lowest Price From Joined Table

    - by Jason
    I have two tables: the first holds products and the second holds prices. The prices table can have more than one price per product, but I only want to display the lowest one. I keep getting all the prices back, and I'm having trouble figuring out how to fix it. This is my query:

        SELECT *
        FROM products AS pr
        JOIN prices AS p ON pr.id = p.product_id
        WHERE pr.live = 1 AND p.live = 1

    and this is what it returns:

        id product1 name description £100
        id product1 name description £300
        id product1 name description £200
        id product2 name description £50
        id product2 name description £80
        id product2 name description £60
        id product3 name description £222
        id product3 name description £234
        id product3 name description £235

    but I'm after:

        id product1 name description £100
        id product2 name description £50
        id product3 name description £222

    Any help would be appreciated.

    Read the article

  • Paging recordsets server-side in SQL Server

    - by Jonno
    I've been banging my head against this one for a while. I want to fetch 1k records from a SQL database and page them 100 per page. In classic ASP (which I'm moving from) this was dead easy with ADODB, but in VB with ADO.NET I can't find a single approach that doesn't involve stored procedures (which I want to avoid for now). It seems really stupid to fetch all 1k rows and page them programmatically. Edit: It's SQL Server 2005 / .NET 4.0 / Visual Studio 2010. Edit 2: Just to reiterate, I have Googled extensively and don't want to use stored procedures. There are many ways to get paged data, but everything I see pages the data in the program rather than on the server.

    Read the article

  • Change the contents of a UITableView via a swipe?

    - by Mark
    I'm currently using a UITableView like any other, and I'm researching the ability to perform a swipe gesture on the screen that shifts the contents of the visible table over to display new content. For example, swiping right-to-left on the screen would change (via animation) the contents of each cell on screen to show new data. I can detect a swipe on the cells, or perhaps on the UITableViewController, but what I don't know how to do is twofold:

        1. Change the data in all cells (could you have a set of hidden views within a custom table cell that animate in and out of each cell per swipe?)
        2. How can you do this to all cells?

    Thanks a lot, Mark

    Read the article

  • Ideally, how many connections should I open?

    - by ranjith-kumar-u
    Hi all. Recently I attended a Java interview where the interviewer asked the following: a request goes through modules A, B, and C, and the response goes back through A. In module A I need to talk to the database, and in module C I need to talk to the database again. In this situation, how many connections would you open, and where would you close them? My answer: in module A I would open a connection and close it then and there; control then goes to module B and on to module C, where I would open one more connection and close it again. He then asked me a follow-up: I want to open one connection per request; how can I do that?
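
    The shape the follow-up is fishing for is usually: open exactly one connection per request at the entry point, pass (or scope) it through A, B, and C, and close it in one place when the request ends. A sketch of that shape in Python; the Pool class and the module functions are toy placeholders (in Java the same idea is typically a ThreadLocal or a container-managed pool):

        import sqlite3
        from contextlib import contextmanager

        class Pool:                                # toy stand-in for a real pool
            def get_connection(self):
                return sqlite3.connect(":memory:")

        @contextmanager
        def request_scope(pool):
            conn = pool.get_connection()
            try:
                yield conn                         # every module sees this one connection
            finally:
                conn.close()                       # closed exactly once, at request end

        def module_a(conn, request):               # placeholder: A runs its queries on conn
            return request

        def module_b(data):                        # placeholder: B needs no DB access
            return data

        def module_c(conn, data):                  # placeholder: C reuses the same conn
            return data

        def handle_request(pool, request):
            with request_scope(pool) as conn:
                return module_c(conn, module_b(module_a(conn, request)))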

    Read the article

  • Access Query Questions

    - by kralco626
    It was suggested that I repost this question, as I didn't do a very good job describing my issue the first time (http://stackoverflow.com/questions/2921286/access-question).

    THE SITUATION: I have inspections from many months over many years. Sometimes there is more than one inspection in a month; sometimes there is no inspection. However, the report the clients want requires EXACTLY ONE record per month for the time frame they request. They understand the data issues and have stated that if there is more than one inspection in a month, I should take the latest one; if there is no inspection for a month, I should go back in time until I find one and use that. A sample of the data (I am including many records because I was told I did not include enough data last time):

        equip_id  month  year  runtime  date
        1         5      2008  400      5/10/2008 12:34 PM
        1         7      2008  500      7/12/2008 1:45 PM
        1         8      2008  600      8/20/2008 1:12 PM
        1         8      2008  605      8/30/2008 8:00 AM
        1         1      2010  2000     1/12/2010 2:00 PM
        1         3      2010  2200     3/24/2010 10:00 AM
        2         7      2009  1000     7/20/2009 8:00 AM
        2         10     2009  1400     10/14/2009 9:00 AM
        2         1      2010  1600     1/15/2010 1:00 PM
        2         1      2010  1610     1/30/2010 4:00 PM
        2         3      2010  1800     3/15/2010 1:00 PM

    After all the transformations are done, the data should look like this:

        equip_id  month  year  runtime  date
        1         5      2008  400      5/10/2008 12:34 PM
        1         6      2008  400      5/10/2008 12:34 PM
        1         7      2008  500      7/12/2008 1:45 PM
        1         8      2008  605      8/30/2008 8:00 AM
        1         9      2008  605      8/30/2008 8:00 AM
        1         10     2008  605      8/30/2008 8:00 AM
        1         11     2008  605      8/30/2008 8:00 AM
        1         12     2008  605      8/30/2008 8:00 AM
        1         1      2009  605      8/30/2008 8:00 AM
        1         2      2009  605      8/30/2008 8:00 AM
        1         3      2009  605      8/30/2008 8:00 AM
        1         4      2009  605      8/30/2008 8:00 AM
        1         5      2009  605      8/30/2008 8:00 AM
        1         6      2009  605      8/30/2008 8:00 AM
        1         7      2009  605      8/30/2008 8:00 AM
        1         8      2009  605      8/30/2008 8:00 AM
        1         9      2009  605      8/30/2008 8:00 AM
        1         10     2009  605      8/30/2008 8:00 AM
        1         11     2009  605      8/30/2008 8:00 AM
        1         12     2009  605      8/30/2008 8:00 AM
        1         1      2010  2000     1/12/2010 2:00 PM
        1         2      2010  2000     1/12/2010 2:00 PM
        1         3      2010  2200     3/24/2010 10:00 AM
        2         7      2009  1000     7/20/2009 8:00 AM
        2         8      2009  1000     7/20/2009 8:00 AM
        2         9      2009  1000     7/20/2009 8:00 AM
        2         10     2009  1400     10/14/2009 9:00 AM
        2         11     2009  1400     10/14/2009 9:00 AM
        2         12     2009  1400     10/14/2009 9:00 AM
        2         1      2010  1610     1/30/2010 4:00 PM
        2         2      2010  1610     1/30/2010 4:00 PM
        2         3      2010  1800     3/15/2010 1:00 PM

    I think this is the most accurate depiction of the problem I can give. I will now say what I have tried, although if someone has a better approach I am perfectly willing to throw away what I have done and do it differently.

    STEP 1: Create a query that removes the duplicates from the data, i.e. only one record per equip_id per month/year, keeping the latest one. (Done successfully.)

    STEP 2: Create a table of the date ranges the client wants the report for (this is built dynamically at runtime). The table has two fields, Month and Year, so if the client wants a report from Feb 2008 to March 2010 it looks like:

        Month  Year
        2      2008
        3      2008
        ...
        12     2008
        1      2009
        ...
        12     2009
        1      2010
        2      2010
        3      2010

    I then left-joined this table with my query from step 1. Now I have a record for every month and year they want the report for, with nulls (or sometimes zeros; I am not sure why, Access is weird, but sometimes they are nulls and sometimes zeros) for the runtimes that are not available. I don't particularly like this solution, but I'll do it if I have to. (This is also done successfully.)

    STEP 3: Fill in the missing runtime values. This I HAVE NOT done successfully. Note that if the requested range is Feb 2008 to March 2010 and the oldest record for a particular equip_id is, say, June 2008, it is OK for the runtimes from Feb to May 2008 to be null (or zero). I am working with the following query for this step:

        SELECT equip_id AS e_id, year, month,
               (SELECT TOP 1 runhours
                FROM qry_1_c_One_Record_per_Month a
                WHERE a.equip_id = e_id
                ORDER BY year, month)
        FROM qry_1_c_One_Record_per_Month
        WHERE runhours IS NULL OR runhours = 0
        UNION
        SELECT equip_id, year, month, runhours
        FROM qry_1_c_One_Record_per_Month
        WHERE runhours IS NOT NULL AND runhours <> 0;

    However, I clearly can't check a.equip_id = e_id like that, so I have no way to make sure I'm looking at the correct equip_id.

    SUMMARY: Like I said, I'm willing to throw away any part, or all, of what I tried; I'm just trying to give everyone a complete picture. I REALLY appreciate ANY help! Thanks so much in advance!
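
    For comparison, the fill-forward of step 3 is a simple single pass when done outside the query. A sketch in Python, assuming readings maps (year, month) to the (runtime, date) of the latest inspection in that month, i.e. the output of step 1:

        def monthly_series(readings, start, end):
            """Fill forward from start=(year, month) to end, inclusive."""
            out, last = [], None
            y, m = start
            while (y, m) <= end:               # tuple comparison: year, then month
                if (y, m) in readings:
                    last = readings[(y, m)]    # a real inspection this month
                out.append((y, m, last))       # last stays None before the first reading
                y, m = (y + 1, 1) if m == 12 else (y, m + 1)
            return out

        readings = {(2008, 5): (400, "5/10/2008 12:34 PM"),
                    (2008, 7): (500, "7/12/2008 1:45 PM"),
                    (2008, 8): (605, "8/30/2008 8:00 AM")}
        for row in monthly_series(readings, (2008, 5), (2008, 10)):
            print(row)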

    Read the article

  • Re-open background application via notification item

    - by user356764
    I have an app with tabs and a notification bar entry. When I send it to the background (press the home button) and try to re-open it by tapping the notification, the app restarts and the last selected tab is lost. When I instead hold the home button while the app is in the background and select it from there, or tap the app's icon on the home screen, the previous state is restored by default (the correct tab is selected). IMO the Intent attached to the notification is wrong, but I'm not sure how to fix it. In short: how do I bring a background application back to the foreground when the notification entry is tapped? thx!

    Read the article

  • UITableViewCell and strange behaviour in grouped UITableView

    - by evangelion2100
    I'm working on a grouped UITableView with 4 sections of one row each, and I'm seeing strange behaviour with the cells. The cells are plain UITableViewCells, but their height is around 60-80 pixels. The table view renders the cells correctly with rounded corners, but when I select a cell it appears blue and rectangular. I don't know why the cells behave like this, because I have another grouped UITableView with custom cells 88 pixels high, and those cells work as they should. If I change the height to the default 44 pixels, the cells behave correctly. Does anyone know about this behaviour and what causes it? As mentioned, I don't do anything fancy: default UITableViewCells in a static, grouped UITableView with 4 sections of 1 row each. evangelion2100

    Read the article

  • CoreData and many NSArrayController

    - by unixo
    In my Core Data application, I have an outline view on the left of the main window acting as a source list (like iTunes); on the right I display an appropriate view based on the outline selection. Each view has its own components, such as a table view connected to an array controller owned by that view. Very often, different views display the same data, for example a table view of the same entity. From a performance point of view, is it better to have a single array controller per entity and share it between all views, or does Core Data's caching avoid wasting memory?

    Read the article

  • SQL Server XML-type column duplicate entry detection

    - by aaaa bbbb
    In SQL Server I am using an XML-type column to store a message. I do not want to store duplicate messages, and I will only have a few messages per user. I currently query the table for these messages and convert the XML to strings in my C# code, then compare those strings with what I am about to insert. Unfortunately, SQL Server reformats the data in XML-typed fields: what you store into the database is not necessarily exactly the same string you get back out later. It is functionally equivalent, but whitespace may have been removed, etc. Is there an efficient way to compare an XML string I am considering inserting with those already in the database? As an aside, if I detect a duplicate I need to delete the older message and then insert the replacement.
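
    One approach that avoids string-comparing the re-serialized XML at all: compute a hash of a normalized form when the message first arrives, store it in an ordinary indexed column next to the XML, and compare hashes on insert. A rough sketch of the normalization idea in Python rather than C# (the whitespace trimming here is deliberately crude; treat it as an illustration, not a full canonicalizer):

        import hashlib
        import xml.etree.ElementTree as ET

        def normalize(elem):
            # Drop whitespace-only text so pretty-printed and compact forms match.
            if elem.text is not None and not elem.text.strip():
                elem.text = None
            if elem.tail is not None and not elem.tail.strip():
                elem.tail = None
            for child in elem:
                normalize(child)

        def message_fingerprint(xml_text):
            root = ET.fromstring(xml_text)
            normalize(root)
            return hashlib.sha256(ET.tostring(root)).hexdigest()

        a = "<msg><to>bob</to></msg>"
        b = "<msg>\n  <to>bob</to>\n</msg>"
        print(message_fingerprint(a) == message_fingerprint(b))  # True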

    Read the article

  • Hibernate / MySQL Bulk insert problem

    - by Marty Pitt
    I'm having trouble getting Hibernate to perform a bulk insert on MySQL. I'm using Hibernate 3.3 and MySQL 5.1 At a high level, this is what's happening: @Transactional public Set<Long> doUpdate(Project project, IRepository externalSource) { List<IEntity> entities = externalSource.loadEntites(); buildEntities(entities, project); persistEntities(project); } public void persistEntities(Project project) { projectDAO.update(project); } This results in n log entries (1 for every row) as follows: Hibernate: insert into ProjectEntity (name, parent_id, path, project_id, state, type) values (?, ?, ?, ?, ?, ?) I'd like to see this get batched, so the update is more performant. It's possible that this routine could result in tens-of-thousands of rows generated, and a db trip per row is a killer. Why isn't this getting batched? (It's my understanding that batch inserts are supposed to be default where appropriate by hibernate).

    Read the article

  • R- delete rows in multiple columns by unique number

    - by Vincent Moriarty
    Given data like this:

        C1 <- c(3, -999.000, 4, 4, 5)
        C2 <- c(3, 7, 3, 4, 5)
        C3 <- c(5, 4, 3, 6, -999.000)
        DF <- data.frame(ID = c("A", "B", "C", "D", "E"), C1 = C1, C2 = C2, C3 = C3)

    how do I go about removing the rows containing -999.000 in any of the columns? I know this works per column:

        DF2 <- DF[!(DF$C1 == -999.000 | DF$C2 == -999.000 | DF$C3 == -999.000), ]

    but I'd like to avoid referencing each column. I am thinking there is an easy way to reference all of the columns in a particular data frame, something like:

        DF3 <- DF[!(DF[,] == -999.000), ]
        DF3 <- DF[!(DF[, (2:4)] == -999.000), ]

    but obviously these do not work. And out of curiosity, bonus points if you can tell me why I need that last comma before the closing square bracket, as in: == -999.000), ]

    Read the article

  • Recursive program

    - by wilson88
    I am trying to write a recursive program that calculates interest per year. It prompts the user for the startup amount (1000), the interest rate (10%) and the number of years (1); sample inputs are in brackets. Working it out manually, I figured the interest comes from these formulas:

        YT(1 + R)                    -- interest for the first year, which gives 1100
        YT(1 + R/2 + R^2/2)          -- second year (R squared)
        YT(1 + R/3 + R^2/3 + 3R^3)   -- third year (R cubed)

    How do I write a recursive program that will calculate the interest? Below is the function I tried (latest version, after editing):

        double calculateInterest2(double start, double rate, int duration)
        {
            if (0 == duration) {
                return start;
            } else {
                return (1 + rate) * calculateInterest2(start, rate, duration - 1);
            }
        }

    Read the article

  • Cloud HUGE data storage options?

    - by ToughPal
    Hi, does anyone have a good suggestion on how to do video recording? We have a camera that can record and then stream live video to a server, which means we can have thousands of cameras sending data 24x7 for recording. We will store the data for 7 / 14 / 30 days depending on the package. A camera sending data to the server around the clock stores about 1.5 GB per day, so the traffic is:

        1.5 GB / day / camera
        45 GB / month / camera  (data plus bandwidth for one camera)

    Please let me know the most cost-effective way to get this data stored. Thanks!

    Read the article

  • MEMORY (HEAP) vs. InnoDB in a Read and Write Environment

    - by Johannes
    I want to program a real-time application using MySQL. It needs a small table (fewer than 10,000 rows) that will be under heavy read (scan) and write (update, plus some insert/delete) load. I am really talking about 10,000 updates or selects per second. These statements will be executed over only a few (fewer than 10) open MySQL connections. The table is small and does not contain any data that needs to be stored on disk. So I ask: which is faster, InnoDB or MEMORY (HEAP)? My thoughts: both engines will probably serve SELECTs directly from memory, as even InnoDB will cache the whole table. What about the UPDATEs (innodb_flush_log_at_trx_commit?)? My main concern is the locking behaviour: InnoDB row locks vs. the MEMORY engine's table locks. Will this be the bottleneck in the MEMORY implementation? Thanks for your thoughts!

    Read the article

  • How do I count the number of bytes read by TextReader.ReadLine()?

    - by Steve Guidi
    I am parsing a very large file of records (one per line, each of varying length), and I'd like to keep track of the number of bytes I've read from the file so that I can recover in the event of a failure. I wrote the following:

        string record = myTextReader.ReadLine();
        bytesRead += record.Length;
        ParseRecord(record);

    However, this doesn't work, since ReadLine() strips any CR/LF characters from the line. Furthermore, a line may be terminated by CR, LF, or CRLF, which means I can't just add 1 to bytesRead. Is there an easy way to get the actual line length, or do I have to write my own ReadLine() method in terms of the more granular Read() operations?
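
    One workaround, sketched here in Python rather than C#: read the file in binary mode so the terminator bytes are still present to be counted, and strip them only after updating the count. This assumes LF or CRLF terminators (a lone-CR file would need a manual scanner), and note that counting raw bytes also sidesteps the separate pitfall that record.Length counts characters, not bytes, once a multi-byte encoding is involved:

        def parse_record(record):                  # placeholder for the real parser
            pass

        bytes_read = 0
        with open("records.txt", "rb") as f:
            for raw in f:                          # iteration keeps the trailing \n / \r\n bytes
                bytes_read += len(raw)             # exact bytes consumed for this line
                parse_record(raw.rstrip(b"\r\n").decode("utf-8"))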

    Read the article

  • Can Rails send data to the browser chunk by chunk?

    - by Nik
    Hello all, I have a very large dataset (100,000 rows) to display, but any browser I tried it in, including a Chrome 5 dev build, chokes for dozens of seconds (Win7 64-bit, 4 GB RAM, 256 GB SSD, Core 2 Duo 2.4 GHz). I did a little experiment:

        # some_controller.rb
        def show
          @data = (1..100000).to_a
        end

        # show.html.erb
        <% @data.each do |d| %>
          <%= d.to_s %>
        <% end %>

    As simple as that, it chokes the browsers. I know browsers were never built for this, so I thought I'd let the data come in chunk by chunk; I guess 2,000 per chunk is reasonable, but I wouldn't want to make 50 requests each time this view is called. Any ideas? It doesn't have to be chunk by chunk if it can all be sent at once. Best,

    Read the article

  • Remote stream multiple files in SOLR

    - by Mark
    I want to use Solr's remote-streaming facility to extract and index the content of files. This works fine if I pass stream.file=xxx as a parameter to an HTTP GET. However, I have a lot of these files and want to batch them up (i.e. not issue one GET per file). Is there a way I can do this in Solr? For example, I'd like to be able to POST some XML like this:

        <add>
          <doc stream_file="filename">
            <field name="id">123</field>
          </doc>
          <doc>...

    Read the article
