Search Results

Search found 60096 results on 2404 pages for 'data distribution'.

Page 395/2404 | < Previous Page | 391 392 393 394 395 396 397 398 399 400 401 402  | Next Page >

  • Merge two lists of objects which hold different data

    - by C.McAtackney
    I have an object "Project" which has a number of fields, one of which is "Name". I have one spreadsheet of projects which contains some of these fields, and another which contains the rest. However, both spreadsheets have the "Name" field. I've read them in and populated two Lists, only populating the fields available from that particular source. E.g. a Project from List 1 looks like {Name="MyProj", Type="Form", Priority=NULL}, whereas a Project from List 2 looks like {Name="MyProj", Type=NULL, Priority="High"}. Now, I want to merge these two lists into one in which each Project object has all of its fields populated, with the Name field being used to match the elements. How can I achieve this? Are there any nice ways of doing this concisely? Thanks
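
    One possible approach, sketched below in C# (assuming Project is a plain class with Name, Type and Priority properties, and that every Name appears in both lists, since this is an inner join), is a LINQ join on Name that takes each field from whichever side has a value:

        using System.Collections.Generic;
        using System.Linq;

        // Rough sketch: join the two lists on Name and coalesce each field
        // from whichever list actually populated it.
        List<Project> merged = list1.Join(list2,
            p1 => p1.Name,
            p2 => p2.Name,
            (p1, p2) => new Project
            {
                Name = p1.Name,
                Type = p1.Type ?? p2.Type,
                Priority = p1.Priority ?? p2.Priority
            }).ToList();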

    Read the article

  • about Sorting Algorithms

    - by matin1234
    Hi, I want to know: do we always use sorting algorithms like Insertion Sort or Merge Sort only for lists and arrays? Do we not use these algorithms for stacks or queues? Thanks
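
    For what it's worth, comparison sorts are not limited to arrays and lists. As a rough illustration (not from the question), here is a Java sketch that sorts a stack using only one auxiliary stack:

        import java.util.ArrayDeque;
        import java.util.Deque;

        public class StackSort {
            // Sorts the input stack so the largest element ends up on top,
            // using a single auxiliary stack.
            static Deque<Integer> sort(Deque<Integer> input) {
                Deque<Integer> sorted = new ArrayDeque<>();
                while (!input.isEmpty()) {
                    int tmp = input.pop();
                    // Move anything larger than tmp back onto the input stack
                    while (!sorted.isEmpty() && sorted.peek() > tmp) {
                        input.push(sorted.pop());
                    }
                    sorted.push(tmp);
                }
                return sorted;
            }
        }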

    Read the article

  • Why increase pointer by two while finding loop in linked list, why not 3,4,5?

    - by GG
    I have already looked at questions which talk about algorithms to find a loop in a linked list. I have read about Floyd's cycle-finding algorithm, mentioned in a lot of places, where we have to take two pointers. One pointer (slower/tortoise) is increased by one and the other pointer (faster/hare) is increased by 2. When they are equal we have found the loop, and if the faster pointer reaches null there is no loop in the linked list. Now my question is: why do we increase the faster pointer by 2? Why not something else? Is increasing by 2 necessary, or can we increase it by X and still get the result? Is it guaranteed that we will find a loop if we increment the faster pointer by 2, or could there be a case where we need to increment by 3 or 5 or x?
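
    For reference, a minimal Java sketch of the tortoise-and-hare scheme the question describes (the Node class here is assumed):

        class Node { int value; Node next; }

        // Returns true if the list starting at head contains a cycle. With a
        // step of 2 the hare gains exactly one node on the tortoise per
        // iteration, so once both are inside the cycle it can never jump
        // over the tortoise without first landing on it.
        static boolean hasCycle(Node head) {
            Node slow = head, fast = head;
            while (fast != null && fast.next != null) {
                slow = slow.next;
                fast = fast.next.next;
                if (slow == fast) return true;
            }
            return false;
        }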

    Read the article

  • Altering ManagedObjects In NSArray

    - by Garry
    I have an entity called 'Job' with two boolean attributes named 'completed' and 'logged'. I am trying to retrieve all completed jobs that have not been logged at app start-up and change them to logged. I'm able to get all the completed but unlogged jobs with this fetchRequest: NSPredicate *predicate = [NSPredicate predicateWithFormat:@"(completed == %@ && logged == %@)", [NSNumber numberWithBool:YES], [NSNumber numberWithBool:NO]]; I'm then assigning this predicate to a fetchRequest and calling the [managedObjectContext executeFetchRequest:fetchRequest] method to get an array of all Job entities that meet this criteria. This seems to work fine and is returning the correct number of jobs. What I've been trying to do is loop through the NSArray returned, set the logged attribute to YES and then save. This seems to complete and doesn't return any errors, but the changes are not persisted when the application quits. Where am I going wrong? [fetchRequest setPredicate:predicate]; NSError *error; NSArray *jobsToLog = [managedObjectContext executeFetchRequest:fetchRequest error:&error]; if ([jobsToLog count] > 0) { for (int i = 0; i < [jobsToLog count] - 1; i++) { [[jobsToLog objectAtIndex:i] setLogged:[NSNumber numberWithBool:YES]]; // Commit the changes made to disk error = nil; if (![managedObjectContext save:&error]) { // An error occurred } } } Thanks in anticipation,
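
    As an aside, one thing worth checking is that the loop above stops at [jobsToLog count] - 1, which skips the last fetched job. The conventional pattern is to update every fetched object and save once afterwards; a rough Objective-C sketch (the key name "logged" is taken from the question):

        NSError *error = nil;
        NSArray *jobsToLog = [managedObjectContext executeFetchRequest:fetchRequest error:&error];
        for (NSManagedObject *job in jobsToLog) {           // covers every fetched job
            [job setValue:[NSNumber numberWithBool:YES] forKey:@"logged"];
        }
        if (![managedObjectContext save:&error]) {          // one save after all changes
            NSLog(@"Save failed: %@", error);
        }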

    Read the article

  • rails large amount of data in single insert activerecord gave out

    - by Nik
    So I have, I think, around 36,000 records just to be safe, a number I wouldn't think was too large for a modern SQL database like MySQL. Each record has just two attributes, so I collected them into one single insert statement: sql = "INSERT INTO tasks (attrib_a, attrib_b) VALUES (c1,d1),(c2,d2),(c3,d3)...(c36000,d36000);" ActiveRecord::Base.connection.execute sql
        from C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log'
        from C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute_without_analyzer'
        from c:/r/projects/vendor/plugins/rails-footnotes/lib/rails-footnotes/notes/queries_note.rb:130:in `execute'
        from C:/Ruby/lib/ruby/1.8/benchmark.rb:308:in `realtime'
        from c:/r/projects/vendor/plugins/rails-footnotes/lib/rails-footnotes/notes/queries_note.rb:130:in `execute'
        from (irb):53
        from C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/vendor/tzinfo-0.3.12/tzinfo/time_or_datetime.rb:242
    I don't know if the above info is enough; please do ask for anything that I didn't provide here. So any idea what this is about? THANK YOU!!!!
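
    One common workaround, sketched below in Ruby (the rows variable is assumed to hold the 36,000 value pairs), is to split the insert into smaller batches so each statement stays under MySQL's max_allowed_packet limit:

        # Rough sketch: insert in batches of 1000 rows; the values are assumed
        # to be integers (anything else needs proper quoting/escaping).
        rows.each_slice(1000) do |batch|
          values = batch.map { |a, b| "(#{a.to_i}, #{b.to_i})" }.join(",")
          ActiveRecord::Base.connection.execute(
            "INSERT INTO tasks (attrib_a, attrib_b) VALUES #{values}"
          )
        end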

    Read the article

  • algorithm - How to sort a 0/1 array with 2n/3 comparisons?

    - by Jackson Tale
    In the Algorithm Design Manual, there is such an exercise: 4-26 Consider the problem of sorting a sequence of n 0’s and 1’s using comparisons. For each comparison of two values x and y, the algorithm learns which of x < y, x = y, or x > y holds. (a) Give an algorithm to sort in n - 1 comparisons in the worst case. Show that your algorithm is optimal. (b) Give an algorithm to sort in 2n/3 comparisons in the average case (assuming each of the n inputs is 0 or 1 with equal probability). Show that your algorithm is optimal. For (a), I think it is fairly easy. I can choose a[n-1] as a pivot, then do something like the partition step in quicksort: scan 0 to n - 2 and find the middle point where the left side is all 0 and the right side is all 1; this takes n - 1 comparisons. But for (b), I can't get a clue. It says "each of the n inputs is 0 or 1 with equal probability", so I guess I can assume the numbers of 0s and 1s are equal? But how can I get a result which is related to 1/3? Divide the whole array into 3 groups? Thanks
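
    For part (a), a rough Java sketch of an n - 1 comparison scheme (each if/else chain below stands for a single three-way comparison against a[0], as allowed by the exercise):

        // Compare every element against a[0]; for 0/1 data the three-way
        // outcome is enough to pin down which values are 0 and which are 1.
        static void sortZeroOne(int[] a) {
            int n = a.length;
            if (n == 0) return;
            int less = 0, equal = 1, greater = 0;
            for (int i = 1; i < n; i++) {              // n - 1 comparisons
                if (a[i] < a[0]) less++;
                else if (a[i] == a[0]) equal++;
                else greater++;
            }
            if (less == 0 && greater == 0) return;     // all values equal: already sorted
            int zeros = (less > 0) ? less : equal;     // if nothing is below a[0], then a[0] is 0
            for (int i = 0; i < n; i++) a[i] = (i < zeros) ? 0 : 1;
        }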

    Read the article

  • Problem processing large data using Applet-Servlet communication

    - by Marquinio
    Hi everyone. I have an Applet that makes a request to a Servlet. On the servlet it's using the PrintWriter to write the response back to the Applet: out.println("Field1|Field2|Field3|Field4|Field5......|Field10"); There are about 15000 records, so the out.println() gets executed about 15000 times. The problem is that when the Applet gets the response from the Servlet it takes about 15 minutes to process the records. I placed System.out.println's and processing is paused at around 5000, then after 15 minutes it continues processing and then it's done. Has anyone faced a similar problem? The servlet takes about 2 seconds to execute, so it seems that the browser/Applet is too slow to process the records. Any ideas appreciated. Thanks.
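
    A rough Java sketch of the applet-side read, for comparison (servletUrl is an assumed URL): going through a BufferedReader and handling one line at a time avoids accumulating the whole 15,000-record response into a single String, which is a common cause of this kind of slowdown:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;
        import java.net.URLConnection;

        void readRecords(URL servletUrl) throws Exception {
            URLConnection conn = servletUrl.openConnection();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] fields = line.split("\\|");   // Field1|Field2|...|Field10
                    // handle one record here instead of buffering them all
                }
            } finally {
                in.close();
            }
        }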

    Read the article

  • Creating tables and inserting data (MySQL Dump) using PHP Doctrine 1.2

    - by Dimitry
    Hello. I've got a script that creates a new database; now I need to fill that database with tables and values (from a MySQL dump file). I'm using PHP - Doctrine 1.2. Here is how I create the database: $manager = Doctrine_Manager::getInstance(); $newConn = $manager->openConnection($customer->Config->db_connection_string); $newConn->createDatabase(); How do I do this? Thanks!
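
    A rough sketch of one way to follow up createDatabase() (the dump path is hypothetical, getDbh() is assumed to return the connection's underlying PDO handle, and the naive split below breaks if the dump contains semicolons inside string literals):

        $dump = file_get_contents('/path/to/dump.sql');
        // Run each statement of the dump through the underlying PDO handle.
        foreach (array_filter(array_map('trim', explode(";\n", $dump))) as $statement) {
            $newConn->getDbh()->exec($statement);
        }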

    Read the article

  • Help! How do I get the total number of rows from my MSSQL paging procedure?

    - by The_AlienCoder
    OK, I have a table in my MSSQL database that stores comments. My desire is to be able to page through the records using [Back], [Next], page numbers & [Last] buttons in my datalist. I figured the most efficient way was to use a stored procedure that only returns a certain number of rows within a particular range. Here is what I came up with: @PageIndex INT, @PageSize INT, @postid int AS SET NOCOUNT ON begin WITH tmp AS ( SELECT comments.*, ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row FROM comments WHERE (comments.postid = @postid)) SELECT tmp.* FROM tmp WHERE Row between (@PageIndex - 1) * @PageSize + 1 and @PageIndex*@PageSize end RETURN Now everything works fine and I have been able to implement [Next] and [Back] buttons in my datalist pager. Now I need the total number of all comments (not just those in the current page) so that I can implement my page numbers and the [Last] button on my pager. In other words, I want to return the total number of rows in my first select statement, i.e. WITH tmp AS ( SELECT comments.*, ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row FROM comments WHERE (comments.postid = @postid)) set @TotalRows = @@rowcount @@rowcount doesn't work and raises an error. I also can't get COUNT(*) to work either. Is there another way to get the total number of rows, or is my approach doomed?
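
    One option, sketched below, is to add COUNT(*) OVER () to the same CTE; every returned row then carries the total, which the pager can read from any record on the page (this needs SQL Server 2005+, which ROW_NUMBER already implies):

        WITH tmp AS (
            SELECT comments.*,
                   ROW_NUMBER() OVER (ORDER BY dateposted ASC) AS Row,
                   COUNT(*) OVER () AS TotalRows
            FROM comments
            WHERE comments.postid = @postid
        )
        SELECT tmp.*
        FROM tmp
        WHERE Row BETWEEN (@PageIndex - 1) * @PageSize + 1 AND @PageIndex * @PageSize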

    Read the article

  • Count of products NOT sold...per store, per day over the past month

    - by user1893510
    I'm struggling with an interview question. 3 dimension tables (Product, Store and Date) and 1 fact table (Sales). The question asks for a T-SQL solution that will return the count of products not sold, per store, per day over the past month. At this point, my answer is futile but I've spent significant time trying to back into a solution, to no avail, and would like to close the loop. Any guidance is greatly appreciated.
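
    A rough T-SQL sketch of the usual shape of the answer (all table and column names below are assumptions, since the question doesn't give the schema): cross join Store, Date and Product to get every store/day/product combination for the last month, then left join Sales and count the combinations with no matching sale:

        SELECT d.DateKey,
               s.StoreKey,
               COUNT(*) AS ProductsNotSold
        FROM Store s
        CROSS JOIN [Date] d
        CROSS JOIN Product p
        LEFT JOIN Sales f
               ON  f.StoreKey   = s.StoreKey
               AND f.DateKey    = d.DateKey
               AND f.ProductKey = p.ProductKey
        WHERE d.FullDate >= DATEADD(MONTH, -1, GETDATE())
          AND f.ProductKey IS NULL
        GROUP BY d.DateKey, s.StoreKey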

    Read the article

  • Using comma-separated data in a MySQL "IN" clause

    - by Sashi Kant
    I have a column in one of my tables where I store multiple ids separated by commas. Is there a way in which I can use this column's value in the "IN" clause of a query? The column (city) has values like 6,7,8,16,21,2. I need to use it as Select * from table where e_ID in (Select city from locations where e_Id=?) I am satisfied with Crozin's answer, but I am open to suggestions, views and options. Feel free to share your views.
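
    If the comma-separated column has to stay as it is, one option is MySQL's FIND_IN_SET, sketched below; the longer-term fix is normalising the ids into a join table so this becomes a plain join:

        -- Rough sketch: FIND_IN_SET matches e_ID against the comma-separated
        -- city list, which behaves like the intended IN (...) lookup.
        SELECT t.*
        FROM `table` t
        JOIN locations l ON l.e_Id = ?
        WHERE FIND_IN_SET(t.e_ID, l.city) > 0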

    Read the article

  • Cannot access data in a xml string

    - by Jess McKenzie
    I am trying to create an array that includes the parent category Name and the child category Name. I can so far access the below object fine using $xml->Subcategories; but if I try my code below I get an empty array. Why? PHP: foreach ($catDetailsXml->Category as $value) { $categoryDetails[] = array('CategoryNumber' => $value->Number); } Structure: object(SimpleXMLElement)#13 (1) { ["Category"]=> array(5) { [0]=> object(SimpleXMLElement)#14 (4) { ["Name"]=> string(19) "Commercial Property" ["Number"]=> string(10) "0350-0100-" ["Subcategories"]=> object(SimpleXMLElement)#35 (1) { ["Category"]=> array(3) { [0]=> object(SimpleXMLElement)#36 (4) { ["Name"]=> string(9) "Car parks" ["Number"]=> string(15) "0350-0100-8946-" ["Subcategories"]=> object(SimpleXMLElement)#39 (0) { } }
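
    A rough PHP sketch against the structure shown in the dump: casting each value to string (otherwise the array holds SimpleXMLElement objects rather than text) and walking the nested Subcategories as well:

        $categoryDetails = array();
        foreach ($catDetailsXml->Category as $category) {
            $categoryDetails[] = array(
                'Name'           => (string) $category->Name,
                'CategoryNumber' => (string) $category->Number,
            );
            // Child categories live under <Subcategories><Category>
            foreach ($category->Subcategories->Category as $child) {
                $categoryDetails[] = array(
                    'ParentName'     => (string) $category->Name,
                    'Name'           => (string) $child->Name,
                    'CategoryNumber' => (string) $child->Number,
                );
            }
        }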

    Read the article

  • Serial numbers generation without user data

    - by Sphynx
    This is a followup to this question. The accepted answer is generally sufficient, but requires the user to supply personal information (e.g. name) for generating the key. I'm wondering if it's possible to generate different keys based on a common seed, in a way that the program would be able to validate whether those keys belong to a particular product, but without making this process obvious to the end user. I mean it could be a hash of the product ID plus some random sequence of characters, but that would allow the user to guess potential new keys. There should be some sort of algorithm that is difficult to guess.
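
    A rough Java sketch of one such scheme (not a hardened licensing system; the secret and product id below are placeholders): each key is a random serial plus a truncated HMAC of that serial under a secret shared only by the vendor and the program, so valid keys cannot be guessed from known ones without the secret:

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.security.SecureRandom;

        public class KeyGen {
            private static final byte[] SECRET = "replace-with-vendor-secret".getBytes(); // placeholder secret
            private static final String PRODUCT_ID = "PROD-42";                           // placeholder product id

            // A new key is a random serial plus an HMAC over (product id + serial).
            static String newKey() throws Exception {
                byte[] serial = new byte[8];
                new SecureRandom().nextBytes(serial);
                String serialHex = toHex(serial);
                return serialHex + "-" + sign(serialHex);
            }

            // A key is valid if its HMAC part matches what we would have generated.
            static boolean isValid(String key) throws Exception {
                String[] parts = key.split("-");
                return parts.length == 2 && sign(parts[0]).equals(parts[1]);
            }

            private static String sign(String serialHex) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
                byte[] tag = mac.doFinal((PRODUCT_ID + serialHex).getBytes());
                return toHex(tag).substring(0, 16);   // truncated for a shorter key
            }

            private static String toHex(byte[] bytes) {
                StringBuilder sb = new StringBuilder();
                for (byte b : bytes) sb.append(String.format("%02x", b));
                return sb.toString();
            }
        }

    The usual caveat applies: the secret ships inside the program and can be extracted by a determined user, which is true of any offline validation scheme.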

    Read the article

  • implementation of a queue using a circular array

    - by matin1234
    Hi, I found these algorithms on the internet but I cannot understand why, in the enqueue method, we compare size with N-1. Please help me, thanks!
        Algorithm size(): return (N - f + r) mod N
        Algorithm enqueue(e): if size() = N - 1 then throw a FullQueueException; Q[r] <-- e; r <-- (r + 1) mod N
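
    The comparison with N-1 exists because one slot of the array is deliberately left unused: if all N slots could be filled, f == r would mean both "empty" and "full" and size() could not tell them apart. A minimal Java sketch of the same scheme:

        public class CircularQueue {
            private final int[] q;
            private int f = 0, r = 0;          // front and rear indices
            private final int N;

            CircularQueue(int capacity) {
                N = capacity + 1;              // one extra slot always stays empty
                q = new int[N];
            }

            int size() { return (N - f + r) % N; }

            void enqueue(int e) {
                if (size() == N - 1) throw new IllegalStateException("queue full");
                q[r] = e;
                r = (r + 1) % N;
            }

            int dequeue() {
                if (size() == 0) throw new IllegalStateException("queue empty");
                int e = q[f];
                f = (f + 1) % N;
                return e;
            }
        }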

    Read the article

  • how to sort xml data in jQuery

    - by pixeltocode
    How can I sort all officers based on their ranks? jQuery: $.get('officers.xml', function(grade){ $(grade).find('officer').each(function(){ var $rank = $(this).attr('rank'); }); }); XML (officer.xml): <grade> <officer rank="2"></officer> <officer rank="3"></officer> <officer rank="1"></officer> </grade> Thanks.
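
    A rough jQuery sketch: pull the officer nodes into a plain array, sort by the numeric rank attribute, then work through them in order (here they are just logged):

        $.get('officers.xml', function (grade) {
            var officers = $(grade).find('officer').get();   // plain DOM array
            officers.sort(function (a, b) {
                return Number($(a).attr('rank')) - Number($(b).attr('rank'));
            });
            $.each(officers, function (i, officer) {
                console.log($(officer).attr('rank'));        // 1, 2, 3
            });
        });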

    Read the article

  • Check for live Data Source Name Before proceeding

    - by n_kips
    Would it be OK to get a CF app to check for a valid database before proceeding to process the request? This is because there may be instances where the database server is down or being upgraded, hence an error occurs when a db-dependent request is made. If there is no connection to the db server, the user can be safely redirected to a safe page. Or can cfcatch work? How can this check be done? Thank you.
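
    cftry/cfcatch can cover this; a rough CFML sketch (the datasource name and maintenance page are assumptions) that runs a cheap query and redirects if it fails:

        <!--- Rough sketch: a cheap query inside cftry; a database error
              triggers the cfcatch block and redirects to a safe page. --->
        <cftry>
            <cfquery name="pingDb" datasource="#application.dsn#">
                SELECT 1 AS ok
            </cfquery>
            <cfcatch type="database">
                <cflocation url="maintenance.cfm" addtoken="false">
            </cfcatch>
        </cftry>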

    Read the article

  • Return XML data from a web service

    - by Nick LaMarca
    What is the best way to create a web service that returns a set of x,y coordinates? I am not sure which object is the best return type. When consuming the service I want to have it come back as XML, preferably something like this for example: <TheData> <Point> <x>0</x> <y>2</y> </Point> <Point> <x>5</x> <y>3</y> </Point> </TheData> If someone has a better structure to return, please help; I am new at all this.
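
    A rough C# sketch (assuming a classic ASMX-style service): a small serializable Point class returned as a list serializes to XML very close to the structure above:

        using System.Collections.Generic;
        using System.Web.Services;

        public class Point
        {
            public int x { get; set; }
            public int y { get; set; }
        }

        [WebService(Namespace = "http://example.com/")]
        public class CoordinateService : WebService
        {
            [WebMethod]
            public List<Point> GetTheData()
            {
                return new List<Point>
                {
                    new Point { x = 0, y = 2 },
                    new Point { x = 5, y = 3 }
                };
            }
        }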

    Read the article

  • How do I output the preorder traversal of a tree given the inorder and postorder tranversal?

    - by user342580
    Given the code for outputting the postorder traversal of a tree when I have the preorder and the inorder traversals in integer arrays, how do I similarly get the preorder when the inorder and postorder arrays are given? void postorder( int preorder[], int prestart, int inorder[], int inostart, int length) { if(length==0) return; //terminating condition int i; for(i=inostart; i<inostart+length; i++) if(preorder[prestart]==inorder[i])//break when found root in inorder array break; postorder(preorder, prestart+1, inorder, inostart, i-inostart); postorder(preorder, prestart+i-inostart+1, inorder, i+1, length-i+inostart-1); cout<<preorder[prestart]<<" "; } Here is the prototype for preorder() void preorder( int inorder[], int inostart, int postorder[], int poststart, int length)
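
    A rough C++ sketch mirroring the structure of the given postorder() routine: in a postorder slice the root is the last element, so print it first and then recurse into the left and right slices:

        #include <iostream>

        void preorder(int inorder[], int inostart, int postorder[], int poststart, int length)
        {
            if (length == 0) return;                              // terminating condition
            int rootValue = postorder[poststart + length - 1];    // root is last in postorder
            std::cout << rootValue << " ";                        // preorder prints the root first

            int i;
            for (i = inostart; i < inostart + length; i++)
                if (inorder[i] == rootValue) break;               // locate the root in inorder

            int leftLen = i - inostart;
            preorder(inorder, inostart, postorder, poststart, leftLen);
            preorder(inorder, i + 1, postorder, poststart + leftLen, length - leftLen - 1);
        }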

    Read the article

  • Nested Data XML design

    - by esryl
    Looking to nest (to unlimited levels) elements in XML, like so: <items> <item> <name>Item One</name> <item> <name>Item Two</name> </item> <item> <name>Item Three</name> <item> <name>Item Four</name> </item> <!-- etc... --> </item> </item> </items> However, while browsing for a solution I noticed in the comments of http://stackoverflow.com/questions/988139/weird-nesting-in-xml that while the above is well formed, it would not validate against any sensible DTD. Two things: what is a better way of nesting similar elements, and secondly, what would the design of the DTD be? UPDATE: Would prefer to validate against an XML Schema rather than a DTD.
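
    The recursion itself is straightforward to express in XML Schema; a rough sketch in which an item holds a name plus any number of child items:

        <!-- Rough sketch: item refers to itself, allowing unlimited nesting. -->
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:element name="items">
            <xs:complexType>
              <xs:sequence>
                <xs:element ref="item" minOccurs="0" maxOccurs="unbounded"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
          <xs:element name="item">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="name" type="xs:string"/>
                <xs:element ref="item" minOccurs="0" maxOccurs="unbounded"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>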

    Read the article

  • vs2010 Cache SQL data incorrect fields

    - by mickartz
    OK, I found a walkthrough on MSDN for what I was after (offline database cache). However, when I let the wizard create a local database from my online SQL Server, the timespan fields are converted to a string?? Now I know the suggestion was to create my own local database and then use the MS Sync framework... however... this proclaims to do it "out of the box". However, now I have a dataset which I've no idea how to use, and a newly formed database (for the synched cache) that I will have to use LINQ to Entities with (??), and meanwhile I have this weird timespan-to-string conversion? Should I give up now or push on? Can I overwrite the .designer.cs, changing typeof(string) to typeof(TimeSpan)? Damn wizards!!
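
    One low-effort workaround, sketched in C# (the row and column names are pure assumptions): leave the generated .designer.cs alone, since the wizard regenerates it, and convert the string column back where it is consumed:

        // jobRow is an assumed DataRow from the wizard-generated typed dataset.
        TimeSpan duration = TimeSpan.Parse(jobRow.Field<string>("Duration"));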

    Read the article

  • Customising log4j logging for sensitive data

    - by Xetius
    I have a class which contains sensitive information (credit card info, phone numbers, etc.). I want to be able to pass this class to log4j, but have it obscure certain information. If I have a class UserInformation which has getPhoneNumber and getCreditCardNumber methods, how would I customise log4j or this class so that it will obscure the numbers correctly? I want the credit card number to be output as xxxx-xxxx-xxxx-1234 and the phone number to be output as xxxx-xxx-xxx, given that these would be 1234-1234-1234-1234 and 1234-567-890. Thanks
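
    One log4j 1.x hook for this is a custom ObjectRenderer registered for the class. A rough Java sketch (package names are assumptions), wired up in log4j.properties with a line such as log4j.renderer.com.example.UserInformation=com.example.UserInformationRenderer:

        import org.apache.log4j.or.ObjectRenderer;

        // Renders UserInformation with the sensitive fields masked, so the
        // object can be passed to log4j directly.
        public class UserInformationRenderer implements ObjectRenderer {
            public String doRender(Object o) {
                UserInformation u = (UserInformation) o;
                return "UserInformation[card=" + maskCard(u.getCreditCardNumber())
                     + ", phone=" + maskPhone(u.getPhoneNumber()) + "]";
            }

            // 1234-1234-1234-1234 -> xxxx-xxxx-xxxx-1234
            private String maskCard(String card) {
                return card == null ? null : card.replaceAll(".+(?=-\\d{4}$)", "xxxx-xxxx-xxxx");
            }

            // 1234-567-890 -> xxxx-xxx-xxx
            private String maskPhone(String phone) {
                return phone == null ? null : phone.replaceAll("\\d", "x");
            }
        }

    Overriding toString() on UserInformation itself would have the same effect, at the cost of masking the values everywhere, not just in logging.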

    Read the article
