Search Results

Search found 6630 results on 266 pages for 'cname record'.

Page 230/266 | < Previous Page | 226 227 228 229 230 231 232 233 234 235 236 237  | Next Page >

  • Append json data to html class name

    - by user2898514
    I have a problem with my JSON code. I want each JSON value to be appended to the HTML element whose class name matches that value's key. Here's my live demo; if you look at the result there, only the last record is appended. Is it possible to make it show all records in order?

    json

        var json = '[{"castle":"big","commercial":"large","common":"sergio","cultural":"2009"},' +
                   '{"castle":"big2","commercial":"large2","common":"sergio2","cultural":"20092"}]';

    html

        <div class="castle"></div>
        <div class="commercial"></div>
        <div class="common"></div>
        <div class="cultural"></div>

    javascript

        var data = $.parseJSON(json);
        $.each(data, function(l, v) {
            $.each(v, function(k, o) {
                $('.' + k).attr('id', k + o);
                console.log($('#' + k + o).attr('id'));
                $('#' + k + o).text(o);
            });
        });

    For more illustration: I want the result in the live demo to look like this: big large sergio 2009, big2 large2 sergio2 20092
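
    A possible fix (a sketch, not taken from the post): the loop keeps overwriting each div's id and text, so only the last record survives. Appending one child element per record keeps them all:

        var data = $.parseJSON(json);
        $.each(data, function (i, record) {
            $.each(record, function (key, value) {
                // add a span per record instead of replacing the div's content
                $('.' + key).append($('<span/>').text(value + ' '));
            });
        });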

    Read the article

  • Oracle - UPSERT with update not executed for unmodified values

    - by Buthrakaur
    I'm using the following update-or-insert Oracle statement at the moment:

        BEGIN
          UPDATE DSMS
             SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
           WHERE DSM = :DSM;
          IF (SQL%ROWCOUNT = 0) THEN
            INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
            VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
          END IF;
        END;

    This runs fine, except that the update statement performs a dummy update when the data is the same as the parameter values provided. I would not mind the dummy update in a normal situation, but there is a replication/synchronization system built over this table using triggers to capture updated records, and executing this statement frequently for many records would simply cause huge traffic in the triggers and the sync system. Is there any simple way to reformulate this code so that the update statement doesn't touch the record when it isn't necessary, without the following IF-EXISTS check, which I find not sleek enough and maybe also not the most efficient for this task?

        DECLARE
          CNT NUMBER;
        BEGIN
          SELECT COUNT(1) INTO CNT FROM DSMS WHERE DSM = :DSM;
          IF SQL%FOUND THEN
            UPDATE DSMS
               SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
             WHERE DSM = :DSM
               AND (SURNAME != :SURNAME OR FIRSTNAME != :FIRSTNAME OR VALID != :VALID);
          ELSE
            INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
            VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
          END IF;
        END;
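
    One option worth considering (a sketch, not part of the question): Oracle's MERGE can express the upsert in a single statement, and its UPDATE branch accepts the same "only if something changed" predicate, so unchanged rows are never written (note the != comparisons ignore NULLs, as in the original):

        MERGE INTO DSMS d
        USING (SELECT :DSM AS DSM, :SURNAME AS SURNAME,
                      :FIRSTNAME AS FIRSTNAME, :VALID AS VALID
                 FROM DUAL) s
           ON (d.DSM = s.DSM)
        WHEN MATCHED THEN UPDATE
             SET d.SURNAME = s.SURNAME, d.FIRSTNAME = s.FIRSTNAME, d.VALID = s.VALID
             WHERE d.SURNAME != s.SURNAME
                OR d.FIRSTNAME != s.FIRSTNAME
                OR d.VALID != s.VALID
        WHEN NOT MATCHED THEN
             INSERT (DSM, SURNAME, FIRSTNAME, VALID)
             VALUES (s.DSM, s.SURNAME, s.FIRSTNAME, s.VALID);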

    Read the article

  • MySQL Need some help with a query

    - by Jules
    I'm trying to fix some data by adding a new field. I have a backup from a few months ago and have restored this database to my server. I'm looking at a table called pads; its primary key is PadID and the field of interest is called RemoveMeDate. In my restored (older) database there are fewer records with an actual date set in RemoveMeDate. My control date is 2001-01-01 00:00:00, meaning that the record is not hidden, i.e. visible. What I need to do is select all the records from the older database/table with the control date and join them with those from the newer db/table where the control date is not set. I hope I've explained that correctly. I'll try again, with numbers: I have 80,000 visible records in the older table (with the control date set) and 30,000 in the newer db/table. I need to select the 50,000 from the old database in order to perform an update query. Here's my query, which I can't get to work as I'd like. jules-fix-reasons is the old database, jules is the newer one.

        SELECT p.padid
        FROM `jules-fix-reasons`.`pads` p
        JOIN `jules`.`pads` ON p.padid = `jules`.`pads`.`PadID`
        WHERE p.RemoveMeDate <> '2001-01-01 00:00:00'
          AND `jules`.`pads`.RemoveMeDate = '2001-01-01 00:00:00'
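
    A hedged guess (not from the post): if the goal is "visible in the old table but not visible in the new one", the two date conditions above may simply be swapped. A sketch of that reading, plus the follow-up change done directly with MySQL's multi-table UPDATE:

        SELECT old_p.PadID
        FROM `jules-fix-reasons`.`pads` AS old_p
        JOIN `jules`.`pads` AS new_p ON new_p.PadID = old_p.PadID
        WHERE old_p.RemoveMeDate = '2001-01-01 00:00:00'      -- visible in the backup
          AND new_p.RemoveMeDate <> '2001-01-01 00:00:00';    -- hidden in the live table

        UPDATE `jules`.`pads` AS new_p
        JOIN `jules-fix-reasons`.`pads` AS old_p ON old_p.PadID = new_p.PadID
        SET new_p.RemoveMeDate = old_p.RemoveMeDate
        WHERE old_p.RemoveMeDate = '2001-01-01 00:00:00'
          AND new_p.RemoveMeDate <> '2001-01-01 00:00:00';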

    Read the article

  • Chinese records in SQLite database are not accessible in iPhone app

    - by Mas
    Hi! I'm creating an iPhone app in which I read data from a SQLite db and present it in a table view control. The problem is that the data is in Chinese. For some unknown reason, when I read/fetch records from the SQLite table, some records are presented and others are missed by the Objective-C code. For some of the missing rows, Objective-C reads part of the columns but returns nil for other columns, even though those columns are not empty in the database. I tried many solutions but didn't find a suitable one. The same database works perfectly in the Android version of the app. Can anyone help? Here is the code that reads the Chinese records from the table:

        Quote *q = [[Quote alloc] init];
        q.catId    = sqlite3_column_int(statement, 0);
        q.subCatId = sqlite3_column_int(statement, 1);
        q.quote    = [NSString stringWithUTF8String:(char *)sqlite3_column_text(statement, 2)];

    and here are some of the records the iPhone misses:

        ??????·???
        ??????,????????!
        ??????,???????

    Thanks
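
    One thing worth guarding against (a sketch based on general sqlite3 usage, not on anything in the post): sqlite3_column_text() returns NULL when the column value is NULL or cannot be converted, and passing NULL to stringWithUTF8String: gives you nothing useful, so checking the pointer first helps separate "no data" from a text-encoding problem:

        const unsigned char *text = sqlite3_column_text(statement, 2);
        if (text != NULL) {
            q.quote = [NSString stringWithUTF8String:(const char *)text];
        } else {
            q.quote = @"";   // column was NULL or could not be converted
        }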

    Read the article

  • Need a little assistance

    - by Umaid
    I am iterating over the current days, so I need a little assistance.

        for (int I = -1; I < 30; I++) {
          for (int J = 0; J = 30; J++) {
            for (int K = 1; K = 30; K++) {
              SELECT rowid, Month, Day, Advice FROM MainCategory
              WHERE Month = 'May '
                AND Day IN ((cast(strftime('%d', date('now','I day')) as Integer)),
                            (cast(strftime('%d', date('now','J day')) as Integer)),
                            (cast(strftime('%d', date('now','K day')) as Integer)));
            }
          }
        }

    What if I want to go in reverse order as well?

        for (int I = -1; I < 30; I--) {
          for (int J = 0; J = 30; J--) {
            for (int K = 1; K = 30; K--) {
              SELECT rowid, Month, Day, Advice FROM MainCategory
              WHERE Month = 'May '
                AND Day IN ((cast(strftime('%d', date('now','I day')) as Integer)),
                            (cast(strftime('%d', date('now','J day')) as Integer)),
                            (cast(strftime('%d', date('now','K day')) as Integer)));
            }
          }
        }

    On every "previous" click I want to fetch 3 records, so do I need to iterate up to 3, or over all 30 records of the month from which I want to fetch?
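
    A possible simplification (a sketch, assuming the goal is paging three days at a time rather than triple-nested loops): keep a single page offset in code, bind the three day offsets as parameters, and move the offset by +3 or -3 on every next/previous click. Note that the literal 'I day' / 'J day' strings in the SQL above are not substituted by the loop variables; the offsets have to be bound or concatenated into the query.

        -- :d1, :d2, :d3 are offset, offset + 1, offset + 2
        SELECT rowid, Month, Day, Advice
        FROM MainCategory
        WHERE Month = 'May'
          AND Day IN (CAST(strftime('%d', date('now', :d1 || ' day')) AS INTEGER),
                      CAST(strftime('%d', date('now', :d2 || ' day')) AS INTEGER),
                      CAST(strftime('%d', date('now', :d3 || ' day')) AS INTEGER));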

    Read the article

  • Can this extension method be improved?

    - by Newbie
    I have the following extension method:

        public static class ListExtensions
        {
            public static IEnumerable<T> Search<T>(this ICollection<T> collection, string stringToSearch)
            {
                foreach (T t in collection)
                {
                    Type k = t.GetType();
                    PropertyInfo pi = k.GetProperty("Name");
                    if (pi.GetValue(t, null).Equals(stringToSearch))
                    {
                        yield return t;
                    }
                }
            }
        }

    Using reflection, it finds the Name property and then filters the records from the collection that match the string. The method is called like this:

        List<FactorClass> listFC = new List<FactorClass>();
        listFC.Add(new FactorClass { Name = "BKP", FactorValue = "Book to price", IsGlobal = false });
        listFC.Add(new FactorClass { Name = "YLD", FactorValue = "Dividend yield", IsGlobal = false });
        listFC.Add(new FactorClass { Name = "EPM", FactorValue = "emp", IsGlobal = false });
        listFC.Add(new FactorClass { Name = "SE", FactorValue = "something else", IsGlobal = false });

        List<FactorClass> listFC1 = listFC.Search("BKP").ToList();

    It is working fine, but a closer look at the extension method reveals that

        Type k = t.GetType();
        PropertyInfo pi = k.GetProperty("Name");

    sits inside the foreach loop, which is not needed. I think we can take it outside the loop, but how? Please help. (C# 3.0)
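
    A sketch of one way to hoist the reflection out of the loop (assuming the property lives on the static type T, which is true for List<FactorClass>; if subclasses declared Name themselves, typeof(T) and t.GetType() could differ):

        public static IEnumerable<T> Search<T>(this ICollection<T> collection, string stringToSearch)
        {
            // reflect once per call instead of once per element
            PropertyInfo pi = typeof(T).GetProperty("Name");
            foreach (T t in collection)
            {
                if (object.Equals(pi.GetValue(t, null), stringToSearch))
                {
                    yield return t;
                }
            }
        }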

    Read the article

  • Hibernate not saving foreign key, but with junit it's ok

    - by Leonardo
    Hi All, I have this strange problem. In a Java EE webapp with Spring, SmartGWT and Hibernate, I have a class A which has a set of class B, both of them mapped to table A and table B. I wrote a simple test case for testing the service manager, which is supposed to do insert, update and delete, and everything works as expected, especially during insert: in the end I have one record in A and records in B with a foreign key to A. But when I try to call the service from the web app, the entities in B are saved without a foreign key reference. I am sure that the service is the same. One thing I noticed, after enabling Hibernate logging, is that when the service is called from the application one more update is made:

        insert A
        insert B
        update A
        update B
        update B (foreign key only)
        update A   <--- ???
        update B   <--- ???

    Instead, when the JUnit test case is run, the sequence is as follows:

        insert A
        insert B
        update A
        update B
        update B (foreign key only)

    I suppose the last update is what causes the error; maybe it is overwriting values. Considering that the app uses Spring with the well-known DAO + Manager mechanism, where can I investigate to solve this issue? Someone told me that the session is not closed, so Hibernate would do one more update before releasing the objects by itself. I am pretty sure that all the configuration (hbm, XML and the rest) is fine... but I may be wrong. Thanks
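
    For reference, a hedged sketch of a mapping shape that often produces this pattern (names are illustrative, not taken from the post): if the one-to-many collection is not marked inverse and the child's back-reference to the parent is never set in the web code path, Hibernate schedules extra updates and can end up writing a null foreign key. Letting the child own the relationship usually removes both the extra statements and the missing key:

        <!-- in A.hbm.xml -->
        <set name="bs" inverse="true" cascade="all">
            <key column="A_ID" not-null="true"/>
            <one-to-many class="B"/>
        </set>

        // when building the object graph, set both sides
        b.setA(a);
        a.getBs().add(b);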

    Read the article

  • Deal with undefined values in code or in the template?

    - by David
    I'm writing a web application (in Python, not that it matters). One of the features is that people can leave comments on things. I have a class for comments, basically like so:

        class Comment:
            user = ...
            # other stuff

    where user is an instance of another class:

        class User:
            name = ...
            # other stuff

    And of course in my template, I have:

        <div>${comment.user.name}</div>

    Problem: let's say I allow people to post comments anonymously. In that case comment.user is None (undefined), and of course accessing comment.user.name is going to raise an error. What's the best way to deal with that? I see three possibilities:

    1. Use a conditional in the template to test for that case and display something different. This is the most versatile solution, since I can change the way anonymous comments are displayed to, say, "Posted anonymously" (instead of "Posted by ..."), but I've often been told that templates should be mindless display machines and not include logic like that. Also, other people might wind up writing alternate templates for the same application, and I feel like I should be making things as easy as possible for the template writer.

    2. Implement an accessor method for the user property of a Comment that returns a dummy user object when the real user is undefined. This dummy object would have user.name = 'Anonymous' or something like that, so the template could access it and print its name with no error.

    3. Put an actual record in my database corresponding to a user with user.name = 'Anonymous' (or something like that), and just assign that user to any comment posted when nobody's logged in. I know I've seen some real-world systems that operate this way. (phpBB?)

    Is there a prevailing wisdom among people who write these sorts of systems about which of these (or some other solution) is the best? Any pitfalls I should watch out for if I go one way vs. another? Whoever gives the best explanation gets the checkmark.
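
    A small sketch of option 2, the null-object style accessor (one possible shape, not the asker's code):

        class AnonymousUser:
            name = "Anonymous"

        class Comment:
            def __init__(self, user=None):
                self._user = user

            @property
            def user(self):
                # hand the template a stand-in when nobody is attached,
                # so comment.user.name always works
                return self._user if self._user is not None else AnonymousUser()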

    Read the article

  • Help with SQL Server query

    - by Travis
    Sorry, this is what I should have put: my query is creating duplicate entries for any record that has more than one instance (regardless of date).

        <asp:SqlDataSource ID="EastMonthlyHealthDS" runat="server"
            ConnectionString="<%$ ConnectionStrings:SNA_TRTTestConnectionString %>"
            SelectCommand="SELECT [SNA_Parent_Accounts].[Company],
                (SELECT [Monthly_HIP_Reports].[AccountHealth]
                   FROM [Monthly_HIP_Reports]
                  WHERE ([Monthly_HIP_Reports].[YearMonth] = @ToDtRFC)
                    AND ([SNA_Parent_Accounts].[CompID] = [Monthly_HIP_Reports].[CompID])) AS [AccountHealth],
                [SNA_Parent_Accounts].[CompID]
                FROM [SNA_Parent_Accounts]
                LEFT OUTER JOIN [Monthly_HIP_Reports]
                  ON [Monthly_HIP_Reports].[CompID] = [SNA_Parent_Accounts].[CompID]
                WHERE (([SNA_Parent_Accounts].[Classification] = 'Business')
                    OR ([SNA_Parent_Accounts].[Classification] = 'Business Ihn'))
                  AND ([SNA_Parent_Accounts].[Status] = 'active')
                  AND ([SNA_Parent_Accounts].[Region] = 'east')
                ORDER BY [SNA_Parent_Accounts].[Company]">
            <SelectParameters>
                <asp:ControlParameter ControlID="ddMonths" Name="ToDtRFC" PropertyName="Text" Type="String" />
            </SelectParameters>
        </asp:SqlDataSource>

    Using SELECT DISTINCT appears to correct the problem, but I don't consider that a solution. There are no duplicate entries in the database, so it appears my query is superficially creating duplicates. The query should grab the list of companies that meet the criteria in the WHERE clause, and also grab the health status for each company for the selected [YearMonth] if present, which is what the subquery is for; if no entry exists for that YearMonth, the health status should be left blank. But, as stated earlier, if there is an entry for 2009-03 for CompID 2 and an entry for 2009-04 for CompID 2, it doesn't matter which month you select: that company is listed two or three times.
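
    A hedged guess at the cause (not from the post): the LEFT OUTER JOIN to [Monthly_HIP_Reports] is not restricted by [YearMonth], so a company with reports for several months joins to several rows, while the health value itself already comes from the correlated subquery. Restricting the join to the selected month (or dropping it entirely, since the subquery does the work) avoids the duplicates:

        SELECT p.[Company],
               r.[AccountHealth],
               p.[CompID]
        FROM [SNA_Parent_Accounts] p
        LEFT OUTER JOIN [Monthly_HIP_Reports] r
               ON r.[CompID] = p.[CompID]
              AND r.[YearMonth] = @ToDtRFC        -- join only the selected month
        WHERE p.[Classification] IN ('Business', 'Business Ihn')
          AND p.[Status] = 'active'
          AND p.[Region] = 'east'
        ORDER BY p.[Company];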

    Read the article

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called Name:

        public class Foo : BusinessObjectBase
        {
            ...
            public virtual string Name { get; set; }
        }

    For some reason, when I fetch Foo objects, NHibernate seems to apply lazy property loading (for simple properties, not associations). The following code generates n+1 SQL statements, of which the first fetches only the ids and the remaining n fetch the Name for each record:

        ISession session = ...
        IQuery query = session.CreateQuery(queryString);
        ITransaction tx = session.BeginTransaction();
        List<Foo> result = new List<Foo>();
        foreach (Foo foo in query.Enumerable())
        {
            result.Add(foo);
        }
        tx.Commit();
        session.Close();

    produces:

        NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470
        NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473

    Similarly, the following code leads to a LazyLoadingException after the session is closed:

        ISession session = ...
        ITransaction tx = session.BeginTransaction();
        Foo result = session.Load<Foo>(id);
        tx.Commit();
        session.Close();
        Console.WriteLine(result.Name);

    According to this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException by calling NHibernateUtil.Initialize(foo), but the even worse part is the n+1 SQL statements, which bring my application to its knees. This is what the mapping looks like:

        <class name="Foo" table="V1_FOO">
            ...
            <property name="Name" column="NAME"/>
        </class>

    BTW: the abstract BusinessObjectBase base class encapsulates the ID property, which serves as the internal identifier.
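
    A hedged explanation that fits the symptoms (based on how NHibernate's query API generally behaves, not on anything in the post): IQuery.Enumerable() deliberately selects only the identifiers up front and then initializes each entity on demand, which is exactly the n+1 pattern shown; List() hydrates the objects in a single select. Likewise, session.Load<T>() returns an uninitialized proxy, whereas session.Get<T>() hits the database immediately. A sketch:

        ISession session = ...;
        using (ITransaction tx = session.BeginTransaction())
        {
            // one SQL statement with all columns, instead of ids + n lookups
            IList<Foo> result = session.CreateQuery(queryString).List<Foo>();

            // fetches the row now, so Name stays available after the session closes
            Foo one = session.Get<Foo>(id);
            tx.Commit();
        }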

    Read the article

  • Rails database relationships

    - by Danny McClelland
    Hi Everyone, I have three models that I want to interact with each other: Kase, Person and Company. I have (I think) set up the relationships correctly:

        class Kase < ActiveRecord::Base
          # HAS ONE COMPANY
          has_one :company
          # HAS MANY PERSONS
          has_many :persons

        class Person < ActiveRecord::Base
          belongs_to :company

        class Company < ActiveRecord::Base
          has_many :persons
          def to_s; companyname; end

    I have put the select field on the "create new Kase" view and the "create new Person" view as follows:

        <li>Company<span><%= f.select :company_id, Company.all %></span></li>

    All of the above successfully shows a drop-down menu dynamically populated with the company names from Companies. What I am trying to do is display the contact details of the Company record within the Kase and Person show.html.erb. For example, if I have a company called "Acme, Inc." and create a new Kase called "Random Case", choosing "Acme, Inc." from the companies drop-down on the create-new-case page, I would then want to display "Acme, Inc." along with "Acme, Inc. Mobile" etc. on the "Random Case" show.html.erb. I hope this makes sense to somebody! Thanks, Danny
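
    A hedged sketch of how this is usually wired up (assumptions on my part: the kases table has a company_id column and Company has name/mobile attributes, neither of which is confirmed in the post). Since the Kase stores company_id, the natural association is belongs_to rather than has_one, and the show view can then walk it:

        class Kase < ActiveRecord::Base
          belongs_to :company      # kases table carries company_id
          has_many :persons
        end

        class Company < ActiveRecord::Base
          has_many :kases
          has_many :persons
        end

        <%# app/views/kases/show.html.erb %>
        <p>Company: <%= @kase.company.name %></p>
        <p>Mobile:  <%= @kase.company.mobile %></p>

    The form select is also often written as f.collection_select(:company_id, Company.all, :id, :name) so that the submitted value is the company's id.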

    Read the article

  • Doing without partial commits the "Mercurial way"

    - by David Moles
    Subversion shop considering switching to Mercurial, trying to figure out in advance what all the complaints from developers are going to be. There's one fairly common use case here that I can't see how to handle. I'm working on some largish feature, and I have a significant part of the code -- or possibly several significant parts of the code -- in pieces all over the garage floor, totally unsuitable for checkin, maybe not even compiling. An urgent bugfix request comes in. The fix is nice and local and doesn't touch any of the code I've been working on. I make the fix in my working copy. Now what? I've looked at "Mercurial cherry picking changes for commit" and "best practices in mercurial: branch vs. clone, and partial merges?" and all the suggestions seem to be extensions of varying complexity, from Record and Shelve to Queues. The fact that there apparently isn't any core functionality for this makes me suspect that in some sense this working style is Doing It Wrong. What would a Mercurial-like solution to this use case look like?
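
    For reference, a hedged sketch of the commands those suggestions usually boil down to (the shelve command comes from a bundled extension, and availability depends on the Mercurial version):

        # shelve the unfinished feature work, fix, commit, bring it back
        hg shelve
        # ... make the bugfix ...
        hg commit -m "urgent bugfix"
        hg unshelve

        # or the extension-free route: one clone per task
        hg clone myrepo ../myrepo-bugfix
        cd ../myrepo-bugfix
        # ... fix, commit, push from the clean clone ...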

    Read the article

  • How to get around DnsRecordListFree error in .NET Framework 4.0?

    - by Greg Finzer
    I am doing an MX record lookup. I am getting an error when calling DnsRecordListFree in the .NET Framework 4.0. I am using Windows 7. How do I get around it? Here is the error:

        System.MethodAccessException: Attempt by security transparent method to call native code through method.

    Here is my code:

        [DllImport("dnsapi", EntryPoint = "DnsQuery_W", CharSet = CharSet.Unicode, SetLastError = true, ExactSpelling = true)]
        private static extern int DnsQuery([MarshalAs(UnmanagedType.VBByRefStr)]ref string pszName,
            QueryTypes wType, QueryOptions options, int aipServers, ref IntPtr ppQueryResults, int pReserved);

        [DllImport("dnsapi", CharSet = CharSet.Auto, SetLastError = true)]
        private static extern void DnsRecordListFree(IntPtr pRecordList, int FreeType);

        public List<string> GetMXRecords(string domain)
        {
            List<string> records = new List<string>();
            IntPtr ptr1 = IntPtr.Zero;
            IntPtr ptr2 = IntPtr.Zero;
            MXRecord recMx;
            try
            {
                int result = DnsQuery(ref domain, QueryTypes.DNS_TYPE_MX, QueryOptions.DNS_QUERY_BYPASS_CACHE, 0, ref ptr1, 0);
                if (result != 0)
                {
                    if (result == 9003)
                    {
                        // No record exists
                    }
                    else
                    {
                        // Some other error
                    }
                }
                for (ptr2 = ptr1; !ptr2.Equals(IntPtr.Zero); ptr2 = recMx.pNext)
                {
                    recMx = (MXRecord)Marshal.PtrToStructure(ptr2, typeof(MXRecord));
                    if (recMx.wType == 15)
                    {
                        records.Add(Marshal.PtrToStringAuto(recMx.pNameExchange));
                    }
                }
            }
            finally
            {
                DnsRecordListFree(ptr1, 0);
            }
            return records;
        }
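
    A hedged note on the usual cause (not from the post): under the .NET 4 security-transparency model, transparent code may not call native code, and an assembly marked with [AllowPartiallyTrustedCallers] or [SecurityTransparent] makes its methods transparent by default. Two common ways out are to opt the assembly back into the older rules, or to mark the calling method so it is allowed to reach the P/Invokes:

        // Option 1: assembly-level, fall back to the CLR v2 transparency rules
        [assembly: System.Security.SecurityRules(System.Security.SecurityRuleSet.Level1)]

        // Option 2: allow this method to call the (implicitly critical) P/Invokes
        [System.Security.SecuritySafeCritical]
        public List<string> GetMXRecords(string domain)
        {
            // ... body unchanged ...
        }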

    Read the article

  • jQuery: find an XML element based on the value of one of its children

    - by NateD
    I'm working on a simple XML phonebook app to learn jQuery, and I can't figure out how to do something like this: when the user enters the first name of a contact in a textbox, I want to find the entire record of that person. The XML looks like this:

        <phonebook>
          <person>
            <number> 555-5555</number>
            <first_name>Evelyn</first_name>
            <last_name>Remington</last_name>
            <address>Edge of the Abyss</address>
            <image>path/to/image</image>
          </person>
          <person>
            <number>+34 1 6444 333 2223230</number>
            <first_name>Max</first_name>
            <last_name>Muscle</last_name>
            <address>Mining Belt</address>
            <image>path/to/image</image>
          </person>
        </phonebook>

    and the best I've been able to do with the jQuery is something like this:

        var myXML;
        function searchXML(){
            $.ajax({
                type: "GET",
                url: "phonebook.xml",
                dataType: "xml",
                success: function(xml){myXML = $("xml").find("#firstNameBox").val())}
            });
        }

    What I want it to do is return the entire <person> element so I can iterate through and display all that person's information. Any help would be appreciated.
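
    A sketch of one way to do the lookup once the XML arrives (assumes the textbox has id firstNameBox, as the snippet above implies):

        function searchXML() {
            $.ajax({
                type: "GET",
                url: "phonebook.xml",
                dataType: "xml",
                success: function (xml) {
                    var name = $("#firstNameBox").val();
                    // keep only the <person> whose <first_name> matches the box
                    var person = $(xml).find("person").filter(function () {
                        return $(this).children("first_name").text() === name;
                    });
                    person.children().each(function () {
                        console.log(this.nodeName + ": " + $(this).text());
                    });
                }
            });
        }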

    Read the article

  • PHP/JavaScript limiting the number of checkboxes

    - by Carl294
    Hi everyone, I'm trying to limit the number of checkboxes that can be checked — in this case only three. When using plain HTML this works fine. The code can be seen below.

    HTML example:

        <td><input type=checkbox name=ckb value=2 onclick='chkcontrol()';></td><td>Perl</td>

    JavaScript function:

        <script type="text/javascript">
        function chkcontrol(j) {
            var total = 0;
            for (var i = 0; i < document.form1.ckb.length; i++) {
                if (document.form1.ckb[i].checked) {
                    total = total + 1;
                }
                if (total > 3) {
                    alert("Please Select only three");
                    document.form1.ckb[j].checked = false;
                    return false;
                }
            }
        }
        </script>

    The problem appears when I replace the fixed HTML values with values from a MySQL database. All the information appears correctly and can be posted to another page via a submit button. However, it seems the value assigned to each record from the database is not making its way to the JavaScript function.

        <td><input name="checkbox[]" type="checkbox" value="<?php echo $rows['TCA_QID'];?>" onclick="chkcontrol();"></td>

    I have tried changing the name in the JavaScript function to match the checkbox name. Any advice would be greatly appreciated. Thanks
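
    A hedged sketch of one way to make the limit work with the PHP-generated boxes (assuming the form is still named form1): the generated inputs are named checkbox[], not ckb, so document.form1.ckb no longer finds anything, and the onclick passes no index. Passing the clicked element itself avoids both problems:

        <td><input name="checkbox[]" type="checkbox"
                   value="<?php echo $rows['TCA_QID']; ?>"
                   onclick="chkcontrol(this);"></td>

        <script type="text/javascript">
        function chkcontrol(box) {
            var boxes = document.getElementsByName('checkbox[]');
            var total = 0;
            for (var i = 0; i < boxes.length; i++) {
                if (boxes[i].checked) {
                    total++;
                }
            }
            if (total > 3) {
                alert('Please select only three');
                box.checked = false;   // undo the click that exceeded the limit
                return false;
            }
        }
        </script>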

    Read the article

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

        CREATE TABLE Contact (
            ID   varchar(15),
            NAME varchar(30)
        );

        CREATE TABLE Address (
            ID         varchar(15),
            CONTACT_ID varchar(15),
            NAME       varchar(50)
        );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above — all pretty standard stuff so far.

    The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In that case I do not have the DB-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since everything is kept in the Identity Map in memory.

    Question: what is a good approach to address this, avoiding unnecessary DB round trips?

    Some ideas — retrieve the last ID: I can do a call to the database to retrieve the last ID, like:

        SELECT Auto_increment FROM information_schema.tables WHERE table_name = 'Contact';

    But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this, I would need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs), all in the current transaction context.
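
    A hedged note on the usual alternatives (not from the post): most engines can hand back the generated key on the same connection as part of the insert, which avoids the information_schema query and its race conditions, and fits an "insert parent, then fix up children at flush time" flow inside the Unit of Work:

        -- MySQL: LAST_INSERT_ID() is scoped to the current connection
        INSERT INTO Contact (NAME) VALUES ('Alice');
        SELECT LAST_INSERT_ID();

        -- PostgreSQL: the insert itself can return the key
        INSERT INTO Contact (NAME) VALUES ('Alice') RETURNING ID;

        -- Oracle: RETURNING ... INTO a bind variable works the same way with sequences
        INSERT INTO Contact (ID, NAME) VALUES (contact_seq.NEXTVAL, 'Alice')
        RETURNING ID INTO :new_id;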

    Read the article

  • Improving I/O performance in C++ programs [external merge sort]

    - by Ajay
    I am currently working on a project involving external merge sort using replacement selection and k-way merge. I have implemented the project in C++ (it runs on Linux). It's very simple and right now deals only with fixed-size records. For reading and writing I use the (i/o)fstream classes. After executing the program for a few iterations, I noticed that I/O read blocks for requests of a size greater than 4K (the typical block size); in fact, giving buffer sizes greater than 4K causes performance to decrease. The output operations do not seem to need buffering; Linux seems to take care of buffering output. So I issue a write(record) instead of maintaining a special buffer of writes and then flushing them out at once using write(records[]). But the performance of the application does not seem to be great. How could I improve the performance? Should I maintain special I/O threads to take care of reading blocks, or are there existing C++ classes that already provide this abstraction (something like BufferedInputStream in Java)?
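
    A hedged sketch of two common levers (assumptions: fixed-size records and a plain ifstream, as described; file name and record size are made up): give the stream a much larger buffer before opening it, and read many records per call with read() instead of one at a time:

        #include <fstream>
        #include <vector>

        int main() {
            const std::size_t kBufSize = 1 << 20;              // 1 MiB stream buffer
            std::vector<char> streamBuf(kBufSize);

            std::ifstream in;
            in.rdbuf()->pubsetbuf(&streamBuf[0], streamBuf.size());  // before open()
            in.open("run_0.dat", std::ios::binary);

            const std::size_t kRecordSize = 128;               // assumed record size
            std::vector<char> block(kRecordSize * 4096);       // ~4096 records per read
            while (in.read(&block[0], block.size()) || in.gcount() > 0) {
                std::streamsize got = in.gcount();
                // feed got / kRecordSize records into the k-way merge here
            }
            return 0;
        }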

    Read the article

  • Adding defaults and indexes to a script/generate command in a Rails Template?

    - by charliepark
    I'm trying to set up a Rails template that would allow for comprehensive set-up of a specific Rails app. Using Pratik Naik's overview (http://m.onkey.org/2008/12/4/rails-templates), I was able to set up a couple of scaffolds and models, with a line that looks something like this:

        generate("scaffold", "post", "title:string", "body:string")

    I'm now trying to add in Delayed Job, which normally has a migration file that looks like this:

        create_table :delayed_jobs, :force => true do |table|
          table.integer  :priority, :default => 0   # Allows some jobs to jump to the front of the queue
          table.integer  :attempts, :default => 0   # Provides for retries, but still fail eventually.
          table.text     :handler                   # YAML-encoded string of the object that will do work
          table.text     :last_error                # reason for last failure (See Note below)
          table.datetime :run_at                    # When to run. Could be Time.now for immediately, or sometime in the future.
          table.datetime :locked_at                 # Set when a client is working on this object
          table.datetime :failed_at                 # Set when all retries have failed (actually, by default, the record is deleted instead)
          table.string   :locked_by                 # Who is working on this object (if locked)
          table.timestamps
        end

    So what I'm trying to do with the Rails template is to add the :default => 0 into the master template file. I know that the rest of the template's command should look like this:

        generate("migration", "createDelayedJobs",
                 "priority:integer", "attempts:integer", "handler:text", "last_error:text",
                 "run_at:datetime", "locked_at:datetime", "failed_at:datetime", "locked_by:string")

    Where would I put (or rather, what is the syntax to add) the :default values in that? And if I wanted to add an index, what's the best way to do that?
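
    A hedged sketch of one way around it (an assumption on my part, not from the post: since the migration generator arguments don't carry defaults or indexes, the template can write the migration file itself with the template API's file helper; the index columns are also just an example):

        file "db/migrate/#{Time.now.utc.strftime('%Y%m%d%H%M%S')}_create_delayed_jobs.rb", <<-RUBY
        class CreateDelayedJobs < ActiveRecord::Migration
          def self.up
            create_table :delayed_jobs, :force => true do |t|
              t.integer  :priority, :default => 0
              t.integer  :attempts, :default => 0
              t.text     :handler
              t.text     :last_error
              t.datetime :run_at
              t.datetime :locked_at
              t.datetime :failed_at
              t.string   :locked_by
              t.timestamps
            end
            add_index :delayed_jobs, [:priority, :run_at]
          end

          def self.down
            drop_table :delayed_jobs
          end
        end
        RUBY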

    Read the article

  • Getting the first of a GROUP BY clause in SQL

    - by Michael Bleigh
    I'm trying to implement single-column regionalization for a Rails application and I'm running into some major headaches with a complex SQL need. For this system, a region can be represented by a country code (e.g. us), a continent code that is uppercase (e.g. NA), or NULL, indicating the "default" information. I need to group these items by some relevant information such as a foreign key (we'll call it external_id). Given a country and its continent, I need to be able to select only the most specific region available. So if records exist with the country code, I select them; if not, I want records with the continent code; failing that, I want records with a NULL code so I can receive the default values. So far I've figured that I may be able to use a generated CASE statement to get an arbitrary sort order, something like this:

        SELECT *,
               CASE region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END AS region_sort
        FROM my_table
        WHERE region IN ('us', 'NA') OR region IS NULL
        GROUP BY external_id
        ORDER BY region_sort

    The problem is that without an aggregate function, the actual data returned by the GROUP BY for a given row seems to be untameable. How can I massage this query to make it return only the first record of each region_sort-ordered group?
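
    A hedged sketch of the usual shape of the fix (not from the post): compute the best (lowest) region_sort per external_id first, then join back so only the rows at that rank survive:

        SELECT t.*
        FROM my_table t
        JOIN (
            SELECT external_id,
                   MIN(CASE region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END) AS best_sort
            FROM my_table
            WHERE region IN ('us', 'NA') OR region IS NULL
            GROUP BY external_id
        ) best
          ON best.external_id = t.external_id
         AND best.best_sort = CASE t.region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END
        WHERE t.region IN ('us', 'NA') OR t.region IS NULL;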

    Read the article

  • OO model for NSXMLParser when the delegate is not self

    - by richard
    Hi, I am struggling with the correct design for the delegates of NSXMLParser. In order to build my table of Foos, I need to make two types of web service calls: one for the whole table and one for each row. It's essentially a master query then a detail query, except the master-query result XML doesn't return enough information, so I then need to query the detail for each row. I'm not dealing with enormous amounts of data. Anyway — previously I've just used:

        NSXMLParser *parser = [[NSXMLParser alloc] init];
        [parser setDelegate:self];
        [parser parse];

    and implemented all the appropriate delegate methods in whatever class I'm in. In an attempt at cleanliness, I've now created two separate delegate classes and done something like:

        NSXMLParser *xp = [[NSXMLParser alloc] init];
        MyMasterXMLParserDelegate *masterParserDelegate = [[MyMasterXMLParserDelegate alloc] init];
        [xp setDelegate:masterParserDelegate];
        [xp parse];

    In addition to being cleaner (in my opinion, at least), this also means the -parser:didStartElement: implementations don't spend most of their time trying to figure out which XML they're parsing. So now the real crux of the problem: before I split out the delegates, the main class that also implemented the delegate methods had a class-level NSMutableArray that I would just put my objects-created-from-XML into when -parser:didEndElement: found the end of each record. Now that the delegates are in separate classes, I can't figure out how to have -parser:didEndElement: in the detail delegate class "return" the created object to the calling class — at least, not in a clean OO way. I'm sure I could do it with all sorts of nasty class methods. Does the question make sense? Thanks.
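
    A hedged sketch of one common shape for this (all names here are illustrative, not from the post): give the detail parser delegate a small callback protocol of its own, so every finished object is handed back to whoever owns the parse instead of accumulating inside the parser delegate:

        @class MyDetailXMLParserDelegate;

        @protocol MyDetailParserOwner <NSObject>
        - (void)detailParser:(MyDetailXMLParserDelegate *)parser didParseRecord:(id)record;
        @end

        @interface MyDetailXMLParserDelegate : NSObject
        @property (nonatomic, assign) id<MyDetailParserOwner> owner;
        @end

        // inside -parser:didEndElement:... when a record's closing tag is seen:
        [self.owner detailParser:self didParseRecord:finishedFoo];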

    Read the article

  • How do I serialise a graph in Java without getting StackOverflowException?

    - by Tim Cooper
    I have a graph structure in Java ("graph" as in "edges and nodes") and I'm attempting to serialise it. However, I get "StackOverflowException", despite significantly increasing the JVM stack size. I did some googling, and apparently this is a well-known limitation of Java serialisation: it doesn't work for deeply nested object graphs such as long linked lists — it uses a stack record for each link in the chain, and it doesn't do anything clever such as a breadth-first traversal, so you very quickly get a stack overflow. The recommended solution is to customise the serialisation code by overriding readObject() and writeObject(), but this seems a little complex to me. (It may or may not be relevant, but I'm storing a bunch of fields on each edge in the graph, so I have a class JuNode which contains a member ArrayList<JuEdge> links; i.e. there are two classes involved, rather than plain object references from one node to another. It shouldn't matter for the purposes of the question.) My question is threefold:

    (a) Why don't the implementors of Java rectify this limitation, or are they already working on it? (I can't believe I'm the first person to ever want to serialise a graph in Java.)

    (b) Is there a better way? Is there some drop-in alternative to the default serialisation classes that does it in a cleverer way?

    (c) If my best option is to get my hands dirty with low-level code, does someone have an example of graph-serialisation Java source code I can use to learn how to do it?
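
    On (c), a hedged sketch of the usual custom-serialisation workaround (JuNode/JuEdge and the links field come from the question; everything else, including the graph class and payload fields, is assumed): make the inter-object references transient so the default mechanism never recurses along them, and write the edges as a flat index table instead:

        import java.io.*;
        import java.util.*;

        class JuNode implements Serializable {
            String label;
            transient List<JuEdge> links = new ArrayList<JuEdge>(); // rebuilt on read
        }

        class JuEdge implements Serializable {
            int weight;                    // example payload field
            transient JuNode from, to;     // written as node indices below
        }

        class JuGraph implements Serializable {
            List<JuNode> nodes = new ArrayList<JuNode>();
            List<JuEdge> edges = new ArrayList<JuEdge>();

            private void writeObject(ObjectOutputStream out) throws IOException {
                out.defaultWriteObject();            // shallow now: no reference chains
                for (JuEdge e : edges) {
                    out.writeInt(nodes.indexOf(e.from));
                    out.writeInt(nodes.indexOf(e.to));
                }
            }

            private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
                in.defaultReadObject();
                for (JuNode n : nodes) n.links = new ArrayList<JuEdge>();
                for (JuEdge e : edges) {
                    e.from = nodes.get(in.readInt());
                    e.to   = nodes.get(in.readInt());
                    e.from.links.add(e);
                }
            }
        }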

    Read the article

  • Summarising grouped records in a dataframe in R (...again)

    - by monch1962
    Hello all, (I tried to ask this question earlier today, but later realised I had over-simplified it; the answers I received were correct, but I couldn't use them because of that over-simplification. Here's my second attempt...)

    I have a data frame in R that looks like:

        "Timestamp", "Source", "Target", "Length", "Content"
        0.1 , P1 , P2 , 5 , "ABCDE"
        0.2 , P1 , P2 , 3 , "HIJ"
        0.4 , P1 , P2 , 4 , "PQRS"
        0.5 , P2 , P1 , 2 , "ZY"
        0.9 , P2 , P1 , 4 , "SRQP"
        1.1 , P1 , P2 , 1 , "B"
        1.6 , P1 , P2 , 3 , "DEF"
        2.0 , P2 , P1 , 3 , "IJK"

    and I want to convert this to:

        "StartTime", "EndTime", "Duration", "Source", "Target", "Length", "Content"
        0.1 , 0.4 , 0.3 , P1 , P2 , 12 , "ABCDEHIJPQRS"
        0.5 , 0.9 , 0.4 , P2 , P1 , 6 , "ZYSRQP"
        1.1 , 1.6 , 0.5 , P1 , P2 , 4 , "BDEF"

    Trying to put this into English: I want to group consecutive records with the same Source and Target together, then print out a single record per group showing the StartTime, EndTime and Duration (= EndTime - StartTime) for that group, along with the sum of the Lengths for that group and a concatenation of the Content (which will all be strings) in that group. The Timestamp values always increase throughout the data frame. I had a look at melt/recast and have a feeling it could be used to solve the problem, but I couldn't get my head around the documentation. I suspect it's possible to do this within R, but I really don't know where to start. In a pinch I could export the data frame and do it in e.g. Python, but I'd prefer to stay within R if possible. Thanks in advance for any assistance you can provide.
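
    A hedged sketch of one base-R approach (df is assumed to be the data frame shown above): build a run id that increments whenever the Source/Target pair changes, then summarise each run:

        key <- paste(df$Source, df$Target)
        run <- cumsum(c(TRUE, key[-1] != key[-length(key)]))   # consecutive-group id

        out <- do.call(rbind, lapply(split(df, run), function(g) {
          data.frame(StartTime = min(g$Timestamp),
                     EndTime   = max(g$Timestamp),
                     Duration  = max(g$Timestamp) - min(g$Timestamp),
                     Source    = g$Source[1],
                     Target    = g$Target[1],
                     Length    = sum(g$Length),
                     Content   = paste(g$Content, collapse = ""),
                     stringsAsFactors = FALSE)
        }))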

    Read the article

  • Best way to use PL/SQL Package Cursors from Pro*C

    - by Greg Reynolds
    I have a cursor defined in PL/SQL, and I am wondering what the best way to use it from Pro*C is. Normally, for a cursor defined in Pro*C, you would do:

        EXEC SQL DECLARE curs CURSOR FOR SELECT 1 FROM DUAL;
        EXEC SQL OPEN curs;
        EXEC SQL FETCH curs INTO :foo;
        EXEC SQL CLOSE curs;

    I was hoping that the same (or similar) syntax would work for a packaged cursor. For example, I have a package MyPack with the declaration:

        type MyType is record (X integer);
        cursor MyCurs(x in integer) return MyType;

    Now I have in my Pro*C code a rather unsatisfying piece of embedded PL/SQL that opens the cursor and does the fetching etc., as I couldn't get the first style of syntax to work:

        EXEC SQL EXECUTE
          DECLARE
            XTable is table of MyPack.MyType;
          BEGIN
            OPEN MyPack.MyCurs(:param);
            FETCH MyPack.MyCurs INTO XTable;
            CLOSE MyPack.MyCurs;
          END;
        END-EXEC;

    Does anyone know if there is a more "pure" Pro*C approach?
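
    A hedged sketch of the cursor-variable route (this assumes the package can expose a procedure that opens a REF CURSOR; the procedure name is illustrative): Pro*C's sql_cursor type lets the PL/SQL side open the cursor while the Pro*C side fetches from it with ordinary EXEC SQL statements:

        sql_cursor my_curs;                 /* Pro*C cursor variable */
        EXEC SQL ALLOCATE :my_curs;
        EXEC SQL EXECUTE
          BEGIN
            MyPack.OpenMyCurs(:my_curs, :param);   /* does OPEN ... FOR inside the package */
          END;
        END-EXEC;
        EXEC SQL FETCH :my_curs INTO :x;
        EXEC SQL CLOSE :my_curs;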

    Read the article

  • Database abstraction/adapters for ruby

    - by Stiivi
    What database abstractions/adapters are you using in Ruby? I am mainly interested in data-oriented features, not in those with object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:

    - a simple, clean and non-ambiguous API
    - data selection (obviously), filtering and aggregation
    - raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
    - an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way, with reflection for this (get schema from table)
    - database reflection: get the list of table columns, their database storage types and perhaps the adapter's abstracted types
    - table creation and deletion
    - being able to work with other tables (insert, update) in a loop enumerating a selection from another table, without being required to fetch all records from the table being enumerated

    The purpose is to manipulate data whose structure is unknown at the time of writing the code, which is the opposite of object mapping, where the structure (or most of it) is usually well known. I do not need the object-mapping overhead. What are the options, including back-ends for object-mapping libraries?
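
    For reference, a hedged sketch of how a few of these look in Sequel, the library the question already uses (the connection string and table names are made up):

        require 'sequel'
        DB = Sequel.connect('postgres://user:pass@localhost/mydb')

        ds = DB[:some_schema__some_table]        # schema-qualified: some_schema.some_table
        ds.where { col3 > 10 }
          .select_map([:col1, :col2, :col3])     # rows come back as arrays, not hashes

        DB.schema(:some_table)                   # columns with db_type and abstracted :type
        DB.tables                                # reflection: table names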

    Read the article

  • Client-server synchronization pattern / algorithm?

    - by tm_lv
    I have a feeling that there must be client-server synchronization patterns out there, but I totally failed to google one up. The situation is quite simple: the server is the central node that multiple clients connect to, and they manipulate the same data. Data can be split into atoms; in case of conflict, whatever is on the server has priority (to avoid getting the user into conflict resolution). Partial synchronization is preferred due to potentially large amounts of data. Are there any patterns or good practices for such a situation — or, if you don't know of any, what would be your approach?

    Below is how I now think to solve it: parallel to the data, a modification journal will be held, with all transactions timestamped. When a client connects, it receives all changes since its last check, in consolidated form (the server goes through the lists and removes additions that are followed by deletions, merges updates for each atom, etc.). Et voila, we are up to date. An alternative would be keeping a modification date for each record and, instead of performing data deletes, just marking records as deleted. Any thoughts?
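
    A hedged sketch of the journal-table shape this usually takes (table and column names are made up): each write appends a row, and a client sync request is "everything after my last sync point", which the server can consolidate per atom before sending:

        CREATE TABLE change_log (
          id         BIGINT PRIMARY KEY AUTO_INCREMENT,
          atom_id    BIGINT NOT NULL,
          op         CHAR(1) NOT NULL,       -- 'I' insert, 'U' update, 'D' delete
          changed_at TIMESTAMP NOT NULL
        );

        -- raw feed for one client; the server then collapses multiple rows per atom
        SELECT atom_id, op, changed_at
        FROM change_log
        WHERE changed_at > :last_sync
        ORDER BY changed_at;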

    Read the article
