Search Results

Search found 6772 results on 271 pages for 'dave child'.


  • Cascading updates with business key equality: Hibernate best practices?

    - by Traphicone
    I'm new to Hibernate, and while there are literally tons of examples to look at, there seems to be so much flexibility here that it's sometimes very hard to narrow all the options down to the best way of doing things. I've been working on a project for a little while now, and despite reading through a lot of books, articles, and forums, I'm still left with a bit of a head scratcher. Any veteran advice would be very appreciated.

    So, I have a model involving two classes with a one-to-many relationship from parent to child. Each class has a surrogate primary key and a uniquely constrained composite business key.

        <class name="Container">
          <id name="id" type="java.lang.Long">
            <generator class="identity"/>
          </id>
          <properties name="containerBusinessKey" unique="true" update="false">
            <property name="name" not-null="true"/>
            <property name="owner" not-null="true"/>
          </properties>
          <set name="items" inverse="true" cascade="all-delete-orphan">
            <key column="container" not-null="true"/>
            <one-to-many class="Item"/>
          </set>
        </class>

        <class name="Item">
          <id name="id" type="java.lang.Long">
            <generator class="identity"/>
          </id>
          <properties name="itemBusinessKey" unique="true" update="false">
            <property name="type" not-null="true"/>
            <property name="color" not-null="true"/>
          </properties>
          <many-to-one name="container" not-null="true" update="false" class="Container"/>
        </class>

    The beans behind these mappings are as boring as you can possibly imagine--nothing fancy going on. With that in mind, consider the following code:

        Container c = new Container("Things", "Me");
        c.addItem(new Item("String", "Blue"));
        c.addItem(new Item("Wax", "Red"));

        Transaction t = session.beginTransaction();
        session.saveOrUpdate(c);
        t.commit();

    Everything works fine the first time, and both the Container and its Items are persisted. If the above code block is executed again, however, Hibernate throws a ConstraintViolationException--duplicate values for the "name" and "owner" columns. Because the new Container instance has a null identifier, Hibernate assumes it is an unsaved transient instance. This is expected but not desired. Since the persistent and transient Container objects have the same business key values, what we really want is to issue an update.

    It is easy enough to convince Hibernate that our new Container instance is the same as our old one. With a quick query we can get the identifier of the Container we'd like to update, and set our transient object's identifier to match.

        Container c = new Container("Things", "Me");
        c.addItem(new Item("String", "Blue"));
        c.addItem(new Item("Wax", "Red"));

        Query query = session.createSQLQuery("SELECT id FROM Container " +
                                             "WHERE name = ? AND owner = ?");
        query.setString(0, c.getName());
        query.setString(1, c.getOwner());
        BigInteger id = (BigInteger) query.uniqueResult();
        if (id != null) {
          c.setId(id.longValue());
        }

        Transaction t = session.beginTransaction();
        session.saveOrUpdate(c);
        t.commit();

    This almost satisfies Hibernate, but because the one-to-many relationship from Container to Item cascades, the same ConstraintViolationException is also thrown for the child Item objects. My question is: what is the best practice in this situation? It is highly recommended to use surrogate primary keys, and it is also recommended to use business key equality. When you put these two recommendations into practice together, however, two of the greatest conveniences of Hibernate--saveOrUpdate and cascading operations--seem to be rendered almost completely useless.

    As I see it, I have only two options:

    1. Manually fetch and set the identifier for each object in the mapping. This clearly works, but for even a moderately sized schema it is a lot of extra work which it seems Hibernate could easily be doing.
    2. Write a custom interceptor to fetch and set object identifiers on each operation. This looks cleaner than the first option but is rather heavy-handed, and it seems wrong to me that you should be expected to write a plug-in which overrides Hibernate's default behavior for a mapping which follows the recommended design.

    Is there a better way? Am I making completely the wrong assumptions? I'm hoping that I'm just missing something. Thanks.

    Read the article

  • How do I test database-related code with NUnit?

    - by Michael Haren
    I want to write unit tests with NUnit that hit the database. I'd like to have the database in a consistent state for each test. I thought transactions would allow me to "undo" each test, so I searched around and found several articles from 2004-05 on the topic:

    http://weblogs.asp.net/rosherove/archive/2004/07/12/180189.aspx
    http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx
    http://davidhayden.com/blog/dave/archive/2004/07/12/365.aspx
    http://haacked.com/archive/2005/12/28/11377.aspx

    These seem to revolve around implementing a custom attribute for NUnit which builds in the ability to roll back DB operations after each test executes. That's great, but... Does this functionality exist somewhere in NUnit natively? Has this technique been improved upon in the last 4 years? Is this still the best way to test database-related code?

    Edit: it's not that I want to test my DAL specifically; it's more that I want to test pieces of my code that interact with the database. For these tests to be "no-touch" and repeatable, it'd be awesome if I could reset the database after each one. Further, I want to ease this into an existing project that has no testing in place at the moment. For that reason, I can't practically script up a database and data from scratch for each test.

    Read the article

  • JSF Render response programmatically

    - by Shamik
    I have one parent page with a parentManagedBean (in session scope). On click of a button on this parent page, a popup comes up which has a childManagedBean (in request scope). The childManagedBean holds a reference to the parentManagedBean through JSF's managed property feature. On this popup window, the user selects some option which populates a large value object class. I use the managed property of the childManagedBean to copy the values from this large object to the parentManagedBean.

    The problem is: the parent page shows a link, on click of which the popup comes up; on selection in the popup, the popup disappears and the values are set on the parentManagedBean. So far so good, but the newly set values need to appear on the parent page. This is where I am stuck. How do I programmatically re-render the parent/master page when I am in the child managed bean? Is there a way I can get a handle to the parent page and refresh it?

    Read the article

  • How to create a server control on another ASPX file

    - by salvationishere
    I am developing a C#/SQL ASP.NET web application in VS 2008. Currently, I am transferring control from one ASPX file to another:

        if (uploadFile.PostedFile.ContentLength > 0)
        {
            inputfile = System.IO.File.ReadAllText(path);
            Context.Items["Message"] = inputfile; // Page1
            Server.Transfer("DataMatch.aspx");    // Page1
        }

    However, it fails on this Server.Transfer line after adding runat="server" to the Table element in the DataMatch.aspx file, like so:

        <table width="50%" id="tMain" runat="server">

    After making this a server control, I rebuilt the app, and now when I run it I get the exception: "Error executing child request for DataMatch.aspx". But I need this table to be a server control so I can make it invisible programmatically if a certain condition occurs. How else can I programmatically make this table invisible?

    Read the article

  • Do not show partial items in a WPF listbox

    - by David Martin
    I've tried Google and I've tried Bing to no avail. Does anyone here have an idea on how to prevent partial items from appearing in a listbox in WPF? In case that does not make sense here is an example: Listbox is 200 pixels tall - each item is 35 pixels tall. That means I can show 5.7 items. 7/10 of an item is undesirable. I'd like to limit it to showing only 5 items. The user could then scroll to see the additional items. Should I A) try to dynamically size the listbox or ScrollViewer ViewPort so that it fits perfectly? Or B) implement a custom panel that would not arrange a child whose desired height is more than the remaining vertical space? Any thoughts would be greatly appreciated. Last note: If anyone knows of a 3rd party control (listbox or grid) that does this I would be interested in that as well.

    Read the article

  • Get the XPath to an XElement?

    - by Chris
    I've got an XElement deep within a document. Given the XElement (and XDocument?), is there an extension method to get its full (i.e. absolute, e.g. /root/item/element/child) XPath? E.g. myXElement.GetXPath()? EDIT: Okay, looks like I overlooked something very important. Whoops! The index of the element needs to be taken into account. See my last answer for the proposed corrected solution.

    Read the article

  • Help with Btree homework

    - by Phenom
    I need to do a preorder traversal of a Btree, and among other things, print the following information for each page (which is the same thing as a node):

    - The B-Tree page number
    - The value of each B-Tree page pointer (e.g., address, byte offset, RRN)

    My questions are:

    1. How do you figure out the byte offset? What is it offset from?
    2. Isn't the RRN the same as the page number?

    Note: A Btree is NOT A BINARY TREE. Btrees can have multiple keys in each node, and a node with n keys has n+1 child pointers.
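    A minimal sketch of how the byte offset is typically computed in a file-based B-tree with fixed-size pages, assuming pages are stored back to back after an optional fixed-size header (the names and layout here are illustrative, not taken from the assignment):

        // Hypothetical file layout: [header][page 0][page 1]...
        // The offset is measured from the start of the B-tree file.
        long page_byte_offset(long rrn, long page_size, long header_size = 0) {
            return header_size + rrn * page_size;
        }
        // e.g. page_byte_offset(3, 512) == 1536 when there is no header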

    Read the article

  • Is a signal sent with kill to a parent thread guaranteed to be processed before the next statement?

    - by Jonathan M Davis
    Okay, so if I'm running in a child thread on Linux (using pthreads, if that matters), and I run the following call:

        kill(getpid(), someSignal);

    it will send the given signal to the parent of the current thread. My question: Is it guaranteed that the parent will then immediately get the CPU and process the signal (killing the app if it's a SIGKILL, or doing whatever else if it's some other signal) before the statement following kill() is run? Or is it possible - even probable - that whatever command follows kill() will run before the signal is processed by the parent thread?
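    A minimal single-process sketch for observing the ordering empirically, assuming SIGUSR1 and a handler installed instead of the default action; it does not settle the guarantee question, it just makes the timing visible:

        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        static volatile sig_atomic_t got_signal = 0;

        static void on_usr1(int signum) { (void)signum; got_signal = 1; }

        int main() {
            signal(SIGUSR1, on_usr1);    // install a handler so delivery is observable
            kill(getpid(), SIGUSR1);     // send the signal to this process
            // If delivery happened before kill() returned, got_signal is already 1 here.
            printf("flag after kill(): %d\n", (int)got_signal);
            return 0;
        }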

    Read the article

  • How to check if a variable is defined in a Master file in ASP.NET MVC

    - by Mortanis
    I've got a Site.Master file I've created to be my template for the majority of the site, with a navigation. This navigation is dynamically created, based on a recursive Entity (called Page) - Pages with a parentId of 0 are top level, and naturally each child carries its parent's Id in that field. I've created a quick little HTML Helper that accepts the Id of a Page and generates the nav by doing a foreach on the children that have a parentId matching the passed Id.

    On the majority of the site, I want the Site.Master to use a parentId of 0, but if I'm on a strongly typed View displaying a Page, I naturally want to use the Id of that page. Is there a way to do such conditional logic in a Site.Master (and does that violate MVC rules)? "If I'm on a strongly typed Page of /Page/{Id}, use the Id to render the nav, else use 0."

    Read the article

  • MySQL table returning JSON that needs formatting for an iPhone UITableView

    - by Michael Robinson
    I have a PHP query that returns the following JSON format from a table:

        [{"memberid":"18",
          "useridFK":"30",
          "loginName":"Johnson",
          "name":"Frank",
          "age":"23",
          "place":"School",
         },

    It needs the following format:

        [{"memberid":"18"
            {
             "useridFK":"30",
             "loginName":"Johnson",
             "name":"Frank",
             "age":"23",
             "place":"School",}
         },

    I can't figure out where/how to convert this. Where would I do the formatting: (1) in the PHP return, (2) in the JSON instructions for deserialization, or (3) in some kind of Obj-C coding instruction?

    My end use is a simple drill-down table using the NSObject, so when I select a "memberid" row, I'll get the child/detail list on the next UITableView. My Data.plist will look like the following:

        Root: Dictionary
          Rows: Array
            Item 0: Dictionary
              Title: String    18
              Children: Array
                Item 0: Dictionary
                  Title: String    30
        etc.

    Thanks in advance, this site rocks.

    Read the article

  • Rails AR validates_uniqueness_of against polymorphic relationship

    - by aaronrussell
    Is it possible to validate the uniqueness of a child model's attribute scoped against a polymorphic relationship? For example, I have a model called Field that belongs to fieldable:

        class Field < ActiveRecord::Base
          belongs_to :fieldable, :polymorphic => :true
          validates_uniqueness_of :name, :scope => :fieldable_id
        end

    I have several other models (Pages, Items) which have many Fields. So what I want is to validate the uniqueness of the field name against the parent model, but the problem is that occasionally a Page and an Item share the same ID number, causing the validations to fail when they shouldn't. Am I just doing this wrong, or is there a better way to do this?

    Read the article

  • Where to start when programming process synchronization algorithms like clone/fork, semaphores

    - by David
    I am writing a program that simulates process synchronization. I am trying to implement the fork and semaphore techniques in C++, but am having trouble starting off. Do I just create a process and send it to fork from the very beginning? Is the program just going to be one infinite loop that goes back and forth between parent/child processes? And how do you create the idea of 'shared memory' in C++, explicit memory address or just some global variable? I just need to get the overall structure/idea of the flow of the program. Any references would be appreciated.
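    A minimal Linux-flavoured sketch of the usual shape of such a simulation, assuming a POSIX process-shared semaphore and an anonymous shared mapping for the "shared memory" part (the names and loop counts are illustrative):

        #include <semaphore.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>
        #include <cstdio>

        struct Shared {
            sem_t sem;    // process-shared semaphore lives inside the shared mapping
            int counter;  // data both processes update
        };

        int main() {
            // Shared memory visible to parent and child: an anonymous MAP_SHARED mapping.
            Shared* shared = static_cast<Shared*>(
                mmap(nullptr, sizeof(Shared), PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0));
            sem_init(&shared->sem, /*pshared=*/1, /*value=*/1);
            shared->counter = 0;

            pid_t pid = fork();              // both processes run the loop below
            for (int i = 0; i < 1000; ++i) {
                sem_wait(&shared->sem);      // enter critical section
                ++shared->counter;
                sem_post(&shared->sem);      // leave critical section
            }

            if (pid != 0) {                  // parent: wait for the child, then report
                wait(nullptr);
                std::printf("counter = %d\n", shared->counter);  // 2000 when synchronized
            }
            return 0;
        }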

    Read the article

  • FlexNativeMenu Problem in Air Application

    - by glatour
    I am trying to add a FlexNativeMenu to my AIR application and I have some problems... When I set it on the WindowedApplication, I get null errors in child controls, so I tried to set this.menu = myMainMenu (which is my FlexNativeMenu) in the creationComplete event, but the menu doesn't show up... I tried to show an alert message before setting the menu, and then the menu appeared in the application. Is there a method I can call to force the application to "redraw" itself in the creationComplete event? Thanks!

    Read the article

  • Ruby and Forking

    - by Cory
    Quick question about Ruby forking - I ran across a bit of forking code in Resque earlier that was sexy as hell but tripped me up for a few. I'm hoping someone can give me a little more detail about what's going on here. Specifically, it would appear that forking spawns a child (expected) and kicks it straight into the 'else' side of my condition (less expected). Is that expected behavior? A Ruby idiom? My IRB hack is here:

        def fork
          return true if @cant_fork
          begin
            if Kernel.respond_to?(:fork)
              Kernel.fork
            else
              raise NotImplementedError
            end
          rescue NotImplementedError
            @cant_fork = true
            nil
          end
        end

        def do_something
          puts "Starting do_something"
          if foo = fork
            puts "we are forking from #{Process.pid}"
            Process.wait
          else
            puts "no need to fork, let's get to work: #{Process.pid} under #{Process.ppid}"
            puts "doing it"
          end
        end

        do_something
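    For comparison, the same pattern in C++ on POSIX (an illustrative sketch only): fork() returns the child's pid in the parent and 0 in the child, so the child falls through to the else branch, much like the nil return does in the Ruby above.

        #include <sys/wait.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            pid_t pid = fork();      // child's pid in the parent, 0 in the child
            if (pid != 0) {
                std::printf("we are forking from %d\n", (int)getpid());
                wait(nullptr);       // parent waits, like Process.wait
            } else {
                std::printf("child %d under parent %d\n", (int)getpid(), (int)getppid());
            }
            return 0;
        }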

    Read the article

  • Project change makes QTP fail

    - by Onnesh
    We are using 2 or more projects in an application to be opened. For example, HT1000 and HT1200 will be opened by the application; the objects are the same (or common) for both projects. The code uses the values in the Excel framework for running the test cases as the parent when identifying the child objects, e.g. Window("HT1000").Dialog("parts").Click("OK"), but when we just change the parent name in the Excel framework to "HT1200", the objects for HT1200 are not getting accessed. How do we resolve this? Do we need to add the HT1200 project and its objects to the QTP object repository again?

    Read the article

  • Unix Piping using Fork and Dup

    - by Jacob
    Let's say within my program I want to execute two child processes, one to execute an "ls -al" command and then pipe that into the "wc" command and display the output on the terminal. How can I do this using pipe file descriptors? This is the code I have written so far; an example would be greatly helpful.

        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
            int pipefd[2];
            pipe(pipefd);

            if (fork() == 0) {
                /* first child: ls -al writes into the pipe */
                dup2(pipefd[1], STDOUT_FILENO);
                close(pipefd[0]);
                close(pipefd[1]);
                execl("/bin/ls", "ls", "-al", (char *)NULL);
                exit(EXIT_FAILURE);
            }

            if (fork() == 0) {
                /* second child: wc reads from the pipe */
                dup2(pipefd[0], STDIN_FILENO);
                close(pipefd[0]);
                close(pipefd[1]);
                execl("/usr/bin/wc", "wc", (char *)NULL);
                exit(EXIT_FAILURE);
            }

            /* parent: close both ends so wc sees EOF, then wait for the children */
            close(pipefd[0]);
            close(pipefd[1]);
            wait(NULL);
            wait(NULL);
            return 0;
        }

    Read the article

  • ArrangeOverride Vs Storyboard Animation

    - by user275561
    Now I may not grasp the idea, or this could be a mistake, so feel free to correct me. I am doing a bubble breaker game in Silverlight. When a bubble in a column gets burst, I want to animate the bubbles above it to simulate that they are being dropped. Each bubble knows its Row and Column location, and that gets updated in the ViewModel. Now my question is: should I call InvalidateArrange() on the Canvas from the ViewModel so it rearranges the bubbles, or just have a storyboard animate the TranslateY? In my ArrangeOverride method I have something like this:

        Rect childBounds = new Rect(CalculateLeft(dataContext.Column),
                                    CalculateTop(dataContext.Row),
                                    BubbleSize, BubbleSize);
        child.Arrange(childBounds);

    If there is a better way, let me know. I am trying to learn the best practices.

    Read the article

  • Deletion procedure for a Binary Search Tree

    - by Metz
    Consider the deletion procedure on a BST when the node to delete has two children. Let's say I always replace it with the node holding the minimum key in its right subtree. The question is: is this procedure commutative? That is, does deleting x and then y have the same result as deleting first y and then x? I think the answer is no, but I can't find a counterexample, nor figure out any valid reasoning.

    EDIT: Maybe I've got to be clearer. Consider the transplant(node x, node y) procedure: it replaces x with y (and its subtree). So, if I want to delete a node (say x) which has two children, I replace it with the node holding the minimum key in its right subtree:

        y = minimum(x.right)
        transplant(y, y.right)  // extracts the minimum (it doesn't have a left child)
        y.right = x.right
        y.left = x.left
        transplant(x, y)

    The question was how to prove the procedure above is not commutative.
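    For reference, a C++ rendering of the two-child deletion step described above, in the CLRS style and assuming nodes carry parent pointers (the extra branch handles the case where the successor is x's direct right child):

        struct Node { int key; Node* left; Node* right; Node* parent; };

        Node* minimum(Node* x) {
            while (x->left) x = x->left;
            return x;
        }

        // Replace the subtree rooted at u with the subtree rooted at v.
        void transplant(Node*& root, Node* u, Node* v) {
            if (!u->parent)                 root = v;
            else if (u == u->parent->left)  u->parent->left = v;
            else                            u->parent->right = v;
            if (v) v->parent = u->parent;
        }

        // Delete x when it has two children: splice in the minimum of its right subtree.
        void erase_two_children(Node*& root, Node* x) {
            Node* y = minimum(x->right);    // successor; it has no left child
            if (y->parent != x) {
                transplant(root, y, y->right);
                y->right = x->right;
                y->right->parent = y;
            }
            transplant(root, x, y);
            y->left = x->left;
            y->left->parent = y;
            // x can now be reclaimed by the caller
        }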

    Read the article

  • Get all window handles for a process

    - by Jeremy
    Using Microsoft Spy++, I can see the following windows that belong to a process.

    Process XYZ window handles, displayed in tree form just like Spy++ gives me: A B C D E F G H I J K

    I can get the process, and the MainWindowHandle property points to the handle for window F. If I enumerate the child windows using EnumChildWindows, I can get a list of window handles for G through K, but I can't figure out how to find the window handles for A through D. How can I enumerate windows that are not children of the handle specified by MainWindowHandle of the Process object?

    To enumerate, I'm using the win32 call:

        [System.Runtime.InteropServices.DllImport(strUSER32DLL)]
        public static extern int EnumChildWindows(IntPtr hWnd, WindowCallBack pEnumWindowCallback, int iLParam);
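    One common way to pick up the top-level windows that MainWindowHandle misses is to walk all top-level windows with EnumWindows and keep the ones whose process id matches; a minimal Win32/C++ sketch of that idea (the helper names are illustrative):

        #include <windows.h>
        #include <vector>

        struct EnumState {
            DWORD pid;                  // process we are interested in
            std::vector<HWND> handles;  // every top-level window it owns
        };

        static BOOL CALLBACK CollectTopLevel(HWND hwnd, LPARAM lparam) {
            EnumState* state = reinterpret_cast<EnumState*>(lparam);
            DWORD windowPid = 0;
            GetWindowThreadProcessId(hwnd, &windowPid);
            if (windowPid == state->pid)
                state->handles.push_back(hwnd);
            return TRUE;  // keep enumerating
        }

        std::vector<HWND> TopLevelWindowsForProcess(DWORD pid) {
            EnumState state{pid, {}};
            EnumWindows(CollectTopLevel, reinterpret_cast<LPARAM>(&state));
            return state.handles;  // EnumChildWindows can then be run on each handle
        }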

    Read the article

  • Adding a custom control to a page, then adding multiple custom children into that one, null user controls

    - by Rickjaah
    Hello all, while nerding my way through the day again, I came across a problem concerning adding children to an already added child control. I can add the controls, but when I try to use the controls inside the added control, they all return null. This is the method:

        protected override void CreateChildControls()
        {
            UserControl uControl = (UserControl)LoadControl("~/controls/TwoColumn.ascx");
            PlaceHolder holder = uControl.FindControl("phrContentMiddle") as PlaceHolder;
            holder.Controls.Add(LoadControl("~/controls/ImageShowControl"));
        }

    When I try to call any type of button/UserControl inside the ImageShowControl, they all return null. Is this something in the page lifecycle? If so, what is the right way to accomplish this?

    Read the article

  • how to listen for changes in Contact Database

    - by hap497
    Hi, I am trying to listen for any change in the contact database, so I create my content observer, which is a child class of ContentObserver:

        private class MyContentObserver extends ContentObserver {
            public MyContentObserver() {
                super(null);
            }

            @Override
            public void onChange(boolean selfChange) {
                super.onChange(selfChange);
                System.out.println("Calling onChange");
            }
        }

        MyContentObserver contentObserver = new MyContentObserver();
        context.getContentResolver().registerContentObserver(People.CONTENT_URI, true, contentObserver);

    But when I use 'EditContactActivity' to change the contact database, my onChange function does not get called.

    Read the article

  • What are the useful UNIX functions that MS doesn't implement? And why? [closed]

    - by prosseek
    When programming with Python, I came across some functions that are not implemented on Windows; os.fork() may be one of them. UNIX came before WinNT, so the WinNT developers (most notably Dave Cutler) must have known about the features and functions of UNIX. But, to me, it seems that MS didn't like UNIX so much that they mistakenly/intentionally skipped or distorted some of the useful UNIX functions/features; e.g. /abc/def in UNIX vs. \abc\def in Windows, as an easy example. And when I read the Windows System Programming book, I felt uncomfortable, as the Windows system functions seem nothing more than a tweak of UNIX. (I might be wrong.)

    What are those functions/features that MS OSes don't have, but UNIX originated? Is there any reason for this? Do they just want to differentiate from the UNIX world? Or do they think some of the UNIX functions are unnecessary? Is Windows a tweak of UNIX? Or are there any great OS features that were invented at MS to make Windows better than UNIX?

    Read the article

  • Making RDoc Ruby Gem Default on Mac OS X

    - by jkale
    Hey all, I've recently installed RDoc (version 2.4.3) through RubyGems to replace the one shipped with Mac OS X (version 1.0.1). Unfortunately, I can still only use RDoc 1.0.1 when I run "rdoc" at the command line. rdoc -v returns:

        RDoc V1.0.1 - 20041108

    I tried amending the $PATH variable so that the first entry points to the RDoc 2.4.3 folder, but no luck. I couldn't find anything about this online either, so I thought I'd ask here. Cheers!

    Update: Running "gem list -d --version 1.0.1 rdoc" returns:

        *** LOCAL GEMS ***

        rdoc (2.4.3)
            Authors: Eric Hodel, Dave Thomas, Phil Hagelberg, Tony Strauss
            Rubyforge: http://rubyforge.org/projects/rdoc
            Homepage: http://rdoc.rubyforge.org
            Installed at: /usr/local/lib/ruby/gems/1.8

            RDoc is an application that produces documentation for one or more Ruby source files

    Therefore, it's definitely the Mac OS X version of RDoc that's interfering with the Gems version.

    Update 2: Using `bash --debugger rdoc`, I found out that the old version of RDoc was in /opt/local/bin. I deleted it and added my gems directory to my $PATH:

        export PATH=/usr/local/lib/ruby/gems/1.8/gems/

    I now have a fresh working copy of the latest RDoc!

    Read the article

  • Retrieving Gtk::Widget's relative position: get_allocation() doesn't work

    - by a-v
    I need to retrieve the position of a Gtk::Widget relative to its parent, a Gtk::Table. Most sources (e.g. http://library.gnome.org/devel/gtk-faq/stable/x642.html) say that one needs to call Gtk::Widget::get_allocation(). However, the returned Gtk::Allocation object always contains x = -1, y = -1, width = 1, height = 1. I have to note that this happens before the Gtk::Table object is actually exposed and rendered. A call to show_all_children() or check_resize(), which I would expect to recalculate child widget geometry, doesn't help. What am I doing wrong? Thanks in advance.
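    A minimal gtkmm sketch of one way to read a widget's allocation only after layout has actually happened, by listening for size-allocate instead of asking at construction time (illustrative only):

        #include <gtkmm.h>
        #include <iostream>

        class PositionWatcher {
        public:
            explicit PositionWatcher(Gtk::Widget& widget) {
                // size-allocate fires after the parent container has laid the widget
                // out, so the allocation is meaningful here, unlike at construction.
                widget.signal_size_allocate().connect(
                    sigc::mem_fun(*this, &PositionWatcher::on_allocated));
            }

        private:
            void on_allocated(Gtk::Allocation& allocation) {
                std::cout << "x=" << allocation.get_x()
                          << " y=" << allocation.get_y()
                          << " w=" << allocation.get_width()
                          << " h=" << allocation.get_height() << std::endl;
            }
        };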

    Read the article

  • Sort by an object's type

    - by Richard Levasseur
    Hi all, I have code that statically registers (type, handler_function) pairs at module load time, resulting in a dict like this:

        HANDLERS = {
            str: HandleStr,
            int: HandleInt,
            ParentClass: HandleCustomParent,
            ChildClass: HandleCustomChild,
        }

        def HandleObject(obj):
            for data_type in sorted(HANDLERS.keys(), ???):
                if isinstance(obj, data_type):
                    HANDLERS[data_type](obj)

    where ChildClass inherits from ParentClass. The problem is that, since it's a dict, the order isn't defined - but how do I introspect type objects to figure out a sort key? The resulting order should be child classes followed by super classes (most specific types first), e.g. str comes before basestring, and ChildClass comes before ParentClass. If types are unrelated, it doesn't matter where they go relative to each other.

    Read the article
