Search Results

Search found 6149 results on 246 pages for 'concept mapping'.


  • idiomatic property changed notification in scala?

    - by Jeremy Bell
    I'm trying to find a cleaner alternative (that is idiomatic to Scala) to the kind of thing you see with data-binding in WPF/Silverlight - that is, implementing INotifyPropertyChanged. First, some background: in .NET WPF or Silverlight applications you have the concept of two-way data-binding (that is, binding the value of some element of the UI to a .NET property of the DataContext in such a way that changes to the UI element affect the property, and vice versa). One way to enable this is to implement the INotifyPropertyChanged interface in your DataContext. Unfortunately, this introduces a lot of boilerplate code for any property you add to the "ModelView" type. Here is how it might look in Scala:

        trait IDrawable extends INotifyPropertyChanged {
          protected var drawOrder: Int = 0
          def DrawOrder: Int = drawOrder
          def DrawOrder_=(value: Int) {
            if (drawOrder != value) {
              drawOrder = value
              OnPropertyChanged("DrawOrder")
            }
          }

          protected var visible: Boolean = true
          def Visible: Boolean = visible
          def Visible_=(value: Boolean) = {
            if (visible != value) {
              visible = value
              OnPropertyChanged("Visible")
            }
          }

          def Mutate(): Unit = {
            if (Visible) {
              DrawOrder += 1 // Should trigger the PropertyChanged "Event" of the INotifyPropertyChanged trait
            }
          }
        }

    For the sake of space, let's assume the INotifyPropertyChanged type is a trait that manages a list of callbacks of type (AnyRef, String) => Unit, and that OnPropertyChanged is a method that invokes all those callbacks, passing "this" as the AnyRef and the passed-in String. (This would just be an event in C#.) You can immediately see the problem: that's a ton of boilerplate code for just two properties. I've always wanted to write something like this instead:

        trait IDrawable {
          val Visible = new ObservableProperty[Boolean]('Visible, true)
          val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0)

          def Mutate(): Unit = {
            if (Visible) {
              DrawOrder += 1 // Should trigger the PropertyChanged "Event" of the ObservableProperty class
            }
          }
        }

    I know that I can easily write it like this, if ObservableProperty[T] has Value/Value_= methods (this is the method I'm using now):

        trait IDrawable {
          // on a side note, is there some way to get a Symbol representing the Visible field
          // on the following line, instead of hard-coding it in the ObservableProperty constructor?
          val Visible = new ObservableProperty[Boolean]('Visible, true)
          val DrawOrder = new ObservableProperty[Int]('DrawOrder, 0)

          def Mutate(): Unit = {
            if (Visible.Value) {
              DrawOrder.Value += 1
            }
          }
        }

        // given this implementation of ObservableProperty[T] in my library
        // note: IEvent, Event, and EventArgs are classes in my library for
        // handling lists of callbacks - they work similarly to events in C#
        class PropertyChangedEventArgs(val PropertyName: Symbol) extends EventArgs("")

        class ObservableProperty[T](val PropertyName: Symbol, private var value: T) {
          protected val propertyChanged = new Event[PropertyChangedEventArgs]
          def PropertyChanged: IEvent[PropertyChangedEventArgs] = propertyChanged

          def Value = value
          def Value_=(value: T) {
            if (this.value != value) {
              this.value = value
              propertyChanged(this, new PropertyChangedEventArgs(PropertyName))
            }
          }
        }

    But is there any way to implement the first version using implicits or some other feature/idiom of Scala to make ObservableProperty instances function as if they were regular "properties" in Scala, without needing to call the Value methods?

    The only other thing I can think of is something like this, which is more verbose than either of the above two versions, but is still less verbose than the original:

        trait IDrawable {
          private val visible = new ObservableProperty[Boolean]('Visible, false)
          def Visible = visible.Value
          def Visible_=(value: Boolean): Unit = { visible.Value = value }

          private val drawOrder = new ObservableProperty[Int]('DrawOrder, 0)
          def DrawOrder = drawOrder.Value
          def DrawOrder_=(value: Int): Unit = { drawOrder.Value = value }

          def Mutate(): Unit = {
            if (Visible) {
              DrawOrder += 1
            }
          }
        }
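
    For what it's worth, one avenue worth sketching: an implicit conversion can make reads of an ObservableProperty[T] look like plain values, though writes still need Value_=, since implicits cannot intercept assignment. A minimal sketch, assuming the ObservableProperty class above:

        object ObservableProperty {
          // implicit unwrapping: anywhere a T is expected, an
          // ObservableProperty[T] can be supplied and its current Value is read
          implicit def observableToValue[T](p: ObservableProperty[T]): T = p.Value
        }

    Because this lives in ObservableProperty's companion object, implicit search finds it automatically, so if (Visible) compiles, while DrawOrder += 1 would still have to be written as DrawOrder.Value += 1.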


  • Unlocking a mutex from a different thread (C++)

    - by dan
    I'm using the C++ boost::thread library, which in my case means I'm using pthreads. Officially, a mutex must be unlocked from the same thread which locks it, and I want the effect of being able to lock in one thread and then unlock in another. There are many ways to accomplish this. One possibility would be to write a new mutex class which allows this behavior. For example:

        class inter_thread_mutex {
            bool locked;
            boost::mutex mx;
            boost::condition_variable cv;
        public:
            void lock() {
                boost::unique_lock<boost::mutex> lck(mx);
                while (locked) cv.wait(lck);
                locked = true;
            }
            void unlock() {
                {
                    boost::lock_guard<boost::mutex> lck(mx);
                    if (!locked) error();
                    locked = false;
                }
                cv.notify_one();
            }
            // bool try_lock(); void error(); etc.
        };

    I should point out that the above code doesn't guarantee FIFO access, since if one thread calls lock() while another calls unlock(), this first thread may acquire the lock ahead of other threads which are waiting. (Come to think of it, the boost::thread documentation doesn't appear to make any explicit scheduling guarantees for either mutexes or condition variables.) But let's just ignore that (and any other bugs) for now.

    My question is: if I decide to go this route, would I be able to use such a mutex as a model for the boost Lockable concept? For example, would anything go wrong if I use a boost::unique_lock<inter_thread_mutex> for RAII-style access, and then pass this lock to boost::condition_variable_any.wait(), etc.?

    On one hand I don't see why not. On the other hand, "I don't see why not" is usually a very bad way of determining whether something will work. The reason I ask is that if it turns out that I have to write wrapper classes for RAII locks and condition variables and whatever else, then I'd rather just find some other way to achieve the same effect.
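
    To make the Lockable question concrete, here is a hypothetical usage sketch of the arrangement being asked about: RAII via boost::unique_lock over the custom mutex, handed to a boost::condition_variable_any (which, unlike boost::condition_variable, accepts any Lockable type):

        inter_thread_mutex mtx;
        boost::condition_variable_any cva;
        bool ready = false;

        void consumer() {
            boost::unique_lock<inter_thread_mutex> lk(mtx); // calls mtx.lock()
            while (!ready)
                cva.wait(lk); // calls lk.unlock()/lk.lock() around the wait
            // ... lk's destructor calls mtx.unlock()
        }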


  • Closure Tables - Is this enough data to display a tree view?

    - by James Pitt
    Here is the table I have created by testing the closure table method:

        | id  | parentId | childId | hops |
        | --- | -------- | ------- | ---- |
        | 270 | 6        | 6       | 0    |
        | 271 | 7        | 7       | 0    |
        | 272 | 8        | 8       | 0    |
        | 273 | 9        | 9       | 0    |
        | 276 | 10       | 10      | 0    |
        | 281 | 9        | 10      | 1    |
        | 282 | 7        | 9       | 1    |
        | 283 | 7        | 10      | 2    |
        | 285 | 7        | 8       | 1    |
        | 286 | 6        | 7       | 1    |
        | 287 | 6        | 9       | 2    |
        | 288 | 6        | 10      | 3    |
        | 289 | 6        | 8       | 2    |
        | 293 | 6        | 9       | 1    |
        | 294 | 6        | 10      | 2    |

    I am trying to create a simple tree of this using PHP. There does not seem to be enough data to create the table. For example, when I look purely at parentId = 6:

        - Part 6
          - Part 7
            - ?
            - ?
          - Part 9
            - ?
            - ?

    We know that parts 8 and 10 exist below Part 7 or 9, but not which. We know that part 10 exists at both 3 and 4 nodes deep, but where? If I look at other data in the table it is possible to tell it should be:

        - Part 6
          - Part 7
            - Part 9
              - Part 10
          - Part 9
            - Part 10

    I thought one of the benefits of closure tables was that there was no need for recursive queries? Could you help explain what I am doing wrong?

    EDIT: For clarification, this is a mapping table. There is another table called "parts" which has a column called part_id that correlates to both the parentId and childId columns in the "closure" table. The "id" column in the table above (closure) is just for the purposes of maintaining a primary key; it is not really necessary. The method I have used to create this closure table is described in the following article: http://dirtsimple.org/2010/11/simplest-way-to-do-tree-based-queries.html

    EDIT2: It can have two and three hops. I will explain more easily by assigning names to the items:

        Part 6  = Bicycle
        Part 7  = Gears
        Part 8  = Chain
        Part 9  = Bolt
        Part 10 = Nut

    Nut is part of Bolt. The Bolt and Nut combo exists directly within Bicycle and within Gears, which is part of Bicycle. In relation to what method to use, I have looked at Adjacency, Edges, Enum Paths, Closures, DAGs (networks) and the Nested Set Model. I am still trying to work out what is what, but this is an extremely complex component database where there are multiple parents and any modification to a sub-tree must propagate through the other trees. More importantly, there will be insertions, deletions and tree views for which I wish to avoid recursion during general use, even at the cost of database space and query time during entry.
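
    As a side note on reading the structure back: the hops = 1 rows are exactly the direct parent/child edges, which is what a tree renderer actually needs. A hedged sketch against the tables described above (the part name column is assumed for illustration):

        SELECT c.parentId, c.childId, p.part_name
        FROM closure c
        JOIN parts p ON p.part_id = c.childId
        WHERE c.hops = 1
        ORDER BY c.parentId, c.childId;

    Note that in the data above the same pair can appear at more than one depth (6 -> 9 shows up with hops 1 and hops 2) because the same part genuinely occurs in two branches; closure rows alone do not say which hops = 1 edge a deeper row came through, which is the ambiguity the question runs into.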


  • Silverlight? WPF? or Windows Form?

    - by Amit
    After Silverlight 4.0 was released with the new WPF, I am kind of confused by these technologies: Silverlight? WPF? Windows Forms? The main goals we want to achieve for a BIG business project are the following:

    - Performance
    - Security
    - And platform independence

    If I consider all three points above, then only Silverlight is the option, as I don't want people buying an emulator on Mac OS for WPF or Windows Forms. Now, how good is Silverlight for business applications? I was completely against it when Silverlight 2.0 was in the market, but now it is Silverlight 4.0 and they have provided many new features (but still basics) that are required in any challenging business application.

    Comparing Silverlight and WPF:

    - Silverlight and WPF are very new technologies, and if I had to compare the two then I'd prefer WPF, because it can be considered stable and mature. But it is not the same as Windows Forms.
    - If I go with Silverlight, then I am sure about having to keep updating to the latest version of Silverlight. I remember when we were developing software for version 2.0, we had to create our own framework with dynamic DLL loading, and then the navigation concept. But everything changed once Silverlight 3.0 came out. I don't want that to happen with this new product.
    - If we go with WPF, then we don't get platform independence.

    Now, why not just focus on building it in WPF and then move to Silverlight? As someone (Tim?) from Microsoft has said, the idea is to make Silverlight as close as possible to WPF. But if that is the case, then why is the XAML structure different? I will not be convinced by the argument that the .NET framework for Silverlight is too small... well, the difference is coming from the namespace?

    I was searching on this subject and found "Microsoft WPF-Silverlight Comparison Whitepaper v1.1.pdf". This guide is very good and gives you the ins and outs of how we can build common apps that run on both. But again, it compares Silverlight 2, not 4.

    I am sure many architects/developers/project managers must be facing similar questions in their premises and want to initiate this discussion, if it has not been initiated already :). We've still got 2 weeks to make this decision, so I'm expecting everyone to participate, gurus?


  • Recommendations for a C++ polymorphic, seekable, binary I/O interface

    - by Trevor Robinson
    I've been using std::istream and ostream as a polymorphic interface for random-access binary I/O in C++, but it seems suboptimal in numerous ways:

    - 64-bit seeks are non-portable and error-prone due to streampos/streamoff limitations; currently using boost/iostreams/positioning.hpp as a workaround, but it requires vigilance
    - Missing operations such as truncating or extending a file (a la POSIX ftruncate)
    - Inconsistency between concrete implementations; e.g. stringstream has independent get/put positions whereas filestream does not
    - Inconsistency between platform implementations; e.g. behavior of seeking past the end of a file, or usage of failbit/badbit on errors
    - Don't need all the formatting facilities of stream, or possibly even the buffering of streambuf
    - streambuf error reporting (i.e. exceptions vs. returning an error indicator) is supposedly implementation-dependent in practice

    I like the simplified interface provided by the Boost.Iostreams Device concept, but it's provided as function templates rather than a polymorphic class. (There is a device class, but it's not polymorphic and is just an implementation helper class not necessarily used by the supplied device implementations.) I'm primarily using large disk files, but I really want polymorphism so I can easily substitute alternate implementations (e.g. use stringstream instead of fstream for unit tests) without all the complexity and compile-time coupling of deep template instantiation.

    Does anyone have any recommendations of a standard approach to this? It seems like a common situation, so I don't want to invent my own interfaces unnecessarily. As an example, something like java.nio.FileChannel seems ideal. My best solution so far is to put a thin polymorphic layer on top of Boost.Iostreams devices. For example:

        class my_istream {
        public:
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) = 0;
            virtual std::streamsize read(char* s, std::streamsize n) = 0;
            virtual void close() = 0;
        };

        template <class T>
        class boost_istream : public my_istream {
        public:
            boost_istream(const T& device) : m_device(device) {
            }
            virtual std::streampos seek(stream_offset off, std::ios_base::seekdir way) {
                return boost::iostreams::seek(m_device, off, way);
            }
            virtual std::streamsize read(char* s, std::streamsize n) {
                return boost::iostreams::read(m_device, s, n);
            }
            virtual void close() {
                boost::iostreams::close(m_device);
            }
        private:
            T m_device;
        };
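
    As a hypothetical illustration of the substitution this buys, the same polymorphic interface can sit over a file-backed device in production and an in-memory device in unit tests (the device names are real Boost.Iostreams types; the surrounding usage is only a sketch):

        #include <boost/iostreams/device/file.hpp>
        #include <boost/iostreams/device/array.hpp>

        // production: wrap a file-backed device
        boost::iostreams::file_source file("data.bin");
        my_istream* prod = new boost_istream<boost::iostreams::file_source>(file);

        // tests: wrap an in-memory buffer behind the same interface
        const char buf[] = "unit test bytes";
        boost::iostreams::array_source mem(buf, sizeof buf);
        my_istream* test = new boost_istream<boost::iostreams::array_source>(mem);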


  • Non Document Centric SharePoint Workflow

    - by Dan Revell
    SharePoint workflows are document-centric in that the base thing the workflow runs on has to be a thing, be it a document or just a list item. The workflow itself is task-based, i.e. stuff a user has to do. Now, I can put any sort of code in these tasks that I want to, and even put complex InfoPath forms in for the user to perform the task. This has been fine on all my previous workflows. But what if I want the tasks to be actual official forms themselves? The item that the workflow runs on is just some abstract concept, like an event. An example could be: an accident has happened. There isn't an accident form, but a whole set of forms that need to be completed by different people.

    Task forms aren't really a nice way to go, because they lock all the forms into the task list. You can only access the forms by not deleting the tasks when complete and going to the workflow summary and following the task links to the InfoPath forms, or by going straight to the tasks list and filtering on particular "accidents". These are official documents, so ideally there would be a library for each type of document, and the workflow would orchestrate the completion of the right forms. It would mean each task would have to create a new blank form and then link the user to that form. The user would go complete the form, but then have to go back to the task form and click "yes, I've completed it" before the workflow could progress. Well, that is, short of the workflow monitoring the forms library form for some completion trigger. But then it all gets messy with the user experience: from clicking the link in the task email, to opening the InfoPath task form, to clicking the link in the subsequent InfoPath library form, and then returning back through these forms on completion. It just gets messy trying to retrofit this non-document-centric sort of workflow into SharePoint.

    I would really appreciate any input on what might be the best way to do this:

    - Store the forms as task forms
    - Store the forms as library forms and create/link from the task forms
    - Store the forms as different InfoPath views, and use a forms library; the workflow would trigger variables that progress the view the InfoPath form shows
    - Use the same form template for both task forms and a forms library, and when a task form is complete, copy the XML into the forms library to have an official record outside of the workflow

    Thanks


  • Choosing a distributed shared memory solution

    - by mindas
    I have a task to build a prototype for a massively scalable distributed shared memory (DSM) app. The prototype would only serve as a proof-of-concept, but I want to spend my time most effectively by picking the components which would be used in the real solution later on. The aim of this solution is to take data input from an external source, churn it, and make the result available for a number of frontends. Those "frontends" would just take the data from the cache and serve it without extra processing. The amount of frontend hits on this data can literally be millions per second.

    The data itself is very volatile; it can (and does) change quite rapidly. However, the frontends should see "old" data until the newest has been processed and cached. The processing and writing is done by a single (redundant) node while other nodes only read the data. In other words: no read-through behaviour.

    I was looking into solutions like memcached, however this particular one doesn't fulfil all our requirements, which are listed below:

    - The solution must at least have a Java client API which is reasonably well maintained, as the rest of the app is written in Java and we are seasoned Java developers.
    - The solution must be totally elastic: it should be possible to add new nodes without restarting other nodes in the cluster.
    - The solution must be able to handle failover. Yes, I realize this means some overhead, but the overall served data size isn't big (1G max), so this shouldn't be a problem. By "failover" I mean seamless execution without hardcoding/changing server IP address(es), as happens in memcached clients when a node goes down.
    - Ideally it should be possible to specify the degree of data overlapping (e.g. how many copies of the same data should be stored in the DSM cluster).
    - There is no need to permanently store all the data, but there might be a need for post-processing of some of the data (e.g. serialization to the DB).
    - Price. Obviously we prefer free/open source, but we're happy to pay a reasonable amount if a solution is worth it. In any case, a paid 24hr/day support contract is a must.
    - The whole thing has to be hosted in our data centers, so SaaS offerings like Amazon SimpleDB are out of scope. We would only consider this if no other options were available.
    - Ideally the solution would be strictly consistent (as in CAP); however, eventual consistency can be considered as an option.

    Thanks in advance for any ideas.


  • If array is thread safe, what is the issue with this function?

    - by Ajay Sharma
    I am totally lost with the things that are happening with my code. It made me think about, and try to get clear on, the array thread-safety concept. Are NSMutableArray or NSMutableDictionary thread safe? While my code is under execution, the values in the main array get changed, although they have already been added to the array. Please try to execute this code on your system; it's very easy. I am not able to get out of this trap. It is the function where the array is returned.

    What I am looking to do is build this structure:

        Array (main array)
          Dictionary with key value (multiple dictionaries in the main array)
            The above dictionary has 9 arrays in it.

    This is the structure I am developing for the array. But even before that:

        #define TILE_ROWS 3
        #define TILE_COLUMNS 3
        #define TILE_COUNT (TILE_ROWS * TILE_COLUMNS)

        - (NSArray *)FillDataInArray:(int)counter
        {
            NSMutableArray *temprecord = [[NSMutableArray alloc] init];
            for (int i = 0; i < counter; i++)
            {
                if ([temprecord count] <= TILE_COUNT)
                {
                    NSMutableDictionary *d1 = [[NSMutableDictionary alloc] init];
                    [d1 setValue:[NSString stringWithFormat:@"%d/2011", i + 1] forKey:@"serial_data"];
                    [d1 setValue:@"Friday 13 Sep 12:00 AM" forKey:@"date_data"];
                    [d1 setValue:@"Description Details " forKey:@"details_data"];
                    [d1 setValue:@"Subject Line" forKey:@"subject_data"];
                    [temprecord addObject:d1];
                    d1 = nil;
                    [d1 release];
                    if ([temprecord count] == TILE_COUNT)
                    {
                        NSMutableDictionary *holderKey = [[NSMutableDictionary alloc] initWithObjectsAndKeys:temprecord, [NSString stringWithFormat:@"%d", [casesListArray count] + 1], nil];
                        [self.casesListArray addObject:holderKey];
                        [holderKey release];
                        holderKey = nil;
                        [temprecord removeAllObjects];
                    }
                }
                else
                {
                    [temprecord removeAllObjects];
                    NSMutableDictionary *d1 = [[NSMutableDictionary alloc] init];
                    [d1 setValue:[NSString stringWithFormat:@"%d/2011", i + 1] forKey:@"serial_data"];
                    [d1 setValue:@"Friday 13 Sep 12:00 AM" forKey:@"date_data"];
                    [d1 setValue:@"Description Details " forKey:@"details_data"];
                    [d1 setValue:@"Subject Line" forKey:@"subject_data"];
                    [temprecord addObject:d1];
                    d1 = nil;
                    [d1 release];
                }
            }
            return temprecord;
            [temprecord release];
        }

    What is the problem with this code? Every time there are 9 records in the array, it just replaces the whole array value instead of just the one for the specific key value.


  • Coroutines in Java

    - by JUST MY correct OPINION
    I would like to do some stuff in Java that would be clearer if written using concurrent routines, but for which full-on threads are serious overkill. The answer, of course, is the use of coroutines, but there doesn't appear to be any coroutine support in the standard Java libraries, and a quick Google on it brings up tantalising hints here or there, but nothing substantial. Here's what I've found so far:

    - JSIM has a coroutine class, but it looks pretty heavyweight and conflates, seemingly, with threads at points. The point of this is to reduce the complexity of full-on threading, not to add to it. Further, I'm not sure that the class can be extracted from the library and used independently.
    - Xalan has a coroutine set class that does coroutine-like stuff, but again it's dubious whether this can be meaningfully extracted from the overall library. It also looks like it's implemented as a tightly-controlled form of thread pool, not as actual coroutines.
    - There's a Google Code project which looks like what I'm after, but if anything it looks more heavyweight than using threads would be. I'm basically nervous of something that requires software to dynamically change the JVM bytecode at runtime to do its work. This looks like overkill, and like something that will cause more problems than coroutines would solve. Further, it looks like it doesn't implement the whole coroutine concept. By my glance-over, it gives a yield feature that just returns to the invoker. Proper coroutines allow yields to transfer control to any known coroutine directly. Basically this library, heavyweight and scary as it is, only gives you support for iterators, not fully-general coroutines.
    - The promisingly-named Coroutine for Java fails because it's a platform-specific (obviously using JNI) solution.

    And that's about all I've found. I know about the native JVM support for coroutines in the Da Vinci Machine, and I also know about the JNI continuations trick for doing this. These are not really good solutions for me, however, as I would not necessarily have control over which VM or platform my code would run on. (Indeed, any bytecode manipulation system would suffer similar problems; it would be best were this pure Java if possible. Runtime bytecode manipulation would restrict me from using this on Android, for example.)

    So does anybody have any pointers? Is this even possible? If not, will it be possible in Java 7?


  • Many-To-Many Query with Linq-To-NHibernate

    - by rjygraham
    Ok guys (and gals), this one has been driving me nuts all night and I'm turning to your collective wisdom for help. I'm using Fluent NHibernate and Linq-to-NHibernate as my data access story, and I have the following simplified DB structure:

        CREATE TABLE [dbo].[Classes](
            [Id] [bigint] IDENTITY(1,1) NOT NULL,
            [Name] [nvarchar](100) NOT NULL,
            [StartDate] [datetime2](7) NOT NULL,
            [EndDate] [datetime2](7) NOT NULL,
            CONSTRAINT [PK_Classes] PRIMARY KEY CLUSTERED ([Id] ASC)
        )

        CREATE TABLE [dbo].[Sections](
            [Id] [bigint] IDENTITY(1,1) NOT NULL,
            [ClassId] [bigint] NOT NULL,
            [InternalCode] [varchar](10) NOT NULL,
            CONSTRAINT [PK_Sections] PRIMARY KEY CLUSTERED ([Id] ASC)
        )

        CREATE TABLE [dbo].[SectionStudents](
            [SectionId] [bigint] NOT NULL,
            [UserId] [uniqueidentifier] NOT NULL,
            CONSTRAINT [PK_SectionStudents] PRIMARY KEY CLUSTERED ([SectionId] ASC, [UserId] ASC)
        )

        CREATE TABLE [dbo].[aspnet_Users](
            [ApplicationId] [uniqueidentifier] NOT NULL,
            [UserId] [uniqueidentifier] NOT NULL,
            [UserName] [nvarchar](256) NOT NULL,
            [LoweredUserName] [nvarchar](256) NOT NULL,
            [MobileAlias] [nvarchar](16) NULL,
            [IsAnonymous] [bit] NOT NULL,
            [LastActivityDate] [datetime] NOT NULL,
            PRIMARY KEY NONCLUSTERED ([UserId] ASC)
        )

    I omitted the foreign keys for brevity, but essentially this boils down to:

    - A Class can have many Sections.
    - A Section can belong to only 1 Class, but can have many Students.
    - A Student (aspnet_Users) can belong to many Sections.

    I've set up the corresponding model classes and Fluent NHibernate mapping classes, and all of that is working fine. Here's where I'm getting stuck: I need to write a query which will return the sections a student is enrolled in, based on the student's UserId and the dates of the class. Here's what I've tried so far:

        // 1.
        var sections = (from s in this.Session.Linq<Sections>()
                        where s.Class.StartDate <= DateTime.UtcNow
                           && s.Class.EndDate > DateTime.UtcNow
                           && s.Students.First(f => f.UserId == userId) != null
                        select s);

        // 2.
        var sections = (from s in this.Session.Linq<Sections>()
                        where s.Class.StartDate <= DateTime.UtcNow
                           && s.Class.EndDate > DateTime.UtcNow
                           && s.Students.Where(w => w.UserId == userId).FirstOrDefault().Id == userId
                        select s);

    Obviously, 2 above will fail miserably if there are no students matching userId for classes with the current date between their start and end dates... but I just wanted to try. The filters for the Class StartDate and EndDate work fine, but the many-to-many relation with Students is proving to be difficult. Every time I try running the query I get an ArgumentNullException with the message:

        Value cannot be null.
        Parameter name: session

    I've considered going down the path of making the SectionStudents relation a model class with a reference to Section and a reference to Student, instead of a many-to-many. I'd like to avoid that if I can, and I'm not even sure it would work that way. Thanks in advance to anyone who can help. Ryan
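
    One pattern worth trying, offered as a hedged sketch rather than a confirmed fix: flattening the collection test into a single Any() predicate, which LINQ providers of that era tended to translate more reliably than First()/FirstOrDefault() chains inside a where clause:

        var sections = (from s in this.Session.Linq<Sections>()
                        where s.Class.StartDate <= DateTime.UtcNow
                           && s.Class.EndDate > DateTime.UtcNow
                           && s.Students.Any(st => st.UserId == userId)
                        select s).ToList();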


  • What is a proper way to store site-level global variables in a SharePoint site?

    - by ccomet
    One thing that has driven me nuts about SharePoint 2007 is the apparent inability to have definable settings that apply specifically to a site or site collection itself, and not the content. I mean, you have some pre-defined settings like the Site Logo, the Site Name, and various other things, but there doesn't appear to be anywhere to add new kinds of settings.

    The application I am working on needs to be able to create multiple kinds of "project site collections" that all follow a basic template, but have certain additional settings that apply specifically to that site collection and that one alone. In addition to the standard site name, we also need to define the Project Number, the Project Name, and the Client Name. And given the requests of some of our clients, we also reach a point where we have to have configurable settings that alter how some of the workflows work, like whether files are marked with letters or numbers.

    Our current solution, which I'm hesitant about, has been to store an XML file on the SharePoint server. This file contains one node for each site collection, identified by the URL of the root site. Inside the node are all of the elements that need to be defined for that site collection. When we need them, we have to access the XML file (which will always require SPSecurity.RunWithElevatedPrivileges to access files right on the server) every time, to load it and retrieve the data. There are a lot of automated processes which will have to do this, and I'm hesitant about the stability of this method when we reach hundreds of sites with thousands of files running tens of thousands of workflows, all wanting to access this file. Maybe they're unfounded worries, but I'd rather worry than risk everything breaking in a couple of years.

    I was looking into the SPWeb object and found the AllProperties hashtable. It looks like just the kind of thing which might work, but I don't know how safe it is to be modifying this. I read through both MSDN and the WSS SDK but found nothing that clarified adding completely new properties into AllProperties.

    Is it safe to use AllProperties for this kind of thing? Or is there yet another feature that I am missing which could handle the concept of global variables at the site collection or site scope?
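
    For reference, the property-bag pattern being described looks roughly like this; a minimal sketch assuming the standard SPWeb.AllProperties hashtable, with the key names invented for illustration:

        using (SPSite site = new SPSite("http://server/sites/project"))
        using (SPWeb web = site.RootWeb)
        {
            // write site-collection-scoped settings
            web.AllProperties["ProjectNumber"] = "P-1042";
            web.AllProperties["ClientName"] = "Contoso";
            web.Update(); // persists the property bag

            // read them back later
            string client = web.AllProperties["ClientName"] as string;
        }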


  • Overwhelmed by design patterns... where to begin?

    - by Pete
    I am writing simple prototype code to demonstrate & profile I/O schemes (HDF4, HDF5, HDF5 using parallel IO, NetCDF, etc.) for a physics code. Since the focus is on IO, the rest of the program is very simple:

        class Grid {
        public:
            floatArray x, y, z;
        };

        class MyModel {
        public:
            MyModel(const int &nip1, const int &njp1, const int &nkp1, const int &numProcs);
            Grid grid;
            map<string, floatArray> plasmaVariables;
        };

    where floatArray is a simple class that lets me define arbitrary-dimensioned arrays and do mathematical operations on them (i.e. x + y is point-wise addition). Of course, I could use better encapsulation (write accessors/setters, etc.), but that's not the concept I'm struggling with.

    For the I/O routines, I am envisioning applying simple inheritance:

    - An abstract I/O class defines read & write functions to fill in the "myModel" object
    - HDF4 derived class
    - HDF5
    - HDF5 using parallel IO
    - NetCDF
    - etc...

    The code should read data in any of these formats, then write out to any of these formats. In the past, I would add an AbstractIO member to myModel and create/destroy this object depending on which I/O scheme I want. In this way, I could do something like:

        myModelObj.ioObj->read("input.hdf")
        myModelObj.ioObj->write("output.hdf")

    I have a bit of OOP experience but very little on the design patterns front, so I recently acquired the Gang of Four book "Design Patterns: Elements of Reusable Object-Oriented Software".

    OOP designers: which pattern(s) would you recommend I use to integrate I/O with the myModel object? I am interested in answering this for two reasons:

    1. To learn more about design patterns in general
    2. To apply what I learn to help refactor a large, old, crufty/legacy physics code to be more human-readable & extensible

    I am leaning towards applying the Decorator pattern to myModel, so I can attach the I/O responsibilities dynamically to myModel (i.e. whether to use HDF4, HDF5, etc.). However, I don't feel very confident that this is the best pattern to apply. Reading the Gang of Four book cover to cover before I start coding feels like a good way to develop an unhealthy caffeine addiction. What patterns do you recommend?
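
    For contrast with Decorator, here is a minimal sketch of the Strategy-style arrangement the question already gestures at (an AbstractIO member chosen at runtime); the class and method names are illustrative, not a worked HDF binding:

        #include <string>

        class MyModel; // the model type from above

        class AbstractIO {
        public:
            virtual ~AbstractIO() {}
            virtual void read(const std::string& path, MyModel& model) = 0;
            virtual void write(const std::string& path, const MyModel& model) = 0;
        };

        class Hdf5IO : public AbstractIO {
        public:
            virtual void read(const std::string& path, MyModel& model) { /* HDF5 calls */ }
            virtual void write(const std::string& path, const MyModel& model) { /* HDF5 calls */ }
        };

        // usage: pick the format once, drive everything through the interface
        // AbstractIO* io = new Hdf5IO;
        // io->read("input.hdf", myModelObj);
        // io->write("output.hdf", myModelObj);

    The intuition behind the choice: Decorator stacks responsibilities onto one object, while here exactly one interchangeable format is active at a time, which is the textbook Strategy situation.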


  • LINQ Query Returning Multiple Copies Of First Result

    - by Mike G
    I'm trying to figure out why a simple query in LINQ is returning odd results. I have a view defined in the database. It basically brings together several other tables and does some data munging. It really isn't anything special except for the fact that it deals with a large data set and can be a bit slow. I want to query this view based on a long. The two sample queries below show different queries to this view:

        var la = Runtime.OmsEntityContext.Positions
            .Where(p => p.AccountNumber == 12345678)
            .ToList();

        var deDa = Runtime.OmsEntityContext.Positions
            .Where(p => p.AccountNumber == 12345678)
            .Select(p => new { p.AccountNumber, p.SecurityNumber, p.CUSIP })
            .ToList();

    The first one should hand back a List of Position objects. The second one will be a list of anonymous objects. When I do these queries in Entity Framework, the first one hands me back a list of results where they're all exactly the same. The second query hands me back data where the account number is the one that I queried and the other values differ. This seems to happen on a per-account-number basis, i.e. if I were to query for one account number or another, all the Position objects for one account would have the same value (the first one in the list of Positions for that account), and the second account would have a set of Position objects that all had the same value (again, the first one in its list of Position objects).

    I can write SQL that is in effect the same as either of the two EF queries. They both come back with results (say four) that show the correct data: one account number with different securities numbers.

    Why does this happen??? Is there something that I could be doing wrong, such that if I had four results for the first query above, the first record's data also appears in the 2nd-4th objects??? I cannot fathom what would/could be causing this. I've searched Google for all kinds of keywords and haven't seen anyone with this issue.

    We partial-class out the Positions class for added functionality (smart object) and some smart properties. There are even some constructors that provide some view-model-type support. None of this is invoked in the request (I'm 99% sure of this). However, we use this same pattern all over the app.

    The only thing I can think of is that the mapping in the EDMX is screwy. Is there a way that this would happen if the "primary keys" in the EDMX were not in fact unique, given the way the view is constructed? I'm thinking that the dev who imported this model into the EDMX let the designer auto-select what would be unique. Any help would give a haggard dev some hope!


  • How can I have a Makefile automatically rebuild source files that include a modified header file? (I

    - by Nicholas Flynt
    I have the following makefile that I use to build a program (a kernel, actually) that I'm working on. It's from scratch and I'm learning about the process, so it's not perfect, but I think it's powerful enough at this point for my level of experience writing makefiles.

        AS = nasm
        CC = gcc
        LD = ld

        TARGET  = core
        BUILD   = build
        SOURCES = source
        INCLUDE = include
        ASM     = assembly

        VPATH = $(SOURCES)
        CFLAGS  = -Wall -O -fstrength-reduce -fomit-frame-pointer -finline-functions \
                  -nostdinc -fno-builtin -I $(INCLUDE)
        ASFLAGS = -f elf

        #CFILES = core.c consoleio.c system.c
        CFILES = $(foreach dir,$(SOURCES),$(notdir $(wildcard $(dir)/*.c)))
        SFILES = assembly/start.asm

        SOBJS = $(SFILES:.asm=.o)
        COBJS = $(CFILES:.c=.o)
        OBJS  = $(SOBJS) $(COBJS)

        build : $(TARGET).img

        $(TARGET).img : $(TARGET).elf
            c:/python26/python.exe concat.py stage1 stage2 pad.bin core.elf floppy.img

        $(TARGET).elf : $(OBJS)
            $(LD) -T link.ld -o $@ $^

        $(SOBJS) : $(SFILES)
            $(AS) $(ASFLAGS) $< -o $@

        %.o: %.c
            @echo Compiling $<...
            $(CC) $(CFLAGS) -c -o $@ $<

        #Clean Script - Should clear out all .o files everywhere and all that.
        clean:
            -del *.img
            -del *.o
            -del assembly\*.o
            -del core.elf

    My main issue with this makefile is that when I modify a header file that one or more C files include, the C files aren't rebuilt. I can fix this quite easily by having all of my header files be dependencies for all of my C files, but that would effectively cause a complete rebuild of the project any time I changed/added a header file, which would not be very graceful. What I want is for only the C files that include the header file I change to be rebuilt, and for the entire project to be linked again. I can do the linking by causing all header files to be dependencies of the target, but I cannot figure out how to make the C files be invalidated when their included header files are newer.

    I've heard that GCC has some commands to make this possible (so the makefile can somehow figure out which files need to be rebuilt), but I can't for the life of me find an actual implementation example to look at. Can someone post a solution that will enable this behavior in a makefile?

    EDIT: I should clarify, I'm familiar with the concept of putting in the individual targets and having each target.o require the header files. That requires me to be editing the makefile every time I include a header file somewhere, which is a bit of a pain. I'm looking for a solution that can derive the header file dependencies on its own, which I'm fairly certain I've seen in other projects.
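
    For the record, the usual GCC auto-dependency idiom is a two-line addition to a makefile like the one above (flag spellings per standard GCC documentation: -MMD writes a .d dependency file next to each object listing the headers it includes, and -MP adds phony targets so deleted headers don't break the build). The existing %.o pattern rule needs no change, since it already expands $(CFLAGS); a sketch adapted to the variables above:

        # 1. ask gcc to emit a .d file alongside each object it compiles
        CFLAGS += -MMD -MP

        # 2. pull in whatever dependency files exist so far; the leading '-'
        #    makes make ignore the ones that haven't been generated yet
        -include $(COBJS:.o=.d)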


  • Create a Color Picker, similar to Photoshop's, using Javascript and HTML Canvas

    - by André Alçada Padez
    I am not at all versed in computer graphics and need to create a color picker as a JavaScript tool to embed in an HTML page. First, looking at Photoshop's picker, I thought of the RGB palette as a three-dimensional matrix. My first attempt involved:

        <script type="text/javascript">
        var rgCanvas = document.createElement('canvas');
        rgCanvas.width = 256;
        rgCanvas.height = 256;
        rgCanvas.style.border = '3px solid black';

        for (g = 0; g < 256; g++){
          for (r = 0; r < 256; r++){
            var context = rgCanvas.getContext('2d');
            context.beginPath();
            context.moveTo(r, g);
            context.strokeStyle = 'rgb(' + r + ', ' + g + ', 0)';
            context.lineTo(r + 1, g + 1);
            context.stroke();
            context.closePath();
          }
        }

        var bCanvas = document.createElement('canvas');
        bCanvas.width = 20;
        bCanvas.height = 256;
        bCanvas.style.border = '3px solid black';

        for (b = 0; b < 256; b++){
          var context = bCanvas.getContext('2d');
          context.beginPath();
          context.moveTo(0, b);
          context.strokeStyle = 'rgb(' + 0 + ', ' + 0 + ', ' + b + ')';
          context.lineTo(20, b);
          context.stroke();
          context.closePath();
        }

        document.body.appendChild(rgCanvas);
        document.body.appendChild(bCanvas);
        </script>

    This results in something like: (screenshot omitted)

    My thought is this is too linear, compared to the pickers I see in Photoshop and on the web. I would like to know the logic behind the color mapping in a picker like this: (screenshot omitted)

    I don't really need the algorithms themselves; I'm mainly trying to understand the logic. Thanks
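
    For what it's worth, the standard trick behind pickers like that is to work in HSV rather than walking RGB: hold the hue fixed and let the square vary saturation (left to right) and value (top to bottom), which two stacked canvas gradients produce directly. A sketch, assuming a fixed red hue:

        var canvas = document.createElement('canvas');
        canvas.width = canvas.height = 256;
        var ctx = canvas.getContext('2d');

        // horizontal: white -> pure hue (the saturation axis)
        var satGrad = ctx.createLinearGradient(0, 0, 256, 0);
        satGrad.addColorStop(0, '#ffffff');
        satGrad.addColorStop(1, 'rgb(255, 0, 0)'); // the chosen hue
        ctx.fillStyle = satGrad;
        ctx.fillRect(0, 0, 256, 256);

        // vertical: transparent -> black (the value/brightness axis)
        var valGrad = ctx.createLinearGradient(0, 0, 0, 256);
        valGrad.addColorStop(0, 'rgba(0, 0, 0, 0)');
        valGrad.addColorStop(1, 'rgba(0, 0, 0, 1)');
        ctx.fillStyle = valGrad;
        ctx.fillRect(0, 0, 256, 256);

        document.body.appendChild(canvas);

    A separate hue slider (the rainbow strip) then just re-renders this square with a different color stop at position 1.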


  • MVC Entity Framework: Cannot open user default database. Login failed.

    - by Michael
    This type of stuff drives me nuts. I'm having trouble finding the exact issue that I'm having; maybe I just don't know the terminology. Anyway, I had a working website using MVC and Entity Framework, but then I coded an error in a partial view page (ascx). Then all of a sudden I started to get this message:

        Cannot open user default database. Login failed.
        Login failed for user 'NT AUTHORITY\SYSTEM'

    I've found plenty of suggestions about opening SQL Server Management Studio, double-clicking on Security, double-clicking on Logins, double-clicking on NT AUTHORITY\SYSTEM and then double-clicking on User Mapping. In this view I'm supposed to check the box for my database so that this user is mapped to this login. However, since I created my database in Visual Studio 2008 as part of my solution, it doesn't show up to allow me to click on it. So what do I do now?

    What drives me nuts is that everything was working fine. I was using my computer name to access the website and everything was working fine until the coding error. I've fixed the error but am still getting this one. I should also mention that this error started yesterday too, around the same time, but later cleared itself up. If I use localhost to access the site, it works just fine.

    IIS7 configuration for my website: Application Pool = DefaultAppPool, Physical Path Credentials Logon = ClearText. Within connection strings, I do see the one for my database in this solution. Entry Type is local:

        metadata=res://*/Models.DataModel.csdl|res://*/Models.DataModel.ssdl|res://*/Models.DataModel.msl;
        provider=System.Data.SqlClient;
        provider connection string="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\FFBall.mdf;Integrated Security=True;User Instance=True;MultipleActiveResultSets=True"

    And somewhere I remember changing the identity from Network Service to LocalSystem, because when I first started I was getting this same message, but I changed this value and it started working. I saw that suggested somewhere too, but I do not recall where. Wait, I remember now: I believe in IIS7, under Application Pools, the DefaultAppPool identity is set to LocalSystem.

    Additional things I've tried:

    - Restart the computer
    - Recycle the application pool
    - Antivirus isn't running

    Any help would be appreciated. Thank you in advance.


  • Deterministic key serialization

    - by Mike Boers
    I'm writing a mapping class which uses SQLite as the storage backend. I am currently allowing only basestring keys, but it would be nice if I could use a couple more types, hopefully up to anything that is hashable (i.e. the same requirements as the builtin dict). To that end I would like to derive a deterministic serialization scheme.

    Ideally, I would like to know if any implementation/protocol combination of pickle is deterministic for hashable objects (e.g. can only use cPickle with protocol 0). I noticed that pickle and cPickle do not match:

        >>> import pickle
        >>> import cPickle
        >>> def dumps(x):
        ...     print repr(pickle.dumps(x))
        ...     print repr(cPickle.dumps(x))
        ...
        >>> dumps(1)
        'I1\n.'
        'I1\n.'
        >>> dumps('hello')
        "S'hello'\np0\n."
        "S'hello'\np1\n."
        >>> dumps((1, 2, 'hello'))
        "(I1\nI2\nS'hello'\np0\ntp1\n."
        "(I1\nI2\nS'hello'\np1\ntp2\n."

    Another option is to use repr to dump and ast.literal_eval to load. This would only be valid for builtin hashable types. I have written a function to determine if a given key would survive this process (it is rather conservative on the types it allows):

        def is_reprable_key(key):
            return type(key) in (int, str, unicode) or (type(key) == tuple and all(
                is_reprable_key(x) for x in key))

    The question for this method is whether repr itself is deterministic for the types that I have allowed here. I believe this would not survive the 2/3 version barrier due to the change in str/unicode literals. This also would not work for integers where 2**32 - 1 < x < 2**64, jumping between 32- and 64-bit platforms. Are there any other conditions (i.e. do strings serialize differently under different conditions)?

    (If this all fails miserably then I can store the hash of the key along with the pickle of both the key and value, then iterate across rows that have a matching hash looking for one that unpickles to the expected key, but that really does complicate a few other things and I would rather not do it.) Any insights?


  • Intermittent "Specified cast is invalid" with StructureMap injected data context

    - by FreshCode
    I am intermittently getting a System.InvalidCastException: Specified cast is not valid. error in my repository layer when performing an abstracted SELECT query mapped with LINQ. The error can't be caused by a mismatched database schema, since it works intermittently and it's on my local dev machine. Could it be because StructureMap is caching the data context between page requests? If so, how do I tell StructureMap v2.6.1 to inject a new data context argument into my repository for each request?

    Update: I found this question which correlates my hunch that something was being re-used. Looks like I need to call Dispose on my injected data context. Not sure how I'm going to do this for all my repositories without copy-pasting a lot of code.

    Edit: These errors are popping up all over the place whenever I refresh my local machine too quickly. Doesn't look like it's happening on my remote deployment box, but I can't be sure. I changed all my repositories' StructureMap life cycles to HttpContextScoped() and the error persists.

    Code:

        public ActionResult Index()
        {
            // error happens here, which queries my page repository
            var page = _branchService.GetPage("welcome");
            if (page != null)
                ViewData["Welcome"] = page.Body;
            ...
        }

    Repository: GetPage boils down to a filtered query mapping in my page repository.

        public IQueryable<Page> GetPages()
        {
            var pages = from p in _db.Pages
                        let categories = GetPageCategories(p.PageId)
                        let revisions = GetRevisions(p.PageId)
                        select new Page
                        {
                            ID = p.PageId,
                            UserID = p.UserId,
                            Slug = p.Slug,
                            Title = p.Title,
                            Description = p.Description,
                            Body = p.Text,
                            Date = p.Date,
                            IsPublished = p.IsPublished,
                            Categories = new LazyList<Category>(categories),
                            Revisions = new LazyList<PageRevision>(revisions)
                        };
            return pages;
        }

    where _db is a data context injected as a constructor argument and stored in a private variable, which I reuse for SELECT queries.

    Error:

        Specified cast is not valid.
        Exception Details: System.InvalidCastException: Specified cast is not valid.

        Stack Trace:
        [InvalidCastException: Specified cast is not valid.]
        System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult) +4539
        System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries) +207
        System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) +500
        System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute(Expression expression) +50
        System.Linq.Queryable.FirstOrDefault(IQueryable`1 source) +383
        Manager.Controllers.SiteController.Index() in C:\Projects\Manager\Manager\Controllers\SiteController.cs:68
        lambda_method(Closure , ControllerBase , Object[] ) +79
        System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters) +258
        System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +39
        System.Web.Mvc.<>c__DisplayClassd.<InvokeActionMethodWithFilters>b__a() +125
        System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation) +640
        System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(ControllerContext controllerContext, IList`1 filters, ActionDescriptor actionDescriptor, IDictionary`2 parameters) +312
        System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +709
        System.Web.Mvc.Controller.ExecuteCore() +162
        System.Web.Mvc.<>c__DisplayClass8.<BeginProcessRequest>b__4() +58
        System.Web.Mvc.Async.<>c__DisplayClass1.<MakeVoidDelegate>b__0() +20
        System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +453
        System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +371
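
    For completeness, per-request lifetimes in StructureMap 2.6 are usually wired up as in the sketch below; this is a generic illustration (the data context type name is invented), not a diagnosis of the exception above:

        // registration: one data context per HTTP request
        ObjectFactory.Initialize(x =>
        {
            x.For<MyDataContext>()
             .HttpContextScoped()
             .Use(() => new MyDataContext());
        });

        // Global.asax: HttpContext-scoped objects must be released (and
        // disposed) at the end of each request, or they leak across requests
        protected void Application_EndRequest(object sender, EventArgs e)
        {
            ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
        }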


  • How to fine tune FluentNHibernate's auto mapper?

    - by Venemo
    Okay, so yesterday I managed to get the latest trunk builds of NHibernate and Fluent NHibernate to work with my latest little project (a bug-tracking application). I created a nice data access layer using the Repository pattern. I decided that my entities are nothing special, and also that with the current maturity of ORMs, I don't want to hand-craft the database. So I chose to use Fluent NHibernate's auto mapping feature with NHibernate's "hbm2ddl.auto" property set to "create".

    It really works like a charm. I put the NHibernate configuration in my app domain's config file, set it up, and started playing with it. (For the time being, I created some unit tests only.) It created all the tables in the database, and everything I need for it. It even mapped my many-to-many relationships correctly. However, there are a few small glitches:

    1. All of the columns created in the DB allow null. I understand that it can't predict which properties should allow null and which shouldn't, but at least I'd like to tell it that it should allow null only for those types for which null makes sense in .NET (e.g. non-nullable value types shouldn't allow null).
    2. All of the nvarchar and varbinary columns it created have a default length of 255. I would prefer to have them on max instead.

    Is there a way to tell the auto mapper about the two simple rules above? If the answer is no, will it work correctly if I modify the tables it created? (So, if I set some columns not to allow null, and change the allowed length for some others, will it correctly work with them?)

    EDIT: I managed to achieve the above by using Fluent NHibernate's convention API. Thanks to everyone who helped! However, there is one more thing: after checking out the convention API, I really would like my IDs to be called "ID", not "Id", but it seems to me that PrimaryKey.Name.Is(x => "ID") is not working at all. If I add it to the conventions collection and rewrite my entities' properties to "ID" instead of "Id", it throws an exception saying that there is no primary key mapped. Any thoughts on this?
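
    For anyone landing here with the same two glitches, the convention API fix alluded to in the EDIT looks roughly like this; a sketch against the Fluent NHibernate convention interfaces (the 4001 length is the common trick for getting nvarchar(max) on SQL Server, not an official constant):

        public class NotNullValueTypeConvention : IPropertyConvention
        {
            public void Apply(IPropertyInstance instance)
            {
                var type = instance.Property.PropertyType;
                // non-nullable .NET value types become NOT NULL columns
                if (type.IsValueType && Nullable.GetUnderlyingType(type) == null)
                    instance.Not.Nullable();
            }
        }

        public class LongStringConvention : IPropertyConvention
        {
            public void Apply(IPropertyInstance instance)
            {
                // lengths over 4000 are typically emitted as nvarchar(max)
                if (instance.Property.PropertyType == typeof(string))
                    instance.Length(4001);
            }
        }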


  • ASP.NET Application Level vs. Session Level and Global.asax...confused

    - by contactmatt
    The following text is from the book I'm reading, 'MCTS Self-Paced Training Kit (Exam 70-515): Web Applications Development with ASP.NET 4'. It gives the rundown of the application life cycle:

    1. A user first makes a request for a page in your site.
    2. The request is routed to the processing pipeline, which forwards it to the ASP.NET runtime.
    3. The ASP.NET runtime creates an instance of the ApplicationManager class; this class instance represents the .NET framework domain that will be used to execute requests for your application. An application domain isolates global variables from other applications and allows each application to load and unload separately, as required.
    4. After the application domain has been created, an instance of the HostingEnvironment class is created. This class provides access to items inside the hosting environment, such as directory folders.
    5. ASP.NET creates instances of the core objects that will be used to process the request. This includes HttpContext, HttpRequest, and HttpResponse objects.
    6. ASP.NET creates an instance of the HttpApplication class (or an instance is reused). This class is also the base class for a site's Global.asax file. You can use this class to trap events that happen when your application starts or stops. When ASP.NET creates an instance of HttpApplication, it also creates the modules configured for the application, such as the SessionStateModule.
    7. Finally, ASP.NET processes requests through the HttpApplication pipeline. This pipeline also includes a set of events for validating requests, mapping URLs, accessing the cache, and more.

    The book then demonstrated an example of using the Global.asax file:

        <script runat="server">
            void Application_Start(object sender, EventArgs e)
            {
                Application["UsersOnline"] = 0;
            }

            void Session_Start(object sender, EventArgs e)
            {
                Application.Lock();
                Application["UsersOnline"] = (int)Application["UsersOnline"] + 1;
                Application.UnLock();
            }

            void Session_End(object sender, EventArgs e)
            {
                Application.Lock();
                Application["UsersOnline"] = (int)Application["UsersOnline"] - 1;
                Application.UnLock();
            }
        </script>

    When does an application start? What's the difference between session and application level? I'm rather confused about how this is managed. I thought that application-level classes "sat on top of" an AppDomain object, and the AppDomain contained information specific to that session for that user. Could someone please explain how IIS manages application-level classes, and how an HttpApplication class sits under an AppDomain? Anything is appreciated.


  • BizTalk - generating schema from Oracle stored proc with table variable argument

    - by Ron Savage
    I'm trying to set up a simple example project in BizTalk that gets changes made to a table in a SQL Server db and updates a copy of that table in an Oracle db. On the SQL Server side, I have a stored proc named GetItemChanges() that returns a variable number of records. On the Oracle side, I have a stored proc named Update_Item_Region_Table() designed to take a table of records as a parameter, so that it can process all the records returned from GetItemChanges() in one call. It is defined like this:

        create or replace type itemrec is OBJECT (
          UPC              VARCHAR2(15),
          REGION           VARCHAR2(5),
          LONG_DESCRIPTION VARCHAR2(50),
          POS_DESCRIPTION  VARCHAR2(30),
          POS_DEPT         VARCHAR2(5),
          ITEM_SIZE        VARCHAR2(10),
          ITEM_UOM         VARCHAR2(5),
          BRAND            VARCHAR2(10),
          ITEM_STATUS      VARCHAR2(5),
          TIME_STAMP       VARCHAR2(20),
          COSTEDBYWEIGHT   INTEGER
        );

        create or replace type tbl_of_rec is table of itemrec;

        create or replace PROCEDURE Update_Item_Region_table ( Item_Data tbl_of_rec ) IS
          errcode integer;
          errmsg  varchar2(4000);
        BEGIN
          for recIndex in 1 .. Item_Data.COUNT loop
            update FL_ITEM_REGION_TEST
               set Region           = Item_Data(recIndex).Region,
                   Long_description = Item_Data(recIndex).Long_description,
                   Pos_Description  = Item_Data(recIndex).Pos_description,
                   Pos_Dept         = Item_Data(recIndex).Pos_dept,
                   Item_Size        = Item_Data(recIndex).Item_Size,
                   Item_Uom         = Item_Data(recIndex).Item_Uom,
                   Brand            = Item_Data(recIndex).Brand,
                   Item_Status      = Item_Data(recIndex).Item_Status,
                   Timestamp        = to_date(Item_Data(recIndex).Time_stamp, 'yyyy-mm-dd HH24:mi:ss'),
                   CostedByWeight   = Item_Data(recIndex).CostedByWeight
             where UPC = Item_Data(recIndex).UPC;

            log_message(Item_Data(recIndex).Region, '', 'Updated item ' || Item_Data(recIndex).UPC || '.');
          end loop;
        EXCEPTION
          WHEN OTHERS THEN
            errcode := SQLCODE();
            errmsg  := SQLERRM();
            log_message('CE', '', 'Error in Update_Item_Region_table(): Code [' || errcode || '], Msg [' || errmsg || '] ...');
        END;

    In my BizTalk project I generate the schemas and binding information for both stored procedures. For the Oracle procedure, I specified a path for the GeneratedUserTypesAssemblyFilePath parameter to generate a DLL to contain the definition of the data types. In the Send Port on the server, I put the path to that Types DLL in the UserAssembliesLoadPath parameter. I created a map to translate the GetItemChanges() schema to the Update_Item_Region_Table() schema. When I run it, the data is extracted and transformed fine but causes an exception when trying to pass the data to the Oracle proc:

        The adapter failed to transmit message going to send port "WcfSendPort_OracleDBBinding_HOST_DATA_Procedure_Custom" with URL "oracledb://dvotst/". It will be retransmitted after the retry interval specified for this Send Port. Details: "System.InvalidOperationException: Custom type mapping for 'HOST_DATA.TBL_OF_REC' is not specified or is invalid."

    So it is apparently not getting the information about the custom data type TBL_OF_REC into the Types DLL. Any tips on how to make this work?


  • What is fastest way to convert bool to byte?

    - by Amir Rezaei
    What is the fastest way to convert bool to byte? I want this mapping: False = 0, True = 1.

    Note: I don't want to use any if statement.

    Update: I don't want to use a conditional statement. I don't want the CPU to halt or guess the next statement. I want to optimize this code:

        private static string ByteArrayToHex(byte[] barray)
        {
            char[] c = new char[barray.Length * 2];
            byte k;
            for (int i = 0; i < barray.Length; ++i)
            {
                k = ((byte)(barray[i] >> 4));
                c[i * 2] = (char)(k > 9 ? k + 0x37 : k + 0x30);
                k = ((byte)(barray[i] & 0xF));
                c[i * 2 + 1] = (char)(k > 9 ? k + 0x37 : k + 0x30);
            }
            return new string(c);
        }

    Update: The length of the array is very large, on the order of terabytes! Therefore I need to do optimization if possible. I shouldn't need to explain myself; the question is still valid.

    Update: I'm working on a project and looking at others' code. That's why I didn't provide the function in the first place. I didn't want to spend time explaining things to people when they have opinions about the code. I shouldn't need to provide the background of my work in my question, or a function that was not written by me. I have started to optimize it part by part. If I needed help with the whole function, I would have asked that in another question. That is why I asked this very simply at the beginning. Unfortunately, people couldn't keep themselves to the question. So please, if you want to help, answer the question.

    Update: For those who want to see the point of this question, this example shows how two if statements are reduced from the code:

        byte A = k > 9; // if it were possible: (k > 9) == 0 || 1
        c[i * 2] = A * (k + 0x30) - (A - 1) * (k + 0x30);
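
    For illustration, the usual branch-free answer in C# reinterprets the bool's single byte in an unsafe context; a sketch that assumes the CLR's standard representation of true as 1 and false as 0:

        static unsafe byte BoolToByte(bool b)
        {
            return *((byte*)&b); // no comparison, no branch
        }

    Convert.ToByte(bool) also exists, but it is implemented with a conditional internally, so it does not avoid the branch the question is trying to eliminate.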


  • How to perform add/update of a model object that contains EntitySet

    - by David Liddle
    I have a similar concept to the SO questions/tags scenario, but am trying to decide the best way of implementing it. I have three tables: Questions, QuestionTags and Tags.

        Questions    QuestionTags    Tags
        ---------    ------------    ----
        QID          QID             TID
        QName        TID             TName

    When adding/updating a question I have 2 textboxes. The important part is a single textbox that allows users to enter multiple Tags separated by spaces. I am using Linq2Sql, so the Questions model has an EntitySet of QuestionTags, which then link to Tags. My question is about the adding/updating of Questions (part 1), and also how best to show QuestionTags for a Question (part 2).

    Part 1

    Before performing an add/update, my service layer needs to deal with 3 scenarios before passing to their respective repositories:

    1. Insert Tags that do not already exist
    2. Insert/Update Question
    3. Insert QuestionTags - when updating, need to remove existing QuestionTags

    Here is my code below, however I've started to get into a bit of a muddle. I've created extension methods on my repositories to get Tags WithNames etc.

        public void Add(Question q, string tags)
        {
            var tagList = tags.Split(new string[] { " " }, StringSplitOptions.RemoveEmptyEntries).ToList();

            using (DB.TransactionScope ts = new DB.TransactionScope())
            {
                var existingTags = TagsRepository.Get()
                    .WithName(tagList)
                    .ToList();

                var newTags = (from t in tagList
                               select new Tag { TName = t })
                              .Except(existingTags, new TagsComparer()).ToList();

                TagsRepository.Add(newTags);

                //need to insert QuestionTags
                QuestionsRepository.Add(q);

                ts.Complete();
            }
        }

    Part 2

    My second question is: when displaying a list of Questions, how is it best to show their QuestionTags? For example, I have an Index view that shows a list of Questions in a table. One of the columns shows an image, and when the user hovers over it, it shows the list of Tags. My current implementation is to create a custom ViewModel and show a List of QuestionIndexViewModel in the View:

        QuestionIndexViewModel
        {
            Question Question { get; set; }
            string Tags { get; set; }
        }

    However, this seems a bit clumsy and makes quite a few DB calls:

        public ViewResult Index()
        {
            var model = new List<QuestionIndexViewModel>();

            //make a call to get a list of questions
            //foreach question make a call to get their QuestionTags,
            //to be able to get their Tag names and then join them
            //to form a single string.

            return View(model);
        }

    Also, just for test purposes using SQL Profiler, I decided to iterate through the QuestionTags entity set of a Question in my ViewModel; however, nothing was picked up in Profiler. What would be the reason for this?


  • Hibernate MappingException Unknown entity: $Proxy2

    - by slynn1324
    I'm using Hibernate annotations and have a VERY basic data object:

        import java.io.Serializable;

        import javax.persistence.Entity;
        import javax.persistence.Id;

        @Entity
        public class State implements Serializable {

            private static final long serialVersionUID = 1L;

            @Id
            private String stateCode;

            private String stateFullName;

            public String getStateCode() {
                return stateCode;
            }

            public void setStateCode(String stateCode) {
                this.stateCode = stateCode;
            }

            public String getStateFullName() {
                return stateFullName;
            }

            public void setStateFullName(String stateFullName) {
                this.stateFullName = stateFullName;
            }
        }

    and am trying to run the following test case:

        public void testCreateState() {
            Session s = HibernateUtil.getSessionFactory().getCurrentSession();
            Transaction t = s.beginTransaction();
            State state = new State();
            state.setStateCode("NE");
            state.setStateFullName("Nebraska");
            s.save(s);
            t.commit();
        }

    and get an org.hibernate.MappingException: Unknown entity: $Proxy2

        at org.hibernate.impl.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:628)
        at org.hibernate.impl.SessionImpl.getEntityPersister(SessionImpl.java:1366)
        at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:121)
        ....

    I haven't been able to find anything referencing the $Proxy part of the error and am at a loss. Any pointers to what I'm missing would be greatly appreciated.

    hibernate.cfg.xml:

        <property name="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</property>
        <property name="connection.url">jdbc:hsqldb:hsql://localhost/xdb</property>
        <property name="connection.username">sa</property>
        <property name="connection.password"></property>
        <property name="current_session_context_class">thread</property>
        <property name="dialect">org.hibernate.dialect.HSQLDialect</property>
        <property name="show_sql">true</property>
        <property name="hbm2ddl.auto">update</property>
        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
        <mapping class="com.test.domain.State"/>

    in HibernateUtil.java:

        public static SessionFactory getSessionFactory(boolean testing) {
            if (sessionFactory == null) {
                try {
                    String configPath = HIBERNATE_CFG;
                    AnnotationConfiguration config = new AnnotationConfiguration();
                    config.configure(configPath);
                    sessionFactory = config.buildSessionFactory();
                } catch (Exception e) {
                    e.printStackTrace();
                    throw new ExceptionInInitializerError(e);
                }
            }
            return sessionFactory;
        }


  • How to add new object to an IList mapped as a one-to-many with NHibernate?

    - by Jørn Schou-Rode
    My model contains a class Section which has an ordered list of Statics that are part of this section. Leaving all the other properties out, the implementation of the model looks like this:

        public class Section
        {
            public virtual int Id { get; private set; }
            public virtual IList<Static> Statics { get; private set; }
        }

        public class Static
        {
            public virtual int Id { get; private set; }
        }

    In the database, the relationship is implemented as a one-to-many, where the table Static has a foreign key pointing to Section and an integer column Position to store its index position in the list it is part of. The mapping is done in Fluent NHibernate like this:

        public SectionMap()
        {
            Id(x => x.Id);
            HasMany(x => x.Statics).Cascade.All().LazyLoad()
                .AsList(x => x.WithColumn("Position"));
        }

        public StaticMap()
        {
            Id(x => x.Id);
            References(x => x.Section);
        }

    Now I am able to load existing Statics, and I am also able to update the details of those. However, I cannot seem to find a way to add new Statics to a Section and have this change persisted to the database. I have tried several combinations of:

        mySection.Statics.Add(myStatic);
        session.Update(mySection);
        session.Save(myStatic);

    but the closest I have gotten (using the first two statements) is an SQL exception reading: "Cannot insert the value NULL into column 'Position'". Clearly an INSERT is attempted here, but NHibernate does not seem to automatically append the index position to the SQL statement. What am I doing wrong? Am I missing something in my mappings? Do I need to expose the Position column as a property and assign a value to it myself?

    EDIT: Apparently everything works as expected if I remove the NOT NULL constraint on the Static.Position column in the database. I guess NHibernate makes the insert and immediately afterwards updates the row with a Position value. While this is an answer to the question, I am not sure if it is the best one. I would prefer the Position column to be not nullable, so I still hope there is some way to make NHibernate provide a value for that column directly in the INSERT statement. Thus, the question is still open. Any other solutions?

    Read the article
