Search Results

Search found 5910 results on 237 pages for 'entity splitting'.


  • Need advice on building the database: all in one table or split?

    - by Ibrahim Azhar Armar
    Hello, I am developing an application for a real-estate company. The problem I am facing is how to implement the database; I am confused about which approach to adopt and would appreciate some help reasoning through it. Here is my situation.
    a) I have to store the property details in the database.
    b) The properties fall into roughly 4-5 categories, for example residential, commercial, industrial, etc.
    c) The categories have sub-categories; the residential category, for instance, has sub-categories such as Apartment / Independent House / Villa / Farm House / Studio Apartment, and the commercial, industrial and agricultural categories likewise have their own sub-categories.
    d) Each sub-category has to store different values: a residential property has features like bedrooms / kitchens / hall / bathroom, and the features depend on the sub-category.
    For an example of how I would want my application to work, have a look at this site: http://www.magicbricks.com/bricks/postProperty.html
    I can think of a solution along these lines: a) create four to five tables for the categories that exist today (the problem being that categories might increase in the future); b) create separate tables for the features, location, price and description, and merge the common property data into a single table, since every property shares attributes such as location, total area, etc. What would you advise given this situation? Thank you

    Read the article

  • Linq-to-sql Compiled Query returns object NOT belonging to submitted DataContext ?

    - by Vladimir Kojic
    Compiled query:

        public static class Machines
        {
            public static readonly Func<OperationalDataContext, short, Machine> QueryMachineById =
                CompiledQuery.Compile((OperationalDataContext db, short machineID) =>
                    db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault());

            public static Machine GetMachineById(IUnitOfWork unitOfWork, short id)
            {
                Machine machine;
                // Old code (working)
                //var machineRepository = unitOfWork.GetRepository<Machine>();
                //machine = machineRepository.Find(m => m.MachineID == id).SingleOrDefault();

                // New code (making problems)
                machine = QueryMachineById(unitOfWork.DataContext, id);
                return machine;
            }
        }

    It looks like the compiled query is returning a result from another data context:

        [TestMethod]
        public void GetMachinesTest()
        {
            using (var unitOfWork = IoC.Get<IUnitOfWork>())
            {
                // Compile Query
                var machine = Machines.GetMachineById(unitOfWork, 3);
            }

            using (var unitOfWork = IoC.Get<IUnitOfWork>())
            {
                var machineRepository = unitOfWork.GetRepository<Machine>();
                // Get From Repository
                var machineFromRepository = machineRepository.Find(m => m.MachineID == 2).SingleOrDefault();
                var machine = Machines.GetMachineById(unitOfWork, 2);
                VerifyHuskyHostMachine(machineFromRepository, 2, "Machine 2", "222222", "H400RS", "MachineIconB.xaml", false, true, LicenseType.Licensed, InterfaceType.HuskyHostV2, "10.0.97.2:8080", "10.0.97.2", 8080, "4.0");
                VerifyHuskyHostMachine(machine, 2, "Machine 2", "222222", "H400RS", "MachineIconB.xaml", false, true, LicenseType.Licensed, InterfaceType.HuskyHostV2, "10.0.97.2:8080", "10.0.97.2", 8080, "4.0");
                Assert.AreSame(machineFromRepository, machine); // FAIL
            }
        }

    If I run other (complex) unit tests I get, as expected: "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext." Another important detail is that this test runs under a TransactionScope. UPDATE: the following link seems to describe a similar problem (has this bug been solved?): http://social.msdn.microsoft.com/Forums/en-US/linqprojectgeneral/thread/9bcffc2d-794e-4c4a-9e3e-cdc89dad0e38

    Read the article

  • WCF Service Layer in n-layered application: performance considerations

    - by Marconline
    Hi all. When I went to university, teachers used to say that a well-structured application has a presentation layer, a business layer and a data layer. This is what I heard for more than 5 years. When I started working I discovered that this is true, but sometimes it is better to have more than just three layers. Two or three days ago I discovered an article by John Papa that explains how to use Entity Framework in a layered application. According to that article you should have: a UI layer and presentation layer (Model View pattern), a service layer (WCF), a business layer and a data access layer. The service layer is, to me, one of the best ideas I have come across since I started working: your UI is completely "disconnected" from the business and data layers. Now, when I dug deeper into the provided source code, I began to have some questions. Can you help me answer them?
    Question #0: is this a good enterprise application template in your opinion?
    Question #1: where should I host the service layer? Should it be a Windows Service or something else?
    Question #2: in the source code provided, the service layer exposes just one endpoint with WSHttpBinding. This is the most interoperable binding but (I think) the worst in terms of performance, due to serialization and deserialization of objects. Do you agree?
    Question #3: if you agree with me on Question 2, which kind of binding would you use?
    Looking forward to hearing from you. Have a nice weekend! Marco

    Read the article

  • Mapping a child collection without indexing based on database primary key or using bag

    - by Colin Bowern
    I have an existing parent-child relationship I am trying to map in Fluent NHibernate: [RatingCollection] -- [Rating]. The rating collection has: ID (database-generated ID), Code, Name. A rating has: ID (database-generated ID), Rating Collection ID, Code, Name. I have been trying to figure out which permutation of HasMany makes sense here. What I have right now:

        HasMany<Rating>(x => x.Ratings)
            .WithTableName("Rating")
            .KeyColumnNames.Add("RatingCollectionId")
            .Component(c =>
            {
                c.Map(x => x.Code);
                c.Map(x => x.Name);
            });

    It works from a CRUD perspective, but because it is a bag it ends up deleting the rating contents any time I try to do a simple update or insert on the Ratings property. What I want is an indexed collection, but not one indexed by the database-generated ID (which is in the six-digit range right now). Any thoughts on how I could get a zero-based indexed collection (so I can write entity.Ratings[0].Name = "foo") that would let me modify the collection without deleting and reinserting it all when persisting?

    Read the article

  • Hibernate "JOIN ... ON"?

    - by CaptainAwesomePants
    I have an application that uses Hibernate for its domain objects. One part of the app is shared between a few apps, and it has no knowledge of the other systems. In order to handle relations, our class looks like this:

        @Entity
        public class SystemEvent {
            @Id
            @GeneratedValue
            public int entity_id;

            @Column(name="event_type")
            public String eventType;

            @Column(name="related_id")
            public int relatedObjectId;
        }

    relatedObjectId holds a foreign key to one of several different objects, depending on the type of event. When a system wants to know about events that are relevant to its interests, it grabs all the system events with eventType "NewAccounts" or some such thing, and it knows that all of those relatedObjectIds are IDs of a "User" object or similar. Unfortunately, this has caused a problem down the line. I can't figure out a way to tell Hibernate about this mapping, which means that HQL queries can't do joins. I'd really like to create an HQL query that looks like this:

        SELECT users FROM SystemEvent event join Users newUsers where event.eventType = 'SignUp'

    However, Hibernate has no knowledge of the relationship between SystemEvent and Users, and as far as I can tell, there is no way to tell it. So here is my question: is there any way to tell Hibernate about a relationship when your domain objects reference each other via ID numbers rather than class references?
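    In the meantime, a theta-style join is one way to express this query without a mapped association: list both entities in the FROM clause and correlate them in the WHERE clause on the foreign-key value. A minimal sketch of that idea, assuming a mapped User entity with an id property (the surrounding class and session handling are illustrative only):

        import java.util.List;
        import org.hibernate.Session;

        public class SignUpEventQuery {

            // Theta-style join: both entities appear in the FROM clause and are
            // correlated in the WHERE clause, so no mapped association is needed.
            public static List<?> findSignedUpUsers(Session session) {
                return session.createQuery(
                        "select u from SystemEvent e, User u"
                      + " where e.relatedObjectId = u.id"
                      + " and e.eventType = :type")
                    .setParameter("type", "SignUp")
                    .list();
            }
        }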

    Read the article

  • Fluent NHibernate IDictionary with composite element mapping

    - by Alessandro Di Lello
    Hi there, I have these 2 classes:

        public class Category {
            IDictionary<string, CategoryResource> _resources;
        }

        public class CategoryResource {
            public virtual string Name { get; set; }
            public virtual string Description { get; set; }
        }

    and this is the XML mapping:

        <class name="Category" table="Categories">
            <id name="ID">
                <generator class="identity"/>
            </id>
            <map name="Resources" table="CategoriesResources" lazy="false">
                <key column="EntityID" />
                <index column="LangCode" type="string"/>
                <composite-element class="Aca3.Models.Resources.CategoryResource">
                    <property name="Name" column="Name" />
                    <property name="Description" column="Description"/>
                </composite-element>
            </map>
        </class>

    I'd like to write it with Fluent NHibernate. I found something similar and was trying this code:

        HasMany(x => x.Resources)
            .AsMap<string>("LangCode")
            .AsIndexedCollection<string>("LangCode", c => c.GetIndexMapping())
            .Cascade.All()
            .KeyColumn("EntityID");

    but I don't know how to map the CategoryResource entity as a composite element inside the Category element. Any advice? Thanks

    Read the article

  • How to implement a .net 3-tier architecture using Winforms

    - by Anders Jakobsen
    For some time I have built n-tier applications using a database server as the data tier, WinForms as the presentation tier, and an ASP.NET asmx web service in the middle sending untyped DataSets back and forth. While this approach has worked for me so far, it certainly feels outdated today. What technologies should I use if I were to create a similarly architected application today? .NET 4.0 technology is welcome. I still want a database server as the data tier, and the asmx web services should probably be replaced by WCF. I would still like the presentation tier to run as a desktop application (WinForms or WPF), so ignore ASP.NET for this question. My main question really comes down to what to use as business objects. I want something that is easier to bind to the interface than untyped DataSets, and strongly-typed DataSets feel very heavy. I also need something that can track changes, to make sure users do not override each other's changes in the database. Would Entity Framework 4 be usable for a scenario like this? Are there any thorough guides available?

    Read the article

  • Persist subclass as superclass using Hibernate

    - by franziga
    I have a subclass and a superclass. However, only the fields of the superclass need to be persisted:

        session.saveOrUpdate((Superclass) subclass);

    If I do the above, I get the following exception:

        org.hibernate.MappingException: Unknown entity: test.Superclass
            at org.hibernate.impl.SessionFactoryImpl.getEntityPersister(SessionFactoryImpl.java:628)
            at org.hibernate.impl.SessionImpl.getEntityPersister(SessionImpl.java:1366)
            at org.hibernate.engine.ForeignKeys.isTransient(ForeignKeys.java:203)
            at org.hibernate.event.def.AbstractSaveEventListener.getEntityState(AbstractSaveEventListener.java:535)
            at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.performSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:103)
            at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:93)
            at org.hibernate.impl.SessionImpl.fireSaveOrUpdate(SessionImpl.java:535)
            at org.hibernate.impl.SessionImpl.saveOrUpdate(SessionImpl.java:527)
            at org.hibernate.impl.SessionImpl.saveOrUpdate(SessionImpl.java:523)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:342)
            at $Proxy54.saveOrUpdate(Unknown Source)

    How can I persist a subclass as a superclass? I would prefer not to create a superclass instance and copy the values over from the subclass instance, because it would be easy to forget to update that logic if extra fields are added to the superclass in the future.
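    One avenue worth exploring (a sketch only, not a verified fix) is to map the superclass as an entity in its own right and use Hibernate's entity-name overload of saveOrUpdate, so the superclass persister is chosen explicitly and only its fields are written. Whether this fits depends on the existing mapping; the entity name below is assumed:

        import org.hibernate.Session;
        import test.Superclass;

        public class SuperclassPersister {

            // Picks the persister registered under "test.Superclass" instead of
            // letting Hibernate resolve the runtime (sub)class of the instance.
            public static void saveAsSuperclass(Session session, Superclass value) {
                session.saveOrUpdate("test.Superclass", value);
            }
        }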

    Read the article

  • Database structure - is MySQL the right choice?

    - by Industrial
    Hi everyone, we are currently planning the database structure of a quite complex e-commerce web app that has flexibility as its main cornerstone. Our app features a large amount of data (products), and we have run into a slight headache trying to keep performance high without compromising normalization rules in the database, or leaving our beloved flexibility concept behind when integrating product options (also widely known as product attributes or parameters). Based on the various references and sources available, we have drawn up lists of pros and cons of all the major and well-known database patterns that address this. After comparing them, we have come up with two final alternatives:
    EAV (Entity-Attribute-Value model). Pros: the database does all the sorting. Cons: every related query needs a number of joins across multiple tables in order to assemble the data.
    SLOB (Serialized LOB, also known as Facade?). Pros: very flexible; keeps the number of necessary joins low compared to an EAV design; easy to update/add/remove data for each product. Cons: all sorting has to be done by the application instead of the database; will use a lot of resources (memory?) when big datasets are processed by a large number of users.
    Our main questions: which pattern/structure would you use, or maybe even a different solution? Are there better databases than MySQL available nowadays to accomplish what we want? Thanks a lot!
    Reference: http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters

    Read the article

  • JDBC connection for a background thread being closed when accessed in WebSphere

    - by ferrari fan
    Hi, I have an application running in WebSphere Portal Server inside WebSphere Application Server 6.0 (WAS). For one particular piece of functionality that takes a long time to complete, the application fires a new thread that performs the action. This new thread opens a new Hibernate Session and starts performing DB transactions with it. Sometimes (I haven't been able to see a pattern) the transactions inside the thread work fine and the process completes successfully. Other times, however, I get the errors below:

        org.hibernate.exception.GenericJDBCException: could not load an entity: [OBJECT NAME#218294]
        ...
        Caused by: com.ibm.websphere.ce.cm.ObjectClosedException: DSRA9110E: Connection is closed.

        Method cleanup failed while trying to execute method cleanup on ManagedConnection WSRdbManagedConnectionImpl@642aa0d8 from resource jdbc/MyJDBCDataSource. Caught exception: com.ibm.ws.exception.WsException: DSRA0080E: An exception was received by the Data Store Adapter. See original exception message: Cannot call 'cleanup' on a ManagedConnection while it is still in a transaction.

    How can I stop this from happening? Why does WAS seem to want to kill my connections even though they are not done? Is there a way I can stop WAS from attempting to close this particular connection? Thanks
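    One pattern that tends to avoid the container reclaiming a request-scoped connection is to give the background thread a Hibernate Session and transaction of its own, opened and closed entirely inside the thread rather than inherited from the servlet request. A rough sketch under that assumption (the SessionFactory wiring and the actual work are placeholders):

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;

        public class LongRunningJob implements Runnable {

            private final SessionFactory sessionFactory;

            public LongRunningJob(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            public void run() {
                // A session owned by this thread only, so its JDBC connection is
                // not tied to the web request's local transaction containment.
                Session session = sessionFactory.openSession();
                Transaction tx = null;
                try {
                    tx = session.beginTransaction();
                    // ... long-running DB work goes here ...
                    tx.commit();
                } catch (RuntimeException e) {
                    if (tx != null) {
                        tx.rollback();
                    }
                    throw e;
                } finally {
                    session.close();
                }
            }
        }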

    Read the article

  • Data Binding to an object in C#

    - by Allen
    Objective-C/Cocoa offers a form of binding where a control's properties (e.g. the text in a textbox) can be bound to the property of an object. I am trying to duplicate this functionality in C# with .NET 3.5. I have created the following very simple class in the file MyClass.cs:

        class MyClass
        {
            private string myName;

            public string MyName
            {
                get { return myName; }
                set { myName = value; }
            }

            public MyClass()
            {
                myName = "Allen";
            }
        }

    I also created a simple form with one textbox and one button, initialized one instance of MyClass inside the form code and built the project. Using the Data Source wizard in VS 2008, I chose to create a data source based on an object and selected the MyClass assembly. This created a data source entity. I changed the data binding of the textbox to this data source; however, the expected result (that the textbox's contents would be "Allen") was not achieved. Furthermore, putting text into the textbox does not update the name property of the object. I know I'm missing something fundamental here. At some point I should have to tie the instance of MyClass that I initialized inside the form code to the textbox, but that hasn't happened. Everything I've looked at online seems to gloss over using data binding with an object (or I'm missing the mark entirely), so any help is greatly appreciated.
    Edit: Using what I learned from the answers, I looked at the code generated by Visual Studio; it had the following:

        this.myClassBindingSource.DataSource = typeof(BindingTest.MyClass);

    If I comment that out and substitute:

        this.myClassBindingSource.DataSource = new MyClass();

    I get the expected behavior. Why is the default code generated by VS the way it is? Assuming it is more correct than the version that works, how should I modify my code to work within the bounds of what VS generated?

    Read the article

  • Websphere 7 EntityManagerFactory creation problem

    - by mihaela
    Hello, I'm working on a Maven project which uses Seam 2.2.0, Hibernate 3.5.0-CR-2 as the JPA provider, DB2 as the database server and WebSphere 7 as the application server. I'm now facing the following problem: in my EJBs, which are also Seam components, I want to use the EntityManager from the EJB container (@PersistenceContext private EntityManager em), not Seam's EntityManager (@In private EntityManager em). But that is exactly the problem: I cannot obtain an EntityManager using @PersistenceContext. The server logs say that it cannot create an EntityManagerFactory and it gets a ClassCastException:

        java.lang.ClassCastException: org.hibernate.ejb.HibernatePersistence incompatible with javax.persistence.spi.PersistenceProvider

    After a lot of debugging and searching on forums, I assume the problem is that WebSphere doesn't use the Hibernate JPA provider. Has anyone faced this problem and found a solution? I have already configured the WAS class loader order for my application to load classes with the application class loader first, and I've packed all necessary jars in the application EAR as described in "WAS InfoCenter: Features for EJB 3.0 development". If necessary I'll post my persistence.xml, components.xml and the stack trace. I've found this problem discussed also here: "Websphere EntityManagerFactory creation problem" and "Hibernate 3.3 fails to create an entity manager factory in Websphere 7.0". Please help, any hint will be useful. Thanks in advance! Mihaela

    Read the article

  • Does RabbitMQ do round-robin from the exchange to the queues?

    - by Lancelot
    Hi, I am currently evaluating message queue systems and RabbitMQ seems like a good candidate, so I'm digging a little deeper into it. To give a little context, I'm looking to have something like one exchange load-balancing the published messages across multiple queues. I don't want to replicate the messages, so a fanout exchange is not an option. The reason I'm thinking of multiple queues rather than one queue with round-robin across the consumers is that I don't want our single point of failure to be at the queue level. It sounds like I could add some logic on the publisher side to simulate that behavior by editing the routing key and having the appropriate bindings in place. But that's a passive approach that wouldn't take the pace of message consumption on each queue into account, potentially filling up one queue if the consumer applications for that queue are dead. I was looking for a more proactive mechanism on the exchange side that would decide where to send the next message based on each queue's size or something of that nature. I read about Alice and the available RESTful APIs, but that seems like a heavyweight solution for making fast routing decisions. Does anyone know whether round-robin between an exchange and its queues is feasible with RabbitMQ? Thanks.
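    For reference, the publisher-side variant described above can be sketched with the RabbitMQ Java client: a direct exchange, one binding per queue, and a rotating routing key. This only illustrates the "passive" approach, it does not react to queue depth, and the exchange/queue names are made up:

        import com.rabbitmq.client.Channel;
        import com.rabbitmq.client.Connection;
        import com.rabbitmq.client.ConnectionFactory;

        public class RoundRobinPublisher {

            private static final String EXCHANGE = "work";
            private static final String[] KEYS = {"q0", "q1", "q2"};

            public static void main(String[] args) throws Exception {
                Connection conn = new ConnectionFactory().newConnection();
                Channel channel = conn.createChannel();

                channel.exchangeDeclare(EXCHANGE, "direct", true);
                for (String key : KEYS) {
                    channel.queueDeclare(key, true, false, false, null);
                    channel.queueBind(key, EXCHANGE, key);
                }

                // Rotate the routing key so messages are spread evenly across the queues.
                for (int i = 0; i < 9; i++) {
                    String key = KEYS[i % KEYS.length];
                    channel.basicPublish(EXCHANGE, key, null, ("message " + i).getBytes());
                }

                channel.close();
                conn.close();
            }
        }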

    Read the article

  • Strange xml/html accent issue

    - by Ayrad
    I have an XML file that contains a message with HTML tags in it. The XML file is read by a Java class that mails it to people. When the mail is received, the accents do not show; for example, é doesn't show. I have tried &eacute; in the XML, but Eclipse gives an error saying the entity has not been declared. I also tried simply inserting &#233; but that shows nothing in the final output. The third thing I tried was <![CDATA[é]]> but that broke the parser, since nothing was output after it. However, I noticed something weird: when I put something like <message>text bla bla blaa é&lt; in the XML and added UTF-16 encoding, it did output the é at the end, like bla bla blaa blaa é. EDIT: <message>text bla bla blaa éé&lt; outputs ?é or just one é. The file looks something like this:

        <?xml version="1.0"? encoding="UTF-16">
        <message>
            &lt;b&gt;hello é &lt;/b&gt;
        </message>
        </xml>

    What gives?
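    Independently of how the XML is parsed, the mail itself also has to declare a charset that can carry the accented characters, otherwise the é is lost in transport. A hedged JavaMail sketch of that part only (the mail Session setup and addresses are placeholders):

        import javax.mail.Message;
        import javax.mail.Session;
        import javax.mail.internet.InternetAddress;
        import javax.mail.internet.MimeMessage;

        public class AccentSafeMail {

            public static MimeMessage build(Session session, String htmlBody) throws Exception {
                MimeMessage msg = new MimeMessage(session);
                msg.setFrom(new InternetAddress("noreply@example.com"));
                msg.setRecipients(Message.RecipientType.TO, "someone@example.com");
                msg.setSubject("Test é", "UTF-8");
                // Declare the charset of the HTML body explicitly so 'é' survives.
                msg.setContent(htmlBody, "text/html; charset=UTF-8");
                return msg;
            }
        }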

    Read the article

  • An error has occurred opening external DTD (w3.org, xhtml1-transitional.dtd). 503 Server Unavailable

    - by Cheeso
    I'm trying to do XPath queries over an XHTML document. The document looks like this:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
            "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html lang="en" xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
            <head> .... </head>
            <body> ... </body>
        </html>

    Because the document includes various character entities (&nbsp; and so on), I need to use the DTD in order to load it with an XmlReader. So my code looks like this:

        var reader = XmlReader.Create(sr, new XmlReaderSettings { ProhibitDtd = false });

    But when I run this, it returns:

        An error has occurred while opening external DTD 'http://www.w3.org/TR/xhtml1-transitional.dtd': The remote server returned an error: (503) Server Unavailable.

    Now, I know why I am getting the 503 error; W3C explained it very clearly. But I still want to validate the document. How can I validate against the DTD, and get the entity definitions, without hitting the w3.org website? Related: java.io.IOException: Server returned HTTP response code: 503

    Read the article

  • Django forms, inheritance and order of form fields

    - by Hannson
    I'm using Django forms in my website and would like to control the order of the fields. Here's how I define my forms:

        class edit_form(forms.Form):
            summary = forms.CharField()
            description = forms.CharField(widget=forms.Textarea)

        class create_form(edit_form):
            name = forms.CharField()

    The name is immutable and should only be listed when the entity is created. I use inheritance for consistency and DRY principles. What happens, which is not erroneous and in fact totally expected, is that the name field is listed last in the view/HTML, but I'd like the name field to be on top of summary and description. I do realize that I could easily fix it by copying summary and description into create_form and losing the inheritance, but I'd like to know whether this is possible. Why? Imagine you've got 100 fields in edit_form and have to add 10 fields on top in create_form; copying and maintaining the two forms wouldn't look so sexy then. (This is not my case, I'm just making up an example.) So, how can I override this behavior?
    Edit: Apparently there's no proper way to do this without going through nasty hacks (fiddling with the .fields attribute). The .fields attribute is a SortedDict (one of Django's internal data structures) which doesn't provide any way to reorder key:value pairs. It does, however, provide a way to insert items at a given index, but that would move the items from the class members into the constructor. This method would work, but make the code less readable. The only other way I see is to modify the framework itself, which is less than optimal in most situations. In short the code would become something like this:

        class edit_form(forms.Form):
            summary = forms.CharField()
            description = forms.CharField(widget=forms.Textarea)

        class create_form(edit_form):
            def __init__(self, *args, **kwargs):
                forms.Form.__init__(self, *args, **kwargs)
                self.fields.insert(0, 'name', forms.CharField())

    That shut me up :)

    Read the article

  • JPA merge fails due to duplicate key

    - by wobblycogs
    I have a simple entity, Code, that I need to persist to a MySQL database:

        @Entity
        public class Code implements Serializable {
            @Id
            private String key;

            private String description;

            ...getters and setters...
        }

    The user supplies a file full of key, description pairs which I read, convert to Code objects and then insert in a single transaction using em.merge(code). The file will generally have duplicate entries, which I deal with by first adding them to a map keyed on the key field as I read them in. A problem arises, though, when keys differ only by case (for example XYZ and XyZ). My map will, of course, contain both entries, but during the merge process MySQL sees the two keys as the same and the call to merge fails with a MySQLIntegrityConstraintViolationException. I could easily fix this by uppercasing the keys as I read them in, but I'd like to understand exactly what is going wrong. The conclusion I have come to is that JPA considers XYZ and XyZ to be different keys but MySQL considers them to be the same. As such, when JPA checks its list of known keys (or does whatever it does to determine whether it needs to perform an insert or an update), it fails to find the previous insert and issues another, which then fails. Is this correct? Is there any way around this other than better filtering of the client data? I haven't defined .equals or .hashCode on the Code class, so perhaps that is the problem.
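    If the intent is for XYZ and XyZ to count as the same code (matching MySQL's case-insensitive default collation), one defensive option is to make the Java-side de-duplication case-insensitive as well, so only one of the variants ever reaches merge. A minimal sketch of that idea (the loader class is invented; Code and getKey() come from the entity above):

        import java.util.Map;
        import java.util.TreeMap;

        public class CodeLoader {

            // Keys differing only by case collapse to a single entry here,
            // mirroring the behaviour MySQL applies to the primary key.
            private final Map<String, Code> pending =
                    new TreeMap<String, Code>(String.CASE_INSENSITIVE_ORDER);

            public void add(Code code) {
                // Last occurrence wins: "XYZ" and "XyZ" end up as one row.
                pending.put(code.getKey(), code);
            }

            public Iterable<Code> codesToMerge() {
                return pending.values();
            }
        }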

    Read the article

  • How to fetch distinct values in Core Data?

    - by Andy
    So in looking through Core Data Snippets, I found the following code:

        ...
        [request setEntity:entity];
        [request setResultType:NSDictionaryResultType];
        [request setReturnsDistinctValues:YES];
        [request setPropertiesToFetch:[NSArray arrayWithObject:@"<#Attribute name#>"]];

        // Execute the fetch
        NSError *error;
        id requestedValue = nil; // WTF? This isn't defined or used anywhere
        NSArray *objects = [managedObjectContext executeFetchRequest:request error:&error];
        if (objects == nil) {
            // handle the error
        }

    This is great and seems perfect for what I need... but how does one actually use it? I assume that since it returns dictionaries, I need a key to get at the values - but where is the key defined? Is that the "id requestedValue = nil" line? If so, how does "requestedValue" become the key? Xcode gives me a compiler warning about an unused variable at the "requestedValue" declaration. I feel like I'm missing something here. Thanks in advance for any assistance you can offer.

    Read the article

  • OData / WCF Data Service - HTTP 500 Error

    - by Eric
    I have created an OData/WCF service using Visual Studio 2010 on Windows XP SP3 with all current patches installed. When I click "View in Browser", the service opens and I see the 3 tables from my EF model. However, when I add a table name ("Commands" in this case) to the end of the query string, rather than seeing the data from the table I get an HTTP 500 error ("This error (HTTP 500 Internal Server Error) means that the website you are visiting had a server problem which prevented the webpage from displaying."). I have not only followed the examples from two sites, but have also tried running the sample application that the blog poster sent me (which works on his machine), and still am not having any luck. The blog post is "Exposing OData from an Entity Framework Model". Does anyone have an idea why this is occurring and how to resolve it? Here is the output of "View in Browser":

        <?xml version="1.0" encoding="utf-8" standalone="yes" ?>
        <service xml:base="http://localhost:1883/VistaDBCommandService.svc/"
                 xmlns:atom="http://www.w3.org/2005/Atom"
                 xmlns:app="http://www.w3.org/2007/app"
                 xmlns="http://www.w3.org/2007/app">
          <workspace>
            <atom:title>Default</atom:title>
            <collection href="Commands">
              <atom:title>Commands</atom:title>
            </collection>
            <collection href="Databases">
              <atom:title>Databases</atom:title>
            </collection>
            <collection href="Statuses">
              <atom:title>Statuses</atom:title>
            </collection>
          </workspace>
        </service>

    Thanks, Eric

    Read the article

  • Eclipse Galileo + Glassfish v3: JPADeployer NullPointerException on deploy

    - by bshacklett
    I've created a very simple "Enterprise Application" project with about 7 entity beans and one stateless session bean. I've also configured an instance of Glassfish v3 to run as my application server. Unfortunately, when I attempt to publish the EAR to Glassfish, I'm getting the following response:

        SEVERE: Exception while invoking class org.glassfish.persistence.jpa.JPADeployer prepare method
        java.lang.NullPointerException
            at org.glassfish.persistence.jpa.JPADeployer.prepare(JPADeployer.java:104)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.prepareModule(ApplicationLifecycle.java:644)
            at org.glassfish.javaee.full.deployment.EarDeployer.prepareBundle(EarDeployer.java:269)
            at org.glassfish.javaee.full.deployment.EarDeployer.access$200(EarDeployer.java:79)
            at org.glassfish.javaee.full.deployment.EarDeployer$1.doBundle(EarDeployer.java:131)
            at org.glassfish.javaee.full.deployment.EarDeployer$1.doBundle(EarDeployer.java:129)
            at org.glassfish.javaee.full.deployment.EarDeployer.doOnBundles(EarDeployer.java:197)
            at org.glassfish.javaee.full.deployment.EarDeployer.doOnAllTypedBundles(EarDeployer.java:206)
            at org.glassfish.javaee.full.deployment.EarDeployer.doOnAllBundles(EarDeployer.java:232)
            at org.glassfish.javaee.full.deployment.EarDeployer.prepare(EarDeployer.java:129)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.prepareModule(ApplicationLifecycle.java:644)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:296)
            at com.sun.enterprise.v3.server.ApplicationLifecycle.deploy(ApplicationLifecycle.java:183)
            at org.glassfish.deployment.admin.DeployCommand.execute(DeployCommand.java:272)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$1.execute(CommandRunnerImpl.java:305)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:320)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl.doCommand(CommandRunnerImpl.java:1176)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl.access$900(CommandRunnerImpl.java:83)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1235)
            at com.sun.enterprise.v3.admin.CommandRunnerImpl$ExecutionContext.execute(CommandRunnerImpl.java:1224)
            at com.sun.enterprise.v3.admin.AdminAdapter.doCommand(AdminAdapter.java:365)
            at com.sun.enterprise.v3.admin.AdminAdapter.service(AdminAdapter.java:204)
            at com.sun.grizzly.tcp.http11.GrizzlyAdapter.service(GrizzlyAdapter.java:166)
            at com.sun.enterprise.v3.server.HK2Dispatcher.dispath(HK2Dispatcher.java:100)
            at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:245)
            at com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:791)
            at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:693)
            at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:954)
            at com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:170)
            at com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:135)
            at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:102)
            at com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:88)
            at com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:76)
            at com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:53)
            at com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:57)
            at com.sun.grizzly.ContextTask.run(ContextTask.java:69)
            at com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:330)
            at com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:309)
            at java.lang.Thread.run(Thread.java:637)

    Read the article

  • How to test IO code in JUnit?

    - by add
    I want to test two services: a service which builds a file name, and a service which writes some data into the file provided by the first service. In the first I build a fairly complex file structure (just for example {user}/{date}/{time}/{generatedId}.bin). In the second I write data to the file passed by the first service (the 1st service calls the 2nd service). How can I test both services using mocks, without any real IO interaction? Just for example, the 1st service:

        public class DefaultLogService implements LogService {

            private ComplexDataSerializer serializer; // mocked in tests

            public void log(SomeComplexData data) {
                serializer.write(new FileOutputStream(buildComplexFileStructure()), data);
                // or
                serializer.write(buildComplexFileStructure(), data);
                // or
                serializer.write(new GenericInputEntity(buildComplexFileStructure()), data);
            }
        }

    The 2nd service:

        public class DefaultComplexDataSerializer implements ComplexDataSerializer {
            void write(InputStream stream, SomeComplexData data) {...}
            // or
            void write(File file, SomeComplexData data) {...}
            // or
            void write(GenericInputEntity entity, SomeComplexData data) {...}
        }

    In the first case I need to pass a FileOutputStream, which will create a real file (i.e. I can't test the 1st service). In the second case I need to pass a File; what can I do in the 2nd service's test if I need to check the data written to the specified file? (I can't test the 2nd service.) In the third case I think I need some generic IO object that wraps File. Maybe there is a ready-to-use solution for this purpose?
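    One common route for the first service is to keep all file handling behind the serializer interface and verify the interaction with a mock, so the test never touches the disk. A rough Mockito-based sketch of that idea, assuming the File-based overload is the one kept and that the serializer is injected through a constructor (both are assumptions, the snippet above shows only a field):

        import static org.mockito.Matchers.any;
        import static org.mockito.Matchers.same;
        import static org.mockito.Mockito.mock;
        import static org.mockito.Mockito.verify;

        import java.io.File;
        import org.junit.Test;

        public class DefaultLogServiceTest {

            @Test
            public void delegatesSerializationWithoutTouchingDisk() {
                ComplexDataSerializer serializer = mock(ComplexDataSerializer.class);
                DefaultLogService service = new DefaultLogService(serializer);
                SomeComplexData data = new SomeComplexData();

                service.log(data);

                // No real IO: we only assert that the serializer was handed
                // a target file and the exact data instance.
                verify(serializer).write(any(File.class), same(data));
            }
        }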

    Read the article

  • Jackson object mapping - map incoming JSON field to protected property in base class

    - by Pete
    We use Jersey/Jackson for our REST application. Incoming JSON strings get mapped to @Entity objects in the backend by Jackson, to be persisted. The problem arises from the base class that we use for all entities. It has a protected id property which we want to exchange via REST as well, so that when we send an object that has dependencies, Hibernate will automatically fetch those dependencies by their ids. However, Jackson does not access the setter, even if we override it in the subclass to be public. We also tried using @JsonSetter, but to no avail. Probably Jackson just looks at the base class, sees that the ID is not accessible, and skips setting it...

        @MappedSuperclass
        public abstract class AbstractPersistable<PK extends Serializable> implements Persistable<PK> {

            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            private PK id;

            public PK getId() {
                return id;
            }

            protected void setId(final PK id) {
                this.id = id;
            }
        }

    Subclasses:

        public class A extends AbstractPersistable<Long> {
            private String name;
        }

        public class B extends AbstractPersistable<Long> {
            private A a;
            private int value;

            // getter, setter

            // make base class setter accessible
            @Override
            @JsonSetter("id")
            public void setId(Long id) {
                super.setId(id);
            }
        }

    Now suppose there are some As in our database and we want to create a new B via the REST resource:

        @POST
        @Consumes(MediaType.APPLICATION_JSON)
        @Produces(MediaType.APPLICATION_JSON)
        @Transactional
        public Response create(B b) {
            if (b.getA().getId() == null) cry();
        }

    with a JSON string like {"a":{"id":"1","name":"foo"},"value":"123"}. The incoming B will have the A reference, but without an ID. Is there any way to tell Jackson to either ignore the base class setter or use the subclass setter instead? I've just found out about @JsonTypeInfo but I'm not sure this is what I need or how to use it. Thanks for any help!
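    One thing worth trying (hedged, since behaviour differs between Jackson versions) is to annotate the id field itself in the base class; a field carrying @JsonProperty is picked up for deserialisation regardless of the setter's visibility. A sketch against the Jackson 1.x packages that ship with Jersey of that era (the Persistable interface from the question is omitted here for brevity):

        import java.io.Serializable;
        import javax.persistence.GeneratedValue;
        import javax.persistence.GenerationType;
        import javax.persistence.Id;
        import javax.persistence.MappedSuperclass;
        import org.codehaus.jackson.annotate.JsonProperty;

        @MappedSuperclass
        public abstract class AbstractPersistable<PK extends Serializable> {

            // Annotating the field makes Jackson bind incoming "id" values
            // directly, bypassing the protected setter.
            @Id
            @GeneratedValue(strategy = GenerationType.AUTO)
            @JsonProperty("id")
            private PK id;

            public PK getId() {
                return id;
            }

            protected void setId(final PK id) {
                this.id = id;
            }
        }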

    Read the article

  • ACL architecture for a Software as a Service in Spring 3.0

    - by geoaxis
    I am building a software-as-a-service application using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate) and I have to come up with a flexible access control list mechanism. I have three different kinds of users:
    System (who can do anything to the system; includes admins and internal daemons)
    Operations (who can add and delete users and organizations, and do maintenance work on behalf of users and organizations)
    End users (who belong to one or more organizations; for each organization, the user can have one or more roles, like being organization admin or organization read-only member; a role like orgadmin can also add users for that organization)
    Now my question is: how should I model the User entity? Taking just the end user, a user can belong to one or more organizations, so each user could hold a set of references to its organizations. But how do we model the user's role for each organization? For example, user UX belongs to organizations og1, og2 and og3; for og1 he is both orgadmin and org-read-only-user, for og2 he is only orgadmin, and for og3 he is only org-read-only-user. I could make each user belong to exactly one organization, but that makes the system constrained and I don't like that idea (although it would still satisfy the requirement). If you have a better, extensible ACL architecture, please suggest it. Since it's software as a service, one would expect a lot of different organizations to be part of the same system. I also have a concern that it may not be a good idea to keep og1's and og2's data in the same DB (if og1 decides to spawn a hundred reports on the system, og2 should not suffer), but that is something more advanced for now and is not directly related to ACLs, rather to the physical distribution of data and the setup of services based on those ACLs. This is a community wiki question; please correct anything you wish. Thanks
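    The many-to-many-with-role shape described above usually falls out naturally as an explicit membership entity, i.e. one row per (user, organization, role) triple, so UX/og1 simply has two rows. A hedged JPA-style sketch of that idea (all names are invented; User and Organization are assumed to be mapped entities):

        import javax.persistence.Entity;
        import javax.persistence.EnumType;
        import javax.persistence.Enumerated;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.ManyToOne;

        @Entity
        public class Membership {

            public enum OrgRole { ORG_ADMIN, ORG_READ_ONLY }

            @Id
            @GeneratedValue
            private Long id;

            // One row per (user, organization, role): UX/og1/ORG_ADMIN and
            // UX/og1/ORG_READ_ONLY are just two Membership rows.
            @ManyToOne(optional = false)
            private User user;

            @ManyToOne(optional = false)
            private Organization organization;

            @Enumerated(EnumType.STRING)
            private OrgRole role;
        }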

    Read the article

  • MDX equivalent to SQL subqueries with aggregation

    - by James Lampe
    I'm new to MDX and trying to solve the following problem. I've investigated calculated members, subselects, scope statements, etc. but can't quite get it to do what I want. Let's say I'm trying to come up with the MDX equivalent of the following SQL query:

        SELECT SUM(netMarketValue) net,
               SUM(CASE WHEN netMarketValue > 0 THEN netMarketValue ELSE 0 END) assets,
               SUM(CASE WHEN netMarketValue < 0 THEN netMarketValue ELSE 0 END) liabilities,
               SUM(ABS(netMarketValue)) gross,
               someEntity1
        FROM (
            SELECT SUM(marketValue) netMarketValue, someEntity1, someEntity2
            FROM <some set of tables>
            GROUP BY someEntity1, someEntity2) t
        GROUP BY someEntity1

    In other words, I have an account ledger where I hide internal offsetting transactions (within someEntity2), then calculate assets and liabilities after aggregating them by someEntity2. Then I want to see the grand total of those assets and liabilities aggregated by the bigger entity, someEntity1. In my MDX schema I'd presumably have a cube with dimensions for someEntity1 and someEntity2, and marketValue would be my fact table/measure. I suppose I could create another DSV that does what my subquery does (calculating net) and simply create a cube with that as my measure dimension, but I wonder if there is a better way. I'd rather not have two cubes (one for these net calculations and another going to a lower level of granularity for other use cases), since that would duplicate a lot of data in my database. These will be very large cubes.

    Read the article

  • Loading Liferay Properties from Spring IoC container (to get jdbc connection parameters)

    - by mox601
    I'm developing some portlets for Liferay Portal 5.2.3 (with bundled Tomcat 6.0.18) using the Spring IoC container. I need to map the User_ table used in the Liferay database to an entity with Hibernate, so I need two different dataSources to separate the Liferay DB from the DB used by the portlets. My jdbc.properties has to hold the connection parameters for both databases: no problem for the one used by the portlets, but I am having trouble determining which database Liferay uses to hold its data. My conclusion is that I should have something like this:

        liferayConnection.url=jdbc:hsqldb:${liferay.home}/data/hsql/lportal

    so that the database URL is resolved dynamically according to the Liferay properties found in portal-ext.properties. (Or, better, load the whole portal-ext.properties and read the database properties from there.) The problem is that the placeholder is not resolved:

        Caused by: org.springframework.beans.factory.BeanDefinitionStoreException: Invalid bean definition with name 'liferayDataSource' defined in class path resource [WEB-INF/applicationContext.xml]: Could not resolve placeholder 'liferay.home'

    To dodge this problem I tried to load portal-ext.properties explicitly with a Spring bean:

        <bean id="liferayPropertiesConfigurer"
              class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"
              p:location="../../portal-ext.properties"/>

    but no luck: liferay.home is not resolved, though there are no other errors. How can I resolve the placeholder defined by Liferay? Thanks
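    If reading portal-ext.properties from a relative path stays brittle, Liferay itself exposes the merged portal properties at runtime through PropsUtil, so the JDBC settings can be pulled from there instead of a Spring placeholder. A hedged sketch of a small factory built on that idea (the jdbc.default.* keys are the standard Liferay property names; the DBCP DataSource class is just an illustrative choice):

        import javax.sql.DataSource;
        import org.apache.commons.dbcp.BasicDataSource;
        import com.liferay.portal.kernel.util.PropsUtil;

        public class LiferayDataSourceFactory {

            // Builds a DataSource from the same JDBC settings Liferay resolves
            // from portal.properties / portal-ext.properties at startup.
            public static DataSource create() {
                BasicDataSource ds = new BasicDataSource();
                ds.setDriverClassName(PropsUtil.get("jdbc.default.driverClassName"));
                ds.setUrl(PropsUtil.get("jdbc.default.url"));
                ds.setUsername(PropsUtil.get("jdbc.default.username"));
                ds.setPassword(PropsUtil.get("jdbc.default.password"));
                return ds;
            }
        }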

    Read the article
