Search Results

Search found 7116 results on 285 pages for 'nested queries'.


  • Need to add WHERE condition to query

    - by Angel Carlson
    I am trying to modify edit_orders.php in Zen Cart. Hoping someone might be able to help me add a condition to a query. I need the queries below to specify that the items selected from TABLE_PRODUCTS_DESCRIPTION and TABLE_CATEGORIES_DESCRIPTION must have a language_id = 1. Would be so grateful for any help you could provide.

        // ############################################################################
        // Get List of All Products
        // ############################################################################
        //$result = zen_db_query("SELECT products_name, p.products_id, x.categories_name, ptc.categories_id
        //    FROM " . TABLE_PRODUCTS . " p
        //    LEFT JOIN " . TABLE_PRODUCTS_DESCRIPTION . " pd ON pd.products_id=p.products_id
        //    LEFT JOIN " . TABLE_PRODUCTS_TO_CATEGORIES . " ptc ON ptc.products_id=p.products_id
        //    LEFT JOIN " . TABLE_CATEGORIES_DESCRIPTION . " cd ON cd.categories_id=ptc.categories_id
        //    LEFT JOIN " . TABLE_CATEGORIES_DESCRIPTION . " x ON x.categories_id=ptc.categories_id
        //    ORDER BY categories_id");
        $result = $db -> Execute("SELECT products_name, p.products_id, categories_name, ptc.categories_id
            FROM " . TABLE_PRODUCTS . " p
            LEFT JOIN " . TABLE_PRODUCTS_DESCRIPTION . " pd ON pd.products_id=p.products_id
            LEFT JOIN " . TABLE_PRODUCTS_TO_CATEGORIES . " ptc ON ptc.products_id=p.products_id
            LEFT JOIN " . TABLE_CATEGORIES_DESCRIPTION . " cd ON cd.categories_id=ptc.categories_id
            ORDER BY categories_name");
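
    One possible form of the added condition (a sketch of just the SQL; the unprefixed table names stand in for the TABLE_* constants above, and language_id is assumed to exist on both description tables as described) keeps the filter inside the ON clauses so the LEFT JOINs still return products even if a description row is missing:

        SELECT products_name, p.products_id, categories_name, ptc.categories_id
        FROM products p
        LEFT JOIN products_description pd
               ON pd.products_id = p.products_id AND pd.language_id = 1
        LEFT JOIN products_to_categories ptc
               ON ptc.products_id = p.products_id
        LEFT JOIN categories_description cd
               ON cd.categories_id = ptc.categories_id AND cd.language_id = 1
        ORDER BY categories_name

    Putting the language_id test in the WHERE clause instead would silently turn the LEFT JOINs into inner joins.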

  • SQL query to get latest record for all distinct items in a table

    - by David Buckley
    I have a table of all sales defined like:

        mysql> describe saledata;
        +----------+---------------------+------+-----+---------+-------+
        | Field    | Type                | Null | Key | Default | Extra |
        +----------+---------------------+------+-----+---------+-------+
        | SaleDate | datetime            | NO   |     | NULL    |       |
        | StoreID  | bigint(20) unsigned | NO   |     | NULL    |       |
        | Quantity | int(10) unsigned    | NO   |     | NULL    |       |
        | Price    | decimal(19,4)       | NO   |     | NULL    |       |
        | ItemID   | bigint(20) unsigned | NO   |     | NULL    |       |
        +----------+---------------------+------+-----+---------+-------+

    I need to get the last sale price for all items (as the price may change). I know I can run a query like:

        SELECT price FROM saledata
        WHERE itemID = 1234 AND storeID = 111
        ORDER BY saledate DESC LIMIT 1

    However, I want to be able to get the last sale price for all items (the ItemIDs are stored in a separate item table) and insert them into a separate table. How can I get this data? I've tried queries like this:

        SELECT storeID, itemID, price FROM saledata
        WHERE itemID IN (SELECT itemID FROM itemmap)
        ORDER BY saledate DESC LIMIT 1

    and then wrap that into an insert, but it's not getting the proper data. Is there one query I can run to get the last price for each item and insert that into a table defined like:

        mysql> describe lastsale;
        +---------+---------------------+------+-----+---------+-------+
        | Field   | Type                | Null | Key | Default | Extra |
        +---------+---------------------+------+-----+---------+-------+
        | StoreID | bigint(20) unsigned | NO   |     | NULL    |       |
        | Price   | decimal(19,4)       | NO   |     | NULL    |       |
        | ItemID  | bigint(20) unsigned | NO   |     | NULL    |       |
        +---------+---------------------+------+-----+---------+-------+
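
    One common shape for this (a sketch in MySQL, assuming the latest sale per StoreID/ItemID pair is wanted and exact SaleDate ties are not a concern) is to join the sale data back to the latest date per pair and insert the result in one statement:

        INSERT INTO lastsale (StoreID, Price, ItemID)
        SELECT s.StoreID, s.Price, s.ItemID
        FROM saledata s
        JOIN (
            -- latest sale date per store/item pair
            SELECT StoreID, ItemID, MAX(SaleDate) AS LastDate
            FROM saledata
            GROUP BY StoreID, ItemID
        ) latest
          ON latest.StoreID  = s.StoreID
         AND latest.ItemID   = s.ItemID
         AND latest.LastDate = s.SaleDate;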

  • How should rules for Aggregate Roots be enforced?

    - by MylesRip
    While searching the web, I came across a list of rules from Eric Evans' book that should be enforced for aggregates:

    1. The root Entity has global identity and is ultimately responsible for checking invariants.
    2. Root Entities have global identity. Entities inside the boundary have local identity, unique only within the Aggregate.
    3. Nothing outside the Aggregate boundary can hold a reference to anything inside, except to the root Entity. The root Entity can hand references to the internal Entities to other objects, but they can only use them transiently (within a single method or block).
    4. Only Aggregate Roots can be obtained directly with database queries. Everything else must be done through traversal.
    5. Objects within the Aggregate can hold references to other Aggregate roots.
    6. A delete operation must remove everything within the Aggregate boundary all at once.
    7. When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied.

    This all seems fine in theory, but I don't see how these rules would be enforced in the real world. Take rule 3 for example. Once the root entity has given an external object a reference to an internal entity, what's to keep that external object from holding on to the reference beyond the single method or block? (If the enforcement of this is platform-specific, I would be interested in knowing how this would be enforced within a C#/.NET/NHibernate environment.)

  • Reversible numerical calculations in Prolog

    - by user8472
    While reading SICP I came across the logic programming chapter 4.4. Then I started looking into the Prolog programming language and tried to understand some simple assignments in Prolog. I found that Prolog seems to have trouble with numerical calculations. Here is the computation of a factorial in standard Prolog:

        f(0, 1).
        f(A, B) :- A > 0, C is A-1, f(C, D), B is A*D.

    The issues I find are that I need to introduce two auxiliary variables (C and D) and a new syntax (is), and that the problem is non-reversible (i.e., f(5,X) works as expected, but f(X,120) does not). Naively, I would expect that at the very least C is A-1, f(C, D) above could be replaced by f(A-1,D), but even that does not work.

    My question is: Why do I need to do this extra "stuff" in numerical calculations but not in other queries? I do understand (and SICP is quite clear about it) that in general, information on "what to do" is insufficient to answer the question of "how to do it". So the declarative knowledge in (at least some) math problems is insufficient to actually solve these problems. But that raises the next question: How does this extra "stuff" in Prolog help me restrict the formulation to just those problems where "what to do" is sufficient to answer "how to do it"?

  • Do database engines other than SQL Server behave this way?

    - by Yishai
    I have a stored procedure that goes something like this (pseudo code):

        storedprocedure param1, param2, param3, param4
        begin
            if (param4 = 'Y')
            begin
                select * from SOME_VIEW order by somecolumn
            end
            else if (param1 is null)
            begin
                select * from SOME_VIEW
                where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
                  and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
                order by somecolumn
            end
            else
                select somethingcompletelydifferent
        end

    All ran well for a long time. Suddenly, the query started running forever if param4 was 'Y'. Changing the code to this:

        storedprocedure param1, param2, param3, param4
        begin
            if (param4 = 'Y')
            begin
                set param2 = null
                set param3 = null
            end
            if (param1 is null)
            begin
                select * from SOME_VIEW
                where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
                  and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
                order by somecolumn
            end
            else
                select somethingcompletelydifferent

    and it runs again within expected parameters (15 seconds or so for 40,000+ records). This is with SQL Server 2005. The gist of my question is: is this particular "feature" specific to SQL Server, or is it common among RDBMSs in general that:

    - Queries that ran fine for two years just stop working as the data grows.
    - The "new" execution plan destroys the ability of the database server to execute the query, even though a logically equivalent alternative runs just fine.

    This may seem like a rant against SQL Server, and I suppose to some degree it is, but I really do want to know if others experience this kind of reality with Oracle, DB2 or any other RDBMS. Although I have some experience with others, I have only seen this kind of volume and complexity on SQL Server, so I'm curious whether others with large, complex databases have similar experience in other products.
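
    For what it's worth, the catch-all branch above ((param is null or param = column)) is a classic case where a plan cached for one set of parameter values performs badly for another. A sketch of one common SQL Server mitigation, assuming that is what happened here (the @-prefixed names stand in for the procedure's actual parameters):

        select * from SOME_VIEW
        where (@param2 is null or @param2 = SOME_VIEW.Somecolumn2)
          and (@param3 is null or @param3 = SOME_VIEW.SomeColumn3)
        order by somecolumn
        option (recompile)  -- build a fresh plan for the actual parameter values each call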

  • query structure - ignoring entries for the same event from multiple users?

    - by Andrew Heath
    One table in my MySQL database tracks game plays. It has the following structure:

        SCENARIO_VICTORIES
        [ID] [scenario_id] [game] [timestamp] [user_id] [winning_side] [play_date]

    ID is the autoincremented primary key. timestamp records the moment of submission for the record. winning_side has one of three possible values: 1, 2, or 0 (meaning a draw).

    One of the queries done on this table calculates the victory percentage for each scenario, when that scenario's page is viewed. The output is expressed as:

        Side 1 win %
        Side 2 win %
        Draw %

    and queried with:

        SELECT winning_side, COUNT(scenario_id)
        FROM scenario_victories
        WHERE scenario_id='$scenID'
        GROUP BY winning_side
        ORDER BY winning_side ASC

    and then processed into the percentages and such.

    Sorry for the long setup. My problem is this: several of my users play each other, and record their mutual results. So these battles are being doubly represented in the victory percentages and result counts. Though this happens infrequently, the userbase isn't large and the double entries do have a noticeable effect on the data.

    Given the table and query above - does anyone have any suggestions for how I can "collapse" records that have the same play_date & game & scenario_id & winning_side so that they're only counted once?
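
    One way to collapse the duplicates at query time (a sketch in MySQL, relying on the collapsing criterion stated above: rows sharing play_date, game, scenario_id and winning_side count as a single battle) is to aggregate over a deduplicated derived table:

        SELECT winning_side, COUNT(*)
        FROM (
            -- one row per distinct battle, per the stated criterion
            SELECT DISTINCT play_date, game, scenario_id, winning_side
            FROM scenario_victories
            WHERE scenario_id='$scenID'
        ) AS collapsed
        GROUP BY winning_side
        ORDER BY winning_side ASC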

  • Why does SQLite take such a long time to fetch the data?

    - by Derk
    I have two possible queries, both giving the result set I want.

    Query one takes about 30ms, but 150ms to fetch the data from the database:

        SELECT id
        FROM featurevalues as featval3
        WHERE featval3.feature IN (?,?,?,?)
        AND EXISTS (
            SELECT 1
            FROM product_to_value, product_to_value as prod2, features, featurevalues
            WHERE product_to_value.feature = features.int
            AND product_to_value.value = featurevalues.id
            AND features.id = ?
            AND featurevalues.id IN (?,?)
            AND product_to_value.product = prod2.product
            AND prod2.value = featval3.id
        )

    Query two takes about 3ms (this is the one I therefore prefer), but also takes 170ms to fetch the data:

        SELECT (
            SELECT prod2.value
            FROM product_to_value, product_to_value as prod2, features, featurevalues
            WHERE product_to_value.feature = features.int
            AND product_to_value.value = featurevalues.id
            AND features.id = ?
            AND featurevalues.id IN (?,?)
            AND product_to_value.product = prod2.product
            AND prod2.value = featval3.id
        ) as id
        FROM featurevalues as featval3
        WHERE featval3.feature IN (?,?,?,?)

    The 170ms seems to be related to the number of rows from table featval3. After an index is used on featval3.feature IN (?,?,?,?), 151 items "remain" in featval3.

    Is there something obvious I am missing regarding the slow fetching? As far as I know everything is properly indexed. I am confused because the second query only takes a blazing 3ms to run.
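
    When run time and fetch time diverge like this, it can help to confirm which indexes SQLite actually chooses for the per-row work; a minimal sketch using EXPLAIN QUERY PLAN on the first query above (same placeholders, same tables):

        EXPLAIN QUERY PLAN
        SELECT id
        FROM featurevalues AS featval3
        WHERE featval3.feature IN (?,?,?,?)
        AND EXISTS (
            SELECT 1
            FROM product_to_value, product_to_value AS prod2, features, featurevalues
            WHERE product_to_value.feature = features.int
            AND product_to_value.value = featurevalues.id
            AND features.id = ?
            AND featurevalues.id IN (?,?)
            AND product_to_value.product = prod2.product
            AND prod2.value = featval3.id
        );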

  • django dynamically deduce SITE_ID according to the domain

    - by dcrodjer
    I am trying to develop a site which will render multiple customized sites according to the domain name (subdomain, to be more precise). All of my domain names are redirected to the same application, so for each site there will be a corresponding model which defines how the site should look (SITE - SITE_SETTINGS).

    What would be the best way to utilize the Django sites framework to get the SITE_ID of the current site from the domain name, instead of hard-coding it in the settings files (django sites documentation), and then run database queries and render the views accordingly? If using multiple settings files is my only option, can this be done with the WSGI script handling the domain name?

    Update: So finally, following luke's answer, what I will do is define a custom middleware which makes the views available with the important vars required according to the domain. As far as sitemaps and comments are concerned, I will have to customize the sitemaps app and build a custom sites model on which the other site models will be based. And since the comments system is based on the hard-coded sitemap ID, I can use it just as is on the models (models will already be filtered according to the site based on my sites framework), though the permalink feature will have to be customized. So, a lot of customization.

    Please suggest if I am going wrong anywhere in this, because I have to ensure that the features of the project are optimized. Thanks!

  • Dealing with SQLException with spring,hibernate & Postgres

    - by mad
    Hi, I'm working on a project using HibernateDaoSupport for my DAOs, with Spring, spring-ws, Hibernate and Postgres, which will be used in a national application (meaning a lot of users).

    Currently, every exception from Hibernate is automatically transformed into a specific Spring DataAccessException. I have a table with a keyword column in the database and a unique constraint on the keywords: no duplicate keywords are allowed. I have found two ways to deal with that in the insert DAO:

    1. Check for the duplicate manually (with a select) prior to doing the insert. This means that the Spring transaction will have a SERIALIZABLE isolation level. The obvious drawback is that we now have two queries for a simple insert. Advantage: independent of the database.

    2. Let the insert go through, catch the SQLException and convert it to a user-friendly message and error code for the final consumer of our web services.

    For solution 2: Spring provides a way to translate specific exceptions into customized exceptions, see http://www.oracle.com/technology/pub/articles/marx_spring.html. In my case I would get a ConstraintViolationException. Ideally I would like to write a custom SQLExceptionTranslator to map the duplicate-keyword constraint in the database to a DuplicateWordException. But I can have many unique constraints on the same table, so I have to inspect the message of the SQLException in order to find the name of the constraint declared in the create table ("uq_duplicate-constraint" for example). Now I have a strong dependency on the database.

    Thanks in advance for your answers, and excuse my English (it is not my mother tongue).
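
    If you go with option 2, one thing that keeps the translator mapping manageable on the database side is naming each unique constraint explicitly, since PostgreSQL reports the violated constraint's name back in the error; a minimal sketch with hypothetical table and column names:

        -- Hypothetical table; the explicitly named constraint is what a custom
        -- SQLExceptionTranslator would match on, rather than parsing free text.
        CREATE TABLE keyword (
            id   bigserial PRIMARY KEY,
            word varchar(100) NOT NULL,
            CONSTRAINT uq_keyword_word UNIQUE (word)
        );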

  • Creating reusable chunks of Linq to SQL

    - by tia
    Hi, I am trying to break up horrible LINQ to SQL queries to make them a bit more readable. Say I want to return all orders for products which in the previous year had more than 100 orders. So, the original query is:

        from o in _context.Orders
        where (from o1 in _context.Orders
               where o1.Year == o.Year - 1 && o1.Product == o.Product
               select o1).Count() > 100
        select o;

    Messy. What I'd like to be able to do is break it down, e.g.:

        private IEnumerable<Order> LastSeasonOrders(Order order)
        {
            return (from o in _context.Orders
                    where o.Year == order.Year - 1 && o.Product == order.Product
                    select o);
        }

    which then lets me change the original query to:

        from o in _context.Orders
        where LastSeasonOrders(o).Count() > 100
        select o;

    This doesn't seem to work, as I get "method Blah has no supported translation to SQL" when the query is run. Any quick tips on the correct way to achieve this?

  • Linq generic Expression in query on "element" or on IQueryable (multiple use)

    - by Bogdan Maxim
    Hi, I have the following expression:

        public static Expression<Func<T, bool>> JoinByDateCheck<T>(T entity, DateTime dateToCheck) where T : IDateInterval
        {
            return (entityToJoin) => entityToJoin.FromDate.Date <= dateToCheck.Date
                && (entityToJoin.ToDate == null || entityToJoin.ToDate.Value.Date >= dateToCheck.Date);
        }

    The IDateInterval interface is defined like this:

        interface IDateInterval
        {
            DateTime FromDate { get; }
            DateTime? ToDate { get; }
        }

    and I need to apply it in a few ways:

    (1) Query on a LINQ to SQL table:

        var q1 = from e in intervalTable
                 where FunctionThatCallsJoinByDateCheck(e, constantDateTime)
                 select e;

    or something like this:

        intervalTable.Where(FunctionThatCallsJoinByDateCheck(e, constantDateTime))

    (2) I need to use it in some table joins (as LINQ to SQL doesn't provide comparative joins):

        var q2 = from e1 in t1
                 join e2 in t2 on e1.FK == e2.PK
                 where OtherFunctionThatCallsJoinByDateCheck(e2, e1.FromDate)

    or

        var q2 = from e1 in t1
                 from e2 in t2
                 where e1.FK == e2.PK && OtherFunctionThatCallsJoinByDateCheck(e2, e1.FromDate)

    (3) I need to use it in some queries like this:

        var q3 = from e in intervalTable.FilterFunctionThatCallsJoinByDateCheck(constantDate);

    Dynamic LINQ is not something that I can use, so I have to stick to plain LINQ. Thank you.

    Clarification: Initially I had just the last method (FilterFunctionThatCallsJoinByDateCheck(this IQueryable<IDateInterval> entities, DateTime dateConstant)), which contained the code from the expression. The problem is that I get a SQL Translate exception if I write the code in a method and call it like that. All I want is to extend the use of this function to the where clause (see the second query in point 2).

  • Architecture for database analytics

    - by David Cournapeau
    Hi, we have an architecture where we provide each customer Business Intelligence-like services for their website (internet merchant). Now I need to analyze those data internally (for algorithmic improvement, performance tracking, etc.), and they are potentially quite heavy: we have up to millions of rows per customer per day, and I may want to know how many queries we had in the last month, compared week by week, and so on. That is on the order of billions of entries, if not more.

    The way it is currently done is quite standard: daily scripts which scan the databases and generate big CSV files. I don't like this solution for several reasons:

    - as typical with those kinds of scripts, they fall into the write-once and never-touched-again category
    - tracking things in "real-time" is necessary (we have a separate toolset to query the last few hours at the moment)
    - this is slow and non-"agile"

    Although I have some experience in dealing with huge datasets for scientific usage, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of issue.

  • Modifying SQL Database on Shared Hosting

    - by apocalypse9
    I have a live database on a shared hosting server. I am making some major changes to my site's code and I would like to fix some stupid mistakes I made in initially designing the database. These changes involve altering the size of a large number of fields and enforcing referential integrity between tables properly. I would like to make the changes on both my local test server and the remote server if possible.

    I should note that while I'm fairly comfortable with writing complex queries to handle data, I have very little experience modifying database structure without a graphical interface. I can access the remote database in the Visual Studio database explorer, but I cannot use that for anything other than data manipulation. I installed SQL Server Management Studio Express last night and after 40+ crashes I gave up - I couldn't even patch the damn thing. The remote server is SQL Server 2005, and the MyLittleAdmin web interface is available.

    So my question is: what is the best way to accomplish these changes? Is there a graphical interface I can use on the remote server? If not, is there an easy way to copy the database to my local machine, fix it, and re-upload it? Finally, if none of the above are viable, does anyone have links to decent info on fixing referential integrity via queries?

    Sorry for the somewhat general question - I feel like I am making this far harder than it should be, but after searching / trying all night I haven't gotten anywhere. Thanks in advance for the help. I really appreciate it.

    ...Also, does anyone have a time machine I can borrow? I need to go kick my past self's ass for this.
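
    On the "fixing referential integrity via query" point, the changes described here boil down to ALTER TABLE statements that can be run from MyLittleAdmin or any query window; a minimal T-SQL sketch with hypothetical table and column names (not the real schema):

        -- Widen a field (hypothetical names; existing data must already fit and be non-null).
        ALTER TABLE dbo.Orders
            ALTER COLUMN ShippingAddress nvarchar(200) NOT NULL;

        -- Enforce referential integrity between two existing tables.
        ALTER TABLE dbo.Orders
            ADD CONSTRAINT FK_Orders_Customers
            FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID);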

  • Fetching just the Key/id from a ReferenceProperty in App Engine

    - by ozone
    Hi SO, I could use a little help in App Engine land... Using the [Python] API I create relationships like this example from the docs:

        class Author(db.Model):
            name = db.StringProperty()

        class Story(db.Model):
            author = db.ReferenceProperty(Author)

        story = db.get(story_key)
        author_name = story.author.name

    As I understand it, that example will make two datastore queries: one to fetch the Story and then one to dereference the Author in order to access the name. But I just want to fetch the id, so something like:

        story = db.get(story_key)
        author_id = story.author.key().id()

    I want to just get the id from the reference. I do not want to have to dereference (and therefore query the datastore for) the ReferenceProperty value. From reading the documentation it says that the value of a ReferenceProperty is a Key, which leads me to think that I could just call .id() on the reference's value. But it also says:

        The ReferenceProperty model provides features for Key property values such as automatic dereferencing.

    I can't find anything that explains when this dereferencing takes place. Is it safe to call .id() on the ReferenceProperty's value? Can it be assumed that calling .id() will not cause a datastore lookup?

  • Attach an entity that is not new, perhaps having been loaded from another DataContext (LINQ to SQL)

    - by soldieraman
    Alright, how I got this error: I have one application sitting on a server, with two users accessing it and doing some bulk data processing - e.g. entering values, after which the application works with another system to extract values for them and then saves. I can't recreate the error.

    The error logs show:

    - The error happened at the same time in both instances of the application.
    - Both happened on an Attach/Submit (but two different functions).
    - There is no way they are using the same DataContext object, as I save the DataContext in HttpContext.Items.

    My hunch / guess is one of the following:

    1. One DataContext was not refreshed, i.e. an object was created for the same item twice because it was new in both forms. E.g. Customer Number - a customer was created (as one couldn't be found) by one DataContext; the other one couldn't find it either (I am using compiled queries to find it in the DataContext), so it created another object, and attaching then failed.
    2. HttpContext.Items lost its value somehow (I am using a virtual PC as the server - maybe something went wrong there).

    I am leaning more towards the second, as I can't recreate the error - but it might just be a timing (for attach/save) thing - and the error message also makes me think of the second.

  • Performance of Multiple Joins

    - by geeko
    Greetings Overflowers, I need to query against objects with many/complex spatial conditions. In relational databases that translates to many joins (possibly 10+). I'm new to this business and am wondering whether to go with MS SQL Server 2008 R2, Oracle 11g, a document-based solution such as RavenDB, or simply some spatial database (GIS)... Any thoughts? Regards.

    UPDATE: Thank you all for your answers. Would anybody opt for document/spatial databases? My database would consist of tens of millions to a few billion records, mostly read-only. There are almost no updates except to correct mistakes in the input, and inserts happen overnight and are not that frequent. The join tables are predicted beforehand, but the number of self joins (tables joining themselves multiple times) is not. Small pages of results from such queries are going to be viewed on a highly interactive website, so response time is critical. Any predictions on how this can perform on MS SQL Server 2008 R2 or Oracle 11g? I'm also concerned about boosting performance by adding more servers - which one scales better? How about PostgreSQL?

  • How to track auto-generated id's in select-insert statement

    - by k rey
    I have two tables, detail and head. The detail table will be written first; later, the head table will be written. The head is a summary of the detail table. I would like to keep a reference from the detail to the head table.

    I have a solution, but it is not elegant and requires duplicating the joins and filters that were used during summation. I am looking for a better solution. Below is an example of what I currently have. In this example I have simplified the table structure; in the real world the summation is very complex.

        -- Preparation
        create table #detail (
            detail_id int identity(1,1)
            , code char(4)
            , amount money
            , head_id int null
        );
        create table #head (
            head_id int identity(1,1)
            , code char(4)
            , subtotal money
        );
        insert into #detail ( code, amount ) values ( 'A', 5 );
        insert into #detail ( code, amount ) values ( 'A', 5 );
        insert into #detail ( code, amount ) values ( 'B', 2 );
        insert into #detail ( code, amount ) values ( 'B', 2 );

        -- I would like to somehow simplify the following two queries
        insert into #head ( code, subtotal )
        select code, sum(amount)
        from #detail
        group by code

        update #detail
        set head_id = h.head_id
        from #detail d
        inner join #head h on d.code = h.code

        -- This is the desired end result
        select * from #detail

    Desired end result of the detail table:

        detail_id  code  amount  head_id
        1          A     5.00    1
        2          A     5.00    1
        3          B     2.00    2
        4          B     2.00    2
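
    One way to avoid re-deriving the mapping after the fact (a sketch against the simplified tables above; the OUTPUT clause assumes SQL Server 2005 or later) is to capture each generated head_id together with its grouping key at insert time, so the summation logic itself is written only once:

        declare @map table ( head_id int, code char(4) );

        insert into #head ( code, subtotal )
        output inserted.head_id, inserted.code into @map
        select code, sum(amount)
        from #detail
        group by code;

        -- join back on the captured mapping rather than on #head
        update d
        set head_id = m.head_id
        from #detail d
        inner join @map m on d.code = m.code;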

  • PHP OOP: Avoid Singleton/Static Methods in Domain Model Pattern

    - by sunwukung
    I understand the importance of Dependency Injection and its role in unit testing, which is why the following issue is giving me pause: one area where I struggle not to use the Singleton is the Identity Map / Unit of Work pattern (which keeps tabs on Domain Object state).

        //Not actual code, but it should demonstrate the point
        class Monitor{//singleton construction omitted for brevity
            static $members = array();//keeps record of all objects
            static $dirty = array();//keeps record of all modified objects
            static $clean = array();//keeps record of all clean objects
        }

        class Mapper{//queries database, maps values to object fields
            public function find($id){
                if(isset(Monitor::members[$id]){
                    return Monitor::members[$id];
                }
                $values = $this->selectStmt($id);
                //field mapping process omitted for brevity
                $Object = new Object($values);
                Monitor::new[$id]=$Object
                return $Object;
            }

        $User = $UserMapper->find(1);//domain object is registered in Id Map
        $User->changePropertyX();//object is marked "dirty" in UoW

        // at this point, I can save by passing the Domain Object back to the Mapper
        $UserMapper->save($User);//object is marked clean in UoW

        //but a nicer API would be something like this
        $User->save();

        //but if I want to do this - it has to make a call to the mapper/db somehow
        $User->getBlogPosts();

        //or else have to generate specific collection/object graphing methods in the mapper
        $UserPosts = $UserMapper->getBlogPosts();
        $User->setPosts($UserPosts);

    Any advice on how you might handle this situation? I would be loath to pass/generate instances of the mapper/database access into the Domain Object itself to satisfy DI - at the same time, avoiding that results in lots of calls within the Domain Object to external static methods. Although I guess if I want "save" to be part of its behaviour then a facility to do so is required in its construction. Perhaps it's a problem with responsibility - the Domain Object shouldn't be burdened with saving. It's just quite a neat feature from the Active Record pattern - it would be nice to implement it in some way.

  • Is this possible: JPA/Hibernate query with list property in result ?

    - by Kdeveloper
    In Hibernate I want to run this JPQL / HQL query:

        select new org.test.userDTO( u.id, u.name, u.securityRoles)
        FROM User u
        WHERE u.name = :name

    UserDTO class:

        public class UserDTO {
            private Integer id;
            private String name;
            private List<SecurityRole> securityRoles;

            public UserDTO(Integer id, String name, List<SecurityRole> securityRoles) {
                this.id = id;
                this.name = name;
                this.securityRoles = securityRoles;
            }
            ...getters and setters...
        }

    User entity:

        @Entity
        public class User {
            @Id
            private Integer id;
            private String name;

            @ManyToMany
            @JoinTable(name = "user_has_role",
                joinColumns = { @JoinColumn(name = "user_id") },
                inverseJoinColumns = { @JoinColumn(name = "security_role_id") } )
            private List<SecurityRole> securityRoles;
            ...getters and setters...
        }

    But when Hibernate 3.5 (JPA 2) starts I get this error:

        org.hibernate.hql.ast.QuerySyntaxException: Unable to locate appropriate constructor on class [org.test.UserDTO]
        [SELECT NEW org.test.UserDTO (u.id, u.name, u.securityRoles) FROM nl.test.User u WHERE u.name = :name ]

    Is a select that includes a list as a result not possible? Should I just create two separate queries?

  • customer.name joining transactions.name vs. customer.id [serial] joining transactions.id [integer]

    - by Frank Computer
    INFORMIX-SQL 7.32, pawnshop application: a one-to-many relationship where each customer (master) can have many transactions (detail).

        customer(
            id serial,
            pk_name char(30), {PATERNAL-NAME MATERNAL-NAME, FIRST-NAME MIDDLE-NAME}
            [...]
        );
        unique index on id;
        unique cluster index on name;

        transaction(
            fk_name char(30),
            ticket_number serial,
            [...]
        );
        dups cluster index on fk_name;
        unique index on ticket_number;

    Several people have told me this is not the correct way to join master to detail. They said I should always join customer.id [serial] to transactions.id [integer].

    When a customer pawns merchandise, the clerk queries the master using wildcards on the name. The query usually returns several customers; the clerk scrolls until locating the right name, enters a 'D' to change to the detail transactions table, all transactions are automatically queried, then the clerk enters an 'A' to add a new transaction.

    The problem with using customer.id joining transaction.id is that although the customer table is maintained in sorted name order, clustering the transaction table by fk_id groups the transactions by fk_id, but they are not in the same order as the customer names. So when the clerk is scrolling through customer names in the master, the system has to jump all over the place to locate the clustered transactions belonging to each customer. As each new customer is added, the next id is assigned to that customer, but new customers don't show up in alphabetical order. I experimented using id joins and confirmed the decrease in performance.

    How can I use id joins instead of name joins and still preserve the clustered transaction order by name if transactions has no name column?

  • How to create view model without sorting collections in memory.

    - by Chevex
    I have a view model (below).

        public class TopicsViewModel
        {
            public Topic Topic { get; set; }
            public Reply LastReply { get; set; }
        }

    I want to populate an IQueryable<TopicsViewModel> with values from my IQueryable<Topic> collection and IQueryable<Reply> collection. I do not want to use the attached entity collection (i.e. Topic.Replies) because I only want the last reply for that topic, and doing Topic.Replies.Last() loads the entire entity collection in memory and then grabs the last one in the list. I am trying to stay in IQueryable so that the query is executed in the database.

    I also don't want to foreach through topics and query replyRepository.Replies, because looping through IQueryable<Topic> will start the lazy loading. I'd prefer to build one expression and have all the leg work done in the lower layers. I have the following:

        IQueryable<TopicsViewModel> topicsViewModel = from x in topicRepository.Topics
                                                      from y in replyRepository.Replies
                                                      where y.TopicID == x.TopicID
                                                      orderby y.PostedDate ascending
                                                      select new TopicsViewModel { Topic = x, LastReply = y };

    But this isn't working. Any ideas how I can populate an IQueryable or IEnumerable of TopicsViewModel so that it queries the database and grabs topics and that topic's last reply? I am trying really hard to avoid grabbing all replies related to that topic. I only want to grab the last reply. Thank you for any insight you have to offer.

  • Combining two-part SQL query into one query

    - by user332523
    Hello, I have a SQL query that I'm currently solving by doing two queries. I am wondering if there is a way to do it in a single query that makes it more efficient.

    Consider two tables, Transaction_Entries and Transactions, each one defined below:

        Transactions
        - id
        - reference_number (varchar)

        Transaction_Entries
        - id
        - account_id
        - transaction_id (references Transactions table)

    Notes: There are multiple transaction entries per transaction. Some transactions are related, and will have the same reference_number string.

    To get all transaction entries for account X, I would do:

        SELECT E.*, T.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T ON (E.transaction_id=T.id)
        WHERE E.account_id = X

    The next part is the hard part. I want to find all related transactions, regardless of the account id. First I make a list of all the unique reference numbers I found in the previous result set. Then for each one, I can query all the transactions that have that reference number. Assume that I hold all the rows from the previous query in PreviousResultSet:

        UniqueReferenceNumbers = GetUniqueReferenceNumbers(PreviousResultSet) // in Java

        foreach R in UniqueReferenceNumbers // in Java
            SELECT * FROM Transaction_Entries
            WHERE transaction_id IN (SELECT id FROM Transactions WHERE reference_number=R)

    Any suggestions how I can put this into a single efficient query?
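
    One way to fold both steps into a single statement (a sketch, assuming the goal is the entries of every transaction whose reference_number appears among account X's transactions) is to nest the first query inside the second:

        SELECT E2.*, T2.reference_number
        FROM Transaction_Entries E2
        JOIN Transactions T2 ON E2.transaction_id = T2.id
        WHERE T2.reference_number IN (
            -- reference numbers touched by account X (the first query, reduced to one column)
            SELECT T.reference_number
            FROM Transaction_Entries E
            JOIN Transactions T ON E.transaction_id = T.id
            WHERE E.account_id = ?
        )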

  • Trouble creating a SQL query

    - by JoBu1324
    I've been thinking about how to compose this SQL query for a while now, but after thinking about it for a few hours I thought I'd ask the SO community to see if they have any ideas.

    Here is a mock up of the relevant portion of the tables:

        contracts
            id
            date
            ar (yes/no)
            term

        payments
            contract_id
            payment_date

    The object of the query is to determine, per month, how many payments we expect vs how many payments we received.

    Conditions for expecting a payment:

    - Expected payments begin on contracts.term months after contracts.date, if contracts.ar is "yes".
    - Payments continue to be expected until the month after the first missed payment.

    There is one other complication to this: payments might be late, but they need to show up as if they were paid on the date expected.

    The data is all there, but I've been having trouble wrapping my head around the SQL query. I am not an SQL guru - I merely have a decent amount of experience handling simpler queries. I'd like to avoid filtering the results in code, if possible - but without your help that may be what I have to do.

    Expected output:

        Month     Expected Payments  Received Payments
        January   500                450
        February  498                478
        March     234                211
        April     987                789
        ...

    I've created an SQL Fiddle: http://sqlfiddle.com/#!2/a2c3f/2
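
    A partial sketch for only the easier half: the received side can be grouped per month directly (MySQL syntax, per the fiddle), while the expected side generally needs a calendar/months series derived from contracts.date, contracts.term and contracts.ar, which is not shown here:

        -- Received payments per month (the expected half would join a generated
        -- month series against contracts and is omitted from this sketch).
        SELECT DATE_FORMAT(payment_date, '%Y-%m') AS month,
               COUNT(*) AS received_payments
        FROM payments
        GROUP BY DATE_FORMAT(payment_date, '%Y-%m')
        ORDER BY month;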

  • Do entity collections and object sets implement IQueryable<T>?

    - by Chevex
    I am using Entity Framework for the first time and noticed that the entities object returns entity collections.

        DBEntities db = new DBEntities();
        db.Users; //Users is an ObjectSet<User>
        User user = db.Users.Where(x => x.Username == "test").First(); //Is this getting executed in the SQL or in memory?
        user.Posts; //Posts is an EntityCollection<Post>
        Post post = user.Posts.Where(x => x.PostID == "123").First(); //Is this getting executed in the SQL or in memory?

    Do both ObjectSet and EntityCollection implement IQueryable? I am hoping they do, so that I know the queries are getting executed at the data source and not in memory.

    EDIT: So apparently EntityCollection does not, while ObjectSet does. Does that mean I would be better off using this code?

        DBEntities db = new DBEntities();
        User user = db.Users.Where(x => x.Username == "test").First(); //Is this getting executed in the SQL or in memory?
        Post post = db.Posts.Where(x => (x.PostID == "123")&&(x.Username == user.Username)).First(); // Querying the object set instead of the entity collection.

    Also, what is the difference between ObjectSet and EntityCollection? Shouldn't they be the same? Thanks in advance!
