Search Results

Search found 49727 results on 1990 pages for 'multiple select query'.


  • Linux: 3 Monitor PCI-e Graphics card (without tremendous pain)?

    - by N Rahl
    As we are all painfully aware, the only way to get multiple monitors AND compositing (Compiz) on Linux is to use a single graphics card that can drive both (or in my case all three) screens. I bought a Radeon 5750 specifically because it claims to be able to drive 3 monitors. I can plug in 3 monitors (2 DVI, 1 HDMI) and the Catalyst Control Center shows all 3, but only 2 can be enabled at a time. I'll post the exact error message here soon, but it isn't very helpful. So I'm going to assume that either the 5750 doesn't support 3 monitors, OR, more likely, ATI couldn't be bothered to add that support to their Linux drivers. So this is a multipart question: First, can anyone suggest a PCI Express graphics card that can run 3 screens on Linux without tremendous pain? I'm looking for something where you install the driver and all three screens "just work". Does such a card exist? Second, if you have a 5750, have you been able to get it to do 3 monitors? I'm running Ubuntu 10.04 at the moment. Thanks, Nick

    Read the article

  • "merging" multiple internet connections

    - by Spencer R
    I've seen this question asked several times here on SF, but I'm looking for some updated information, specifically concerning Server 2012. I'm in the process of buying a home, so I'm trying to get some plans together on how I want to structure my network. Internet speeds aren't the greatest and connections can be unreliable where the house is, so I was thinking of having two DSL lines installed. My question is: how can I leverage those two connections to build the best network I can in terms of speed and reliability? My parents will be moving in with me - they consume a lot of bandwidth as it is, and once my internet traffic is added on top I'm headed for a lot of frustration. I think I remember reading somewhere that Server 2012 has some new functionality to utilize multiple connections on multiple NICs in a way that wasn't possible in earlier versions of Server. I'm not sure whether Windows is the right tool for this, but I'm an application developer and spend the majority of my time in Windows environments. However, I've only recently returned to the Windows world, so I'd like my main server at home to run Win Server 2012 so that I can become more familiar with it.

    Read the article

  • Asterisk relay between multiple subnets

    - by immoune
    I wonder what's the best way to go when you have phones on multiple networks which are not directly reachable. I have 3 networks: 10.3.x.x, 10.6.x.x and 10.17.x.x. My Asterisk server resides on the 10.3.0.5 IP. The machines from the 10.6 and 10.17 networks are routed here through VPN tunnels; at this point there is no NAT anywhere on the network, just pure routing. Since the 10.3.0.5 PBX has routes back to all the subnets, it has no problem communicating with softphones/hardphones from these ranges. The problem is that Asterisk (as far as I understand) is only responsible for the SIP signaling, not the audio/video transmission, which is done peer-to-peer between the devices. So although a client using Sipdroid from 10.6.x.x is able to connect to the PBX (10.3.0.5) and dial a Bria client on the 10.17.x.x network, once the phone rings and the call is established, no audio will be transmitted, simply because the two endpoints have no way to connect to each other directly. There are multiple solutions described in this text: http://msdn.microsoft.com/en-us/library/ee480411%28v=winembedded.60%29.aspx What I would prefer is to keep these networks segregated as they are now. What would be the best solution? Is it possible to relay all the audio/video through the Asterisk server? That would be best in my case; I'm using AstLinux there, which has a lot of other parts. Thanks

    Read the article

  • 3 Monitor PCI-e Graphics card (without tremendous pain)?

    - by N Rahl
    As we are all painfully aware, the only way to get multiple monitors AND compositing (Compiz) on Linux is to use a single graphics card that can drive both (or in my case all three) screens. I bought a Radeon 5750 specifically because it claims to be able to drive 3 monitors. I can plug in 3 monitors (2 DVI, 1 HDMI) and the Catalyst Control Center shows all 3, but only 2 can be enabled at a time. The exact message is: The current settings cannot be applied. Possible issues may include: - Display(s) cannot be enabled. - Setting(s) cannot be applied due to insufficient video memory. So I'm going to assume that either the 5750 doesn't support 3 monitors, OR, more likely, ATI couldn't be bothered to add that support to their Linux drivers. So this is a multipart question: First, can anyone suggest a PCI Express graphics card that can run 3 screens on Linux without tremendous pain? I'm looking for something where you install the driver and all three screens "just work". Does such a card exist? Second, if you have a 5750, have you been able to get it to do 3 monitors? I'm running Ubuntu 10.04 at the moment.

    Read the article

  • Managing records of bugs and notes

    - by Jim
    Hi. I want to create a knowledgebase for a piece of software. I'd also like to be able to track bugs and common points of failure in that application. Linking knowledgebase articles to bug records would be a real boon, as would the ability to do complex queries for particular articles and bugs on the basis of tags or metadata. I've never done anything like this before, and I'd like to install as little as possible. I've been looking at creating a wiki with Wiki On A Stick, and it seems to offer a lot, but I can't make complex queries: I can create pages that list all 'articles' with a particular single tag, but I can't specify multiple tags or filters. Is there any software that can help? I don't want to spend money until I've tried something out thoroughly, and I'd ideally like something that demands little-to-no installation. Are there any tools that can help me? If something could easily export its data, or store its data in XML, that would be a real plus too. Otherwise, are there any simple apps that allow me to set up forms for bugs, store the data as XML, then query and process that XML on demand? Thanks in advance.

    Read the article

  • Permissions for Multiple User VPS

    - by adnymarc
    I have a Linode VPS that I have recently set up and am migrating to from Mediatemple, where I have a VPS managed by Plesk. I dislike the Plesk interface and the mess it makes of a lot of things, but appreciated its ability to allow multiple people access to different domains on a server. I have almost everything set up the way I would like it, but am having issues with permissions for my domain directories. I am running Ubuntu 8.04 LTS and Apache 2 as my web server. I have domains successfully located in /var/www/vhosts/domainname.com but have to modify files as root in order to add/change files for the domains. I would like to set up access with the following criteria: each domain can have a user assigned to it (and allow for the same user to manage multiple domains - I could even create symlinks in their home folder to their domains); certain users will have shell access and may be chrooted to the domain directory they control; FTP needs to be set up and able to correctly access the domains so that content editors for each domain can upload/download without permissions issues. I am relatively new to Linux sysadmin work and have searched for a good guide to help solve these issues but haven't been able to find one yet. Thanks in advance for your help.

    Read the article

  • On HP Mini, unable to select 800x600 resolution

    - by Roboto
    I have an HP Mini laptop. The only resolution setting I can select for my display is 1024x576. The HP Deskjet 6988 driver only allows resolution settings of 800x600. I don't care how 800x600 would look on my laptop; I only want to install the driver for the printer and then set the resolution back. I went into the registry, but it was showing a resolution setting of 800x600. How else can I set the resolution, or at least add an 800x600 option to my Display Properties?

    Read the article

  • Linux filesystem suggestion for MySQL with a 100% SELECT workload

    - by gmemon
    I have a MySQL database that contains millions of rows per table and there are 9 tables in total. The database is fully populated, and all I am doing is reads, i.e., there are no INSERTs or UPDATEs. Data is stored in MyISAM tables. Given this scenario, which Linux file system would work best? Currently I have XFS, but I read somewhere that XFS has horrible read performance. Is that true? Should I shift the database to an ext3 file system? Thanks

    Read the article

  • Passing integer lists in a SQL query, best practices

    - by Artiom Chilaru
    I'm currently looking at ways to pass lists of integers in a SQL query, and try to decide which of them is best in which situation, what are the benefots of each, and what are the pitfalls, what should be avoided :) Right now I know of 3 ways that we currently use in our application. 1) Table valued parameter: Create a new Table Valued Parameter in sql server: CREATE TYPE [dbo].[TVP_INT] AS TABLE( [ID] [int] NOT NULL ) Then run the query against it: using (var conn = new SqlConnection(DataContext.GetDefaultConnectionString)) { var comm = conn.CreateCommand(); comm.CommandType = CommandType.Text; comm.CommandText = @" UPDATE DA SET [tsLastImportAttempt] = CURRENT_TIMESTAMP FROM [Account] DA JOIN @values IDs ON DA.ID = IDs.ID"; comm.Parameters.Add(new SqlParameter("values", downloadResults.Select(d => d.ID).ToDataTable()) { TypeName = "TVP_INT" }); conn.Open(); comm.ExecuteScalar(); } The major disadvantages of this method is the fact that Linq doesn't support table valued params (if you create an SP with a TVP param, linq won't be able to run it) :( 2) Convert the list to Binary and use it in Linq! This is a bit better.. Create an SP, and you can run it within linq :) To do this, the SP will have an IMAGE parameter, and we'll be using a user defined function (udf) to convert this to a table.. We currently have implementations of this function written in C++ and in assembly, both have pretty much the same performance :) Basically, each integer is represented by 4 bytes, and passed to the SP. In .NET we have an extension method that convers an IEnumerable to a byte array The extension method: public static Byte[] ToBinary(this IEnumerable intList) { return ToBinaryEnum(intList).ToArray(); } private static IEnumerable<Byte> ToBinaryEnum(IEnumerable<Int32> intList) { IEnumerator<Int32> marker = intList.GetEnumerator(); while (marker.MoveNext()) { Byte[] result = BitConverter.GetBytes(marker.Current); Array.Reverse(result); foreach (byte b in result) yield return b; } } The SP: CREATE PROCEDURE [Accounts-UpdateImportAttempts] @values IMAGE AS BEGIN UPDATE DA SET [tsLastImportAttempt] = CURRENT_TIMESTAMP FROM [Account] DA JOIN dbo.udfIntegerArray(@values, 4) IDs ON DA.ID = IDs.Value4 END And we can use it by running the SP directly, or in any linq query we need using (var db = new DataContext()) { db.Accounts_UpdateImportAttempts(downloadResults.Select(d => d.ID).ToBinary()); // or var accounts = db.Accounts .Where(a => db.udfIntegerArray(downloadResults.Select(d => d.ID).ToBinary(), 4) .Select(i => i.Value4) .Contains(a.ID)); } This method has the benefit of using compiled queries in linq (which will have the same sql definition, and query plan, so will also be cached), and can be used in SPs as well. Both these methods are theoretically unlimited, so you can pass millions of ints at a time :) 3) The simple linq .Contains() It's a more simple approach, and is perfect in simple scenarios. But is of course limited by this. using (var db = new DataContext()) { var accounts = db.Accounts .Where(a => downloadResults.Select(d => d.ID).Contains(a.ID)); } The biggest drawback of this method is that each integer in the downloadResults variable will be passed as a separate int.. In this case, the query is limited by sql (max allowed parameters in a sql query, which is a couple of thousand, if I remember right). So I'd like to ask.. What do you think is the best of these, and what other methods and approaches have I missed?
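    For completeness, here is a sketch of how option 1 looks when the table-valued parameter is wrapped in a plain stored procedure (callable from ADO.NET directly, since, as noted above, LINQ to SQL cannot map TVPs). The type, table and column names come from the post; the procedure name and the PRIMARY KEY on the type are added here for illustration:
        CREATE TYPE dbo.TVP_INT AS TABLE (ID int NOT NULL PRIMARY KEY);
        GO
        CREATE PROCEDURE dbo.Accounts_UpdateImportAttempts_Tvp
            @values dbo.TVP_INT READONLY   -- table-valued parameters must be declared READONLY
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE DA
            SET    tsLastImportAttempt = CURRENT_TIMESTAMP
            FROM   dbo.Account AS DA
            JOIN   @values AS IDs ON DA.ID = IDs.ID;
        END;
        GO
    The PRIMARY KEY on the type also gives the optimizer something to join on when the parameter carries a large number of IDs.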

    Read the article

  • Please help fix and optimize this query

    - by user607217
    I am working on a system to find potential duplicates in our customers table (SQL 2005). I am using the built-in SOUNDEX value that our software computes when customers are added/updated, but I also implemented the double metaphone algorithm for better matching. This is the most-nested query I have created, and I can't help but think there is a better way to do it and I'd like to learn. In the inner-most query I am joining the customer table to the metaphone table I created, then finding customers that have identical pKey (primary phonetic key). I take that, union that with customers that have matching soundex values, and then proceed to score those matches with various text similarity functions. This is currently working, but I would also like to add a union of customers whose aKey (alternate phonetic key) match. This would be identical to "QUERY A" except to substitute on (c1Akey = c2Akey) for the join. However, when I attempt to include that, I get errors when I try to execute my query. Here is the code: --Create aggregate ranking select c1Name, c2Name, nDiff, c1Addr, c2Addr, aDiff, c1SSN, c2SSN, sDiff, c1DOB, c2DOB, dDiff, nDiff+aDiff+dDiff+sDiff as Score ,(sDiff+dDiff)*1.5 + (nDiff+dDiff)*1.5 + (nDiff+sDiff)*1.5 + aDiff *.5 + nDiff *.5 as [Rank] FROM ( --Create match scores for different fields SELECT c1Name, c2Name, c1Addr, c2Addr, c1SSN, c2SSN, c1LTD, c2LTD, c1DOB, c2DOB, dbo.Jaro(c1name, c2name) AS nDiff, dbo.JaroWinkler(c1addr, c2addr) AS aDiff, CASE WHEN c1dob = '1901-01-01' OR c2dob = '1901-01-01' OR c1dob = '1800-01-01' OR c2dob = '1800-01-01' THEN .5 ELSE dbo.SmithWaterman(c1dob, c2dob) END AS dDiff, CASE WHEN c1ssn = '000-00-0000' OR c2ssn = '000-00-0000' THEN .5 ELSE dbo.Jaro(c1ssn, c2ssn) END AS sDiff FROM -- Generate list of possible matches based on multiple phonetic matching fields ( select * from -- List of similar names from pKey field of ##Metaphone table --QUERY A BEGIN (select customers.custno as c1Custno, name as c1Name, haddr as c1Addr, ssn as c1SSN, lasttripdate as c1LTD, dob as c1DOB, soundex as c1Soundex, pkey as c1Pkey, akey as c1Akey from Customers WITH (nolock) join ##Metaphone on customers.custno = ##Metaphone.custno) as c1 JOIN (select customers.custno as c2Custno, name as c2Name, haddr as c2Addr, ssn as c2SSN, lasttripdate as c2LTD, dob as c2DOB, soundex as c2Soundex, pkey as c2Pkey, akey as c2Akey from Customers with (nolock) join ##Metaphone on customers.custno = ##Metaphone.custno) as c2 on (c1Pkey = c2Pkey) and (c1Custno < c2Custno) WHERE (c1Name <> 'PARENT, GUARDIAN') and c1soundex != c2soundex --QUERY A END union --List of similar names from pregenerated SOUNDEX field (select t1.custno, t1.name, t1.haddr, t1.ssn, t1.lasttripdate, t1.dob, t1.[soundex], 0, 0, t2.custno, t2.name, t2.haddr, t2.ssn, t2.lasttripdate, t2.dob, t2.[soundex], 0, 0 from Customers t1 WITH (nolock) join customers t2 with (nolock) on t1.[soundex] = t2.[soundex] and t1.custno < t2.custno where (t1.name <> 'PARENT, GUARDIAN')) ) as a ) as b where (sDiff+dDiff)*1.5 + (nDiff+dDiff)*1.5 + (nDiff+sDiff)*1.5 + aDiff *.5 + nDiff *.5 >= 7.5 order by [rank] desc, score desc Previously, I was using joins such as on c1.pkey = c2.pkey or c1.akey = c2.akey or c1.soundex = c2.soundex but the performance was horrendous, and using unions seems to be working a lot better. Out of 103K customers, tt is currently generating a list of 8.5M potential matches (based on the phonetic codes) in 2.25 minutes, and then taking another 2 to score, rank and filter those down to about 3000. 
So I am happy with the performance; I just can't help but think there is a better way to structure this, and I need help adding the extra union condition. Thanks!
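    For reference, a sketch of the missing union branch: it mirrors QUERY A with only the join condition changed to the alternate phonetic key. It is untested against the original schema, and the column count and order must line up exactly with the other two union members or the statement will fail:
        union
        -- similar names from the aKey (alternate phonetic key) field
        select * from
        (select customers.custno as c1Custno, name as c1Name, haddr as c1Addr, ssn as c1SSN,
                lasttripdate as c1LTD, dob as c1DOB, soundex as c1Soundex, pkey as c1Pkey, akey as c1Akey
         from Customers with (nolock)
         join ##Metaphone on customers.custno = ##Metaphone.custno) as c1
        join
        (select customers.custno as c2Custno, name as c2Name, haddr as c2Addr, ssn as c2SSN,
                lasttripdate as c2LTD, dob as c2DOB, soundex as c2Soundex, pkey as c2Pkey, akey as c2Akey
         from Customers with (nolock)
         join ##Metaphone on customers.custno = ##Metaphone.custno) as c2
        on (c1Akey = c2Akey) and (c1Custno < c2Custno)
        where (c1Name <> 'PARENT, GUARDIAN') and c1Soundex != c2Soundex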

    Read the article

  • (SQL) Selecting from a database based on multiple pairs of pairs

    - by Owen Allen
    The problem i've encountered is attempting to select rows from a database where 2 columns in that row align to specific pairs of data. IE selecting rows from data where id = 1 AND type = 'news'. Obviously, if it was 1 simple pair it would be easy, but the issue is we are selecting rows based on 100s of pair of data. I feel as if there must be some way to do this query without looping through the pairs and querying each individually. I'm hoping some SQL stackers can provide guidance. Here's a full code break down: Lets imagine that I have the following dataset where history_id is the primary key. I simplified the structure a bit regarding the dates for ease of reading. table: history history_id id type user_id date 1 1 news 1 5/1 2 1 news 1 5/1 3 1 photo 1 5/2 4 3 news 1 5/3 5 4 news 1 5/3 6 1 news 1 5/4 7 2 photo 1 5/4 8 2 photo 1 5/5 If the user wants to select rows from the database based on a date range we would take a subset of that data. SELECT history_id, id, type, user_id, date FROM history WHERE date BETWEEN '5/3' AND '5/5' Which returns the following dataset history_id id type user_id date 4 3 news 1 5/3 5 4 news 1 5/3 6 1 news 1 5/4 7 2 photo 1 5/4 8 2 photo 1 5/5 Now, using that subset of data I need to determine how many of those entries represent the first entry in the database for each type,id pairing. IE is row 4 the first time in the database that id: 3, type: news appears. So I use a with() min() query. In real code the two lists are programmatically generated from the result sets of our previous query, here I spelled them out for ease of reading. WITH previous AS ( SELECT history_id, id, type FROM history WHERE id IN (1,2,3,4) AND type IN ('news','photo') ) SELECT min(history_id) as history_id, id, type FROM previous GROUP BY id, type Which returns the following data set. history_id id type user_id date 1 1 news 1 5/1 2 1 news 1 5/1 3 1 photo 1 5/2 4 3 news 1 5/3 5 4 news 1 5/3 6 1 news 1 5/4 7 2 photo 1 5/4 8 2 photo 1 5/5 You'll notice it's the entire original dataset, because we are matching id and type individually in lists, rather than as a collective pairs. The result I desire is, but I can't figure out the SQL to get this result. history_id id type user_id date 1 1 news 1 5/1 4 3 news 1 5/3 5 4 news 1 5/3 7 2 photo 1 5/4 Obviously, I could go the route of looping through each pair and querying the database to determine it's first result, but that seems an inefficient solution. I figured one of the SQL gurus on this site might be able to spread some wisdom. In case I'm approaching this situation incorrectly, the gist of the whole routine is that the database stores all creations and edits in the same table. I need to track each users behavior and determine how many entries in the history table are edits or creations over a specific date range. Therefore I select all type:id pairs from the date range based on a user_id, and then for each pairing I determine if the user is responsible for the first that occurs in the database. If first, then creation else edit. Any assistance would be awesome.
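    As a hedged sketch of one way to attack both halves of this without looping (MySQL accepts row constructors in an IN list; the ISO dates below stand in for the simplified 5/3-5/5 values in the example):
        -- match many (id, type) pairs in a single statement
        SELECT *
        FROM history
        WHERE (id, type) IN ((1, 'news'), (2, 'photo'), (3, 'news'), (4, 'news'));

        -- first row ever recorded for each (id, type) pair that appears in the date range
        SELECT h.*
        FROM history AS h
        JOIN (SELECT id, type, MIN(history_id) AS first_id
              FROM history
              GROUP BY id, type) AS firsts
          ON firsts.first_id = h.history_id
        WHERE (h.id, h.type) IN (SELECT id, type
                                 FROM history
                                 WHERE date BETWEEN '2010-05-03' AND '2010-05-05');
    Run against the sample data, the second query should return rows 1, 4, 5 and 7, which matches the desired result described above.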

    Read the article

  • Projections.count() and Projections.countDistinct() both result in the same query

    - by Kim L
    EDIT: I've edited this post completely, so that the new description of my problem includes all the details and not only what I previously considered relevant. Maybe this new description will help to solve the problem I'm facing. I have two entity classes, Customer and CustomerGroup. The relation between customer and customer groups is ManyToMany. The customer groups are annotated in the following way in the Customer class. @Entity public class Customer { ... @ManyToMany(mappedBy = "customers", fetch = FetchType.LAZY) public Set<CustomerGroup> getCustomerGroups() { ... } ... public String getUuid() { return uuid; } ... } The customer reference in the customer groups class is annotated in the following way @Entity public class CustomerGroup { ... @ManyToMany public Set<Customer> getCustomers() { ... } ... public String getUuid() { return uuid; } ... } Note that both the CustomerGroup and Customer classes also have an UUID field. The UUID is a unique string (uniqueness is not forced in the datamodel, as you can see, it is handled as any other normal string). What I'm trying to do, is to fetch all customers which do not belong to any customer group OR the customer group is a "valid group". The validity of a customer group is defined with a list of valid UUIDs. I've created the following criteria query Criteria criteria = getSession().createCriteria(Customer.class); criteria.setProjection(Projections.countDistinct("uuid")); criteria = criteria.createCriteria("customerGroups", "groups", Criteria.LEFT_JOIN); List<String> uuids = getValidUUIDs(); Criterion criterion = Restrictions.isNull("groups.uuid"); if (uuids != null && uuids.size() > 0) { criterion = Restrictions.or(criterion, Restrictions.in( "groups.uuid", uuids)); } criteria.add(criterion); When executing the query, it will result in the following SQL query select count(*) as y0_ from Customer this_ left outer join CustomerGroup_Customer customergr3_ on this_.id=customergr3_.customers_id left outer join CustomerGroup groups1_ on customergr3_.customerGroups_id=groups1_.id where groups1_.uuid is null or groups1_.uuid in ( ?, ? ) The query is exactly what I wanted, but with one exception. Since a Customer can belong to multiple CustomerGroups, left joining the CustomerGroup will result in duplicated Customer objects. Hence the count(*) will give a false value, as it only counts how many results there are. I need to get the amount of unique customers and this I expected to achieve by using the Projections.countDistinct("uuid"); -projection. For some reason, as you can see, the projection will still result in a count(*) query instead of the expected count(distinct uuid). Replacing the projection countDistinct with just count("uuid") will result in the exactly same query. Am I doing something wrong or is this a bug? === "Problem" solved. Reason: PEBKAC (Problem Exists Between Keyboard And Chair). I had a branch in my code and didn't realize that the branch was executed. That branch used rowCount() instead of countDistinct().
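    Setting the Criteria API aside for a moment, the SQL-level difference being chased here looks roughly like this (table and column names taken from the generated SQL in the post; the UUID literals are placeholders). With the left join fanning out one row per group membership, only the DISTINCT form counts each customer once:
        -- counts join rows, so a customer in several groups is counted several times
        SELECT COUNT(*)
        FROM Customer c
        LEFT JOIN CustomerGroup_Customer cg ON c.id = cg.customers_id
        LEFT JOIN CustomerGroup g ON cg.customerGroups_id = g.id
        WHERE g.uuid IS NULL OR g.uuid IN ('uuid-1', 'uuid-2');

        -- counts each customer once; this is what countDistinct("uuid") is expected to emit
        SELECT COUNT(DISTINCT c.uuid)
        FROM Customer c
        LEFT JOIN CustomerGroup_Customer cg ON c.id = cg.customers_id
        LEFT JOIN CustomerGroup g ON cg.customerGroups_id = g.id
        WHERE g.uuid IS NULL OR g.uuid IN ('uuid-1', 'uuid-2');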

    Read the article

  • Grails - ElasticSearch - QueryParsingException[[index] No query registered for [query]]; with elasticSearchHelper; JSON via curl works fine though

    - by v1p
    I have been working on a Grails project, clubbed with ElasticSearch ( v 20.6 ), with a custom build of elasticsearch-grails-plugin(to support geo_point indexing : v.20.6) have been trying to do a filtered Search, while using script_fields (to calculate distance). Following is Closure & the generated JSON from the GXContentBuilder : Closure records = Domain.search(searchType:'dfs_query_and_fetch'){ query { filtered = { query = { if(queryTxt){ query_string(query: queryTxt) }else{ match_all {} } } filter = { geo_distance = { distance = "${userDistance}km" "location"{ lat = latlon[0]?:0.00 lon = latlon[1]?:0.00 } } } } } script_fields = { distance = { script = "doc['location'].arcDistanceInKm($latlon)" } } fields = ["_source"] } GXContentBuilder generated query JSON : { "query": { "filtered": { "query": { "match_all": {} }, "filter": { "geo_distance": { "distance": "5km", "location": { "lat": "37.752258", "lon": "-121.949886" } } } } }, "script_fields": { "distance": { "script": "doc['location'].arcDistanceInKm(37.752258, -121.949886)" } }, "fields": ["_source"] } The JSON query, using curl-way, works perfectly fine. But when I try to execute it from Groovy Code, I mean with this : elasticSearchHelper.withElasticSearch { Client client -> def response = client.search(request).actionGet() } It throws following error : Failed to execute phase [dfs], total failure; shardFailures {[1][index][3]: SearchParseException[[index][3]: from[0],size[60]: Parse Failure [Failed to parse source [{"from":0,"size":60,"query_binary":"eyJxdWVyeSI6eyJmaWx0ZXJlZCI6eyJxdWVyeSI6eyJtYXRjaF9hbGwiOnt9fSwiZmlsdGVyIjp7Imdlb19kaXN0YW5jZSI6eyJkaXN0YW5jZSI6IjVrbSIsImNvbXBhbnkuYWRkcmVzcy5sb2NhdGlvbiI6eyJsYXQiOiIzNy43NTIyNTgiLCJsb24iOiItMTIxLjk0OTg4NiJ9fX19fSwic2NyaXB0X2ZpZWxkcyI6eyJkaXN0YW5jZSI6eyJzY3JpcHQiOiJkb2NbJ2NvbXBhbnkuYWRkcmVzcy5sb2NhdGlvbiddLmFyY0Rpc3RhbmNlSW5LbSgzNy43NTIyNTgsIC0xMjEuOTQ5ODg2KSJ9fSwiZmllbGRzIjpbIl9zb3VyY2UiXX0=","explain":true}]]]; nested: QueryParsingException[[index] No query registered for [query]]; } The above Closure works if I only use filtered = { ... } script_fields = { ... } but it doesn't return the calculated distance. Anyone had any similar problem ? Thanks in advance :) It's possible that I might have been dim to point out the obvious here :P

    Read the article

  • How to create a simple ADF dashboard application with EJB 3.0

    - by Rodrigues, Raphael
    In this month's Oracle Magazine, Frank Nimphius wrote a very good article about an Oracle ADF Faces dashboard application to support persistent user personalization. You can read this entire article clicking here. The idea in this article is to extend the dashboard application. My idea here is to create a similar dashboard application, but instead ADF BC model layer, I'm intending to use EJB3.0. There are just a one small trick here and I'll show you. I'm using the HR usual oracle schema. The steps are: 1. Create a ADF Fusion Application with EJB as a layer model 2. Generate the entities from table (I'm using Department and Employees only) 3. Create a new Session Bean. I called it: HRSessionEJB 4. Create a new method like that: public List getAllDepartmentsHavingEmployees(){ JpaEntityManager jpaEntityManager = (JpaEntityManager)em.getDelegate(); Query query = jpaEntityManager.createNamedQuery("Departments.allDepartmentsHavingEmployees"); JavaBeanResult.setQueryResultClass(query, AggregatedDepartment.class); return query.getResultList(); } 5. In the Departments entity, create a new native query annotation: @Entity @NamedQueries( { @NamedQuery(name = "Departments.findAll", query = "select o from Departments o") }) @NamedNativeQueries({ @NamedNativeQuery(name="Departments.allDepartmentsHavingEmployees", query = "select e.department_id, d.department_name , sum(e.salary), avg(e.salary) , max(e.salary), min(e.salary) from departments d , employees e where d.department_id = e.department_id group by e.department_id, d.department_name")}) public class Departments implements Serializable {...} 6. Create a new POJO called AggregatedDepartment: package oramag.sample.dashboard.model; import java.io.Serializable; import java.math.BigDecimal; public class AggregatedDepartment implements Serializable{ @SuppressWarnings("compatibility:5167698678781240729") private static final long serialVersionUID = 1L; private BigDecimal departmentId; private String departmentName; private BigDecimal sum; private BigDecimal avg; private BigDecimal max; private BigDecimal min; public AggregatedDepartment() { super(); } public AggregatedDepartment(BigDecimal departmentId, String departmentName, BigDecimal sum, BigDecimal avg, BigDecimal max, BigDecimal min) { super(); this.departmentId = departmentId; this.departmentName = departmentName; this.sum = sum; this.avg = avg; this.max = max; this.min = min; } public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; } public BigDecimal getDepartmentId() { return departmentId; } public void setDepartmentName(String departmentName) { this.departmentName = departmentName; } public String getDepartmentName() { return departmentName; } public void setSum(BigDecimal sum) { this.sum = sum; } public BigDecimal getSum() { return sum; } public void setAvg(BigDecimal avg) { this.avg = avg; } public BigDecimal getAvg() { return avg; } public void setMax(BigDecimal max) { this.max = max; } public BigDecimal getMax() { return max; } public void setMin(BigDecimal min) { this.min = min; } public BigDecimal getMin() { return min; } } 7. Create the util java class called JavaBeanResult. The function of this class is to configure a native SQL query to return POJOs in a single line of code using the utility class. Credits: http://onpersistence.blogspot.com.br/2010/07/eclipselink-jpa-native-constructor.html package oramag.sample.dashboard.model.util; /******************************************************************************* * Copyright (c) 2010 Oracle. 
All rights reserved. * This program and the accompanying materials are made available under the * terms of the Eclipse Public License v1.0 and Eclipse Distribution License v. 1.0 * which accompanies this distribution. * The Eclipse Public License is available at http://www.eclipse.org/legal/epl-v10.html * and the Eclipse Distribution License is available at * http://www.eclipse.org/org/documents/edl-v10.php. * * @author shsmith ******************************************************************************/ import java.lang.reflect.Constructor; import java.lang.reflect.InvocationTargetException; import java.util.ArrayList; import java.util.List; import javax.persistence.Query; import org.eclipse.persistence.exceptions.ConversionException; import org.eclipse.persistence.internal.helper.ConversionManager; import org.eclipse.persistence.internal.sessions.AbstractRecord; import org.eclipse.persistence.internal.sessions.AbstractSession; import org.eclipse.persistence.jpa.JpaHelper; import org.eclipse.persistence.queries.DatabaseQuery; import org.eclipse.persistence.queries.QueryRedirector; import org.eclipse.persistence.sessions.Record; import org.eclipse.persistence.sessions.Session; /*** * This class is a simple query redirector that intercepts the result of a * native query and builds an instance of the specified JavaBean class from each * result row. The order of the selected columns musts match the JavaBean class * constructor arguments order. * * To configure a JavaBeanResult on a native SQL query use: * JavaBeanResult.setQueryResultClass(query, SomeBeanClass.class); * where query is either a JPA SQL Query or native EclipseLink DatabaseQuery. * * @author shsmith * */ public final class JavaBeanResult implements QueryRedirector { private static final long serialVersionUID = 3025874987115503731L; protected Class resultClass; public static void setQueryResultClass(Query query, Class resultClass) { JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass); DatabaseQuery databaseQuery = JpaHelper.getDatabaseQuery(query); databaseQuery.setRedirector(javaBeanResult); } public static void setQueryResultClass(DatabaseQuery query, Class resultClass) { JavaBeanResult javaBeanResult = new JavaBeanResult(resultClass); query.setRedirector(javaBeanResult); } protected JavaBeanResult(Class resultClass) { this.resultClass = resultClass; } @SuppressWarnings("unchecked") public Object invokeQuery(DatabaseQuery query, Record arguments, Session session) { List results = new ArrayList(); try { Constructor[] constructors = resultClass.getDeclaredConstructors(); Constructor javaBeanClassConstructor = null; // (Constructor) resultClass.getDeclaredConstructors()[0]; Class[] constructorParameterTypes = null; // javaBeanClassConstructor.getParameterTypes(); List rows = (List) query.execute( (AbstractSession) session, (AbstractRecord) arguments); for (Object[] columns : rows) { boolean found = false; for (Constructor constructor : constructors) { javaBeanClassConstructor = constructor; constructorParameterTypes = javaBeanClassConstructor.getParameterTypes(); if (columns.length == constructorParameterTypes.length) { found = true; break; } // if (columns.length != constructorParameterTypes.length) { // throw new ColumnParameterNumberMismatchException( // resultClass); // } } if (!found) throw new ColumnParameterNumberMismatchException( resultClass); Object[] constructorArgs = new Object[constructorParameterTypes.length]; for (int j = 0; j < columns.length; j++) { Object columnValue = columns[j]; Class 
parameterType = constructorParameterTypes[j]; // convert the column value to the correct type--if possible constructorArgs[j] = ConversionManager.getDefaultManager() .convertObject(columnValue, parameterType); } results.add(javaBeanClassConstructor.newInstance(constructorArgs)); } } catch (ConversionException e) { throw new ColumnParameterMismatchException(e); } catch (IllegalArgumentException e) { throw new ColumnParameterMismatchException(e); } catch (InstantiationException e) { throw new ColumnParameterMismatchException(e); } catch (IllegalAccessException e) { throw new ColumnParameterMismatchException(e); } catch (InvocationTargetException e) { throw new ColumnParameterMismatchException(e); } return results; } public final class ColumnParameterMismatchException extends RuntimeException { private static final long serialVersionUID = 4752000720859502868L; public ColumnParameterMismatchException(Throwable t) { super( "Exception while processing query results-ensure column order matches constructor parameter order", t); } } public final class ColumnParameterNumberMismatchException extends RuntimeException { private static final long serialVersionUID = 1776794744797667755L; public ColumnParameterNumberMismatchException(Class clazz) { super( "Number of selected columns does not match number of constructor arguments for: " + clazz.getName()); } } } 8. Create the DataControl and a jsf or jspx page 9. Drag allDepartmentsHavingEmployees from DataControl and drop in your page 10. Choose Graph > Type: Bar (Normal) > any layout 11. In the wizard screen, Bars label, adds: sum, avg, max, min. In the X Axis label, adds: departmentName, and click in OK button 12. Run the page, the result is showed below: You can download the workspace here . It was using the latest jdeveloper version 11.1.2.2.

    Read the article

  • Fun with Aggregates

    - by Paul White
    There are interesting things to be learned from even the simplest queries.  For example, imagine you are given the task of writing a query to list AdventureWorks product names where the product has at least one entry in the transaction history table, but fewer than ten. One possible query to meet that specification is: SELECT p.Name FROM Production.Product AS p JOIN Production.TransactionHistory AS th ON p.ProductID = th.ProductID GROUP BY p.ProductID, p.Name HAVING COUNT_BIG(*) < 10; That query correctly returns 23 rows (execution plan and data sample shown below): The execution plan looks a bit different from the written form of the query: the base tables are accessed in reverse order, and the aggregation is performed before the join.  The general idea is to read all rows from the history table, compute the count of rows grouped by ProductID, merge join the results to the Product table on ProductID, and finally filter to only return rows where the count is less than ten. This ‘fully-optimized’ plan has an estimated cost of around 0.33 units.  The reason for the quote marks there is that this plan is not quite as optimal as it could be – surely it would make sense to push the Filter down past the join too?  To answer that, let’s look at some other ways to formulate this query.  This being SQL, there are any number of ways to write logically-equivalent query specifications, so we’ll just look at a couple of interesting ones.  The first query is an attempt to reverse-engineer T-SQL from the optimized query plan shown above.  It joins the result of pre-aggregating the history table to the Product table before filtering: SELECT p.Name FROM ( SELECT th.ProductID, cnt = COUNT_BIG(*) FROM Production.TransactionHistory AS th GROUP BY th.ProductID ) AS q1 JOIN Production.Product AS p ON p.ProductID = q1.ProductID WHERE q1.cnt < 10; Perhaps a little surprisingly, we get a slightly different execution plan: The results are the same (23 rows) but this time the Filter is pushed below the join!  The optimizer chooses nested loops for the join, because the cardinality estimate for rows passing the Filter is a bit low (estimate 1 versus 23 actual), though you can force a merge join with a hint and the Filter still appears below the join.  In yet another variation, the < 10 predicate can be ‘manually pushed’ by specifying it in a HAVING clause in the “q1” sub-query instead of in the WHERE clause as written above. The reason this predicate can be pushed past the join in this query form, but not in the original formulation is simply an optimizer limitation – it does make efforts (primarily during the simplification phase) to encourage logically-equivalent query specifications to produce the same execution plan, but the implementation is not completely comprehensive. Moving on to a second example, the following query specification results from phrasing the requirement as “list the products where there exists fewer than ten correlated rows in the history table”: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) < 10 ); Unfortunately, this query produces an incorrect result (86 rows): The problem is that it lists products with no history rows, though the reasons are interesting.  The COUNT_BIG(*) in the EXISTS clause is a scalar aggregate (meaning there is no GROUP BY clause) and scalar aggregates always produce a value, even when the input is an empty set.  
In the case of the COUNT aggregate, the result of aggregating the empty set is zero (the other standard aggregates produce a NULL).  To make the point really clear, let’s look at product 709, which happens to be one for which no history rows exist: -- Scalar aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709;   -- Vector aggregate SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = 709 GROUP BY th.ProductID; The estimated execution plans for these two statements are almost identical: You might expect the Stream Aggregate to have a Group By for the second statement, but this is not the case.  The query includes an equality comparison to a constant value (709), so all qualified rows are guaranteed to have the same value for ProductID and the Group By is optimized away. In fact there are some minor differences between the two plans (the first is auto-parameterized and qualifies for trivial plan, whereas the second is not auto-parameterized and requires cost-based optimization), but there is nothing to indicate that one is a scalar aggregate and the other is a vector aggregate.  This is something I would like to see exposed in show plan so I suggested it on Connect.  Anyway, the results of running the two queries show the difference at runtime: The scalar aggregate (no GROUP BY) returns a result of zero, whereas the vector aggregate (with a GROUP BY clause) returns nothing at all.  Returning to our EXISTS query, we could ‘fix’ it by changing the HAVING clause to reject rows where the scalar aggregate returns zero: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID HAVING COUNT_BIG(*) BETWEEN 1 AND 9 ); The query now returns the correct 23 rows: Unfortunately, the execution plan is less efficient now – it has an estimated cost of 0.78 compared to 0.33 for the earlier plans.  Let’s try adding a redundant GROUP BY instead of changing the HAVING clause: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY th.ProductID HAVING COUNT_BIG(*) < 10 ); Not only do we now get correct results (23 rows), this is the execution plan: I like to compare that plan to quantum physics: if you don’t find it shocking, you haven’t understood it properly :)  The simple addition of a redundant GROUP BY has resulted in the EXISTS form of the query being transformed into exactly the same optimal plan we found earlier.  What’s more, in SQL Server 2008 and later, we can replace the odd-looking GROUP BY with an explicit GROUP BY on the empty set: SELECT p.Name FROM Production.Product AS p WHERE EXISTS ( SELECT * FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ); I offer that as an alternative because some people find it more intuitive (and it perhaps has more geek value too).  Whichever way you prefer, it’s rather satisfying to note that the result of the sub-query does not exist for a particular correlated value where a vector aggregate is used (the scalar COUNT aggregate always returns a value, even if zero, so it always ‘EXISTS’ regardless which ProductID is logically being evaluated). 
The following query forms also produce the optimal plan and correct results, so long as a vector aggregate is used (you can probably find more equivalent query forms): WHERE Clause SELECT p.Name FROM Production.Product AS p WHERE ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) < 10; APPLY SELECT p.Name FROM Production.Product AS p CROSS APPLY ( SELECT NULL FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () HAVING COUNT_BIG(*) < 10 ) AS ca (dummy); FROM Clause SELECT q1.Name FROM ( SELECT p.Name, cnt = ( SELECT COUNT_BIG(*) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID GROUP BY () ) FROM Production.Product AS p ) AS q1 WHERE q1.cnt < 10; This last example uses SUM(1) instead of COUNT and does not require a vector aggregate…you should be able to work out why :) SELECT q.Name FROM ( SELECT p.Name, cnt = ( SELECT SUM(1) FROM Production.TransactionHistory AS th WHERE th.ProductID = p.ProductID ) FROM Production.Product AS p ) AS q WHERE q.cnt < 10; The semantics of SQL aggregates are rather odd in places.  It definitely pays to get to know the rules, and to be careful to check whether your queries are using scalar or vector aggregates.  As we have seen, query plans do not show in which ‘mode’ an aggregate is running and getting it wrong can cause poor performance, wrong results, or both. © 2012 Paul White Twitter: @SQL_Kiwi email: [email protected]

    Read the article

  • How to auto-sum a value on dropdown and radio select

    - by Wilf
    I'm working on an auto calculation form which is a total column will change after a radio and a dropdown is clicked. I can make the total change for both dropdowns but the problem occurs when I tried to add a radio option. Here is my code. HTML Ages 10+: <select id="Adult" name="Adult"> <option selected="selected" value="0">0</option> <option value="1">1</option> <option value="2">2</option> <option value="3">3</option> <option value="4">4</option> <option value="5">5</option> <option value="6">6</option> <option value="7">7</option> <option value="8">8</option> <option value="9">9</option> </select> <br />Ages 3-9: <select id="Child" name="Child"> <option selected="selected" value="0">0</option> <option value="1">1</option> <option value="2">2</option> <option value="3">3</option> <option value="4">4</option> <option value="5">5</option> <option value="6">6</option> <option value="7">7</option> <option value="8">8</option> <option value="9">9</option> </select> <br />Food <input type="radio" name="food" id="food0" value="0" /> <label for="food0">No</label> <input type="radio" name="food" id="food1" value="10" /> <label for="food1">Yes</label> <table width="100%" border="1" align="center"> <tr> <td>Product</td> <td>Ages 10+</td> <td>Ages 3-9</td> <td>Food</td> <td>Price</td> </tr> <tr> <td>2 Day Ticket</td> <td>$235.00</td> <td>$223.00</td> <td><span id="food">0</span> </td> <td>$<span class="amount" id="2DayTotal"></span> </td> </tr> <tr> <td>3 Day Ticket</td> <td>$301.00</td> <td>$285.00</td> <td><span id="food">0</span> </td> <td>$<span class="amount" id="3DayTotal"></span> </td> </tr> <tr> <td>4 Day Ticket</td> <td>$315.00</td> <td>$298.00</td> <td><span id="food">0</span> </td> <td>$<span class="amount" id="4DayTotal"></span> </td> </tr> <tr> <td>5 Day Ticket</td> <td>$328.00</td> <td>$309.00</td> <td><span id="food">0</span> </td> <td>$<span class="amount" id="5DayTotal"></span> </td> </tr> </table> JavaScript var numAdult = 0; var numChild = 0; $("#Adult").change(function () { numAdult = $("#Adult").val(); calcTotals(); }); $("#Child").change(function () { numChild = $("#Child").val(); calcTotals(); }); $('input[type=radio]').change(function(evt) { $('#food').html($(this).val()); }); function calcTotals() { $("#2DayTotal").text(235 * numAdult + 223 * numChild); $("#3DayTotal").text(301 * numAdult + 285 * numChild); $("#4DayTotal").text(315 * numAdult + 298 * numChild); $("#5DayTotal").text(328 * numAdult + 309 * numChild); } The issues are: I'd like the food column change to it's value when a radio is click. It works only the first id. After a radio is clicked. A fumction calcTotals() is called to sum an additional food cost. Demo here : http://jsfiddle.net/4Jegn/178/ Please be advice.

    Read the article

  • INNER JOIN syntax for MySQL using phpMyAdmin

    - by David van Dugteren
    SELECT Question.userid, user.uid FROM `question` WHERE NOT `userid`=2 LIMIT 0, 60 INNER JOIN `user` ON `question`.userid=`user`.uid ORDER BY `question`.userid returns Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'INNER JOIN User ON question.userid=user.uid ORDER BY question.userid' at line 5 Can't for the life of me figure out what I'm doing wrong here.
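    The JOIN belongs in the FROM clause, before WHERE, and LIMIT has to come last. A corrected sketch of the statement above, assuming userid/uid are the intended join columns as written:
        SELECT `question`.userid, `user`.uid
        FROM `question`
        INNER JOIN `user` ON `question`.userid = `user`.uid
        WHERE NOT `question`.userid = 2
        ORDER BY `question`.userid
        LIMIT 0, 60;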

    Read the article

  • Geolocation SQL query not finding exact location

    - by Iridium52
    I have been testing my geolocation query for some time now and I haven't found any issues with it until now. I am trying to search for all cities within a given radius, often times I'm searching for cities surrounding a city using that city's coords, but recently I tried searching around a city and found that the city itself was not returned. I have these cities as an excerpt in my database: city latitude longitude Saint-Mathieu 45.316708 -73.516253 Saint-Édouard 45.233374 -73.516254 Saint-Michel 45.233374 -73.566256 Saint-Rémi 45.266708 -73.616257 But when I run my query around the city of Saint-Rémi, with the following query... SELECT tblcity.city, tblcity.latitude, tblcity.longitude, truncate((degrees(acos( sin(radians(tblcity.latitude)) * sin(radians(45.266708)) + cos(radians(tblcity.latitude)) * cos(radians(45.266708)) * cos(radians(tblcity.longitude - -73.616257) ) ) ) * 69.09*1.6),1) as distance FROM tblcity HAVING distance < 10 ORDER BY distance desc I get these results: city latitude longitude distance Saint-Mathieu 45.316708 -73.516253 9.5 Saint-Édouard 45.233374 -73.516254 8.6 Saint-Michel 45.233374 -73.566256 5.3 The town of Saint-Rémi is missing from the search. So I tried a modified query hoping to get a better result: SELECT tblcity.city, tblcity.latitude, tblcity.longitude, truncate(( 6371 * acos( cos( radians( 45.266708 ) ) * cos( radians( tblcity.latitude ) ) * cos( radians( tblcity.longitude ) - radians( -73.616257 ) ) + sin( radians( 45.266708 ) ) * sin( radians( tblcity.latitude ) ) ) ),1) AS distance FROM tblcity HAVING distance < 10 ORDER BY distance desc But I get the same result... However, if I modify Saint-Rémi's coords slighly by changing the last digit of the lat or long by 1, both queries will return Saint-Rémi. Also, if I center the query on any of the other cities above, the searched city is returned in the results. Can anyone shed some light on what may be causing my queries above to not display the searched city of Saint-Rémi? I have added a sample of the table (with extra fields removed) below. I'm using MySQL 5.0.45, thanks in advance. CREATE TABLE `tblcity` ( `IDCity` int(1) NOT NULL auto_increment, `City` varchar(155) NOT NULL default '', `Latitude` decimal(9,6) NOT NULL default '0.000000', `Longitude` decimal(9,6) NOT NULL default '0.000000', PRIMARY KEY (`IDCity`) ) ENGINE=MyISAM AUTO_INCREMENT=52743 DEFAULT CHARSET=latin1 AUTO_INCREMENT=52743; INSERT INTO `tblcity` (`city`, `latitude`, `longitude`) VALUES ('Saint-Mathieu', 45.316708, -73.516253), ('Saint-Édouard', 45.233374, -73.516254), ('Saint-Michel', 45.233374, -73.566256), ('Saint-Rémi', 45.266708, -73.616257);
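    One common cause of this exact symptom, offered as a guess rather than a confirmed diagnosis: when a row is compared against its own coordinates, floating-point rounding can push the ACOS() argument fractionally above 1, MySQL's ACOS() then returns NULL, and the NULL distance is silently dropped by the HAVING clause. Clamping the argument is a cheap guard; a sketch based on the first query above:
        SELECT city, latitude, longitude,
               TRUNCATE((DEGREES(ACOS(LEAST(1.0,
                   SIN(RADIANS(latitude)) * SIN(RADIANS(45.266708))
                 + COS(RADIANS(latitude)) * COS(RADIANS(45.266708))
                   * COS(RADIANS(longitude - -73.616257))))) * 69.09 * 1.6), 1) AS distance
        FROM tblcity
        HAVING distance < 10
        ORDER BY distance DESC;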

    Read the article

  • Advanced Django query with subselects and custom JOINS

    - by Bryan Ward
    I have been investigating this number theoretic function (found in the Height model) and I need to query for things based on the prime factorization of the primary key, or id. I have created a model for Factors of the id which maintains all of the prime factors. class Height(models.Model): b = models.IntegerField(null=True, blank=True) c = models.IntegerField(null=True, blank=True) d = models.FloatField(null=True, blank=True) class Factors(models.Model): height = models.ForeignKey(Height, null=True, blank=True) factor = models.IntegerField(null=True, blank=True) degree = models.IntegerField(null=True, blank=True) prime_id = models.IntegerField(null=True, blank=True) For example, if id=24, then the associated entries in the factors table would be height_id=24,factor=2,degree=3,prime_id=0 height_id=24,factor=3,degree=1,prime_id=1 The prime_id keeps track of the relative order of the primes. Now let p < q < r < s all be prime numbers and a,b,c,d be positive integers. Then I want to be able to query for all Heights of the form id=(p**a)*(q**b)*(r**c)*(s**d). This is simple when all of p,q,r,s,a,b,c,d are known, since I can just run Height.objects.get(id=(p**a)*(q**b)*(r**c)*(s**d)) But I need to be able to query for something like (2**a)*(3**2)*(r**c)*(s**d) where r,s,a,d are unknown, and have all Heights of that form returned. Furthermore, not all of the rows in Height will have exactly four prime factors, so I need to make sure that I am not matching rows of the form id=(p**a)*(q**b)*(r**c)*(s**d)*(t**e)... From what I can tell, the following MySQL query accomplishes this, but I would like to do it through the Django ORM. I also don't know if this MySQL query is the proper way to go about doing things. SELECT h.*,count(f.height_id) AS factorsCount FROM height AS h LEFT JOIN factors AS f ON ( f.height_id = h.id AND f.height_id IN (SELECT height_id FROM factors where prime_id=1 AND factor=2 AND degree=1) AND f.height_id IN (SELECT height_id FROM factors where prime_id=2 AND factor=3 AND degree=2) AND f.height_id IN (SELECT height_id FROM factors where prime_id=3 AND factor=5 AND degree=1) AND f.height_id IN (SELECT height_id FROM factors where prime_id=4 AND factor=7 AND degree=1) ) GROUP BY h.id HAVING factorsCount=4 ORDER BY h.id; Any ideas or suggestions for things to try?
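    A sketch of how this might look in the Django ORM, assuming the default reverse lookup name factors and mirroring the fixed conditions in the SQL above; each prime goes in its own filter() call so the conditions apply to separate Factors rows, and the Count() annotation rules out ids that carry extra prime factors:

        from django.db.models import Count

        # Heights whose factorization is exactly 2**1 * 3**2 * 5**1 * 7**1,
        # expressed as four matching Factors rows and nothing more.
        heights = (Height.objects
            .filter(factors__prime_id=1, factors__factor=2, factors__degree=1)
            .filter(factors__prime_id=2, factors__factor=3, factors__degree=2)
            .filter(factors__prime_id=3, factors__factor=5, factors__degree=1)
            .filter(factors__prime_id=4, factors__factor=7, factors__degree=1)
            .annotate(factor_count=Count('factors'))
            .filter(factor_count=4)
            .order_by('id'))

    Handling the partially unknown case (some primes or degrees free) would mean dropping the corresponding lookups from the relevant filter() call while keeping the count check.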

    Read the article

  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running with a PostgreSQL table that has a continuous rate of updates, deletes and inserts and that, over time (a few days), sees significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings. The table in our test starts out empty and grows to half a million rows. It has a fairly large row (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process. We are using PostgreSQL on Windows with a Java client. I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuild to run at certain times, but I suspect the locking period for such an action would cause our query to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code. Additional information: Table and index CREATE TABLE icl_contacts ( id bigint NOT NULL, campaignfqname character varying(255) NOT NULL, currentstate character(16) NOT NULL, xmlscheduledtime character(23) NOT NULL, ... 25 or so other fields. Most of them fixed or varying character fields ... CONSTRAINT icl_contacts_pkey PRIMARY KEY (id) ) WITH (OIDS=FALSE); ALTER TABLE icl_contacts OWNER TO postgres; CREATE INDEX icl_contacts_idx ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname); Analyze: Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1) -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1) Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text)) And, yes, I am aware there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is about understanding how PostgreSQL is managing the index and query over time (understanding why, not just fixing it). If it were to be done over or significantly refactored, there would be a lot of changes.
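    The usual suspect in this pattern (a hedged guess, since the exact PostgreSQL version and settings aren't shown) is index bloat: continuous updates and deletes leave dead index entries behind, and with default settings autovacuum may not reclaim them fast enough, so scans walk ever more dead pages until the index is rebuilt. Two things worth trying before rearchitecting are sketched below; plain VACUUM does not take an exclusive lock, so queries keep running while it works.

        -- On PostgreSQL 8.4 or later, make autovacuum visit this table more aggressively.
        ALTER TABLE icl_contacts SET (
            autovacuum_vacuum_scale_factor = 0.05,
            autovacuum_analyze_scale_factor = 0.05
        );

        -- Or schedule manual maintenance during a quieter window.
        VACUUM ANALYZE icl_contacts;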

    Read the article

  • Rails scalar query

    - by Craig
    I need to display a UI element (e.g. a star or checkmark) for employees that are 'favorites' of the current user (another employee). The Employee model has the following relationship defined to support this: has_and_belongs_to_many :favorites, :class_name => "Employee", :join_table => "favorites", :association_foreign_key => "favorite_id", :foreign_key => "employee_id" The favorites table has two fields: employee_id and favorite_id. If I were to write SQL, the following query would give me the results that I want: SELECT id, account, IF( ( SELECT favorite_id FROM favorites WHERE favorite_id=p.id AND employee_id = ? ) IS NULL, FALSE, TRUE) isFavorite FROM employees Where the '?' would be replaced by session[:user_id]. How do I represent the isFavorite scalar query in Rails? Another approach would use a query like this: SELECT id, account, IF(favorite_id IS NULL, FALSE, TRUE) isFavorite FROM employees e LEFT OUTER JOIN favorites f ON e.id=f.favorite_id AND employee_id = ? Again, the '?' is replaced by the session[:user_id] value. I've had some success writing this in Rails: ee=Employee.find(:all, :joins=>"LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=1", :select=>"employees.*,favorites.favorite_id") Unfortunately, when I try to make this query 'dynamic' by replacing the '1' with a '?', I get errors. ee=Employee.find(:all, :joins=>["LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=?",1], :select=>"employees.*,favorites.favorite_id") Obviously, I have the syntax wrong, but can :joins expressions be 'dynamic'? Is this a case for a lambda expression? I do hope to add other filters to this query and use it with will_paginate and acts_as_taggable_on, if that makes a difference. Edit: errors from trying to make :joins dynamic: ActiveRecord::ConfigurationError: Association named 'LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=?' was not found; perhaps you misspelled it? from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1906:in `build' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1911:in `build' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1910:in `each' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1910:in `build' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1830:in `initialize' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1789:in `new' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1789:in `add_joins!' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1686:in `construct_finder_sql' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1548:in `find_every' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:615:in `find'
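    One workaround (a sketch, not verified against this codebase): the Rails 2.3 finder does not accept an array with bind values for :joins, but the fragment can be sanitized first with the protected sanitize_sql_array helper and then passed as a plain string:

        # Bind the current user's id into the join fragment before handing it to :joins.
        join_sql = Employee.send(:sanitize_sql_array,
          ["LEFT OUTER JOIN favorites ON employees.id = favorites.favorite_id AND favorites.employee_id = ?",
           session[:user_id]])

        ee = Employee.find(:all,
          :joins  => join_sql,
          :select => "employees.*, favorites.favorite_id")

    Rows where favorites.favorite_id comes back non-NULL would then be the ones to mark with the star or checkmark.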

    Read the article

  • PHP form values after POST in dropdown

    - by FFish
    I have a form with 'selected' values pulled from the database. Now I want the user to edit the values. When the data is sent I want to show the new values. When I submit my form I always get the 'green' value. What am I doing wrong here? <?php // pulled from db $color = "blue"; // update if (isset($_POST['Submit'])) { echo "write to db: " . $_POST['name'] . " + " . $_POST['color']; } ?> <html> <form name="form1" method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>"> <label for="name">Name:</label> <input type="text" name="name" size="30" value="<?php echo (isset($_POST['name'])) ? $_POST['name'] : ""; ?>"> <br /> <label for="color">Color:</label> <select name="color"> <option <?php echo (isset($_POST['color']) || $color == "red") ? 'selected="selected"' : ''; ?> value="red">red</option> <option <?php echo (isset($_POST['color']) || $color == "blue") ? 'selected="selected"' : ''; ?> value="blue">blue</option> <option <?php echo (isset($_POST['color']) || $color == "green") ? 'selected="selected"' : ''; ?> value="green">green</option> </select> <br /> <input type="submit" name="Submit" value="Update"> </form> </html>
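    The likely culprit is the isset($_POST['color']) || ... test: once the form is posted, isset() is true for every option, so each one gets selected="selected" and the browser keeps the last one, which is green. A sketch of one fix is to resolve the chosen color once and compare against it (the variable name is illustrative):

        <?php
        // Prefer the submitted value; fall back to the value loaded from the database.
        $selectedColor = isset($_POST['color']) ? $_POST['color'] : $color;
        ?>
        <select name="color">
          <option value="red" <?php echo ($selectedColor == "red") ? 'selected="selected"' : ''; ?>>red</option>
          <option value="blue" <?php echo ($selectedColor == "blue") ? 'selected="selected"' : ''; ?>>blue</option>
          <option value="green" <?php echo ($selectedColor == "green") ? 'selected="selected"' : ''; ?>>green</option>
        </select>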

    Read the article

  • Nesting, grouping SQLite syntax?

    - by Linda
    I can't for the life of me figure out this SQLite syntax. Our database contains records like: TX, Austin OH, Columbus OH, Columbus TX, Austin OH, Cleveland OH, Dayton OH, Columbus TX, Dallas TX, Houston TX, Austin (a state field and a city field). I need output like this: OH: Columbus, Cleveland, Dayton TX: Dallas, Houston, Austin (Each state listed once... and all the cities in that state.) What would the SELECT statement(s) look like?
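    SQLite's GROUP_CONCAT() aggregate produces exactly this one-row-per-state shape. A sketch, assuming the table is called locations with columns state and city (the real names aren't given in the question):

        SELECT state, GROUP_CONCAT(DISTINCT city) AS cities
        FROM locations
        GROUP BY state;

    DISTINCT collapses the repeated Columbus and Austin rows, and the default separator is a comma.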

    Read the article

< Previous Page | 207 208 209 210 211 212 213 214 215 216 217 218  | Next Page >