Search Results

Search found 20208 results on 809 pages for 'compiled query'.

Page 475/809

  • Doctrine YAML not generating correctly? Or is this markup wrong?

    - by ropstah
    I'm trying to get a many-to-many relationship between Users and Settings. The models seem to be generated correctly, however the following query fails: "User_Setting" with an alias of "us" in your query does not reference the parent component it is related to. $q = new Doctrine_RawSql(); $q->select('{s.*}, {us.*}') ->from('User u CROSS JOIN Setting s LEFT JOIN User_Setting us ON us.usr_auto_key = u.usr_auto_key AND us.set_auto_key = s.set_auto_key') ->addComponent('s', 'Setting s INDEXBY s.set_auto_key') ->addComponent('us', 'User_Setting us') ->where(u.usr_auto_key = ?',$this->usr_auto_key); $this->settings = $q->execute(); Does anyone spot a problem? This is my YAML: User: connection: default tableName: User columns: usr_auto_key: type: integer(4) fixed: false unsigned: false primary: true autoincrement: true notnull: true email: type: string(100) fixed: false unsigned: false primary: false default: '' notnull: true autoincrement: false password: type: string(32) fixed: false unsigned: false primary: false default: '' notnull: true autoincrement: false relations: Setting: class: Setting foreignAlias: User refClass: User_Setting local: usr_auto_key foreign: set_auto_key Setting: connection: default tableName: Setting columns: set_auto_key: type: integer(4) fixed: false unsigned: false primary: true autoincrement: true notnull: true name: type: string(50) fixed: false unsigned: false primary: false notnull: true autoincrement: false User_Setting: connection: default tableName: User_Setting columns: usr_auto_key: type: integer(4) fixed: false unsigned: false primary: true autoincrement: false notnull: true set_auto_key: type: integer(4) fixed: false unsigned: false primary: true autoincrement: false notnull: true value: type: string(255) fixed: false unsigned: false primary: false notnull: true autoincrement: false relations: Setting: foreignAlias: User_Setting local: set_auto_key foreign: set_auto_key User: foreignAlias: User_Setting local: usr_auto_key foreign: usr_auto_key

    Read the article

  • MS SQL - High performance data inserting with stored procedures

    - by Marks
    Hi. I'm searching for a very high-performance way to insert data into an MS SQL database. The data is a (relatively big) graph of objects with relations. For security reasons I want to use stored procedures instead of direct table access. Let's say I have a structure like this:

        Document
        MetaData
        User
        Device
        Content
        ContentItem[0]
        SubItem[0]
        SubItem[1]
        SubItem[2]
        ContentItem[1]
        ...
        ContentItem[2]
        ...

    Right now I am thinking of creating one big batch, doing something like this (just pseudo-code):

        EXEC @DeviceID = CreateDevice ...;
        EXEC @UserID = CreateUser ...;
        EXEC @DocID = CreateDocument @DeviceID, @UserID, ...;
        EXEC @ItemID = CreateItem @DocID, ...
        EXEC CreateSubItem @ItemID, ...
        EXEC CreateSubItem @ItemID, ...
        EXEC CreateSubItem @ItemID, ...
        ...

    But is this the best solution for performance? If not, what would be better? Split it into more queries? Pass all the data to one big stored procedure to reduce the size of the query? Any other performance clue? I also thought of passing multiple items to one stored procedure, but I don't think it is possible to pass a non-static number of items to a stored procedure. Since INSERT INTO A VALUES (B,C),(C,D),(E,F) performs better than three single inserts, I thought I could gain some performance here. Thanks for any hints, Marks
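
    One way to pass a variable number of rows to a single procedure is a table-valued parameter, available since SQL Server 2008. This is only a rough sketch; the type, table, and column names below are made up for illustration:

        -- a user-defined table type describing one sub-item row
        CREATE TYPE dbo.SubItemList AS TABLE (ItemID int NOT NULL, Payload nvarchar(200) NULL);
        GO
        CREATE PROCEDURE dbo.CreateSubItems
            @Items dbo.SubItemList READONLY
        AS
        BEGIN
            SET NOCOUNT ON;
            -- one set-based insert instead of N single-row calls
            INSERT INTO dbo.SubItem (ItemID, Payload)
            SELECT ItemID, Payload FROM @Items;
        END

    From ADO.NET the parameter can be filled from a DataTable and sent in one round trip, which usually beats many separate EXEC calls.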

    Read the article

  • ORDERBY "human" alphabetical order using SQL string manipulation

    - by supertrue
    I have a table of posts with titles that are in "human" alphabetical order but not in computer alphabetical order. These come in two flavors, numerical and alphabetical:

        Numerical:    Figure 1.9, Figure 1.10, Figure 1.11...
        Alphabetical: Figure 1A ... Figure 1Z ... Figure 1AA

    If I ORDER BY title, the result is that 1.10-1.19 come between 1.1 and 1.2, and 1AA-1AZ come between 1A and 1B. But this is not what I want; I want "human" alphabetical order, in which 1.10 comes after 1.9 and 1AA comes after 1Z. I am wondering if there is a way in SQL to get the order I want using string manipulation (or something else I haven't thought of). I am not an expert in SQL, so I don't know whether this is possible, but if there were a way to do conditional replacement, it seems I could impose the order I want like this: delete the period (which can be done with replace, right?); then, if the remaining figure number is fewer than three characters, add a 0 (zero) after the first character. This would seem to give me the outcome I want: 1.9 would become 109, which comes before 110; 1Z would become 10Z, which comes before 1AA. But can it be done in SQL? If so, what would the syntax be? Note that I don't want to modify the data itself, just to output the results of the query in the order described. This is in the context of a WordPress installation, but I think the question is more suitably an SQL question, because various things (such as pagination) depend on the ordering happening at the MySQL query stage rather than in PHP.
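
    The padding idea can be expressed directly in the ORDER BY clause. A minimal sketch, assuming a column (here called fig, a made-up name) that holds just the figure number such as '1.9' or '1AA':

        SELECT *
        FROM figures
        ORDER BY
          CASE
            WHEN CHAR_LENGTH(REPLACE(fig, '.', '')) < 3
              THEN CONCAT(LEFT(REPLACE(fig, '.', ''), 1), '0', SUBSTRING(REPLACE(fig, '.', ''), 2))
            ELSE REPLACE(fig, '.', '')
          END;

    With real WordPress titles the figure number would first have to be extracted from post_title (for example with SUBSTRING_INDEX), and the scheme assumes the leading figure number is a single character.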

    Read the article

  • Why doesn't a 32bit .deb package install on 64bit Ubuntu?

    - by codebox_rob
    My .deb package, built on 32-bit Ubuntu and containing executables compiled with gcc, won't install on the 64-bit version of the OS (the error message says 'Wrong architecture i386'). This is confusing to me because I thought that in general 32-bit software worked on 64-bit hardware, but not vice-versa. Will it be possible for me to produce a .deb file that I can install on a 64-bit OS, using my 32-bit machine? Is it just a matter of using the appropriate compiler flags to produce the executables (and if so what are they), or is the .deb file itself somehow specific to one processor architecture?
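
    The architecture a .deb targets is recorded in its control file, so a quick way to see what was built, and what the installing machine expects, is for example:

        # what the package was built for
        dpkg-deb --field mypackage.deb Architecture      # e.g. i386
        # what the installing machine is
        dpkg --print-architecture                        # e.g. amd64

    The file name mypackage.deb is just a placeholder. On releases with multiarch support, sudo dpkg --add-architecture i386 followed by installing the 32-bit runtime libraries is one way to run i386 packages on an amd64 system; on older releases the package has to be rebuilt for amd64 (or declared Architecture: all if it carries no compiled code).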

    Read the article

  • New to VS.NET (VB.NET) 2008: Windows 7 Aero glass stuff

    - by StealthRT
    Hey all, I have been using VB.NET 2008 for a few months and I have a question. I compiled my program and ran it in a VM running Windows 7. However, the progress bar looks like it does in XP. It doesn't have that cool look I've seen in many other programs running on Windows 7. I have downloaded the .NET Framework 3.5 with SP1 and also the Windows 7 SDK (a 1.4+ GB DVD), but I still see nothing. Is there a checkbox I am missing in VS 2008 to enable these kinds of features? Maybe some code I need to place in the program? Thanks! David
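
    WinForms controls only take on the Vista/7 themed look when visual styles are enabled before the first form is created. In a VB.NET project this is normally the "Enable XP visual styles" option in the project's Application settings; the equivalent in code, shown here only as a sketch with a made-up startup form name, is:

        Imports System.Windows.Forms

        Module Program
            Sub Main()
                ' enable themed rendering before any window is created
                Application.EnableVisualStyles()
                Application.SetCompatibleTextRenderingDefault(False)
                Application.Run(New MainForm())   ' MainForm is a placeholder for the startup form
            End Sub
        End Module

    If the VM has desktop themes disabled entirely, controls fall back to the classic look regardless of this setting.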

    Read the article

  • git merge specifies wrong author

    - by dhblah
    I have a problem with the latest version of git, which I compiled under Cygwin. First, it displays an editor to enter the merge message, whereas previously it was silent. Second, the commit author seems to differ from a normal commit. When I make a manual commit, the author is User Name <[email protected]>, but when I merge, the author name is Domain\username <[email protected]>. Is there a way to make merge use the same author as a manual commit? What's happening?
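
    When no explicit identity is configured, git falls back to values derived from the system account, which is where a Domain\username form can come from. Setting the identity explicitly usually makes merge commits match manual commits; the name and address below are placeholders:

        git config --global user.name  "User Name"
        git config --global user.email "you@example.com"

    git merge also honors the GIT_AUTHOR_NAME / GIT_AUTHOR_EMAIL environment variables, so it is worth checking whether those are set differently in the shell that runs the merge.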

    Read the article

  • Counting character occurrences in an Access database column in SQL

    - by jzr
    Good evening. My problem is possibly very easy; I have spent some time researching it, probably have a brain lock, and am unable to solve it. Help would be much appreciated. Database structure:

        col1 col2 col3 col4
        ====================
        1233+4566+ABCD+CDEF
        1233+4566+ACD1+CDEF
        1233+4566+D1AF+CDEF

    I need to count the characters in col3; the wanted result from the previous table would be:

        char count
        ===========
        A    3
        B    1
        C    2
        D    3
        F    1
        1    2

    Is this possible to achieve using SQL only? At the moment I am thinking of passing a parameter into the SQL query, counting the characters one by one and then summing, but I have not started the VBA part yet, and frankly I would rather not do that. This is my query at the moment:

        PARAMETERS X Long;
        SELECT First(Mid(TABLE.col3,X,1)) AS [col3 Field], Count(Mid(TABLE.col3,X,1)) AS Dcount
        FROM TEST
        GROUP BY Mid(TABLE.col3,X,1)
        HAVING (((Count(Mid([TABLE].[col3],[X],1)))>=1));

    Ideas and help are much appreciated; as said, this is probably very easy for some of you guys. I don't usually work with Access and SQL. Thanks.
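
    One SQL-only trick that works in Access queries (run from inside Access, where the VBA expression functions are available) is to count a character by comparing string lengths before and after removing it. A sketch for a fixed, known set of characters; TEST and col3 are taken from the question, the rest is illustrative:

        SELECT 'A' AS ch, Sum(Len(col3) - Len(Replace(col3,'A',''))) AS cnt FROM TEST
        UNION ALL
        SELECT 'B', Sum(Len(col3) - Len(Replace(col3,'B',''))) FROM TEST
        UNION ALL
        SELECT '1', Sum(Len(col3) - Len(Replace(col3,'1',''))) FROM TEST;

    If the character set is not known up front, a small VBA loop over the distinct characters is usually simpler than generating this union dynamically.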

    Read the article

  • Java Hibernate session delete of object

    - by user2535201
    I'm really struggling with Hibernate sessions; I never get the result I expect when making a query against a session with pending modifications. I think all my problems are related. The latest one is the following:

        final Session iSession = AbstractDAO.getSessionFactory().openSession();
        try {
            iSession.beginTransaction();
            MyObject iObject = DAOMyObject.getInstance().get(iSession, ObjectId);
            iObject.setQuantity(0); // previously the quantity was different from zero
            DAOMyObject.getInstance().update(iSession, iObject);
            DAOMyObject.getInstance().deleteObjectWithZeroQuantities(iSession);
            iSession.getTransaction().commit();
        } catch (final Exception aException) {
            iSession.getTransaction().rollback();
            logger.error(aException.getMessage(), aException);
            throw aException;
        } finally {
            iSession.close();
        }

    What I don't get is why the object is not deleted: since I modified it in the session, the query doing the delete should find it. I had the same problem when creating an object with an incremental id in a session and then creating another one in the same session before the commit, using a select max(id)+1: the session gives me the same id every time.
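
    A likely explanation (assuming deleteObjectWithZeroQuantities runs an HQL or SQL delete) is that query-based deletes go straight to the database and bypass the session's in-memory changes, which are not written out until flush/commit. A minimal sketch of forcing the pending UPDATE out before the delete:

        // flush pending changes so the bulk delete can see quantity = 0
        iSession.flush();
        iSession.createQuery("delete from MyObject where quantity = 0").executeUpdate();

    The same reasoning applies to the select max(id)+1 case: until the first insert is flushed, a query cannot see it.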

    Read the article

  • Apple Mac Software Development

    - by MattMorgs
    I'm planning on developing an Apple Mac application which will collect hardware information from the host Mac and also installed software info. The hardware and software info will be collected in an encrypted XML file and then posted back to a website. The application should run as a "service" or background process on the Mac and can be configured to collect the data on a frequent basis defined by another encrypted XML config file. I've done plenty of Windows based software development but never on the Mac. Can anybody point me in the direction of any useful info on how to develop on the Mac, collect hardware and software info, export to an XML file, file encryption and packaging a compiled app to run as a service? Is either Objective C, Cocoa or Ruby a possible option? Many thanks for your help in advance!
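
    For the "run as a background process" part, the usual macOS mechanism is a launchd property list that starts the collector on a schedule. A rough sketch (label, path, and interval are made up), saved under /Library/LaunchDaemons or ~/Library/LaunchAgents:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>            <string>com.example.inventorycollector</string>
            <key>ProgramArguments</key> <array><string>/usr/local/bin/collector</string></array>
            <key>StartInterval</key>    <integer>3600</integer> <!-- run hourly -->
            <key>RunAtLoad</key>        <true/>
        </dict>
        </plist>

    The hardware details themselves can be read by shelling out to system_profiler (for example system_profiler -xml SPHardwareDataType), which already emits XML.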

    Read the article

  • Dereferencing a 2D array

    - by ashish-sangwan
    Please look at this piece of code:

        #include <stdio.h>
        int main()
        {
            int arr[2][2] = {1, 2, 3, 4};
            printf("%d %u %u", **arr, *arr, arr);
            return 0;
        }

    When I compiled and executed this program, I got the same value for arr and *arr, which is the starting address of the 2D array. For example:

        1 3214506 3214506

    My question is: why does dereferencing arr (*arr) not print the value stored at the address contained in arr?
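
    The short answer is that arr and *arr are different types that happen to refer to the same address: arr decays to a pointer to the first row (int (*)[2]), and *arr is that first row, which itself decays to a pointer to its first element (int *). A small sketch making the types explicit (printing pointers with %p is also the portable way, rather than %u):

        #include <stdio.h>

        int main(void)
        {
            int arr[2][2] = {{1, 2}, {3, 4}};
            int (*row)[2] = arr;    /* arr decays to pointer-to-row   */
            int *first    = *arr;   /* *arr decays to pointer-to-int  */
            printf("arr   = %p\n", (void *)row);    /* same address ... */
            printf("*arr  = %p\n", (void *)first);  /* ... same address */
            printf("**arr = %d\n", **arr);          /* the element: 1   */
            return 0;
        }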

    Read the article

  • How could my code compile correctly without the necessary headers?

    - by ZhengZhiren
    I use the functions fork(), exec()... But how can this program be compiled without including the extra headers (like sys/types.h, sys/wait.h)? I use Ubuntu 10.04 with gcc version 4.4.3.

        #include <stdio.h>
        #include <stdlib.h>

        int main()
        {
            pid_t pid;
            printf("before fork\n");
            pid = fork();
            if (pid == 0) {
                /* child */
                if (execvp("./cpuid", NULL)) {
                    printf("error\n");
                    exit(0);
                }
            } else {
                if (wait(NULL) != -1) {
                    printf("ok\n");
                }
            }
            return 0;
        }
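
    In C89 mode gcc accepts calls to undeclared functions by assuming an implicit declaration returning int, so the program links as long as the symbols exist in libc; it is a warning (visible with -Wall), not an error. The declarations the code is relying on would normally come from headers along these lines:

        #include <sys/types.h>   /* pid_t */
        #include <unistd.h>      /* fork, execvp */
        #include <sys/wait.h>    /* wait */

    (pid_t still compiles here only because another header happens to drag it in on that system; that is not something to rely on.)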

    Read the article

  • Strange profiler behavior: same functions, different performances

    - by arthurprs
    I was learning to use gprof and got weird results for this code:

        int one(int a, int b) { return a / (b + 1); }

        int two(int a, int b) { return a / (b + 1); }

        int main()
        {
            for (int i = 1; i < 30000000; i++) {
                two(i, i * 2);
                one(i, i * 2);
            }
            return 0;
        }

    and this is the profiler output:

          %   cumulative   self              self     total
         time   seconds   seconds    calls  ns/call  ns/call  name
        48.39      0.90     0.90  29999999    30.00    30.00  one(int, int)
        40.86      1.66     0.76  29999999    25.33    25.33  two(int, int)
        10.75      1.86     0.20                              main

    If I call one and then two, the result is the inverse: two takes more time than one. Both are the same function, but whichever is called first always takes less time than the one called second. Why is that? Note: the assembly code is exactly the same, and the code is compiled with no optimizations.

    Read the article

  • ShowDialog and Hide form, when the form is called from another object in VS2010

    - by Piotr Dabrowski
    Hello, I have a modal Form used for searching information in a DB. This Form is used by a COM object that waits for the search result. Initialization of the form takes a lot of time (because of building the connection to the DB), so I initialize the Form without showing it and keep the Form object alive as long as the COM object works. In this way I keep the state of the Form:

        public string Search()
        {
            this.ShowDialog();
            string result = this.ResultOfSearch;
            this.Hide();
            return result;
        }

    And it doesn't work anymore in VS2010 (compiled for Framework 2.0). I am looking for an alternative way to make a modal form (or a method to protect a form against Destroy() at the end of ShowDialog). Any ideas?

    Read the article

  • Common files in output directories in a C# program

    - by Net Citizen
    My VS2008 solution has the following setup:

        Program1
        Program2
        Common.dll (used and referenced by both Program1 and Program2)

    In debug mode I like to set my output directory to Program Files\Productname, because some code reads the exe path for various reasons. My problem is that building Program1 gives an error that it could not copy Common.dll if Program2 is running, and vice versa. The annoyance is that I don't even change Common.dll that often, but 100% of the time it tries to copy it, not only when there are changes. I end up having to close all the programs, build, and then start them again. So my question is: how can I have VS2008 copy Common.dll only when there are changes inside the Common.dll project?
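
    One workaround along these lines is to turn off the automatic copy (set the Common reference's Copy Local to False in both Program1 and Program2) and copy the file yourself in a post-build event with xcopy /d, which only copies when the source is newer than the destination. The paths below are illustrative and depend on where Common actually builds:

        xcopy /d /y "$(SolutionDir)Common\bin\$(ConfigurationName)\Common.dll" "$(TargetDir)"

    That way an unchanged Common.dll is never rewritten, so a running Program2 no longer blocks a build of Program1 (unless Common really did change).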

    Read the article

  • Can .NET AppDomains do this?

    - by Eloff
    I've spent hours reading up on AppDomains, but I'm not sure they work quite like I'm hoping. Say I have two classes, Foo in AppDomain #1 and Bar in AppDomain #2: AppDomain #1 is the application. AppDomain #2 is something like a plugin, and can be loaded and unloaded dynamically. AppDomain #2 wants to create Foo and use it. Foo uses lots of classes in AppDomain #1 internally. I don't want AppDomain #2 using object foo with reflection; I want it to use Foo foo, with all the static typing and compiled speed that goes with it. Can this be done, considering that AppDomain #1, containing Foo, is never unloaded? If so, does any remoting take place when using Foo? When I unload AppDomain #2, is the type Foo destroyed?
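
    For reference, the usual statically-typed way to hand an object across an AppDomain boundary is to make its type either MarshalByRefObject (calls are remoted over a proxy) or [Serializable] (the object is copied by value), and to have the assembly defining the type loaded in both domains. A minimal sketch, not specific to the plugin layout described above:

        // shared assembly, loaded by both domains
        public class Foo : MarshalByRefObject
        {
            public int DoWork(int x) { return x * 2; }
        }

        // creating an instance in another domain and calling it through a typed proxy
        AppDomain other = AppDomain.CreateDomain("Plugin");
        Foo foo = (Foo)other.CreateInstanceAndUnwrap(
            typeof(Foo).Assembly.FullName, typeof(Foo).FullName);
        int y = foo.DoWork(21);   // remoted call; 'foo' here is a transparent proxy

    So yes, remoting is involved whenever a call crosses domains, even though the code is statically typed.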

    Read the article

  • How can I use Qt to get the HTML code of this NCBI page?

    - by user308503
    I'm trying to use Qt to download the HTML code from the following URL: http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=nucleotide&cmd=search&term=AB100362 This URL redirects to www.ncbi.nlm.nih.gov/nuccore/27884304. I try to do it the following way, but I cannot get anything. It works for some web pages such as www.google.com, but not for this NCBI page. Is there any way to get this page?

        QNetworkReply::NetworkError downloadURL(const QUrl &url, QByteArray &data)
        {
            QNetworkAccessManager manager;
            QNetworkRequest request(url);
            QNetworkReply *reply = manager.get(request);
            QEventLoop loop;
            QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
            loop.exec();
            if (reply->error() != QNetworkReply::NoError) {
                return reply->error();
            }
            data = reply->readAll();
            delete reply;
            return QNetworkReply::NoError;
        }

        void GetGi()
        {
            int pos;
            QString sGetFromURL = "http://www.ncbi.nlm.nih.gov/entrez/query.fcgi";
            QUrl url(sGetFromURL);
            url.addQueryItem("db", "nucleotide");
            url.addQueryItem("cmd", "search");
            url.addQueryItem("term", "AB100362");
            QByteArray InfoNCBI;
            int errorCode = downloadURL(url, InfoNCBI);
            if (errorCode != 0) {
                QMessageBox::about(0, tr("Internet Error "),
                    tr("Internet Error %1: Failed to connect to NCBI.\t\nPlease check your internet connection.").arg(errorCode));
                return;
            }
        }
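
    One thing worth checking: QNetworkAccessManager (at least in Qt 4) does not follow HTTP redirects on its own, and the entrez URL above answers with a redirect rather than the page body. A sketch of handling it manually inside downloadURL, after the reply finishes:

        QVariant target = reply->attribute(QNetworkRequest::RedirectionTargetAttribute);
        if (target.isValid()) {
            QUrl redirected = url.resolved(target.toUrl());
            // issue a second request for `redirected` and read that reply instead
        }

    Whether this is the actual cause here is an assumption, but it is the most common reason a request that works for a simple page returns an empty body for a redirecting one.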

    Read the article

  • calculating change (over a period) for a dated field

    - by morpheous
    I have two tables with the following schema:

        CREATE TABLE sales_data (
            sales_time date NOT NULL,
            product_id integer NOT NULL,
            sales_amt double NOT NULL
        );

        CREATE TABLE date_dimension (
            id integer NOT NULL,
            datestamp date NOT NULL,
            day_part integer NOT NULL,
            week_part integer NOT NULL,
            month_part integer NOT NULL,
            qtr_part integer NOT NULL,
            year_part integer NOT NULL
        );

    I want to write two types of queries that will allow me to calculate: period-on-period change (e.g. week-on-week change) and change in period-on-period change (e.g. change in week-on-week change). I would prefer to write this in ANSI SQL, since I don't want to be tied to any particular DB. [Edit] In light of some of the comments: if I have to be tied to a single database (in terms of SQL dialect), it will have to be PostgreSQL. The queries I want to write are of the form (pseudo-SQL of course):

        Query Type 1 (Period on Period Change)
        =======================================
        a). select product_id, ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) as week_on_week_change
            from sales_data sd1, sales_data sd2, date_dimension dd
            where {SOME CRITERIA}

        b). select product_id, ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) as month_on_month_change
            from sales_data sd1, sales_data sd2, date_dimension dd
            where {SOME CRITERIA}

        Query Type 2 (Change in Period on Period Change)
        =================================================
        a). select product_id, ((a2.week_on_week_change - a1.week_on_week_change)/a1.week_on_week_change) as change_on_week_on_week_change
            from (select product_id, ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) as week_on_week_change
                  from sales_data sd1, sales_data sd2, date_dimension dd
                  where {SOME CRITERIA}) as a1,
                 (select product_id, ((sd2.sales_amt - sd1.sales_amt)/sd1.sales_amt) as week_on_week_change
                  from sales_data sd1, sales_data sd2, date_dimension dd
                  where {SOME CRITERIA}) as a2
            WHERE {SOME OTHER CRITERIA}

    Read the article

  • setfirstresult & setmaxresult in child collection

    - by Miguel Marques
    I have an entity, let's call it Entity, and a child collection Children. I have a screen where the user sees the Entity information and a list with the Children collection, but that collection can get very big, so I was thinking about paging: get the first 20 elements, and lazily load the next ones only if the user explicitly presses the next button. So I created in the Entity repository a function with this signature:

        IEnumerable<Child> GetChildren(Entity entity, int actualPage, int numberOfRecordsPerPage)

    I need to use SetFirstResult and SetMaxResults not on the aggregate root Entity, but on the child collection. But when I use those two configurations, they always refer to the entity type of the HQL/Criteria query. Another alternative would be to create an HQL/Criteria query for the Child type, set the max and first result, and then filter to the ones that are in the Entity's Children collection (using a subquery), but I wasn't able to do this filter. If it were a bidirectional association (Child referring to the parent Entity) it would be easier. Any suggestions?
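
    One pattern that usually works here (sketched for NHibernate, with property names guessed from the description) is to page over the Child type directly and restrict it through a join back to the parent, rather than through the mapped collection:

        // HQL: select the children of one parent, ordered, and page the result
        var page = session.CreateQuery(
                "select c from Entity e join e.Children c where e.Id = :id order by c.Id")
            .SetParameter("id", entity.Id)
            .SetFirstResult(actualPage * numberOfRecordsPerPage)
            .SetMaxResults(numberOfRecordsPerPage)
            .List<Child>();

    SetFirstResult/SetMaxResults then apply to the Child rows returned by the query instead of to the aggregate root.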

    Read the article

  • JDBC with JSP fails to insert

    - by StrykeR
    I am having some issues right now with JDBC in JSP. I am trying to insert username/pass ext into my MySQL DB. I am not getting any error or exception, however nothing is being inserted into my DB either. Below is my code, any help would be greatly appreciated. <% String uname=request.getParameter("userName"); String pword=request.getParameter("passWord"); String fname=request.getParameter("firstName"); String lname=request.getParameter("lastName"); String email=request.getParameter("emailAddress"); %> <% try{ String dbURL = "jdbc:mysql:localhost:3306/assi1"; String user = "root"; String pwd = "password"; String driver = "com.mysql.jdbc.Driver"; String query = "USE Users"+"INSERT INTO User (UserName, UserPass, FirstName, LastName, EmailAddress) " + "VALUES ('"+uname+"','"+pword+"','"+fname+"','"+lname+"','"+email+"')"; Class.forName(driver); Connection conn = DriverManager.getConnection(dbURL, user, pwd); Statement statement = conn.createStatement(); statement.executeUpdate(query); out.println("Data is successfully inserted!"); } catch(SQLException e){ for (Throwable t : e) t.printStackTrace(); } %> DB script here: CREATE DATABASE Users; use Users; CREATE TABLE User ( UserID INT NOT NULL AUTO_INCREMENT, UserName VARCHAR(20), UserPass VARCHAR(20), FirstName VARCHAR(30), LastName VARCHAR(35), EmailAddress VARCHAR(50), PRIMARY KEY (UserID) );

    Read the article

  • Trying to redirect the PHP page if the GET id is empty or does not exist

    - by user570782
    <?
        include.....
        if ($picid != $_GET['picid']) || (empty($picid)) {
            echo "page not working";
        } else {
            $picid = $_GET['picid'];
            $query = mysql_query("SELECT * FROM pic_info WHERE picid = 'picid1' "); // problem
            while($rows = mysql_fetch_assoc($query)):
                $picid = $rows['picid'];
                $title = $rows['title'];
                $link = $rows['link'];
                $description = $rows['description'];
                $movie_pic = $rows['movie_pic'];
                $source = $rows['source'];
        }
        $get_comment = mysql_query("SELECT * FROM comment WHERE picid ='$picid'"); // works partially
        $comment_count = mysql_num_rows($get_comment);
        if ($comment_count>0) {
            messages = " ";
            while ($com = mysql_fetch_array($get_comment)){
                $comment_id = $com['comment_id'];
                $name = $com['name'];
                $message = $com['message'];
                $time_post= $com['time_post'];
                $messages .= '<em> on ' .$time_post.'</em><b> '.$name.' said.....</b><br/> '.$message.'<hr/>'; // line with problem
            }
        }
    ?>

    I am stuck. I am trying to say: if $_GET['picid'] is empty, echo an error message, or if the picid does not exist in the DB, echo an error message. When I run it I get an error. I'm not sure if I am calling the correct function. What am I doing wrong? Please help.
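
    A rough sketch of the kind of guard the question describes (validate the parameter first, then check that the row exists, and bail out otherwise); the column and table names follow the question, the rest is illustrative and uses the old mysql_* API for consistency with the original code:

        <?php
        // fail early if the parameter is missing or empty
        if (empty($_GET['picid'])) {
            echo "page not working";
            exit;
        }

        $picid  = mysql_real_escape_string($_GET['picid']);
        $result = mysql_query("SELECT * FROM pic_info WHERE picid = '$picid'");

        // fail if no matching row exists in the database
        if (!$result || mysql_num_rows($result) == 0) {
            echo "page not working";
            exit;
        }

        $rows = mysql_fetch_assoc($result);
        // ... use $rows['title'], $rows['link'], etc. as in the original code
        ?>

    An alternative to echoing the message is redirecting with header('Location: error.php') before any output is sent.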

    Read the article

  • Is there any convenient way to check if a DOM element is `$compile`d in AngularJS?

    - by Tong Shen
    I'm trying to integrate some plain JavaScript library with AngularJS, in which I need to manually $compile some DOM elements. I'm doing the compilation like this: $compile(e.srcElement)($scope); e.srcElement is the DOM element I want to $compile. I wonder, if there is any established way to check if a given DOM element has been compiled. I know it's possible if I attach some data attributes to the DOM during compiling, and try to retrieve that later. What I want to know is if there is any existing method in AngularJS. Thank you!

    Read the article

  • deployd authentication using jQuery ajax

    - by user2507987
    I have installed deployd on my Debian 7.0.0 64-bit machine and have also successfully installed MongoDB on it. I have created some collections and a user collection in the deployd dashboard. Following the user guide on how to connect to and query the tables in deployd, I chose jQuery ajax to log in to deployd from my localhost site, and after a successful login I try to GET/POST some data, but somehow deployd returns access denied. I have created a collection named people, and on its GET, POST, PUT events I have written this code:

        cancelUnless(me, "You are not logged in", 401);

    Then, using this ajax code, I try to log in and POST new people data:

        $(document).ready(function(){
            /* Create query for username and password for login */
            var request = new Object;
            request.username = 'myusername';
            request.password = 'mypassword';
            submitaddress = "http://myipaddress:myport/users/login";
            $.ajax({
                type: "POST",
                url: submitaddress,
                data: request,
                cache: false,
                success: function(data){
                    var returndata = eval(data);
                    /* After login succeeds, try to post people data */
                    if (returndata){
                        var request2 = new Object;
                        request2.name = 'People Name';
                        submitaddress2 = "http://myipaddress:myport/people";
                        $.ajax({
                            type: "POST",
                            url: submitaddress2,
                            data: request2,
                            cache: false,
                            success: function(){
                            }
                        })
                    }
                }
            });
        })

    The login succeeds and returns a session id and my user id, but when I then try to POST people data it returns "You are not logged in". Can anyone help me? What is the correct way to log in to deployd with jQuery from another website (cross-domain)?
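
    Since the login and the data POST run as cross-domain requests from another site, the browser will not attach the deployd session cookie to the second request unless credentials are explicitly allowed on both sides. This is a sketch of the client half (jQuery 1.5.1+); the server must also answer with Access-Control-Allow-Credentials: true and a specific Access-Control-Allow-Origin, which is an assumption about how the deployd instance is fronted:

        $.ajax({
            type: "POST",
            url: "http://myipaddress:myport/users/login",
            data: { username: "myusername", password: "mypassword" },
            xhrFields: { withCredentials: true },   // send/accept the session cookie
            success: function () {
                $.ajax({
                    type: "POST",
                    url: "http://myipaddress:myport/people",
                    data: { name: "People Name" },
                    xhrFields: { withCredentials: true }
                });
            }
        });

    Without the cookie, the second request reaches deployd as an anonymous user, which is exactly what the cancelUnless(me, ...) guard rejects.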

    Read the article

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be queryable quickly if they are to be used for customizing web sites, consumer communications, etc. Well, if you have:

        - a large number of consumers
        - large profiles with a huge set of variables (as profiles describing human behaviour are likely to be)

    ...you are in trouble. If you really have a physical relational database to which you send a query, and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a web site customizing a page, a recommendation engine making a recommendation...) has died of boredom before getting any observable results. There is the possibility of holding the profiles in memory, which would of course increase performance hugely. What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?

    Read the article

  • How to deploy updates to .NET website in cluster

    - by royappa
    We are operating a corporate web application on a load-balanced cluster that consists of two identical IIS servers talking to a single MSSQL database. To deploy updates I am using this primitive process:

        1) Make a copy of the entire site folder (wwwroot\inetpub\whatever) on each IIS box
        2) Download the updated, compiled files onto each IIS box from our development area
        3) Shut down IIS on both web servers
        4) Copy the new and updated files into the wwwroot folder (overwriting any files with the same name)
        5) Restart IIS on both machines

    When database changes are involved there are a few other steps. The whole process is fairly quick, but it is ugly and fraught with danger, so it has to be done with full concentration. I would like to just push one button to make it all happen, and I want a one-click rollback in case there is a problem (that's the reason I make the copy in step 1). I am looking for tools to manage and improve this process. If they also helped us maintain a changelog, that would be nice. Thanks.

    Read the article

  • Problem with DB2 Over clause

    - by silent1mezzo
    I'm trying to do pagination with a very old version of DB2, and the only way I could figure out to select a range of rows was to use the OVER clause. This query provides the correct results (the results that I want to paginate over):

        select MIN(REFID) as REFID, REFGROUPID
        from ARMS_REFERRAL
        where REFERRAL_ID<>'Draft' and REFERRAL_ID not like 'Demo%'
        group by REFGROUPID
        order by REFID desc

    Results:

        REFID  REFGROUPID
        302    242
        301    241
        281    221
        261    201
        225    142
        221    161
        ...    ...

        SELECT * FROM (
            SELECT row_number() OVER () AS rid, MIN(REFID) AS REFID, REFGROUPID
            FROM arms_referral
            where REFERRAL_ID<>'Draft' and REFERRAL_ID not like 'Demo%'
            group by REFGROUPID
            order by REFID desc
        ) AS t WHERE t.rid BETWEEN 1 and 5

    Results:

        REFID  REFGROUPID
        26     12
        22     11
        14     8
        11     7
        6      4

    As you can see, it does select the first five rows, but it's obviously not selecting the latest. If I add an ORDER BY clause to the OVER() it gets closer, but still not totally correct:

        SELECT * FROM (
            SELECT row_number() OVER (ORDER BY REFGROUPID desc) AS rid, MIN(REFID) AS REFID, REFGROUPID
            FROM arms_referral
            where REFERRAL_ID<>'Draft' and REFERRAL_ID not like 'Demo%'
            group by REFGROUPID
            order by REFID desc
        ) AS t WHERE t.rid BETWEEN 1 and 5

        REFID  REFGROUPID
        302    242
        301    241
        281    221
        261    201
        221    161

    It's really close, but the 5th result isn't correct (it is actually the 6th result). How do I make this query correct, so that it groups by REFGROUPID and then orders by the REFID?
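
    A sketch of one likely fix: since the window function is evaluated after the GROUP BY, the row numbers can be ordered by the grouped MIN(REFID) itself instead of by REFGROUPID, so they follow the same ordering as the first query:

        SELECT * FROM (
            SELECT row_number() OVER (ORDER BY MIN(REFID) DESC) AS rid,
                   MIN(REFID) AS REFID, REFGROUPID
            FROM arms_referral
            WHERE REFERRAL_ID <> 'Draft' AND REFERRAL_ID NOT LIKE 'Demo%'
            GROUP BY REFGROUPID
        ) AS t
        WHERE t.rid BETWEEN 1 AND 5

    Whether this syntax is accepted depends on the DB2 version in question, which is an assumption here; if OVER (ORDER BY MIN(REFID) DESC) is rejected, the same effect can be had by computing MIN(REFID) in an inner derived table and applying row_number() over that in an outer select.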

    Read the article
