Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.


  • Getting counts of 0 from a query with a double group by

    - by Maltiriel
    I'm trying to write a query that gets the counts for a table (call it item) categorized by two different things, call them type and code. What I'm hoping for as output is the following:

        Type  Code  Count
        1     A     3
        1     B     0
        1     C     10
        2     A     0
        2     B     13
        2     C     2

    And so forth. Both type and code are found in lookup tables, and each item can have just one type but more than one code, so there's also a pivot (aka junction or join) table for the codes. I have a query that can get this result:

        Type  Code  Count
        1     A     3
        1     C     10
        2     B     13
        2     C     2

    and it looks like this (with join conditions omitted):

        SELECT typelookup.name, codelookup.name, COUNT(item.id)
        FROM typelookup
        LEFT OUTER JOIN item
        JOIN itemcodepivot
        RIGHT OUTER JOIN codelookup
        GROUP BY typelookup.name, codelookup.name

    Is there any way to alter this query to get the results I'm looking for? This is in MySQL, if that matters. I'm not actually sure this is possible all in one query, but if it is I'd really like to know how. Thanks for any ideas.
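
    A common way to get the zero rows is to build every type/code combination first with a CROSS JOIN of the two lookup tables, and then LEFT JOIN the item data onto that grid. The sketch below has to guess the join columns (typelookup.id, codelookup.id, item.type_id, itemcodepivot.item_id, itemcodepivot.code_id), since the original query omits them:

        SELECT typelookup.name               AS type_name,
               codelookup.name               AS code_name,
               COUNT(itemcodepivot.item_id)  AS item_count
        FROM typelookup
        CROSS JOIN codelookup
        LEFT OUTER JOIN item
               ON item.type_id = typelookup.id
        LEFT OUTER JOIN itemcodepivot
               ON itemcodepivot.item_id = item.id
              AND itemcodepivot.code_id = codelookup.id
        GROUP BY typelookup.name, codelookup.name;

    Counting itemcodepivot.item_id rather than item.id matters here: it stays NULL for combinations with no matching code row, so COUNT returns 0 for them.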

    Read the article

  • jpa join query on a subclass

    - by Brian
    I have the following relationships in JPA (Hibernate). Object X has two subclasses, Y and Z. Object A has a ManyToOne relationship to object X. (Note, this is a one-sided relationship, so object X cannot see object A.) Now, I want to get the max value of a column in object A, but only where the relationship is of a specific subtype, i.e. Y. So that equates to: get the max value of column1 in object A, across all instances of A that have a relationship with Y. Is this possible? I'm a bit lost as to how to query it. I was thinking of something like:

        String query = "SELECT MAX(a.columnName) FROM A a JOIN a.x";
        Query q = super.entityManager.createQuery(query);
        q.getSingleResult();

    However, that doesn't take account of the subclass of X... so I'm a bit lost. Any help would be much appreciated.

    Read the article

  • Why is SQLite3 using covering indices instead of the indices I created?

    - by Geoff
    I have an extremely large database (contacts has ~3 billion entries, people has ~280 million entries, and the other tables have a negligible number of entries). Most other queries I've run are really fast. However, I've encountered a more complicated query that's really slow. I'm wondering if there's any way to speed this up. First of all, here is my schema:

        CREATE TABLE activities (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
        CREATE TABLE contacts (
          id INTEGER PRIMARY KEY,
          person1_id INTEGER NOT NULL,
          person2_id INTEGER NOT NULL,
          duration REAL NOT NULL, -- hours
          activity_id INTEGER NOT NULL
          -- FOREIGN_KEY(person1_id) REFERENCES people(id),
          -- FOREIGN_KEY(person2_id) REFERENCES people(id)
        );
        CREATE TABLE people (
          id INTEGER PRIMARY KEY,
          state_id INTEGER NOT NULL,
          county_id INTEGER NOT NULL,
          age INTEGER NOT NULL,
          gender TEXT NOT NULL, -- M or F
          income INTEGER NOT NULL
          -- FOREIGN_KEY(state_id) REFERENCES states(id)
        );
        CREATE TABLE states (
          id INTEGER PRIMARY KEY,
          name TEXT NOT NULL,
          abbreviation TEXT NOT NULL
        );
        CREATE INDEX activities_name_index on activities(name);
        CREATE INDEX contacts_activity_id_index on contacts(activity_id);
        CREATE INDEX contacts_duration_index on contacts(duration);
        CREATE INDEX contacts_person1_id_index on contacts(person1_id);
        CREATE INDEX contacts_person2_id_index on contacts(person2_id);
        CREATE INDEX people_age_index on people(age);
        CREATE INDEX people_county_id_index on people(county_id);
        CREATE INDEX people_gender_index on people(gender);
        CREATE INDEX people_income_index on people(income);
        CREATE INDEX people_state_id_index on people(state_id);
        CREATE INDEX states_abbreviation_index on states(abbreviation);
        CREATE INDEX states_name_index on states(name);

    Note that I've created an index on every column in the database. I don't care about the size of the database; speed is all I care about. Here's an example of a query that, as expected, runs almost instantly:

        SELECT count(*) FROM people, states
        WHERE people.state_id=states.id and states.abbreviation='IA';

    Here's the troublesome query:

        SELECT * FROM contacts
        WHERE rowid IN (
          SELECT contacts.rowid FROM contacts, people, states
          WHERE contacts.person1_id=people.id
            AND people.state_id=states.id
            AND states.name='Kansas'
          INTERSECT
          SELECT contacts.rowid FROM contacts, people, states
          WHERE contacts.person2_id=people.id
            AND people.state_id=states.id
            AND states.name='Missouri');

    Now, what I think would happen is that each subquery would use each relevant index I've created to speed this up. However, when I show the query plan, I see this:

        sqlite> EXPLAIN QUERY PLAN SELECT * FROM contacts WHERE rowid IN (SELECT contacts.rowid FROM contacts, people, states WHERE contacts.person1_id=people.id AND people.state_id=states.id AND states.name='Kansas' INTERSECT SELECT contacts.rowid FROM contacts, people, states WHERE contacts.person2_id=people.id AND people.state_id=states.id AND states.name='Missouri');
        0|0|0|SEARCH TABLE contacts USING INTEGER PRIMARY KEY (rowid=?) (~25 rows)
        0|0|0|EXECUTE LIST SUBQUERY 1
        2|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
        2|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
        2|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person1_id_index (person1_id=?) (~12 rows)
        3|0|2|SEARCH TABLE states USING COVERING INDEX states_name_index (name=?) (~1 rows)
        3|1|1|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)
        3|2|0|SEARCH TABLE contacts USING COVERING INDEX contacts_person2_id_index (person2_id=?) (~12 rows)
        1|0|0|COMPOUND SUBQUERIES 2 AND 3 USING TEMP B-TREE (INTERSECT)

    In fact, if I show the query plan for the first query I posted, I get this:

        sqlite> EXPLAIN QUERY PLAN SELECT count(*) FROM people, states WHERE people.state_id=states.id and states.abbreviation='IA';
        0|0|1|SEARCH TABLE states USING COVERING INDEX states_abbreviation_index (abbreviation=?) (~1 rows)
        0|1|0|SEARCH TABLE people USING COVERING INDEX people_state_id_index (state_id=?) (~5569556 rows)

    Why is SQLite using covering indices instead of the indices I created? Shouldn't the search in the people table be able to happen in log(n) time given state_id, which in turn is found in log(n) time?
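
    For what it's worth, "USING COVERING INDEX" does not mean SQLite ignored the indexes above: those are the created indexes, and "covering" only says the lookup can be answered from the index alone, without touching the table. The expensive part is the ~5.5 million people rows per state plus the temp B-tree built for the INTERSECT. One hedged rewrite worth trying expresses the same condition as a single join, avoiding the INTERSECT entirely; this is untested against the schema above, but it should be equivalent because each contact row has exactly one person1_id and one person2_id:

        SELECT c.*
        FROM contacts c
        JOIN people p1 ON p1.id = c.person1_id
        JOIN states s1 ON s1.id = p1.state_id AND s1.name = 'Kansas'
        JOIN people p2 ON p2.id = c.person2_id
        JOIN states s2 ON s2.id = p2.state_id AND s2.name = 'Missouri';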

    Read the article

  • trouble connecting to MySql DB (PHP)

    - by user332817
    Hi, I have the following PHP code to connect to my DB:

        <?php
        ob_start();
        $host="localhost";    // Host name
        $username="root";     // MySQL username
        $password="";         // MySQL password
        $db_name="test";      // Database name
        $tbl_name="members";  // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        ?>

    However, I get the following error:

        Warning: mysql_connect() [function.mysql-connect]: [2002] A connection attempt failed because the connected party did not (trying to connect via tcp://localhost:3306) in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11
        Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11
        Fatal error: Maximum execution time of 30 seconds exceeded in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

    I am able to add a DB/tables via phpMyAdmin, but I can't connect using PHP. Here is a screenshot of my phpMyAdmin page: http://img294.imageshack.us/img294/1589/sqls.jpg Any help would be appreciated, thanks in advance.

    Read the article

  • How do I do proximity search in Oracle right?

    - by hko19
    Oracle's NEAR operator for full-text search returns a score based on the proximity of two or more query terms. For example, near((dog, bite), 6) matches if 'dog' and 'bite' occur within 6 words of each other. What if I'd like it to match if either 'dog' or 'cat' or any other type of animal occurs within 6 words of the word 'bite'? I tried:

        near(((dog OR cat OR animal), bite), 6)

    but I got:

        NEAR operand not a phrase, equivalence or another NEAR expression

    Rather than expanding all possible combinations into multiple NEAR expressions and 'or'-ing them together, what is the proper way to write such a query?
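
    The error message itself hints at one option: an equivalence expression is an accepted NEAR operand, so swapping the OR for Oracle Text's equivalence operator (=) may express exactly what the OR version was meant to say. A hedged sketch, where the table documents and the column doc_text are placeholders, not names from the original post:

        SELECT id
        FROM documents
        WHERE CONTAINS(doc_text, 'near(((dog = cat = animal), bite), 6)', 1) > 0;

    If the equivalence operator turns out not to fit the data, the fallback is the expansion the question wanted to avoid: 'near((dog, bite), 6) OR near((cat, bite), 6) OR near((animal, bite), 6)'.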

    Read the article

  • kohana quotes in query.

    - by peter
    Hey, I want to format a date in MySQL using DATE_FORMAT(tblnews.datead, '%M %e, %Y, %l:%i%p'). I can't seem to get the quotes right, so I keep getting errors. How would you put this in a query?
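
    In plain SQL the format mask just has to stay inside single quotes; the quoting problem usually only appears when the statement is itself built inside a single-quoted PHP/Kohana string. A minimal sketch of the raw query (table and column names taken from the snippet above, the alias is mine), which can then be wrapped in double quotes on the PHP side so the inner single quotes survive untouched:

        SELECT DATE_FORMAT(tblnews.datead, '%M %e, %Y, %l:%i%p') AS formatted_date
        FROM tblnews;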

    Read the article

  • Double Inner Join generates unexpected error

    - by Itamar Marom
    In my database I have three tables:

        Users: UserID (Auto Numbering), UserName, UserPassword and a few other unimportant fields.
        PrivateMessages: MessageID (Auto Numbering), SenderID and a few other fields defining the message content.
        MessageStatus: MessageID, ReceiverID, MessageWasRead (Boolean)

    What I need is a query to which I input a user's ID and I get all the private messages he has received. In addition, I also need to receive each message's sender UserName. For this I wrote the following query:

        SELECT Users.*, PrivateMessages.*, MessageStatus.*
        FROM PrivateMessages
        INNER JOIN Users ON PrivateMessages.SenderID = Users.UserID
        INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageID
        WHERE MessageStatus.ReceiverID=[@userid];

    But for some reason when I try saving it in my Access database, I get the following error (translated to English by me, since my office is in a different language):

        Syntax error (missing operator) at expression: "PrivateMessages.SenderID = Users.UserID INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageI".

    Any ideas what could cause this? Thanks.
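
    Access (Jet SQL) is usually the culprit here: unlike most databases, it wants each additional join wrapped in parentheses. A hedged sketch of the same query with that nesting, logic otherwise unchanged:

        SELECT Users.*, PrivateMessages.*, MessageStatus.*
        FROM (PrivateMessages
        INNER JOIN Users ON PrivateMessages.SenderID = Users.UserID)
        INNER JOIN MessageStatus ON PrivateMessages.MessageID = MessageStatus.MessageID
        WHERE MessageStatus.ReceiverID=[@userid];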

    Read the article

  • Run SSIS Package from T-SQL

    - by Dr. Zim
    I noticed you can use the following stored procedures (in order) to schedule an SSIS package:

        msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
        msdb.dbo.sp_add_job ...
        msdb.dbo.sp_add_jobstep ...
        msdb.dbo.sp_update_job ...
        msdb.dbo.sp_add_jobschedule ...
        msdb.dbo.sp_add_jobserver ...

    (You can see an example by right-clicking a scheduled job and selecting "Script Job as > CREATE To".) AND you can use sp_start_job to execute the job immediately, effectively running SSIS packages on demand. Question: does anyone know of any msdb.dbo.[...] stored procedures that simply allow you to run SSIS packages on the fly without using xp_cmdshell directly, or some easier approach?
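
    A minimal sketch of the on-demand part, which just starts an Agent job that already wraps the package (the job name here is a placeholder, not from the original post):

        -- Start an existing SQL Server Agent job that runs the SSIS package
        EXEC msdb.dbo.sp_start_job @job_name = N'Run MySsisPackage';

    As far as I know there is no msdb procedure that executes a .dtsx file directly; the usual non-xp_cmdshell options are an Agent job like this or the dtexec utility outside T-SQL.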

    Read the article

  • Indexed key vs indexed separate columns, which one is faster ?

    - by Jerry
    In MySQL, from a pure performance perspective: if I have a table with a large amount of data and a 10/1 read/write ratio, is it faster for read/write performance to have 4 search criteria in separate columns, all indexed, or to have them combined into one single string acting as a key, stored in one indexed column? E.g. take a table with 5 columns, first name, last name, sex, country and file, where the first four columns will ALWAYS be given as part of the search parameters; or a table with two columns, key and file, where the value of key can be john-smith-male-australia. I don't quite get the pros and cons. The point I try to stress is the fact that all parameters will be given in a search.
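
    One middle ground worth considering is a single composite index over the four columns: the lookup behaves much like one concatenated key, but the columns stay separate and usable on their own. A hedged sketch with illustrative names and types (none of this DDL is from the original post):

        CREATE TABLE person_file (
            first_name VARCHAR(50)  NOT NULL,
            last_name  VARCHAR(50)  NOT NULL,
            sex        CHAR(1)      NOT NULL,
            country    VARCHAR(50)  NOT NULL,
            file       VARCHAR(255) NOT NULL,
            INDEX idx_search (first_name, last_name, sex, country)
        );

        -- All four criteria are always supplied, so one index seek answers the query:
        SELECT file
        FROM person_file
        WHERE first_name = 'john' AND last_name = 'smith' AND sex = 'M' AND country = 'australia';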

    Read the article

  • sqlite3.OperationalError: database is locked - non-threaded application

    - by James C
    Hi, I have a Python application which throws the standard sqlite3.OperationalError: database is locked error. I have looked around the internet and could not find any solution which worked (please note that there is no multiprocessing/threading going on, and as you can see I have tried raising the timeout parameter). The sqlite file is stored on the local hard drive. The following function is one of many which access the sqlite database; it runs fine the first time it is called, but throws the above error the second time it is called (it is called as part of a for loop in another function):

        def update_index(filepath):
            path = get_setting('Local', 'web')
            stat = os.stat(filepath)
            modified = stat.st_mtime
            index_file = get_setting('Local', 'index')
            connection = sqlite3.connect(index_file, 30)
            cursor = connection.cursor()
            head, tail = os.path.split(filepath)
            cursor.execute('UPDATE hwlive SET date=? WHERE path=? AND name=?;',
                           (modified, head, tail))
            connection.commit()
            connection.close()

    Many thanks.

    Read the article

  • Does Table.InsertOnSubmit create a copy of the original table?

    - by Bryan
    Using InsertOnSubmit seems to have some memory overhead. I have a System.Data.Linq.Table<User> table. When I do table.InsertOnSubmit(user) and then int count = table.Count(), the memory usage of my application increases by roughly the size of the User table, but the count is the number of items before user was inserted. So I'm guessing an enumeration after InsertOnSubmit creates a copy of the table. Is that true?

    Read the article

  • advanced select in Stored Procedure

    - by Auro
    Hey, I got this table:

        CREATE TABLE Test_Table
        (
          old_val VARCHAR2(3),
          new_val VARCHAR2(3),
          Updflag NUMBER,
          WorkNo  NUMBER
        );

    and this is in my table:

        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('1',' 20',0,0);
        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('2',' 20',0,0);
        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('2',' 30',0,0);
        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('3',' 30',0,0);
        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('4',' 40',0,0);
        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('4',' 40',0,0);

    Now my table looks like this:

        Row  Old_val  New_val  Updflag  WorkNo
        1    '1'      ' 20'    0        0
        2    '2'      ' 20'    0        0
        3    '2'      ' 30'    0        0
        4    '3'      ' 30'    0        0
        5    '4'      ' 40'    0        0
        6    '5'      ' 40'    0        0

    (If the values in the new_val column are the same then the rows belong together, and the same goes for old_val.) So in the example above rows 1-4 belong together, and rows 5-6. At the moment I have this cursor in my stored procedure:

        SELECT t1.Old_val, t1.New_val, t1.updflag, t1.WorkNo
        FROM Test_Table t1
        WHERE t1.New_val =
        (
          SELECT t2.New_val
          FROM Test_Table t2
          WHERE t2.Updflag = 0
            AND t2.Worknr = 0
            AND ROWNUM = 1
        )

    The output is this:

        Row  Old_val  New_val  Updflag  WorkNo
        1    1        20       0        0
        2    2        20       0        0

    My problem is, I don't know how to get rows 1 to 4 with one select. (I had an idea with 4 sub-queries, but this won't work if more data matches together.) Does anyone of you have an idea?

    Read the article

  • Round date to 10 minutes interval

    - by Peter Lang
    I have a DATE column that I want to round to the next-lower 10-minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of the minutes.

        WITH test_data AS (
          SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
          SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
        )
        -- #end of test-data
        SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
        FROM test_data

    And here is the result:

        01.01.2010 10:00:00    01.01.2010 10:00:00
        01.01.2010 10:05:00    01.01.2010 10:00:00
        01.01.2010 10:09:59    01.01.2010 10:00:00
        01.01.2010 10:10:00    01.01.2010 10:10:00
        01.01.2099 10:00:33    01.01.2099 10:00:00

    Works as expected, but is there a better way? EDIT: I was curious about performance, so I did the following test with 500,000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions.

        DECLARE
          t TIMESTAMP := SYSTIMESTAMP;
        BEGIN
          FOR i IN (
            WITH test_data AS (
              SELECT SYSDATE + ROWNUM / 5000 d
              FROM dual
              CONNECT BY ROWNUM <= 500000
            )
            SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
            FROM test_data
          ) LOOP
            NULL;
          END LOOP;
          dbms_output.put_line( SYSTIMESTAMP - t );
        END;

    This approach took 03.24 s.
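
    A hedged alternative that works from the hour instead of the minute, in case it reads better: TO_CHAR(d, 'MI') is implicitly converted to a number and 1440 is the number of minutes in a day, so this should return the same values as the expression above, though it has not been run against the 500,000-row benchmark:

        SELECT d,
               TRUNC(d, 'HH') + FLOOR(TO_CHAR(d, 'MI') / 10) * 10 / 1440 AS d_rounded
        FROM test_data;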

    Read the article

  • Database Design Question: GUID + Natural Numbers

    - by Alan
    For a database I'm building, I've decided to use natural numbers as the primary key. I'm aware of the advantages that GUIDs allow, but looking at the data, the bulk of each row's data was GUID keys. I want to generate XML records from the database data, and one problem with natural numbers is that I don't want to expose my database keys to the outside world and allow users to guess "keys." I believe GUIDs solve this problem. So, I think the solution is to generate a sparse, unique ID derived from the natural ID (hopefully it would be 2-way), or just add an extra column in the database and store a GUID (or some other multibyte ID). The derived value is nicer because there is no storage penalty, but it would be easier to reverse and guess compared to a GUID. I'm curious as to what others on SO have done, and what insights they have.

    Read the article

  • How to store MySQL query results in another Table?

    - by Taz
    How do I store the results from the following query in another table, given that an appropriate table has already been created?

        SELECT labels.label, shortabstracts.ShortAbstract, images.LinkToImage, types.Type
        FROM ner.images, ner.labels, ner.shortabstracts, ner.types
        WHERE labels.Resource=images.Resource
          AND labels.Resource=shortabstracts.Resource
          AND labels.Resource=types.Resource;
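
    MySQL's INSERT ... SELECT does this in one statement. A sketch, where results_table and its column list are placeholders for whatever the pre-created table actually looks like:

        INSERT INTO results_table (label, short_abstract, link_to_image, type)
        SELECT labels.label, shortabstracts.ShortAbstract, images.LinkToImage, types.Type
        FROM ner.images, ner.labels, ner.shortabstracts, ner.types
        WHERE labels.Resource=images.Resource
          AND labels.Resource=shortabstracts.Resource
          AND labels.Resource=types.Resource;

    (If the table did not exist yet, CREATE TABLE ... AS SELECT ... would create and fill it in one go.)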

    Read the article

  • Does normalization really hurt performance in high traffic sites?

    - by Luke101
    I am designing a database and I would like to normalize it. In one query I will be joining about 30-40 tables. Will this hurt the website's performance if it ever becomes extremely popular? This will be the main query and it will be getting called 50% of the time. In the other queries I will be joining about 2 tables. I have a choice right now to normalize or not to normalize, but if the normalization becomes a problem in the future I may have to rewrite 40% of the software, and that may take me a long time. Does normalization really hurt in this case? Should I denormalize now while I have the time?

    Read the article

  • Increase and decrease row value by 1 in MySQL

    - by Elliott
    Hi, I have a MySQL database table "points". The user can click a button and a point should be removed from their account; the button they pressed has the ID of another user, so that account must increase by one. I have it working in jQuery and checked the variables/posts in Firebug, and it does send the correct data, such as: userid = 1, posterid = 4. I think the problem is with my PHP page:

        <?php
        include ('../functions.php');
        $userid   = mysql_real_escape_string($_POST['user_id']);
        $posterid = mysql_real_escape_string($_POST['poster_id']);
        if (loggedin())
        {
            include ('../connection.php');
            $query1  = "UPDATE `points` SET `points` = `points` - 1 WHERE `userID` = '$userid'";
            $result1 = mysql_query($query1);
            $query2  = "UPDATE `points` SET `points` = `points` + 1 WHERE `userID` = '$posterid'";
            $result2 = mysql_query($query2);
            if ($result1 && $result2)
            {
                echo "Successful";
                return 1;
            }
            else
            {
                echo mysql_error();
                return 0;
            }
        }
        ?>

    Any ideas? Thanks :)
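
    Since both rows live in the same table, the two updates can also be collapsed into a single statement, which keeps the two balances consistent even if one of the separate queries were to fail; a hedged sketch using the column names from the snippet above:

        UPDATE `points`
        SET `points` = `points` + CASE `userID`
                                    WHEN '$userid'   THEN -1
                                    WHEN '$posterid' THEN  1
                                  END
        WHERE `userID` IN ('$userid', '$posterid');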

    Read the article

  • Does anybody have any suggestions on which of these two approaches is better for large delete?

    - by RPS
    Approach #1:

        DECLARE @count int
        SET @count = 2000
        DECLARE @rowcount int
        SET @rowcount = @count
        WHILE @rowcount = @count
        BEGIN
            DELETE TOP (@count) FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND bCopied = 1
              AND FileNameCRC = @localNameCrc

            SELECT @rowcount = @@ROWCOUNT
            WAITFOR DELAY '000:00:00.400'
        END

    Approach #2:

        DECLARE @count int
        SET @count = 2000
        DECLARE @rowcount int
        SET @rowcount = @count
        WHILE @rowcount = @count
        BEGIN
            DELETE FROM ProductOrderInfo
            WHERE ProductId = @product_id
              AND FileNameCRC IN
              (
                  SELECT TOP(@count) FileNameCRC
                  FROM ProductOrderInfo WITH (NOLOCK)
                  WHERE bCopied = 1
                    AND FileNameCRC = @localNameCrc
              )

            SELECT @rowcount = @@ROWCOUNT
            WAITFOR DELAY '000:00:00.400'
        END

    Read the article

  • how to write this query using joins?

    - by aquero
    Hi, I have a table campaign which has details of campaign mails sent.

        campaign_table:
        campaign_id   campaign_name   flag
        1             test1           1
        2             test2           1
        3             test3           0

    Another table, campaign_activity, has details of campaign activities.

        campaign_activity:
        campaign_id   is_clicked   is_opened
        1             0            1
        1             1            0
        2             0            1
        2             1            0

    I want to get all campaigns with flag value 1, together with the number of is_clicked columns with value 1 and the number of is_opened columns with value 1, in a single query. I.e.:

        campaign_id   campaign_name   numberofclicks   numberofopens
        1             test1           1                1
        2             test2           1                1

    I did this using sub-queries with the query:

        SELECT c.campaign_id, c.campaign_name,
          (SELECT count(campaign_id) FROM campaign_activity WHERE campaign_id=c.campaign_id AND is_clicked=1) AS numberofclicks,
          (SELECT count(campaign_id) FROM campaign_activity WHERE campaign_id=c.campaign_id AND is_opened=1) AS numberofopens
        FROM campaign c
        WHERE c.flag=1

    But people say that using sub-queries is not a good coding convention and that you should use joins instead. But I don't know how to get the same result using a join. I consulted some of my colleagues and they say it's not possible to use a join in this situation. Is it possible to get the same result using joins? If yes, please tell me how.
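
    For reference, a join with conditional aggregation gives the same per-campaign counts without correlated sub-queries; a sketch against the tables shown above (in MySQL, SUM over a boolean expression counts the rows where it is true, and COALESCE covers campaigns with no activity rows at all):

        SELECT c.campaign_id,
               c.campaign_name,
               COALESCE(SUM(ca.is_clicked = 1), 0) AS numberofclicks,
               COALESCE(SUM(ca.is_opened  = 1), 0) AS numberofopens
        FROM campaign c
        LEFT JOIN campaign_activity ca ON ca.campaign_id = c.campaign_id
        WHERE c.flag = 1
        GROUP BY c.campaign_id, c.campaign_name;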

    Read the article

  • Query a recordset

    - by Dion
    I'm trying to work out how to move the $sql_pay_det query outside of the $sql_pay loop (so in effect query on $rs_pay_det rather than creating a new $rs_pay_det for each iteration of the loop). At the moment my code looks like:

        //Get payroll transactions
        $sql_pay = 'SELECT Field1, DescriptionandURLlink, Field5, Amount, Field4, PeriodName
                    FROM tblgltransactionspayroll
                    WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'" AND PeriodNo = "'.$month.'"';
        $rs_pay = mysql_query($sql_pay);
        $row_pay = mysql_fetch_assoc($rs_pay);

        while($row_pay = mysql_fetch_array($rs_pay)) {
            $employee_name = $row_pay['Field1'];
            $assign_no = $row_pay['DescriptionandURLlink'];
            $pay_period = $row_pay['Field5'];
            $mth_name = $row_pay['PeriodName'];
            $amount = $row_pay['Amount'];
            $total_amount = $total_amount + $amount;
            $amount = my_number_format($amount, 2, ".", ",");

            $sql_pay_det = 'SELECT ElementDesc, Amount, WTEWorked, WTEPaid, WTEContract, Payscale
                            FROM tblpayrolldetail
                            WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'"
                              AND AccountingPeriod = "'.$mth_name.'" AND EmployeeRef = "'.$assign_no.'"';
            $rs_pay_det = mysql_query($sql_pay_det);
            $row_pay_det = mysql_fetch_assoc($rs_pay_det);

            while($row_pay_det = mysql_fetch_array($rs_pay_det)) {
                $element_det = $row_pay_det['ElementDesc'];
                $amount_det = $row_pay_det['Amount'];
                $wte_worked = $row_pay_det['WTEWorked'];
                $wte_paid = $row_pay_det['WTEPaid'];
                $wte_cont = $row_pay_det['WTEContract'];
                $payscale = $row_pay_det['Payscale'];

                //Get band/point and annual salary where element is basic pay
                if ($element_det == "3959#Basic Pay") {
                    $sql_payscale = 'SELECT txtPayscaleName, Salary FROM tblpayscalemapping WHERE txtPayscale = "'.$payscale.'"';
                    $rs_payscale = mysql_query($sql_payscale);
                    $row_payscale = mysql_fetch_assoc($rs_payscale);
                    $grade = $row_payscale['txtPayscaleName'];
                    $salary = "£" . my_number_format($row_payscale['Salary'], 0, ".", ",");
                }
            }
        }

    I've tried doing this:

        //Get payroll transactions
        $sql_pay = 'SELECT Field1, DescriptionandURLlink, Field5, Amount, Field4, PeriodName
                    FROM tblgltransactionspayroll
                    WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'" AND PeriodNo = "'.$month.'"';
        $rs_pay = mysql_query($sql_pay);
        $row_pay = mysql_fetch_assoc($rs_pay);

        //Get payroll detail recordset
        $sql_pay_det = 'SELECT ElementDesc, Amount, WTEWorked, WTEPaid, WTEContract, Payscale, EmployeeRef
                        FROM tblpayrolldetail
                        WHERE CostCentreCode = "'.$cc.'" AND SubjectiveCode = "'.$subj.'" AND AccountingPeriod = "'.$mth_name.'"';
        $rs_pay_det = mysql_query($sql_pay_det);

        while($row_pay = mysql_fetch_array($rs_pay)) {
            $employee_name = $row_pay['Field1'];
            $assign_no = $row_pay['DescriptionandURLlink'];
            $pay_period = $row_pay['Field5'];
            $mth_name = $row_pay['PeriodName'];
            $amount = $row_pay['Amount'];
            $total_amount = $total_amount + $amount;
            $amount = my_number_format($amount, 2, ".", ",");

            //Query $rs_pay_det
            $sql_pay_det2 = 'SELECT ElementDesc, Amount, WTEWorked, WTEPaid, WTEContract, Payscale
                             FROM "'.$rs_pay_det.'"
                             WHERE EmployeeRef = "'.$assign_no.'"';
            $rs_pay_det2 = mysql_query($sql_pay_det2);
            $row_pay_det2 = mysql_fetch_assoc($rs_pay_det2);

            while($row_pay_det = mysql_fetch_array($rs_pay_det)) {
                $element_det = $row_pay_det['ElementDesc'];
                $amount_det = $row_pay_det['Amount'];
                $wte_worked = $row_pay_det['WTEWorked'];
                $wte_paid = $row_pay_det['WTEPaid'];
                $wte_cont = $row_pay_det['WTEContract'];
                $payscale = $row_pay_det['Payscale'];

                //Get band/point and annual salary where element is basic pay
                if ($element_det == "3959#Basic Pay") {
                    $sql_payscale = 'SELECT txtPayscaleName, Salary FROM tblpayscalemapping WHERE txtPayscale = "'.$payscale.'"';
                    $rs_payscale = mysql_query($sql_payscale);
                    $row_payscale = mysql_fetch_assoc($rs_payscale);
                    $grade = $row_payscale['txtPayscaleName'];
                    $salary = "£" . my_number_format($row_payscale['Salary'], 0, ".", ",");
                }
            }
        }

    But I get an error on $row_pay_det2 saying that "mysql_fetch_assoc(): supplied argument is not a valid MySQL result resource".
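
    The usual cure for a query-per-row loop is to fetch everything in one joined statement and then group the rows in PHP rather than querying a PHP recordset, which SQL cannot do. A hedged SQL sketch, assuming EmployeeRef in tblpayrolldetail corresponds to DescriptionandURLlink in tblgltransactionspayroll and AccountingPeriod to PeriodName, as the loop above implies (the aliases and the DetailAmount name are mine):

        SELECT t.Field1, t.DescriptionandURLlink, t.Field5, t.Amount, t.PeriodName,
               d.ElementDesc, d.Amount AS DetailAmount, d.WTEWorked, d.WTEPaid, d.WTEContract, d.Payscale
        FROM tblgltransactionspayroll t
        JOIN tblpayrolldetail d
          ON d.EmployeeRef = t.DescriptionandURLlink
         AND d.AccountingPeriod = t.PeriodName
         AND d.CostCentreCode = t.CostCentreCode
         AND d.SubjectiveCode = t.SubjectiveCode
        WHERE t.CostCentreCode = '$cc'
          AND t.SubjectiveCode = '$subj'
          AND t.PeriodNo = '$month';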

    Read the article
