Search Results

Search found 31931 results on 1278 pages for 'sql statement'.


  • Voting Script, Possibility of Simplifying Database Queries

    - by Sev
    I have a voting script which stores the post_id and the user_id in a table, to determine whether a particular user has already voted on a post and disallow them in the future. To do that, I am doing the following 3 queries:

        SELECT user_id, post_id from votes_table where postid=? AND user_id=?

    If that returns no rows, then:

        UPDATE post_table set votecount = votecount-1 where post_id = ?

    Then, to display the new votecount on the web page:

        SELECT votecount from post where post_id=?

    Any better way to do this? 3 queries are seriously slowing down the user's voting experience.
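    A minimal sketch of one way to make the duplicate check atomic, assuming MySQL and a UNIQUE index on (post_id, user_id) in votes_table (the index is an assumption, not something stated in the question):

        -- Sketch only: the unique index turns the "already voted?" check into the insert itself.
        INSERT IGNORE INTO votes_table (post_id, user_id) VALUES (?, ?);

        -- Run these only if the INSERT reported 1 affected row (checked in application code):
        UPDATE post_table set votecount = votecount-1 where post_id = ?;
        SELECT votecount from post where post_id=?;

    This doesn't cut the worst-case statement count, but it removes the race where two concurrent votes could both pass the initial SELECT.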

    Read the article

  • custom sorting or ordering a table without resorting the whole shebang

    - by fuugus
    For ten years we've been using the same custom sorting on our tables, and I'm wondering if there is another solution which involves fewer updates, especially since today we'd like to have a replication/publication date and wouldn't like our replication to replicate unnecessary entries. I had a look into nested sets, but it doesn't seem to do the job for us.

    Base table:

        id | a_sort
        ---+-------
         1 |   10
         2 |   20
         3 |   30

    After inserting an entry at the second position with

        insert into table (a_sort) values(15)

    the table looks like this:

        id | a_sort
        ---+-------
         1 |   10
         2 |   20
         3 |   30
         4 |   15

    Ordering the table with

        select * from table order by a_sort

    and resorting all the a_sort entries, updating at least id=(2,3,4), will of course produce the desired output:

        id | a_sort
        ---+-------
         1 |   10
         4 |   20
         2 |   30
         3 |   40

    The column names, the column count, the datatypes, a possible join, possible triggers, and the way the resorting is done are irrelevant to the problem. We've also found some pretty neat ways to do this task fast. The only question is: how can we reduce the updates in the db to 1 or 2 max? It seems like an awfully common problem. The captain obvious in me once thought "use an a_sort float(53), insert using a fixed value of ordervaluefirstentry+abs(ordervaluefirstentry-ordervaluenextentry)/2" .. but this would only allow around 1040 "in between" entries - so never resorting seems a bit problematic ;)
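    A hedged sketch of a low-statement reshuffle against the example table above (table name assumed), using the same gap of 10 between sort values: shift everything at or after the target position with a single UPDATE, then insert into the freed slot.

        -- Sketch only: shift rows at or after the second position (a_sort = 20) ...
        UPDATE base_table SET a_sort = a_sort + 10 WHERE a_sort >= 20;
        -- ... then insert the new row into the gap it leaves.
        INSERT INTO base_table (a_sort) VALUES (20);

    This is one UPDATE plus one INSERT, but the UPDATE still touches every row behind the insert point, so it only helps if the replication concern is about the number of statements rather than the number of changed rows.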

    Read the article

  • When running UPDATE ... datetime = NOW(); will all rows updated have the same date/time?

    - by Darryl Hein
    When you run something similar to: UPDATE table SET datetime = NOW(); on a table with 1 000 000 000 records and the query takes 10 seconds to run, will all the rows have the exact same time (minutes and seconds) or will they have different times? In other words, will the time be when the query started or when each row is updated? I'm running MySQL, but I'm thinking this applies to all dbs.
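    In MySQL specifically, NOW() is evaluated once at the time the statement starts executing, while SYSDATE() is evaluated at each call, so a quick way to see the difference is to compare the two (table and column names here are placeholders, not from the question):

        UPDATE example_table SET updated_at = NOW();      -- every row gets the statement's start time
        UPDATE example_table SET updated_at = SYSDATE();  -- rows can get different times on a long-running update

    Other databases may behave differently, which is part of what the question is asking.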

    Read the article

  • Broken count(*) after adding LEFT JOIN

    - by Iain Urquhart
    Since adding the LEFT JOIN to the query below, the count(*) has been returning some strange values; it seems to have added the total number of rows returned by the query to the 'level':

        SELECT `n`.*, exp_channel_titles.*,
               round((`n`.`rgt` - `n`.`lft` - 1) / 2, 0) AS childs,
               count(*) - 1 + (`n`.`lft` > 1) + 1 AS level,
               ((min(`p`.`rgt`) - `n`.`rgt` - (`n`.`lft` > 1)) / 2) > 0 AS lower,
               (((`n`.`lft` - max(`p`.`lft`) > 1))) AS upper
        FROM `exp_node_tree_6` `n`
        LEFT JOIN `exp_channel_titles` ON (`n`.`entry_id`=`exp_channel_titles`.`entry_id`),
             `exp_node_tree_6` `p`, `exp_node_tree_6`
        WHERE `n`.`lft` BETWEEN `p`.`lft` AND `p`.`rgt`
          AND ( `p`.`node_id` != `n`.`node_id` OR `n`.`lft` = 1 )
        GROUP BY `n`.`node_id`
        ORDER BY `n`.`lft`

    I'm totally stumped... Thank you!
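    One hedged observation: if level is meant to count only the ancestor rows coming from the p alias, counting distinct p rows instead of all joined rows would make it immune to the extra rows brought in by the LEFT JOIN (and by the bare, unaliased exp_node_tree_6 in the FROM list). A sketch of just that expression:

        count(DISTINCT `p`.`node_id`) - 1 + (`n`.`lft` > 1) + 1 AS level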

    Read the article

  • How do I put data from multiple records into different columns?

    - by Bryan
    My two tables are titled analyzed and analyzedCopy3. I'm trying to put information from analyzedCopy3 into multiple columns in analyzed. Sample data from analyzedCopy3:

        readings_miu_id   OriginalCol      ColRSSIz
        110001366         Frederick Road   -108
        110001366         Steel Street
        110001366         Fifth Ave.
        110001508         Steel Street     -104

    What I want to do is put the top 3 OriginalCol, ColRSSIz combinations into columns that I have in the table analyzed. In analyzed there is only one record for each unique readings_miu_id. Any ideas? Thanks in advance.

    Additional info: by "top 3 OriginalCol, ColRSSIz combinations" I mean the first 3 combinations with the highest value in the ColRSSIz column. For any readings_miu_id there could be anywhere from 1 row of information to 6 rows of information, so at most I'm only wanting the top 3. If there are fewer than 3 rows for the readings_miu_id then the other columns need to be blank.

    Query that generates the table "analyzed":

        strSql4 = "SELECT readings_miu_id, Count(readings_miu_id) as NumberOfReads, " & _
                  "First(PercentSuccessz) as PercentSuccess, First(Readingz) as Reading, " & _
                  "First(MIUwindowz) as MIUwindow, First(SNz) as SN, First(Noisez) as Noise, " & _
                  "First(RSSIz) as RSSI, First(ColRSSIz) as ColRSSI, First(MIURSSIz) as MIURSSI, " & _
                  "First(Col1z) as Col1, First(Col1RSSIz) as Col1RSSI, First(Col2z) as Col2, " & _
                  "First(Col2RSSIz) as Col2RSSI, First(Col3z) as Col3, First(Col3RSSIz) as Col3RSSI, " & _
                  "First(Firmwarez) as Firmware, First(CFGDatez) as CFGDate, First(FreqCorrz) as FreqCorr, " & _
                  "First(Activez) as Active, First(MeterTypez) as MeterType, First(OriginColz) as OriginCol, " & _
                  "First(ColIDz) as ColID, First(Ownagez) as Ownage, First(SiteIDz) as SiteID, " & _
                  "First(PremIDz) as PremID, First(prem_group1z) as prem_group1, First(prem_group2z) as prem_group2, " & _
                  "First(ReadIDz) as ReadID, First(prem_addr1z) as prem_addr1 " & _
                  "INTO analyzed " & _
                  "FROM analyzedCopy2 " & _
                  "GROUP BY readings_miu_id, PremIDz; "
        DoCmd.SetWarnings False
        DoCmd.RunSQL strSql4
        DoCmd.SetWarnings True
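    Since this is Access/Jet SQL (DoCmd.RunSQL), one hedged way to pull the top three OriginalCol/ColRSSIz pairs per readings_miu_id is a correlated TOP 3 subquery. This is a sketch against analyzedCopy3 only, and ties in ColRSSIz can return more than three rows per id:

        SELECT a.readings_miu_id, a.OriginalCol, a.ColRSSIz
        FROM analyzedCopy3 AS a
        WHERE a.ColRSSIz IN (
              SELECT TOP 3 b.ColRSSIz
              FROM analyzedCopy3 AS b
              WHERE b.readings_miu_id = a.readings_miu_id
              ORDER BY b.ColRSSIz DESC)
        ORDER BY a.readings_miu_id, a.ColRSSIz DESC;

    Pivoting those rows into the Col1/Col2/Col3-style columns of analyzed would still need a crosstab or a set of ranked subqueries.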

    Read the article

  • What is the most "database independent" way of creating a variable length text field in a database

    - by Thibaut Colar
    I want to create a text field in the database with no specific size (it will store text of unknown length in some cases) - the particular texts are serialized simple objects (~ JSON). What is the most database independent way to do this:

        - a varchar with no size specified (I don't think all DBs support this)
        - a 'text' field; this seems to be common, but I don't believe it's a standard
        - a blob or other object of that kind?
        - a varchar of a very large size (that's inefficient and probably wastes disk space)
        - other?

    I'm using JDBC, but I'd like to use something that is supported in most DBs (Oracle, MySQL, PostgreSQL, Derby, HSQL, H2, etc...). Thanks.
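    For reference, the SQL standard's large character type is CLOB, but support and naming vary by vendor (PostgreSQL and MySQL use TEXT instead), so the DDL below is only a sketch with invented names:

        CREATE TABLE serialized_payload (
            id      INTEGER PRIMARY KEY,
            payload CLOB    -- TEXT on PostgreSQL/MySQL; read/written via java.sql.Clob or setCharacterStream in JDBC
        );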

    Read the article

  • Wildcard in query for ASP.NET GridView

    - by cinu
    I am using GridView in ASP.NET 2.0. I want to show the details from 3 tables (SQL 2005) in the GridView per my search criteria (Name of Visitor, Passport Number, Name of Company). It is working, but I want to use a wildcard for searching by the first letter of "Name of Visitor". I have my code in the QueryBuilder in GridView (using Configure Datasource). The query is as follows:

        SELECT FormMaster.NameofCompany, VisitorMaster.NameofVisitor, VisitorMaster.PassportNumber,
               FormMaster.FormID, VisitorMaster.VisitorID
        FROM VisitorMaster
        INNER JOIN VisitorDetails ON VisitorMaster.VisitorID = VisitorDetails.VisitorID
        INNER JOIN FormMaster ON VisitorDetails.FormID = FormMaster.FormID
        WHERE (FormMaster.FormStatusID = 1)
          AND (VisitorMaster.PassportNumber = @PassportNumber)
           OR (VisitorMaster.NameofVisitor = @NameofVisitor)
           OR (FormMaster.NameofCompany = @NameofCompany)
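    A hedged sketch of the prefix search, swapping the equality test on NameofVisitor for LIKE with a trailing % (SQL Server 2005 syntax; the parameter would carry the leading letter or letters). Note that the parentheses below also group all three search branches under the FormStatusID filter, which may or may not match the precedence intended in the original WHERE clause:

        SELECT FormMaster.NameofCompany, VisitorMaster.NameofVisitor, VisitorMaster.PassportNumber,
               FormMaster.FormID, VisitorMaster.VisitorID
        FROM VisitorMaster
        INNER JOIN VisitorDetails ON VisitorMaster.VisitorID = VisitorDetails.VisitorID
        INNER JOIN FormMaster ON VisitorDetails.FormID = FormMaster.FormID
        WHERE FormMaster.FormStatusID = 1
          AND (   VisitorMaster.PassportNumber = @PassportNumber
               OR VisitorMaster.NameofVisitor LIKE @NameofVisitor + '%'
               OR FormMaster.NameofCompany = @NameofCompany)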

    Read the article

  • How do I do a table join on two fields in my second table?

    - by Cannonade
    I have two tables:

        Messages - amongst other things, has a to_id and a from_id field.
        People - has a corresponding person_id.

    I am trying to figure out how to do the following in a single LINQ query: give me all messages that have been sent to and from person x (idself). I had a couple of cracks at this.

    Not quite right:

        MsgPeople = (from p in db.people
                     join m in db.messages on p.person_id equals m.from_id
                     where (m.from_id == idself || m.to_id == idself)
                     orderby p.name descending
                     select p).Distinct();

    This almost works, except I think it misses one case: "people who have never received a message, just sent one to me".

    How this works in my head: what I really need is something like

        join m in db.messages on (p.people_id equals m.from_id or p.people_id equals m.to_id)

    Gets me a subset of the people I am after. It seems you can't do that. I have tried a few other options, like doing two joins:

        MsgPeople = (from p in db.people
                     join m in AllMessages on p.person_id equals m.from_id
                     join m2 in AllMessages on p.person_id equals m2.to_id
                     where (m2.from_id == idself || m.to_id == idself)
                     orderby p.name descending
                     select p).Distinct();

    but this gives me a subset of the results I need, I guess something to do with the order the joins are resolved. My understanding of LINQ (and perhaps even database theory) is embarrassingly superficial and I look forward to having some light shed on my problem.
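    For what it's worth, the set being described corresponds to the following SQL shape (a hedged sketch using the column names from the question); in LINQ it would usually be expressed with a where clause or a union of two queries rather than a join with an OR condition:

        SELECT DISTINCT p.*
        FROM people AS p
        JOIN messages AS m
          ON p.person_id = m.from_id OR p.person_id = m.to_id
        WHERE m.from_id = @idself OR m.to_id = @idself
        ORDER BY p.name DESC;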

    Read the article

  • Table in DB for generating primary keys?

    - by Sapphire
    Do you ever use a separate table for "generating" artificial primary keys for the DB (and why)? What I mean is having a table with two columns, table name and current ID, with which you could get a new "ID" for some table by simply locking the row with that table name, getting the current value of the key, incrementing it by one, and unlocking the row. Why would you prefer this over a standard integer identity column? P.S. The "idea" is from Fowler's Patterns of Enterprise Application Architecture, btw...
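    A minimal sketch of the key table being described (all names invented here), with the row lock doing the serialization:

        CREATE TABLE key_table (
            table_name VARCHAR(64) PRIMARY KEY,
            next_id    INTEGER NOT NULL
        );

        -- Inside a single transaction: lock the row, read the value, then bump it.
        SELECT next_id FROM key_table WHERE table_name = 'orders' FOR UPDATE;
        UPDATE key_table SET next_id = next_id + 1 WHERE table_name = 'orders';

    Compared with an identity/auto-increment column, this trades some contention on key_table for ids that are portable across databases and can be fetched before the row itself is inserted.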

    Read the article

  • user_objects oracle

    - by mysticfalls
    I would just like to ask what the difference is between user_constraints and user_objects. I have two databases. I ran a script on both DBs that resulted in a unique constraint error. To solve the problem I deleted the constraint in the user_constraints table for both DBs. After that, DB1 ran without error. DB2, however, failed. I checked user_constraints for both DBs and the constraints were deleted. I was asked to check user_objects, and found that DB2 has the same constraint_name as the object_name in the user_objects table. Any info regarding their relationship, use, similarities, etc. will be appreciated. Thanks.
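    Both are Oracle data dictionary views over the same schema, so one hedged way to compare what each knows about the name in question (the constraint name below is a placeholder):

        SELECT constraint_name, constraint_type, table_name, status
        FROM   user_constraints
        WHERE  constraint_name = 'MY_CONSTRAINT_NAME';

        SELECT object_name, object_type, status
        FROM   user_objects
        WHERE  object_name = 'MY_CONSTRAINT_NAME';

    If the second query returns a row with object_type INDEX, that is likely the index backing the unique constraint; dropping a constraint does not always drop such an index, which could explain why the two databases behave differently.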

    Read the article

  • What is the current status on Microsoft ProClarity?

    - by Ali_Abadani
    I don't really know how to compose this question. My company has been using Microsoft ProClarity for a few years and we have quite a few users using it, publishing books and doing ad-hoc analysis. With the new Microsoft BI solutions, it seems like they are completely going away from ProClarity and replacing the OLAP analysis with Excel. I understand that with SharePoint and its integration with PerformancePoint and Reporting Services, the dashboards and reports would be done in SharePoint, but what about the analysis? Any ideas?

    Read the article

  • pgSQL query error

    - by running4surival
    I tried using this query:

        "SELECT * FROM guests WHERE event_id=".$id." GROUP BY member_id;"

    and I'm getting this error:

        ERROR: column "guests.id" must appear in the GROUP BY clause or be used in an aggregate function

    Can anyone explain how I can work around this?
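    PostgreSQL requires every selected column to be grouped, aggregated, or functionally dependent on the GROUP BY columns, so the workaround depends on what the query should return. Two hedged sketches (the ORDER BY id is an assumption about which row per member should win):

        -- If only per-member aggregates are needed:
        SELECT member_id, count(*) AS guest_count
        FROM guests
        WHERE event_id = $1
        GROUP BY member_id;

        -- If one full row per member is needed (PostgreSQL-specific DISTINCT ON):
        SELECT DISTINCT ON (member_id) *
        FROM guests
        WHERE event_id = $1
        ORDER BY member_id, id;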

    Read the article

  • query to return three records for each customer application based on the options declared in the pre

    - by kumarreddy
    The tables look like this:

        table1 - customer application
            columns: application id (primary key), name, ssn, ...
        table2 - balance (actually it's a view)
            columns: amount balance, application id, ...
        table3 - options
            columns: optionid, option value (1, 2, 3, 4), ...
        table4 - ratios
            columns: ratios id, option value, ratio value, applicationid (have to think about it), ...

    table4 detail:

        option value | ratio
        -------------+------
              1      |  30
              1      |  40
              1      |  30
              2      | 100
              2      |   0
              2      |   0
              3      |  60
              3      |  30
              3      |  10
              4      |  50
              4      |  30
              4      |  20

    As is the case, I now need to get three records for each customer application, with varying balances in proportion to the ratios declared in table4 corresponding to the option values. Please let me know where I was unclear about returning the records. Thanks in advance.
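    A heavily hedged sketch of the shape of query this seems to describe - split each application's balance three ways according to the ratios for its option. Every table, column, and join below is an assumption read off the outline above:

        SELECT a.application_id,
               r.ratio_value,
               b.amount_balance * r.ratio_value / 100.0 AS split_balance
        FROM   customer_application a
        JOIN   balance b ON b.application_id = a.application_id
        JOIN   options o ON o.application_id = a.application_id   -- assumed link between application and option
        JOIN   ratios  r ON r.option_value   = o.option_value;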

    Read the article

  • Crash when checking BOF property of pessimistic locked ADO recordset

    - by Patrick
    Bit of an odd one for you: I've got two connections to a database, and on one I've opened a _RecordsetPtr with a pessimistic lock. I can no longer send an UPDATE command on the other connection, though I can send a SELECT command on the second connection and data is returned. If I use a read-only lock then there are no problems. However, when I use a pessimistic lock on the second connection as well, I can check that State == adStateOpen, but the program hangs when I test the BOF property. If I don't test the BOF property and try to call moveNext on the second connection, the software hangs. If I do neither of these, I am able to access the data via the second connection, but trying to access the data from the first connection causes the software to hang. Anyone seen anything similar? I'm a bit stuck. EDIT: it wasn't hanging - someone had put a 30 minute timeout on the connection and I wasn't waiting that long while testing...

    Read the article

  • SqlCeCommand ExecuteNonQuery performance issue

    - by Michael
    I've been asked to resolve an issue with a .Net/SqlServerCe application. Specifically, after repeated inserts against the db, performance becomes increasingly degraded - in one instance at ~200 rows, in another at ~1000 rows. In the latter case the code being used looks like this:

        Dim cm1 As System.Data.SqlServerCe.SqlCeCommand = cn1.CreateCommand
        cm1.CommandText = "INSERT INTO Table1 Values(?,?,?,?,?,?,?,?,?,?,?,?,?)"
        For j = 0 To ds.Tables(0).Rows.Count - 1   'this is 3110
            For i = 0 To 12
                cm1.Parameters(tbl(i, 0)).Value = Vals(j,i)   'values taken from a different db
            Next
            cm1.ExecuteNonQuery()
        Next

    The specifics aren't super important (like what 'tbl' is, etc.) but rather whether or not this code should be expected to handle this number of inserts, or if the crawl I'm witnessing is to be expected.

    Read the article

  • select row from table and substitute a field with one from another column if not null

    - by EarthMind
    I'm trying to construct a PostgreSQL query that does the following, but so far my efforts have been in vain. Problem: there are two tables, A and B. I'd like to select all columns from table A (having columns: id, name, description) and substitute the "A.name" column with the value of the column "B.title" from table B (having columns: id, table_A_id, title, langcode) where B.table_A_id is 5 and B.langcode is "nl" (if there are any rows). I've tried using a CASE and COALESCE() but failed due to my inexperience with both concepts. Thanks in advance.

    Read the article

  • select row from table and substitute a field with one from another column if it exists

    - by EarthMind
    I'm trying to construct a PostgreSQL query that does the following, but so far my efforts have been in vain. Problem: there are two tables, A and B. I'd like to select all columns from table A (having columns: id, name, description) and substitute the "A.name" column with the value of the column "B.title" from table B (having columns: id, table_A_id, title, langcode) where B.table_A_id is 5 and B.langcode is "nl" (if there are any rows).

    My attempts:

        SELECT A.name,
               case when exists(select title from B where table_A_id = 5 and langcode= 'nl')
                    then B.title
                    else A.name END
        FROM A, B
        WHERE A.id = 5 and B.table_A_id = 5 and B.langcode = 'nl'

        -- second try:
        SELECT COALESCE(B.title, A.name) as name
        from A, B
        where A.id = 5 and B.table_A_id = 5
          and exists(select title from B where table_A_id = 5 and langcode= 'nl')

    I've tried using a CASE and COALESCE() but failed due to my inexperience with both concepts. Thanks in advance.
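    A hedged sketch of the usual shape for this: make B optional with a LEFT JOIN whose ON clause carries the langcode condition, then let COALESCE fall back to A.name when no matching B row (or no title) exists. Table and column names are taken from the question:

        SELECT A.id,
               COALESCE(B.title, A.name) AS name,
               A.description
        FROM   A
        LEFT JOIN B ON B.table_A_id = A.id
                   AND B.langcode   = 'nl'
        WHERE  A.id = 5;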

    Read the article

  • Can I lock rows in a cursor if the cursor only returns a single count(*) row?

    - by RenderIn
    I would like to restrict users from inserting more than 3 records with color = 'Red' in my FOO table. My intentions are to A) retrieve the current count so that I can determine whether another record is allowed, and B) prevent any other processes from inserting any Red records while this one is in process, hence the for update of. I'd like to do something like:

        cursor cur_cnt is
          select count(*) cnt from foo
          where foo.color = 'Red'
          for update of foo.id;

    Will this satisfy both my requirements, or will it not lock only the rows in the count(*) that had foo.color = 'Red'?
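    For what it's worth, a count(*) result is not a physical row, so one hedged alternative is to lock the matching rows first and count them separately. Note that locking existing rows still does not stop another session from inserting a brand-new 'Red' row; that usually needs a lock on some parent or counter row, or a serializable approach:

        -- Sketch only (Oracle-style):
        SELECT id FROM foo WHERE color = 'Red' FOR UPDATE;
        SELECT COUNT(*) FROM foo WHERE color = 'Red';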

    Read the article

  • MySQL COUNT() total posts within a specific criteria?

    - by newbtophp
    Hey, I've been losing my hair trying to figure out what I'm doing wrong. Let me explain a bit about my MySQL structure (so you get a better understanding) before I go straight to the question. I have a simple PHP forum, and both tables (for posts and topics) have a boolean column named 'deleted': if it equals 0 the row is displayed (considered not deleted/exists), and if it equals 1 it is hidden (considered deleted/doesn't exist).

    Now, the 'specific criteria' I'm on about: I want to get a total post count within a specific forum using its id (forum_id), ensuring it only counts posts which are not deleted (deleted = 0) and whose parent topics are not deleted either (deleted = 0). The column/table names are self explanatory (see my efforts below for them, if needed).

    I've tried the following (using a 'simple' JOIN):

        SELECT COUNT(t1.post_id)
        FROM forum_posts AS t1, forum_topics AS t2
        WHERE t1.forum_id = '{$forum_id}'
          AND t1.deleted = 0
          AND t1.topic_id = t2.topic_id
          AND t2.deleted = 0
        LIMIT 1

    I've also tried this (using a subquery):

        SELECT COUNT(t1.post_id)
        FROM forum_posts AS t1
        WHERE t1.forum_id = '{$forum_id}'
          AND t1.deleted = 0
          AND (SELECT deleted FROM forum_topics WHERE topic_id = t1.topic_id) = 0
        LIMIT 1

    But neither complies with the specific criteria. Appreciate all help! :)
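    Both attempts seem to express the stated criteria already, so as a cross-check, here is a hedged sketch of the same count written as an explicit INNER JOIN - useful mainly for confirming whether the data (missing topic rows, or a deleted flag stored as something other than 0/1) rather than the SQL is what's off:

        SELECT COUNT(p.post_id) AS total_posts
        FROM forum_posts AS p
        INNER JOIN forum_topics AS t ON t.topic_id = p.topic_id
        WHERE p.forum_id = ?   -- bind the forum id instead of interpolating it
          AND p.deleted  = 0
          AND t.deleted  = 0;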

    Read the article
