Search Results

Search found 32223 results on 1289 pages for 'sql 2012'.

Page 762/1289

  • Greatest not null column

    - by Álvaro G. Vicario
    I need to update a row with a formula based on the largest value of two DATETIME columns. I would normally do this: GREATEST(date_one, date_two) However, both columns are allowed to be NULL. I need the greatest date even when the other is NULL (of course, I expect NULL when both are NULL) and GREATEST() returns NULL when one of the columns is NULL. This seems to work: GREATEST(COALESCE(date_one, date_two), COALESCE(date_two, date_one)) But I wonder... am I missing a more straightforward method?
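
    A minimal sketch of the same logic written out with CASE, assuming MySQL and an update target column named latest_date (that column name and the table name are made up; the date columns come from the question):

        UPDATE my_dates                -- table name is a placeholder
        SET latest_date = CASE
                            WHEN date_one IS NULL THEN date_two
                            WHEN date_two IS NULL THEN date_one
                            ELSE GREATEST(date_one, date_two)   -- both non-NULL
                          END;         -- still NULL when both columns are NULL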

    Read the article

  • sql query is too slow, how to improve speed

    - by user1289282
    I have run into a bottleneck when trying to update one of my tables. The player table has, among other things, id, skill, school, and weight. What I am trying to do is, for each school (of 4500) and each weight class (of 14): SELECT id, skill FROM player WHERE player.school = (current school) AND player.weight = (current weight); find the highest skill among the players returned; then UPDATE player SET starter = 'TRUE' WHERE id = (that player's id); move to the next weight and repeat; when all weights have been completed, move to the next school; when all schools are completed, done. I have this code implemented and it works, but with approximately 4500 schools totaling 172,000 players, the way I have it now it would probably take half an hour or more to complete (I did not wait it out), which is way too slow. How can I speed this up? Short of reducing the scale of the system, I am willing to do anything that gets the intended result. Thanks! (The weights are the standard folkstyle wrestling weights, i.e. 103, 113, 120, 126, 132, 138, 145, 152, 160, 170, 182, 195, 220, 285 pounds.)
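
    A set-based sketch of the same result, assuming MySQL (on SQL Server the same idea is written with UPDATE ... FROM): mark every player whose skill equals the maximum skill for their (school, weight) group, which replaces the per-school, per-weight loop entirely. Note that ties would all be marked as starters.

        UPDATE player p
        JOIN (
            SELECT school, weight, MAX(skill) AS max_skill
            FROM player
            GROUP BY school, weight
        ) m ON m.school = p.school
           AND m.weight = p.weight
           AND m.max_skill = p.skill
        SET p.starter = 'TRUE';

        -- An index like this (an assumption about the existing schema) keeps
        -- both the grouping and the join cheap:
        -- CREATE INDEX idx_school_weight_skill ON player (school, weight, skill);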

    Read the article

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility which allows querying for data with hierarchical filtering. I have a few ideas about how to go about it, but was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine that a user is searching for a job. The job areas would be as follows:
    1: Scotland
    2: --- West Central
    3: ------ Glasgow
    4: ------ Etc
    5: --- North East
    6: ------ Ayrshire
    7: ------ Etc
    A user can search somewhere specific (i.e. Glasgow) or in a larger area (i.e. Scotland). The two approaches I am considering are: 1) keep a note of children in the database for each record (i.e. cat 1 would have 2, 3, 4 in its children field) and query against that record with SELECT * FROM Jobs WHERE Category IN Areas.childrenField; 2) use a recursive function to find all results that have a relation to the selected area. The problems I see with both are: 1) holding this data in the db means having to keep track of all changes to the structure; 2) recursion is slow and inefficient. Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
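
    A sketch of a third option for SQL Server 2005: store only a ParentID on each area row and let a recursive common table expression expand the selected area into all of its descendants at query time, so nothing has to be maintained when the structure changes and no application-side recursion is needed. The table and column names here (Areas, ParentID, Jobs.CategoryID, @SelectedArea) are assumptions.

        WITH AreaTree AS (
            SELECT ID FROM Areas WHERE ID = @SelectedArea   -- e.g. Scotland
            UNION ALL
            SELECT a.ID
            FROM Areas a
            JOIN AreaTree t ON a.ParentID = t.ID            -- walk down the tree
        )
        SELECT j.*
        FROM Jobs j
        JOIN AreaTree t ON j.CategoryID = t.ID;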

    Read the article

  • Search by nvarchar

    - by ziks
    Hi all. I have this problem: in a table I have a column of nvarchar type, and the rows in this column look like row1 = 1;6, row2 = 12, row3 = 6;5;67, etc. I am trying to search this column; for example, when I send 1 I want to get only row1. I use LIKE, but in the result set I get both row1 and row2 (because '12' also contains a 1). How can I achieve this? Any help is appreciated. Thanks...
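
    One workaround, sketched here on the assumption that this is SQL Server and that the table and column are called dbo.MyTable and MyCol (both made-up names): pad the column and the search value with the ';' separator so LIKE can only match a whole item, never part of a longer number.

        SELECT *
        FROM dbo.MyTable
        WHERE ';' + MyCol + ';' LIKE '%;1;%';   -- matches the item 1, not 12

    The cleaner long-term fix is to move the list into a child table with one value per row, so a plain equality search (and an index) can be used.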

    Read the article

  • replicating master tables mapping in transaction tables

    - by NoDisplay
    I have three master tables for location information: Country {ID, Name}, State {ID, Name, CountryID}, City {ID, Name, StateID}. Now I have one transaction table called Person which holds the person's name and his location information. My question is: shall I have only CityID in the Person table, like Person {ID, Name, CityID}, and a view over a join query which gives me the detail Person {ID, Name, City, State, Country}? Or shall I replicate the mapping: Person {ID, Name, CityID, StateID, CountryID}? Please suggest which you feel should be selected and why. If there is any other option available, please suggest it. Thanks in advance.
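
    A sketch of the first option, using the column names from the question: Person stores only CityID, and a view supplies the denormalized detail on demand, so the state and country can never get out of sync with the city.

        CREATE VIEW PersonLocation AS
        SELECT p.ID, p.Name,
               ci.Name AS City,
               st.Name AS State,
               co.Name AS Country
        FROM Person  p
        JOIN City    ci ON ci.ID = p.CityID
        JOIN State   st ON st.ID = ci.StateID
        JOIN Country co ON co.ID = st.CountryID;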

    Read the article

  • Access database query locks ability to edit table?

    - by Sattvic
    I created a query in Microsoft Access like the one below: SELECT Deliverables.ID, Deliverables.Title, Deliverables.Summary, Deliverables.Header_Code, Deliverables.Header_Code.Value, Deliverables.Sort_order, Deliverables.Pillar, Deliverables.Pillar.Value, Deliverables.Misc_ID FROM Deliverables WHERE (((Deliverables.Pillar.Value)="Link Building")); But my problem is that this query locks my fields and I cannot make changes to the table using the query view. Any suggestions? I am using Microsoft Access 2007

    Read the article

  • Insert into select and update in single query

    - by Ossi
    I have 4 tables: tempTBL, linksTBL, categoryTBL and extraTBL. In tempTBL I have the columns ID, name, url, cat, isinserted; in linksTBL I have ID, name, alias; in categoryTBL I have cl_id, link_id, cat_id; in extraTBL I have id, link_id, value. How do I, in a single query, select from tempTBL all items where isinserted = 0, insert them into linksTBL, and for each record inserted pick up the ID (which is the primary key), then insert that ID into categoryTBL with cat_id = 88, and after that insert into extraTBL the ID as link_id and the url as value? I know this is confusing, but I'll post it anyhow... This is what I have so far: INSERT IGNORE INTO linksTBL (link_id,link_name,alias) VALUES(NULL,'tex2','hello'); # generate ID by inserting NULL INSERT INTO categoryTBL (link_id,cat_id) VALUES(LAST_INSERT_ID(),'88'); # use the ID in the second table. I would also like it to only select items where isinserted = 0, insert those records, and once they are inserted set isinserted to 1, so that the next time it runs it will not add them again.
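
    Because every new linksTBL ID has to feed two follow-up inserts, this cannot be one set-based statement; a minimal sketch of a MySQL stored procedure that walks the pending rows one by one is shown below. Column names follow the schemas listed at the top of the question, the alias value is an assumption, and error handling is omitted.

        DELIMITER //
        CREATE PROCEDURE copy_pending_links()
        BEGIN
            DECLARE done INT DEFAULT 0;
            DECLARE v_id INT;
            DECLARE v_name VARCHAR(255);
            DECLARE v_url VARCHAR(255);
            DECLARE v_new_link INT;
            DECLARE cur CURSOR FOR
                SELECT ID, name, url FROM tempTBL WHERE isinserted = 0;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

            OPEN cur;
            read_loop: LOOP
                FETCH cur INTO v_id, v_name, v_url;
                IF done THEN LEAVE read_loop; END IF;

                INSERT INTO linksTBL (name, alias) VALUES (v_name, v_name);
                SET v_new_link = LAST_INSERT_ID();                    -- new link ID

                INSERT INTO categoryTBL (link_id, cat_id)  VALUES (v_new_link, 88);
                INSERT INTO extraTBL    (link_id, `value`) VALUES (v_new_link, v_url);

                UPDATE tempTBL SET isinserted = 1 WHERE ID = v_id;    -- don't pick it up again
            END LOOP;
            CLOSE cur;
        END //
        DELIMITER ;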

    Read the article

  • How to limit results by SUM

    - by superspace
    I have a table of events called event. For the purpose of this question it only has one field, called date. The following query returns the number of events happening on each date for the next 14 days:
    SELECT DATE_FORMAT( ev.date, '%Y-%m-%d' ) as short_date, count(*) as date_count
    FROM event ev
    WHERE ev.date >= NOW()
    GROUP BY short_date
    ORDER BY short_date ASC
    LIMIT 14
    The result could be as follows:
    +------------+------------+
    | short_date | date_count |
    +------------+------------+
    | 2010-03-14 |          1 |
    | 2010-03-15 |          2 |
    | 2010-03-16 |          9 |
    | 2010-03-17 |          8 |
    | 2010-03-18 |         11 |
    | 2010-03-19 |         14 |
    | 2010-03-20 |         13 |
    | 2010-03-21 |          7 |
    | 2010-03-22 |          2 |
    | 2010-03-23 |          3 |
    | 2010-03-24 |          3 |
    | 2010-03-25 |          6 |
    | 2010-03-26 |         23 |
    | 2010-03-27 |         14 |
    +------------+------------+
    14 rows in set (0.06 sec)
    Let's say I want to display these events by date, but at the same time I only want to display a maximum of 10 at a time. How would I do this? Somehow I need to limit this result by the SUM of the date_count field, but I do not know how. Has anybody run into this problem before? Any help would be appreciated. Thanks
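
    A sketch of one way to do it in MySQL: wrap the grouping query and keep dates, in order, only while the running total of date_count stays within 10 (where to cut off the date that crosses the limit is a judgment call). This relies on the user variable being evaluated in row order, which held on the MySQL versions of that era; on MySQL 8+ the running sum is cleaner as SUM(date_count) OVER (ORDER BY short_date).

        SELECT short_date, date_count
        FROM (
            SELECT short_date, date_count,
                   @running := @running + date_count AS running_total
            FROM (
                SELECT DATE_FORMAT(ev.date, '%Y-%m-%d') AS short_date,
                       COUNT(*) AS date_count
                FROM event ev
                WHERE ev.date >= NOW()
                GROUP BY short_date
                ORDER BY short_date ASC
                LIMIT 14
            ) d
            CROSS JOIN (SELECT @running := 0) init
        ) t
        WHERE running_total <= 10;    -- drop dates once the total would exceed 10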

    Read the article

  • comparing rows on a mysql table

    - by user311324
    OK, here's the deal. I've got one table with a bunch of client information. Each client makes up to one purchase a year, which is represented by an individual row. There's a column for the year and there's a column that contains a unique identifier for each client. What I need to do is construct a query that takes last year and this year and shows me which clients made a purchase last year but did not make a purchase this year. I also need to build a query that shows me which clients did not make a purchase last year or the year before last, but did make a purchase this year.
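
    A sketch against an assumed table purchases(client_id, purchase_year), with 2009 as "last year" and 2010 as "this year" (all placeholders): the first query keeps clients that have a row for last year and no row for this year, and the second reverses the pattern over the two previous years.

        -- Purchased last year, but not this year
        SELECT DISTINCT p1.client_id
        FROM purchases p1
        LEFT JOIN purchases p2
               ON p2.client_id = p1.client_id
              AND p2.purchase_year = 2010
        WHERE p1.purchase_year = 2009
          AND p2.client_id IS NULL;            -- no match for this year

        -- Purchased this year, but not in either of the two previous years
        SELECT DISTINCT p.client_id
        FROM purchases p
        WHERE p.purchase_year = 2010
          AND NOT EXISTS (SELECT 1
                          FROM purchases x
                          WHERE x.client_id = p.client_id
                            AND x.purchase_year IN (2008, 2009));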

    Read the article

  • Merging some columns of two mysql tables where id = fileid

    - by garg
    There are two tables:
    TableA
    filedata_id | user_id | filename
    1           | 1       | file.txt
    2           | 1       | file2.txt
    TableB
    a_id | date    | filedataid | counter | state | cat_id | subcat_id | med_id
    99   | 1242144 | 1          | 2       | v     | 55     | 56        | 90
    100  | 1231232 | 2          | 3       | i     | 44     | 55        | 110
    I want to move the columns cat_id, subcat_id, med_id to TableA where TableA.filedata_id = TableB.filedataid. The result should be:
    TableA
    filedata_id | user_id | filename  | cat_id | subcat_id | med_id
    1           | 1       | file.txt  | 55     | 56        | 90
    2           | 1       | file2.txt | 44     | 55        | 110
    and so on. Is there a way to do this easily?
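
    A sketch, assuming MySQL and integer columns: add the three columns to TableA, then copy the values across with a joined UPDATE, after which they can be dropped from TableB.

        ALTER TABLE TableA
            ADD COLUMN cat_id    INT,
            ADD COLUMN subcat_id INT,
            ADD COLUMN med_id    INT;

        UPDATE TableA a
        JOIN TableB b ON b.filedataid = a.filedata_id
        SET a.cat_id    = b.cat_id,
            a.subcat_id = b.subcat_id,
            a.med_id    = b.med_id;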

    Read the article

  • Getting rows which include a value with MySQL

    - by sundowatch
    I have a MySQL table in which one column holds several values per row, like this:
    messages TABLE, receiver column:
    user1 row: 1,3,5
    user2 row: 2,3
    user3 row: 1,4
    I want to get the rows which include the value '3', so I should get 'user1' and 'user2'. I tried this, but naturally it doesn't work: mysql_query("SELECT * FROM messages WHERE receiver='3'"); How can I do this?
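
    A sketch using MySQL's FIND_IN_SET(), which compares whole comma-separated items, so '3' matches 2,3 but not 13 or 30:

        SELECT *
        FROM messages
        WHERE FIND_IN_SET('3', receiver) > 0;

    The more robust design would be a separate join table with one receiver per row, which also lets an index be used; FIND_IN_SET cannot use one.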

    Read the article

  • Do partitions allow multiple bulk loads?

    - by ck
    I have a database that contains data for many "clients". Currently, we insert tens of thousands of rows into multiple tables every so often using .Net SqlBulkCopy which causes the entire tables to be locked and inaccessible for the duration of the transaction. As most of our business processes rely upon accessing data for only one client at a time, we would like to be able to load data for one client, while updating data for another client. To make things more fun, all PKs, FKs and clustered indexes are on GUID columns (I am looking at changing this). I'm looking at adding the ClientID into all tables, then partitioning on this. Would this give me the functionality I require?
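
    Partition-level lock escalation is one piece of this: if escalation stops at the partition rather than the table, a bulk load into one client's partition does not have to block reads and writes against another client's rows (it also depends on the bulk-load options, e.g. not requesting a table lock on SqlBulkCopy). A minimal SQL Server sketch, where the table, column names and boundary values are all assumptions and LOCK_ESCALATION requires SQL Server 2008 or later:

        CREATE PARTITION FUNCTION pfClient (int)
            AS RANGE LEFT FOR VALUES (100, 200, 300);    -- one range per block of ClientIDs

        CREATE PARTITION SCHEME psClient
            AS PARTITION pfClient ALL TO ([PRIMARY]);

        CREATE TABLE dbo.ClientOrders (
            ClientID  int      NOT NULL,
            OrderID   int      NOT NULL,
            OrderDate datetime NOT NULL,
            CONSTRAINT PK_ClientOrders PRIMARY KEY CLUSTERED (ClientID, OrderID)
        ) ON psClient (ClientID);

        -- Let escalation go to the partition instead of the whole table:
        ALTER TABLE dbo.ClientOrders SET (LOCK_ESCALATION = AUTO);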

    Read the article

  • How to handle Foreign Keys with Entity Framework

    - by Jack Marchetti
    I have two entities: Groups and Pools. A Group can create many pools, so I set up my Pool table to have a GroupID foreign key. My code: using (entity _db = new entity()) { Pool p = new Pool(); p.Name = "test"; p.Group.ID = "5"; _db.AddToPool(p); } This doesn't work. I get a null reference exception on p.Group. How do I go about creating a new "Pool" and associating a GroupID?

    Read the article

  • PHP, MySQL - would results-array shuffle be quicker than "select... order by rand()"?

    - by sombe
    I've been reading a lot about the disadvantages of using "order by rand", so I don't need an update on that. I was thinking, since I only need a limited number of rows retrieved from the db to be randomized, maybe I should do: $r = $db->query("select * from table limit 500"); for($i=0;$i<500;$i++) $arr[$i]=mysqli_fetch_assoc($r); shuffle($arr); (I know this only randomizes the first 500 rows, so be it). Would that be faster than $r = $db->query("select * from table order by rand() limit 500"); ? Let me just mention that the db tables are packed with more than 10,000 rows. Why don't you do it yourself?!? Well, I have, but I'm looking for your experienced opinion. Thanks!

    Read the article

  • what does select @@identity do?

    - by every_answer_gets_a_point
    I am connecting to a MySQL database through Excel using ODBC. What does this line do?
    Set rs = oConn.Execute("SELECT @@identity", , adCmdText)
    I am having trouble updating the database:
    With rs
        .AddNew ' create a new record
        ' add values to each field in the record
        .Fields("datapath") = dpath
        .Fields("analysistime") = atime
        .Fields("reporttime") = rtime
        .Fields("lastcalib") = lcalib
        .Fields("analystname") = aname
        .Fields("reportname") = rname
        .Fields("batchstate") = "bstate"
        .Fields("instrument") = "NA"
        .Update ' stores the new record
    End With
    It is ONLY updating .Fields("instrument") = "NA"; for all other fields it is putting in NULL values.
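
    SELECT @@identity returns the auto-increment key generated by the most recent INSERT on the current connection (MySQL accepts @@identity as a synonym for LAST_INSERT_ID()), so the line above is the usual way for ADO code to find out which id the .Update created. A tiny sketch with a made-up table name, reusing two of the field names from the question:

        INSERT INTO batches (datapath, analystname) VALUES ('run1', 'someone');
        SELECT @@identity;    -- the id MySQL just assigned to that row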

    Read the article

  • Improve SQL query performance

    - by Anax
    I have three tables where I store actual person data (person), teams (team) and entries (athlete). The relevant columns are person.id, athlete.person_id, athlete.team_id and team.id. In each team there might be two or more athletes. I'm trying to create a query to produce the most frequent pairs, meaning people who play in teams of two. I came up with the following query:
    SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
    FROM person p1, athlete a1, person p2, athlete a2
    WHERE p1.id = a1.person_id
      AND p2.id = a2.person_id
      AND a1.team_id = a2.team_id
      AND a1.team_id IN (
          SELECT id FROM team, athlete
          WHERE team.id = athlete.team_id
          GROUP BY team.id
          HAVING COUNT(*) = 2
      )
    GROUP BY p1.id
    ORDER BY freq DESC
    Obviously this is a resource-consuming query. Is there a way to improve it?
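
    A sketch of one rewrite that counts each pair only once and keeps the two-athlete filter on the athlete table alone; an index on athlete(team_id, person_id) is assumed to exist.

        SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
        FROM athlete a1
        JOIN athlete a2 ON a2.team_id   = a1.team_id
                       AND a2.person_id > a1.person_id    -- each pair counted once
        JOIN (SELECT team_id
              FROM athlete
              GROUP BY team_id
              HAVING COUNT(*) = 2) duos ON duos.team_id = a1.team_id
        JOIN person p1 ON p1.id = a1.person_id
        JOIN person p2 ON p2.id = a2.person_id
        GROUP BY p1.id, p2.id            -- strict SQL modes also want the name columns listed here
        ORDER BY freq DESC;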

    Read the article

  • MySQL: How to copy rows, but change a few fields?

    - by Andrew
    I have a large number of rows that I would like to copy, but I need to change one field. I can select the rows that I want to copy: select * from Table where Event_ID = "120" Now I want to copy all those rows and create new rows while setting the Event_ID to 155. How can I accomplish this?
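
    A sketch of the usual INSERT ... SELECT pattern: list every column except the auto-increment key and substitute the new Event_ID. Here col_a and col_b stand in for the real column names, and the table name is backtick-quoted because TABLE is a reserved word in MySQL.

        INSERT INTO `Table` (Event_ID, col_a, col_b)
        SELECT 155, col_a, col_b
        FROM `Table`
        WHERE Event_ID = 120;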

    Read the article

  • Is there a way to force Report Builder to use "WITH (NOLOCK)" in the queries it generates?

    - by Joe Pineda
    Hi. At work, users are very happy to generate their own reports using Reporting Services' Report Builder. But, alas, the queries it generates are very inefficient, and they don't use "WITH (NOLOCK)" - slowing down things for everyone. These are reports that really do need to be run using latest data - can't be offloaded to the reporting server. And since they query very specific, detailed data, hypercubes are of no use here. So the question is: Is there a way to configure Report Builder's Data Models so the queries it generates always use "WITH (NOLOCK)" when querying a table?
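
    If no model-level switch turns out to be available, one server-side alternative (assuming the reporting database itself can be changed; the database name below is a placeholder) is read committed snapshot isolation, which makes the generated queries read row versions instead of taking shared locks, so they stop blocking writers without editing each report:

        ALTER DATABASE ReportsDb
            SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;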

    Read the article

  • SQL query construction - separate data in a column into two columns

    - by Tommy
    I have a column that contains links. The problem is that the titles of the links are in the same column, so it looks like this: linktitle|-|linkurl I want link title and linkurl in separate columns. I've created a new column for the urls, so I'm looking for a way to extract them and update the linkurl column with them. Is there any clever way to construct a query that does this?
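
    A sketch of such a query for MySQL, assuming the combined column is called linktitle and the new column linkurl (both names are assumptions): SUBSTRING_INDEX splits on the '|-|' marker, and the assignment order matters because MySQL applies SET clauses left to right.

        UPDATE links                                              -- table name is a placeholder
        SET linkurl   = SUBSTRING_INDEX(linktitle, '|-|', -1),    -- part after the separator
            linktitle = SUBSTRING_INDEX(linktitle, '|-|', 1);     -- keep only the title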

    Read the article

  • Does the order of the columns in a SELECT statement make a difference?

    - by Frank Computer
    This question was inspired by a previous question posted on SO, "Does the order of the WHERE clause make a difference?". Would it improve a SELECT statement's performance if the columns used in the WHERE section were placed at the beginning of the SELECT statement? Example: SELECT customer.id, transaction.id, transaction.efective_date, transaction.a, [...] FROM customer, transaction WHERE customer.id = transaction.id; I do know that limiting the list of columns in a SELECT statement to only the needed ones improves performance compared with using SELECT *, because the resulting column list is smaller.

    Read the article

  • Performance optimization for mssql: decrease stored procedure execution time or reduce the load on the server?

    - by tim
    Hello everybody! We have a web service which provides search over hotels. There is a problem with performance: a single request to the service takes around 5000 ms, and almost all of that time is spent in the database executing stored procedures. During a request our server (mssql2008) consumes ~90% of the processor time. When 2 requests are made in parallel the average time grows to around 7000 ms, and as the number of requests increases, the average response time increases as well. We have 20-30 requests per minute. Which kind of optimization is best in this case, keeping in mind that the goal is to provide a stable response time for the service: 1) try to decrease the stored procedures' execution time, or 2) try to find a way to take load off the server? It would be interesting to hear from people who deal with booking sites. Thanks!

    Read the article
