Search Results

Search found 1214 results on 49 pages for 'tomaz tsql'.

Page 10/49 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • T-SQL common table expression query as a subquery

    - by ase69s
    I have the following query:

      WITH Orders(Id) AS (
          SELECT DISTINCT anfrageid FROM MPHotlineAnfrageAnhang
      )
      SELECT Id,
             ( SELECT CONVERT(VARCHAR(255), anfragetext) + ' | '
               FROM MPHotlineAnfrageAnhang
               WHERE anfrageid = Id
               ORDER BY anfrageid, erstelltam
               FOR XML PATH('') ) AS Descriptions
      FROM Orders

    It concatenates the varchar values of different rows grouped by an id. But now I want to include it as a subquery, and it gives some errors I can't solve. Simplified example of use:

      select descriptions from (
          WITH Orders(Id) AS ( SELECT DISTINCT anfrageid FROM MPHotlineAnfrageAnhang )
          SELECT Id,
                 ( SELECT CONVERT(VARCHAR(255), anfragetext) + ' | '
                   FROM MPHotlineAnfrageAnhang
                   WHERE anfrageid = Id
                   ORDER BY anfrageid, erstelltam
                   FOR XML PATH('') ) AS Descriptions
          FROM Orders
      ) as tx
      where id = 100012

    Errors (approximate translation from Spanish):
    - Incorrect syntax near 'WITH'.
    - Incorrect syntax near 'WITH'. If the statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon.
    - Incorrect syntax near ')'.
    What am I doing wrong?
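
    One possible fix (a sketch, not from the original post): a WITH clause has to start its own statement, so instead of nesting the CTE inside a derived table, keep the CTE at the top and filter it in an outer WHERE:

      ;WITH Orders(Id) AS (
          SELECT DISTINCT anfrageid FROM MPHotlineAnfrageAnhang
      )
      SELECT Id,
             ( SELECT CONVERT(VARCHAR(255), anfragetext) + ' | '
               FROM MPHotlineAnfrageAnhang
               WHERE anfrageid = Id
               ORDER BY anfrageid, erstelltam
               FOR XML PATH('') ) AS Descriptions
      FROM Orders
      WHERE Id = 100012;

    The leading semicolon terminates whatever statement precedes the WITH, which is what the second error message is asking for.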

    Read the article

  • Validate a string in a table in SQL Server - CLR function or T-SQL

    - by Ashish Gupta
    I need to check if a column value (string) in a SQL Server table starts with a small letter and can only contain '_', '-', numbers and alphabets. I know I can use a SQL Server CLR function for that. However, I am trying to implement that validation using a scalar UDF and could make very little progress here... I can use 'NOT LIKE', but I am not sure how to validate the string irrespective of the order of characters, or in other words how to write a pattern in SQL for this. Am I better off using a SQL CLR function? Any help will be appreciated. Thanks in advance.
    Thank you everyone for your comments. This morning, I chose to go the CLR function way. For the purpose of what I was trying to achieve, I created one CLR function which does the validation of an input string, called it from a SQL UDF, and it works well. Just to measure the performance of a T-SQL UDF that uses a SQL CLR function vs. a plain T-SQL UDF, I created a SQL CLR function which just checks whether the input string contains only small letters (returning true or false) and called it from a UDF (IsLowerCaseCLR). After that I also created a regular T-SQL UDF (IsLowerCaseTSQL) which does the same thing using 'NOT LIKE'. Then I created a table (Person) with columns Name (varchar) and IsValid (bit) and populated it with names to test.
    Data: 1000 records with 'Ashish' as the value for the Name column and 1000 records with 'ashish' as the value for the Name column. Then I ran the following:

      UPDATE Person SET IsValid = 1 WHERE dbo.IsLowerCaseTSQL(Name) = 1

    The above updated 1000 records (with IsValid = 1) and took less than a second. I deleted all the data in the table and repopulated it with the same data. Then I updated the same table using the SQL CLR UDF (with IsValid = 1) and this took 3 seconds! If the update happens for 5000 records, the regular UDF takes 0 seconds compared to the CLR UDF which takes 16 seconds!
    I am not very knowledgeable on T-SQL pattern matching, or I would have tested my actual, more complex validation criteria. But I just want to know: even if I could have written that, would it have been faster than the SQL CLR function, considering the example above? Are we using SQL CLR because we can implement much richer logic that would be difficult to write in regular SQL? Sorry for this long post. I just want to know from the experts. Please feel free to ask if you could not understand anything here. Thank you again for your time.
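
    For reference, a sketch of a pure T-SQL check for the stated rule (first character a lowercase letter, the rest limited to letters, digits, '_' and '-'); the binary collation is an assumption used to force case sensitivity on the first character:

      SELECT Name
      FROM Person
      WHERE Name COLLATE Latin1_General_BIN LIKE '[a-z]%'   -- starts with a small letter
        AND Name NOT LIKE '%[^a-zA-Z0-9_-]%'                 -- no character outside the allowed set

    Whether this beats the CLR version for the full validation rule would still need to be measured the same way as above.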

    Read the article

  • T-SQL selecting values that match ISNUMERIC and also are within a specified range (plus Linq-to-SQL)

    - by Toby
    I am trying to select rows from a table where one of the (NVARCHAR) columns is within a numeric range.

      SELECT ID, Value FROM Data
      WHERE ISNUMERIC(Value) = 1 AND CONVERT(FLOAT, Value) < 66.6

    Unfortunately, as part of the SQL spec, the AND clauses don't have to short-circuit (and don't on MSSQL Server EE 2008). More info: http://stackoverflow.com/questions/789231/is-the-sql-where-clause-short-circuit-evaluated
    My next attempt was to try this to see if I could achieve delayed evaluation of the CONVERT:

      SELECT ID, Value FROM Data
      WHERE (CASE WHEN ISNUMERIC(Value) = 1 THEN CONVERT(FLOAT, Value) < 66.6 ELSE 0 END)

    but I cannot seem to use a < (or any comparison) with the result of a CONVERT. It fails with the error "Incorrect syntax near '<'". I can get away with

      SELECT ID, CONVERT(FLOAT, Value) AS Value FROM Data WHERE ISNUMERIC(Value) = 1

    So the obvious solution is to wrap the whole select statement in another SELECT and WHERE, return the converted values from the inner select, and filter in the WHERE of the outer select. Unfortunately this is where my Linq-to-SQL problem comes in. I am filtering not only by one range but potentially by many, or just by the existence of the record (there are some date-range selects and comparisons I've left out). Essentially I would like to be able to generate something like this:

      SELECT ID, TypeID, Value FROM Data
      WHERE (TypeID = 4 AND ISNUMERIC(Value) = 1 AND CONVERT(Float, Value) < 66.6)
         OR (TypeID = 8 AND ISNUMERIC(Value) = 1 AND CONVERT(Float, Value) > 99)
         OR (TypeID = 9)

    (With some other clauses in each of those where options.) This clearly doesn't work if I filter out the non-ISNUMERIC values in an inner select. As I mentioned, I am using Linq-to-SQL (and PredicateBuilder) to build up these queries, but unfortunately

      Datas.Where(x => ISNUMERIC(x.Value) ? Convert.ToDouble(x.Value) < 66.6 : false)

    gets converted to this, which hits the initial problem:

      WHERE (ISNUMERIC([t0].[Value]) = 1) AND ((CONVERT(Float,[t0].[Value])) < @p0)

    My last resort will have to be to outer join against a double select on the same table for each of the comparisons, but this isn't really an ideal solution. I was wondering if anyone has run into similar issues before?
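
    One possible workaround (a sketch, not from the post): keep the comparison outside the CASE so that the CONVERT is only reached for values that pass ISNUMERIC:

      SELECT ID, TypeID, Value
      FROM Data
      WHERE (TypeID = 4 AND CASE WHEN ISNUMERIC(Value) = 1 THEN CONVERT(FLOAT, Value) END < 66.6)
         OR (TypeID = 8 AND CASE WHEN ISNUMERIC(Value) = 1 THEN CONVERT(FLOAT, Value) END > 99)
         OR (TypeID = 9)

    When ISNUMERIC(Value) is 0 the CASE yields NULL, and NULL < 66.6 is not true, so the row is filtered out without the conversion mattering. Whether Linq-to-SQL can be coaxed into emitting this shape is a separate question.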

    Read the article

  • Start position for a reused T-SQL cursor?

    - by Span
    I'm working on a stored procedure that uses a cursor on a temporary table (I have read a bit about why cursors are undesirable, but in this situation I believe I still need to use one). In my procedure I need to step through the rows of the table twice. Having declared the cursor, already stepped through the temporary table once and closed the cursor, would the position of the cursor remain at the end of the table when re-opened or does it reposition itself to the initial starting position (ie: before the first row)? Alternatively, to reposition the cursor must I do a 'FETCH FIRST' before stepping through again? Am I right to assume the 'cost' of doing this repositioning and reusing the cursor would be less than deallocating and reallocating the cursor?
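
    For what it's worth, a sketch of the two-pass pattern (cursor, column and variable names are assumed): re-opening a closed cursor positions it before the first row again, so no FETCH FIRST is needed.

      DECLARE @Value INT;
      DECLARE cur CURSOR LOCAL FOR SELECT SomeColumn FROM #MyTempTable;

      OPEN cur;
      FETCH NEXT FROM cur INTO @Value;
      WHILE @@FETCH_STATUS = 0
      BEGIN
          -- first pass ...
          FETCH NEXT FROM cur INTO @Value;
      END
      CLOSE cur;

      OPEN cur;                          -- re-opened: positioned before the first row again
      FETCH NEXT FROM cur INTO @Value;
      WHILE @@FETCH_STATUS = 0
      BEGIN
          -- second pass ...
          FETCH NEXT FROM cur INTO @Value;
      END
      CLOSE cur;
      DEALLOCATE cur;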

    Read the article

  • SQL Split function that handles a string with the delimiter appearing between text qualifiers?

    - by Ron
    There are several SQL split functions, from loop driven, to using xml commands, and even using a numbers table. I haven't found one that supports text qualifiers. Using the example string below, I would like to split on ",", but not when it appears between double or single quotes. Example data: [email protected], "Sally \"Heat\" Jones" <[email protected]>, "Mark Jones" <[email protected]>, "Stone, Ron" <[email protected]> Should return a table: [email protected] "Sally \"Heat\" Jones" <[email protected]> "Mark Jones" <[email protected]> "Stone, Ron" <[email protected]> I know this is a complex query/function, but any suggestions or any guidance would be mucho appreciated.

    Read the article

  • SQL: WHERE vs. ON in a join

    - by Erwin
    Perhaps a dumb question, but consider these 2 tables:

      T1                    T2
      Store   Year          Store
      01      2009          02
      02      2009
      03      2009
      01      2010
      02      2010
      03      2010

    Why is this INNER JOIN giving me the results I want (filtering the [year] in the ON clause)?

      select t1.*
      from t1
      inner join t2 on t1.store = t2.store and t1.[year] = '2009'

      Store   Year
      02      2009

    And why does the LEFT OUTER JOIN include records from year 2010?

      select t1.*
      from t1
      left outer join t2 on t1.store = t2.store and t1.year = '2009'
      where t2.store is null

      01 2009
      03 2009
      01 2010
      02 2010
      03 2010

    So I have to write the [year] filter in the WHERE clause:

      select t1.*
      from t1
      left outer join t2 on t1.store = t2.store
      where t2.store is null and t1.year = '2009'

      01 2009
      03 2009

    Like I said, perhaps a dumb question, but it's bugging me!

    Read the article

  • T-SQL Efficiency and Order of Execution

    - by Kyle Rozendo
    Hi All, In regards to the order of execution of statements in SQL, is there any difference between the following performance wise? SELECT * FROM Persons WHERE UserType = 'Manager' AND LastName IN ('Hansen','Pettersen') And: SELECT * FROM Persons WHERE LastName IN ('Hansen','Pettersen') AND UserType = 'Manager' If there is any difference, is there perhaps a link etc. that you may have where one can learn more about this? Thanks a ton, Kyle

    Read the article

  • Can I select 0 columns in SQL Server?

    - by Woody Zenfell III
    I am hoping this question fares a little better than the similar Create a table without columns. Yes, I am asking about something that will strike most as pointlessly academic. It is easy to produce a SELECT result with 0 rows (but with columns), e.g. SELECT a = 1 WHERE 1 = 0. Is it possible to produce a SELECT result with 0 columns (but with rows)? e.g. something like SELECT NO COLUMNS FROM Foo. (This is not valid T-SQL.) I came across this because I wanted to insert several rows without specifying any column data for any of them. e.g. (SQL Server 2005) CREATE TABLE Bar (id INT NOT NULL IDENTITY PRIMARY KEY) INSERT INTO Bar SELECT NO COLUMNS FROM Foo -- Invalid column name 'NO'. -- An explicit value for the identity column in table 'Bar' can only be specified when a column list is used and IDENTITY_INSERT is ON. One can insert a single row without specifying any column data, e.g. INSERT INTO Foo DEFAULT VALUES. One can query for a count of rows (without retrieving actual column data from the table), e.g. SELECT COUNT(*) FROM Foo. (But that result set, of course, has a column.) I tried things like INSERT INTO Bar () SELECT * FROM Foo -- Parameters supplied for object 'Bar' which is not a function. -- If the parameters are intended as a table hint, a WITH keyword is required. and INSERT INTO Bar DEFAULT VALUES SELECT * FROM Foo -- which is a standalone INSERT statement followed by a standalone SELECT statement. I can do what I need to do a different way, but the apparent lack of consistency in support for degenerate cases surprises me. I read through the relevant sections of BOL and didn't see anything. I was surprised to come up with nothing via Google either.

    Read the article

  • Solving Slow Query

    - by Chris
    We are installing a new forum (yaf) for our site. One of the stored procedures is extremely slow - in fact it always times out in the browser. If I run it in MSSMS it takes nearly 10 minutes to complete. Is there a way to find out what part of this query is taking so long?
    The Query:

      DECLARE @BoardID int
      DECLARE @UserID int
      DECLARE @CategoryID int = null
      DECLARE @ParentID int = null
      SET @BoardID = 1
      SET @UserID = 2
      select a.CategoryID, Category = a.Name, ForumID = b.ForumID, Forum = b.Name, Description,
             Topics = [dbo].[yaf_forum_topics](b.ForumID),
             Posts = [dbo].[yaf_forum_posts](b.ForumID),
             Subforums = [dbo].[yaf_forum_subforums](b.ForumID, @UserID),
             LastPosted = t.LastPosted, LastMessageID = t.LastMessageID, LastUserID = t.LastUserID,
             LastUser = IsNull(t.LastUserName,(select Name from [dbo].[yaf_User] x where x.UserID=t.LastUserID)),
             LastTopicID = t.TopicID, LastTopicName = t.Topic, b.Flags,
             Viewing = (select count(1) from [dbo].[yaf_Active] x
                        JOIN [dbo].[yaf_User] usr ON x.UserID = usr.UserID
                        where x.ForumID=b.ForumID AND usr.IsActiveExcluded = 0),
             b.RemoteURL, x.ReadAccess
      from [dbo].[yaf_Category] a
      join [dbo].[yaf_Forum] b on b.CategoryID=a.CategoryID
      join [dbo].[yaf_vaccess] x on x.ForumID=b.ForumID
      left outer join [dbo].[yaf_Topic] t ON t.TopicID = [dbo].[yaf_forum_lasttopic](b.ForumID,@UserID,b.LastTopicID,b.LastPosted)
      where a.BoardID = @BoardID
        and ((b.Flags & 2)=0 or x.ReadAccess<>0)
        and (@CategoryID is null or a.CategoryID=@CategoryID)
        and ((@ParentID is null and b.ParentID is null) or b.ParentID=@ParentID)
        and x.UserID = @UserID
      order by a.SortOrder, b.SortOrder

    IO Statistics:

      Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'yaf_Active'. Scan count 14, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'yaf_User'. Scan count 0, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'yaf_Topic'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'yaf_Category'. Scan count 0, logical reads 28, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'yaf_Forum'. Scan count 0, logical reads 488, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'yaf_UserGroup'. Scan count 231, logical reads 693, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'yaf_ForumAccess'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'yaf_AccessMask'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
      Table 'yaf_UserForum'. Scan count 1, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Client Statistics:

      Client Execution Time                                    11:54:01
      Query Profile Statistics
        Number of INSERT, DELETE and UPDATE statements         0        0.0000
        Rows affected by INSERT, DELETE, or UPDATE statements  0        0.0000
        Number of SELECT statements                            8        8.0000
        Rows returned by SELECT statements                     19       19.0000
        Number of transactions                                 0        0.0000
      Network Statistics
        Number of server roundtrips                            3        3.0000
        TDS packets sent from client                           3        3.0000
        TDS packets received from server                       34       34.0000
        Bytes sent from client                                 3166     3166.0000
        Bytes received from server                             128802   128802.0000
      Time Statistics
        Client processing time                                 156478   156478.0000
        Total execution time                                   572009   572009.0000
        Wait time on server replies                            415531   415531.0000

    Execution Plan
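
    A sketch of how to see where the time goes (not from the original post): run the query in SSMS with statistics and the actual execution plan enabled, then look at the per-operator costs. The scalar functions in the select list ([dbo].[yaf_forum_topics], [dbo].[yaf_forum_posts], [dbo].[yaf_forum_subforums]) are executed once per returned row, so they are a common suspect in this kind of query.

      SET STATISTICS IO ON;
      SET STATISTICS TIME ON;
      -- In SSMS also enable "Include Actual Execution Plan" (Ctrl+M), then run the query from the post here.
      SET STATISTICS IO OFF;
      SET STATISTICS TIME OFF;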

    Read the article

  • SQL Server: Difference between PARTITION BY and GROUP BY

    - by Mike Mooney
    I've been using GROUP BY for all types of aggregate queries over the years. Recently, I've been reverse-engineering some code that uses PARTITION BY to perform aggregations. In reading through all the documentation I can find about PARTITION BY, it sounds a lot like GROUP BY, maybe with a little extra functionality added in? Are they two versions of the same general functionality, or are they something different entirely?
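
    A small contrast sketch (hypothetical Orders(CustomerID, Amount) table, not from the post): GROUP BY collapses rows to one per group, while an aggregate with OVER (PARTITION BY ...) keeps every row and repeats the group value alongside it.

      -- One row per customer
      SELECT CustomerID, SUM(Amount) AS CustomerTotal
      FROM Orders
      GROUP BY CustomerID;

      -- Every order row kept, with the customer total repeated on each row
      SELECT CustomerID, Amount,
             SUM(Amount) OVER (PARTITION BY CustomerID) AS CustomerTotal
      FROM Orders;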

    Read the article

  • FullText Search using multiple tables in SQL

    - by Caesar
    Hi there. I have 3 tables:

      tblBook(BookID, ISBN, Title, Summary)
      tblAuthor(AuthorID, FullName)
      tblBookAuthor(BookAuthorID, BookID, AuthorID)

    tblBookAuthor allows a single book to have multiple authors, and an author may have written any number of books. I am using full-text search to rank results based on a word:

      SET @Word = 'FORMSOF(INFLECTIONAL, "' + @Word + '")'

      SELECT COALESCE(ISBNResults.[KEY], TitleResults.[KEY], SummaryResults.[KEY]) AS [KEY],
             ISNULL(ISBNResults.Rank, 0) * 3 +
             ISNULL(TitleResults.Rank, 0) * 2 +
             ISNULL(SummaryResults.Rank, 0) AS Rank
      FROM CONTAINSTABLE(tblBook, ISBN, @Word, LANGUAGE 'English') AS ISBNResults
      FULL OUTER JOIN CONTAINSTABLE(tblBook, Title, @Word, LANGUAGE 'English') AS TitleResults
          ON ISBNResults.[KEY] = TitleResults.[KEY]
      FULL OUTER JOIN CONTAINSTABLE(tblBook, Summary, @Word, LANGUAGE 'English') AS SummaryResults
          ON ISBNResults.[KEY] = SummaryResults.[KEY]

    The above code works fine for searching just the tblBook table. But now I would also like to search the tblAuthor table based on the key word searched. Can you help me with this?
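
    A sketch of one way to fold authors in (assumes a full-text index exists on tblAuthor.FullName; not from the post): rank author matches separately, map them to books through tblBookAuthor, and join that in as another result set.

      SELECT ba.BookID AS [KEY], MAX(AuthorResults.Rank) AS Rank
      FROM CONTAINSTABLE(tblAuthor, FullName, @Word, LANGUAGE 'English') AS AuthorResults
      JOIN tblBookAuthor ba ON ba.AuthorID = AuthorResults.[KEY]
      GROUP BY ba.BookID

    This derived table can then be FULL OUTER JOINed to the existing query on its [KEY] column and its Rank added, with whatever weight is wanted, into the combined score.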

    Read the article

  • How to select the top n from a union of two queries where the resulting order needs to be ranked by individual query?

    - by Jedidja
    Let's say I have a table with usernames:

      Id   | Name
      -----+-----------
      1    | Bobby
      20   | Bob
      90   | Bob
      100  | Joe-Bob
      630  | Bobberino
      820  | Bob Junior

    I want to return a list of n matches on name for 'Bob' where the resulting set first contains exact matches followed by similar matches. I thought something like this might work:

      SELECT TOP 4 a.*
      FROM (
          SELECT * from Usernames WHERE Name = 'Bob'
          UNION
          SELECT * from Usernames WHERE Name LIKE '%Bob%'
      ) AS a

    but there are two problems:
    - It's an inefficient query since the sub-select could return many rows (looking at the execution plan shows a join happening before the top).
    - (Almost) more importantly, the exact match(es) will not appear first in the results since the resulting set appears to be ordered by primary key.
    I am looking for a query that will return (for TOP 4):

      Id   | Name
      -----+------
      20   | Bob
      90   | Bob

    (and then 2 results from the LIKE query, e.g. 1 Bobby and 100 Joe-Bob). Is this possible in a single query?
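
    One possible single-query approach (a sketch, not from the post): filter once with LIKE and push the exact-match preference into the ORDER BY.

      SELECT TOP 4 Id, Name
      FROM Usernames
      WHERE Name LIKE '%Bob%'
      ORDER BY CASE WHEN Name = 'Bob' THEN 0 ELSE 1 END, Id;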

    Read the article

  • SQL DATEDIFF Not working!?

    - by James
    Hi all, I am running a simple DATEDIFF query but it doesn't seem to calculate the days properly, or I'm doing something wrong. If I run

      PRINT DATEDIFF(Day, 2010-01-20, 2010-01-01)

    it returns 19, which is correct. If I change the month in the first date to Feb (02) I get something strange:

      PRINT DATEDIFF(Day, 2010-02-20, 2010-01-01)

    returns 20. Now shouldn't it be 48 or something? Can anyone see what I'm doing wrong, or is this not the correct function to be using if I want the number of days between these dates? I've tried taking one date from the other:

      PRINT (2010-02-20) - (2010-01-01)

    returns -20. Any help much appreciated. Thanks, J.
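
    The likely cause (a sketch, not from the post): without quotes the date literals are evaluated as integer arithmetic (2010 - 02 - 20 = 1988, 2010 - 01 - 01 = 2008) and the integers are then implicitly converted to datetime values a few days apart in 1905, which is where the 19 and 20 come from. Quoting the literals gives the expected answer:

      PRINT DATEDIFF(DAY, '2010-01-01', '2010-02-20')   -- 50
      PRINT DATEDIFF(DAY, '2010-02-20', '2010-01-01')   -- -50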

    Read the article

  • SQLCMD.EXE generates ugly report. How to format it?

    - by Juri Bogdanov
    I made a batch file to run a SQL query like:

      use [AxDWH_Central_Reporting]
      GO
      EXEC sp_spaceused @updateusage = N'TRUE'
      GO

    It displays 2 tables and generates an ugly report with some kind of unneeded 'P' letters... See below:

      Changed database context to 'AxDWH_Central_Reporting'.
      database_name Pdatabase_size Punallocated space
      --------------------------------------------------------------------------------------------------------------------------------P------------------P------------------
      AxDWH_Central_Reporting P10485.69 MB P7436.85 MB
      reserved Pdata Pindex_size Punused
      ------------------P------------------P------------------P------------------
      3121176 KB P3111728 KB P7744 KB P1704 KB
      ----------------------------------------------------------------

    I also tried to generate one table from this procedure with the following query:

      declare @dbname sysname, @dbsize bigint, @logsize bigint, @reservedpages bigint
      select @reservedpages = sum(a.total_pages)
      from sys.partitions p
      join sys.allocation_units a on p.partition_id = a.container_id
      left join sys.internal_tables it on p.object_id = it.object_id
      select @dbsize = sum(convert(bigint,case when status & 64 = 0 then size else 0 end)),
             @logsize = sum(convert(bigint,case when status & 64 <> 0 then size else 0 end))
      from dbo.sysfiles
      select 'database name' = db_name(),
             'database size' = ltrim(str((convert (dec (15,2),@dbsize) + convert (dec (15,2),@logsize)) * 8192 / 1048576,15,2) + ' MB'),
             'unallocated space' = ltrim(str((case when @dbsize >= @reservedpages then (convert (dec (15,2),@dbsize) - convert (dec (15,2),@reservedpages)) * 8192 / 1048576 else 0 end),15,2) + ' MB')

    But I got a similar ugly report:

      database name Pdatabase size Punallocated space
      --------------------------------------------------------------------------------------------------------------------------------P------------------P------------------
      master P5.75 MB P1.52 MB
      (1 rows affected)

    Is it possible to change the layout formatting of the report? To make it more beautiful?

    Read the article

  • Install Sonar to MSSQL

    - by senzacionale
    My server name in SQL Management Studio is MITJAB-NOTEBOOK\SQL2008. I am now trying to install Sonar on this server:

      sonar.jdbc.url: jdbc:microsoft:sqlserver://MITJAB-NOTEBOOK\SQL2008:1433;DatabaseName=Sonar;SelectMethod=cursor
      sonar.jdbc.driverClassName: net.sourceforge.jtds.jdbc.Driver
      sonar.jdbc.validationQuery: select 1

    but it's not working. The DB is Sonar and the username and password are the same. Is the URL OK?

    Read the article

  • How to do regex HTML tag replace in SQL Server?

    - by timmerk
    I have a table in SQL Server 2005 with hundreds of rows with HTML content. Some of the content has HTML like: <span class=heading-2>Directions</span> where "Directions" changes depending on page name. I need to change all the <span class=heading-2> and </span> tags to <h2> and </h2> tags. I wrote this query to do content changes in the past, but it doesn't work for my current problem because of the ending HTML tag: Update ContentManager Set ContentManager.Content = replace(Cast(ContentManager.Content AS NVARCHAR(Max)), 'old text', 'new text') Does anyone know how I could accomplish the span to h2 replacing purely in T-SQL? Everything I found showed I would have to do CLR integration. Thanks!
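
    A possible pure T-SQL approach (a sketch with assumed variable names, not from the post): walk the content with CHARINDEX and STUFF so that only the </span> that follows each heading-2 span is rewritten, then wrap this in a scalar UDF and call it from the UPDATE.

      DECLARE @content NVARCHAR(MAX), @start INT, @stop INT;
      SET @content = (SELECT CAST(Content AS NVARCHAR(MAX)) FROM ContentManager WHERE ContentID = @id);  -- @id and ContentID are hypothetical
      SET @start = CHARINDEX('<span class=heading-2>', @content);
      WHILE @start > 0
      BEGIN
          SET @content = STUFF(@content, @start, LEN('<span class=heading-2>'), '<h2>');   -- swap the opening tag
          SET @stop = CHARINDEX('</span>', @content, @start);
          IF @stop > 0
              SET @content = STUFF(@content, @stop, LEN('</span>'), '</h2>');              -- swap its closing tag
          SET @start = CHARINDEX('<span class=heading-2>', @content, @start);
      END

    This assumes there is no nested <span> inside the heading span; if there can be, the closing-tag search needs to be smarter.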

    Read the article

  • How to add a default value to an already existing column?

    - by Earlz
    I have an existing column in my SQL Server database. I have tried about everything I can think of but cannot get a default value to be added to the column. What works in every other database is:

      alter table mytable alter column mycolumn set default(now())   -- mycolumn is a datetime

    How do I do this in SQL Server? The error I get for that exact syntax is "Incorrect syntax near the keyword 'set'".
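
    In SQL Server the default is added as a constraint rather than with ALTER COLUMN ... SET; a sketch (the constraint name is arbitrary):

      ALTER TABLE mytable
      ADD CONSTRAINT DF_mytable_mycolumn DEFAULT (GETDATE()) FOR mycolumn;

    Note this only applies to rows inserted from then on; existing rows keep their current values unless updated separately.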

    Read the article

  • Semi-complex aggregate select statement confusion

    - by Ian Henry
    Alright, this problem is a little complicated, so bear with me. I have a table full of data. One of the table columns is an EntryDate. There can be multiple entries per day. However, I want to select all rows that are the latest entry on their respective days, and I want to select all the columns of said table. One of the columns is a unique identifier column, but it is not the primary key (I have no idea why it's there; this is a pretty old system). For purposes of demonstration, say the table looks like this: create table ExampleTable ( ID int identity(1,1) not null, PersonID int not null, StoreID int not null, Data1 int not null, Data2 int not null, EntryDate datetime not null ) The primary key is on PersonID and StoreID, which logically defines uniqueness. Now, like I said, I want to select all the rows that are the latest entries on that particular day (for each Person-Store combination). This is pretty easy: --Figure 1 select PersonID, StoreID, max(EntryDate) from ExampleTable group by PersonID, StoreID, dbo.dayof(EntryDate) Where dbo.dayof() is a simple function that strips the time component from a datetime. However, doing this loses the rest of the columns! I can't simply include the other columns, because then I'd have to group by them, which would produce the wrong results (especially since ID is unique). I have found a dirty hack that will do what I want, but there must be a better way -- here's my current solution: select cast(null as int) as ID, PersonID, StoreID, cast(null as int) as Data1, cast(null as int) as Data2, max(EntryDate) as EntryDate into #StagingTable from ExampleTable group by PersonID, StoreID, dbo.dayof(EntryDate) update Target set ID = Source.ID, Data1 = Source.Data1, Data2 = Source.Data2, from #StagingTable as Target inner join ExampleTable as Source on Source.PersonID = Target.PersonID and Source.StoreID = Target.StoreID and Source.EntryDate = Target.EntryDate This gets me the correct data in #StagingTable but, well, look at it! Creating a table with null values, then doing an update to get the values back -- surely there's a better way to do this? A single statement that will get me all the values the first time? It is my belief that the correct join on that original select (Figure 1) would do the trick, like a self-join or something... but how do you do that with the group by clause? I cannot find the right syntax to make the query execute. I am pretty new with SQL, so it's likely that I'm missing something obvious. Any suggestions? (Working in T-SQL, if it makes any difference)
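
    One alternative worth sketching (assumes SQL Server 2005 or later, since ROW_NUMBER is used; not from the post): rank the rows per person/store/day and keep only the top one, which returns every column in a single statement.

      SELECT ID, PersonID, StoreID, Data1, Data2, EntryDate
      FROM (
          SELECT *,
                 ROW_NUMBER() OVER (PARTITION BY PersonID, StoreID, dbo.dayof(EntryDate)
                                    ORDER BY EntryDate DESC) AS rn
          FROM ExampleTable
      ) AS ranked
      WHERE rn = 1;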

    Read the article

  • Odd SQL Results

    - by Ryan Burnham
    So I have the following query:

      Select id, [First], [Last], [Business] as contactbusiness,
             (Case When ([Business] != '' or [Business] is not null)
                   Then [Business]
                   Else 'No Phone Number' END)
      from contacts

    The results look like:

      id First Last contactbusiness (No column name)
      2 John Smith
      3 Sarah Jane 0411 111 222 0411 111 222
      6 John Smith 0411 111 111 0411 111 111
      8 NULL No Phone Number
      11 Ryan B 08 9999 9999 08 9999 9999
      14 David F NULL No Phone Number

    I'd expect record 2 to also show No Phone Number. If I change the "[Business] is not null" to [Business] != null then I get the correct results:

      id First Last contactbusiness (No column name)
      2 John Smith No Phone Number
      3 Sarah Jane 0411 111 222 0411 111 222
      6 John Smith 0411 111 111 0411 111 111
      8 NULL No Phone Number
      11 Ryan B 08 9999 9999 08 9999 9999
      14 David F NULL No Phone Number

    Normally you need to use "is not null" rather than "!= null". What's going on here?
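
    The likely explanation (a sketch, not from the post): record 2 holds an empty string, not NULL, in [Business]. With OR, the empty string satisfies "[Business] is not null", so the THEN branch returns '' and the cell shows blank. With "!= null" the second test is never true, so the condition collapses to "[Business] != ''", which happens to give the wanted output. The direct fix is to require both tests with AND:

      Select id, [First], [Last], [Business] as contactbusiness,
             Case When [Business] is not null and [Business] != ''
                  Then [Business]
                  Else 'No Phone Number' End
      from contacts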

    Read the article

  • How to avoid using duplicate savepoint names in nested transactions in nested stored procs?

    - by Gary McGill
    I have a pattern that I almost always follow, where if I need to wrap up an operation in a transaction, I do this: BEGIN TRANSACTION SAVE TRANSACTION TX -- Stuff IF @error <> 0 ROLLBACK TRANSACTION TX COMMIT TRANSACTION That's served me well enough in the past, but after years of using this pattern (and copy-pasting the above code), I've suddenly discovered a flaw which comes as a complete shock. Quite often, I'll have a stored procedure calling other stored procedures, all of which use this same pattern. What I've discovered (to my cost) is that because I'm using the same savepoint name everywhere, I can get into a situation where my outer transaction is partially committed - precisely the opposite of the atomicity that I'm trying to achieve. I've put together an example that exhibits the problem. This is a single batch (no nested stored procs), and so it looks a little odd in that you probably wouldn't use the same savepoint name twice in the same batch, but my real-world scenario would be too confusing to post. CREATE TABLE Test (test INTEGER NOT NULL) BEGIN TRAN SAVE TRAN TX BEGIN TRAN SAVE TRAN TX INSERT INTO Test(test) VALUES (1) COMMIT TRAN TX BEGIN TRAN SAVE TRAN TX INSERT INTO Test(test) VALUES (2) COMMIT TRAN TX DELETE FROM Test ROLLBACK TRAN TX COMMIT TRAN TX SELECT * FROM Test DROP TABLE Test When I execute this, it lists one record, with value "1". In other words, even though I rolled back my outer transaction, a record was added to the table. What's happening is that the ROLLBACK TRANSACTION TX at the outer level is rolling back as far as the last SAVE TRANSACTION TX at the inner level. Now that I write this all out, I can see the logic behind it: the server is looking back through the log file, treating it as a linear stream of transactions; it doesn't understand the nesting/hierarchy implied by either the nesting of the transactions (or, in my real-world scenario, by the calls to other stored procedures). So, clearly, I need to start using unique savepoint names instead of blindly using "TX" everywhere. But - and this is where I finally get to the point - is there a way to do this in a copy-pastable way so that I can still use the same code everywhere? Can I auto-generate the savepoint name on the fly somehow? Is there a convention or best-practice for doing this sort of thing? It's not exactly hard to come up with a unique name every time you start a transaction (could base it off the SP name, or somesuch), but I do worry that eventually there would be a conflict - and you wouldn't know about it because rather than causing an error it just silently destroys your data... :-(
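
    One copy-pastable pattern (a sketch, not from the post): SAVE TRANSACTION and ROLLBACK TRANSACTION both accept a variable, so the savepoint name can be generated from @@PROCID and @@NESTLEVEL at the top of each procedure (savepoint names are limited to 32 characters):

      DECLARE @savepoint VARCHAR(32);
      SET @savepoint = 'SP' + CAST(@@PROCID AS VARCHAR(12)) + '_' + CAST(@@NESTLEVEL AS VARCHAR(3));

      BEGIN TRANSACTION;
      SAVE TRANSACTION @savepoint;
      -- Stuff
      IF @error <> 0
          ROLLBACK TRANSACTION @savepoint;
      COMMIT TRANSACTION;

    Whether this guarantees uniqueness in every call pattern (e.g. the same procedure re-entered at the same nesting level) would still need to be checked for the specific codebase.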

    Read the article

  • T-SQL: rotate rows into columns

    - by cshah
    SQL 2005 using T-SQL, I want to rotate rows into columns. Sample script: Use TempDB Go CREATE TABLE [dbo].[CPPrinter_InkLevels]( [CPPrinter_InkLevels_ID] [int] IDENTITY(1,1) NOT NULL, [CPMeasurementGUID] [uniqueidentifier] NOT NULL, [InkName] [varchar](30) NOT NULL, [InkLevel] [decimal](6, 2) NOT NULL, CONSTRAINT [PK_CPPrinter_InkLevels] PRIMARY KEY CLUSTERED ( [CPPrinter_InkLevels_ID] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO SET IDENTITY_INSERT [dbo].[CPPrinter_InkLevels] ON INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (1, N'6acc1562-4e02-45ff-b480-9e01fb97fccf', N'Black', CAST(0.60 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (2, N'6acc1562-4e02-45ff-b480-9e01fb97fccf', N'Cyan', CAST(0.69 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (3, N'6acc1562-4e02-45ff-b480-9e01fb97fccf', N'Magenta', CAST(0.55 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (4, N'6acc1562-4e02-45ff-b480-9e01fb97fccf', N'Yellow', CAST(0.51 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (5, N'6acc1562-4e02-45ff-b480-9e01fb97fccf', N'Light Black', CAST(0.64 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (6, N'6acc1562-4e02-45ff-b480-9e01fb97fccf', N'Light Cyan', CAST(0.43 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (7, N'6acc1562-4e02-45ff-b480-9e01fb97fccf', N'Light Magenta', CAST(0.30 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (8, N'6acc1562-4e02-45ff-b480-9e01fb97fccf', N'Waste Tank', CAST(0.18 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (9, N'932348a7-6e2f-4a10-9760-be1ae640c7d7', N'Black', CAST(0.60 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (10, N'932348a7-6e2f-4a10-9760-be1ae640c7d7', N'Cyan', CAST(0.69 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (11, N'932348a7-6e2f-4a10-9760-be1ae640c7d7', N'Magenta', CAST(0.55 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (12, N'932348a7-6e2f-4a10-9760-be1ae640c7d7', N'Yellow', CAST(0.51 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (13, N'932348a7-6e2f-4a10-9760-be1ae640c7d7', N'Light Black', CAST(0.64 AS Decimal(6, 2))) INSERT [dbo].[CPPrinter_InkLevels] ([CPPrinter_InkLevels_ID], [CPMeasurementGUID], [InkName], [InkLevel]) VALUES (14, N'932348a7-6e2f-4a10-9760-be1ae640c7d7', N'Light Cyan', CAST(0.43 AS Decimal(6, 2))) Go SELECT * FROM [dbo].[CPPrinter_InkLevels] --Desired output CPMeasuremnetGUID, Ink1, Level1, Ink2, Level2, Ink3, Level3....
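
    A sketch of the standard approach (not from the post, and the column layout differs from the Ink1/Level1 pairs shown as the desired output): PIVOT gives one column per ink name for each measurement GUID; unknown or changing ink names would need a dynamic-SQL version.

      SELECT CPMeasurementGUID,
             [Black], [Cyan], [Magenta], [Yellow],
             [Light Black], [Light Cyan], [Light Magenta], [Waste Tank]
      FROM (SELECT CPMeasurementGUID, InkName, InkLevel
            FROM dbo.CPPrinter_InkLevels) AS src
      PIVOT (MAX(InkLevel) FOR InkName IN
             ([Black], [Cyan], [Magenta], [Yellow],
              [Light Black], [Light Cyan], [Light Magenta], [Waste Tank])) AS p;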

    Read the article

  • How to join a dynamic SQL statement in a variable with a normal statement

    - by Oliver
    I have a quite complicated query which is built up dynamically and saved in a variable. As a second part I have another normal query, and I'd like to make an inner join between the two. To make it a little easier, here is a small example to illustrate my problem. For this example I used the AdventureWorks database.
    Some query built up dynamically (yes, I know there is nothing dynamic here; it's just an example):

      DECLARE @query AS varchar(max);
      set @query = '
          select HumanResources.Employee.EmployeeID
                ,HumanResources.Employee.LoginID
                ,HumanResources.Employee.Title
                ,HumanResources.EmployeeAddress.AddressID
          from HumanResources.Employee
          inner join HumanResources.EmployeeAddress
              on HumanResources.Employee.EmployeeID = HumanResources.EmployeeAddress.EmployeeID
      ;';
      EXEC (@query);

    The normal query I have:

      select Person.Address.AddressID
            ,Person.Address.City
      from Person.Address

    Maybe what I'd like to have, but which doesn't work:

      select @query.*
            ,Addresses.City
      from @query as Employees
      inner join (
          select Person.Address.AddressID
                ,Person.Address.City
          from Person.Address
      ) as Addresses
          on Employees.AddressID = Addresses.AddressID
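
    One common workaround (a sketch, not from the post; the column types below are assumptions): materialize the dynamic result with INSERT ... EXEC into a temp table whose columns match the dynamic select list, then join to that.

      CREATE TABLE #Employees (
          EmployeeID INT,
          LoginID    NVARCHAR(256),
          Title      NVARCHAR(50),
          AddressID  INT
      );

      INSERT INTO #Employees (EmployeeID, LoginID, Title, AddressID)
      EXEC (@query);

      SELECT e.*, a.City
      FROM #Employees AS e
      INNER JOIN Person.Address AS a ON e.AddressID = a.AddressID;

      DROP TABLE #Employees;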

    Read the article

  • Assign table values to multiple variables using a single SELECT statement and CASE?

    - by Darth Continent
    I'm trying to assign values contained in a lookup table to multiple variables by using a single SELECT having multiple CASE statements. The table is a lookup table with two columns, like so:

      [GreekAlphabetastic]
      SystemID   Descriptor
      --------   ----------
      1          Alpha
      2          Beta
      3          Epsilon

    This is my syntax:

      SELECT @VariableTheFirst  = CASE WHEN myField = 'Alpha'   THEN tbl.SystemID END,
             @VariableTheSecond = CASE WHEN myField = 'Beta'    THEN tbl.SystemID END,
             @VariableTheThird  = CASE WHEN myField = 'Epsilon' THEN tbl.SystemID END
      FROM GreekAlphabetastic tbl

    However, when I check the variables after this statement executes, I expected each to be assigned the appropriate value, but instead only the last has a value assigned.

      SELECT @VariableTheFirst AS First, @VariableTheSecond AS Second, @VariableTheThird AS Third

    Results:

      First   Second   Third
      NULL    NULL     3

    What am I doing wrong?
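
    The likely cause (a sketch, not from the post): the assignments are evaluated for every row, so after the last row is processed each variable holds whatever that row produced, which is NULL for two of the three CASEs. Wrapping each CASE in an aggregate collapses the table to one logical row and keeps the non-NULL value; the column is assumed to be Descriptor, as in the table definition above.

      SELECT @VariableTheFirst  = MAX(CASE WHEN Descriptor = 'Alpha'   THEN SystemID END),
             @VariableTheSecond = MAX(CASE WHEN Descriptor = 'Beta'    THEN SystemID END),
             @VariableTheThird  = MAX(CASE WHEN Descriptor = 'Epsilon' THEN SystemID END)
      FROM GreekAlphabetastic;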

    Read the article
