Search Results

Search found 36186 results on 1448 pages for 'sql 11'.

  • Better way to write this SQL

    - by AngryHacker
    I have the following table:

      create table ARDebitDetail(
          ID_ARDebitDetail int identity,
          ID_Hearing int,
          ID_AdvancedRatePlan int)

    I am trying to get the latest ID_AdvancedRatePlan for an ID_Hearing. By latest I mean the one with the largest ID_ARDebitDetail. I have this query and it works fine:

      select ID_AdvancedRatePlan
      from ARDebitDetails
      where ID_Hearing = 135878
        and ID_ARDebitDetail = (
            select max(ID_ARDebitDetail)
            from ARDebitDetails
            where ID_AdvancedRatePlan > 0
              and ID_Hearing = 135878 )

    However, it just looks ugly and smells bad. Is there a way to rewrite it in a more concise manner?
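
    One way to tighten it, as a sketch, assuming ID_ARDebitDetail is the identity that orders the rows (so the row with the max ID is the same row the subquery finds):

      select top (1) ID_AdvancedRatePlan
      from ARDebitDetails
      where ID_Hearing = 135878
        and ID_AdvancedRatePlan > 0
      order by ID_ARDebitDetail desc;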

  • Deduce non-type template parameter

    - by pezcode
    Is it possible to deduce a non-type template parameter from a template function parameter? Consider this simple template:

      template <int N>
      constexpr int factorial() { return N * factorial<N - 1>(); }

      template <>
      constexpr int factorial<0>() { return 1; }

      template <>
      constexpr int factorial<1>() { return 1; }

    I would like to be able to change factorial so that I can alternatively call it like this:

      factorial(5);

    and let the compiler figure out the value of N at compile time. Is this possible? Maybe with some fancy C++11 addition?

  • SQL: Get UID when using GROUP BY

    - by Quandary
    I do a select from the table [V_RPT_BelegungKostenstelleDetail]:

      WHERE SO_UID = '7C7035C8-56DD-4A44-93CC-F16FD66280A3'
        AND GB_UID = '4FF1B0EE-A5DD-4699-94B7-760922666CE2'
        AND GS_UID = '1188759A-54E1-4323-8BF2-85E71B3C796E'
        AND RM_UID = '088C3559-6E6E-468A-9554-6740840FCBA1'
        AND NA_UID = '96A2A8DB-8C83-4C60-9060-F0F55719AF5C'
      GROUP BY KST_UID

    How can I get SO_UID back as well? It is the same everywhere, but I get an error when I try to return SO_UID with the other values... SO_UID is not necessarily given as '7C7035C8-56DD-4A44-93CC-F16FD66280A3' like here, so I can't just add it manually as a string.
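
    Since SO_UID is constant within the filtered set, a hedged sketch of the usual fix is simply to add it to the GROUP BY (an aggregate such as MAX(SO_UID) may also work, depending on the column's data type):

      SELECT SO_UID, KST_UID
      FROM [V_RPT_BelegungKostenstelleDetail]
      WHERE GB_UID = '4FF1B0EE-A5DD-4699-94B7-760922666CE2'
        AND GS_UID = '1188759A-54E1-4323-8BF2-85E71B3C796E'
        AND RM_UID = '088C3559-6E6E-468A-9554-6740840FCBA1'
        AND NA_UID = '96A2A8DB-8C83-4C60-9060-F0F55719AF5C'
      GROUP BY KST_UID, SO_UID;   -- SO_UID is the same in every row, so the groups do not change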

  • SQL Server: How do I delimit this data?

    - by codingguy3000
      declare @mydata nvarchar(4000)
      set @mydata = '36|0, 77|5, 132|61'

    I have this data that I need to get into a table. So for Row1 columnA would be 36 and columnB would be 0. For Row2 columnA would be 77 and columnB would be 5, etc. What is the best way to do this? Thanks
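
    A hedged sketch, assuming SQL Server 2016+ for STRING_SPLIT (older versions would need a user-defined split function or an XML/numbers-table trick); the result set can then feed an INSERT ... SELECT into the target table:

      DECLARE @mydata nvarchar(4000) = '36|0, 77|5, 132|61';

      SELECT
          LEFT(pair, CHARINDEX('|', pair) - 1)           AS columnA,
          SUBSTRING(pair, CHARINDEX('|', pair) + 1, 4000) AS columnB
      FROM (
          SELECT LTRIM(RTRIM(value)) AS pair     -- split on commas, trim the spaces
          FROM STRING_SPLIT(@mydata, ',')
      ) AS pairs;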

  • Getting a NULL value in a variable in SQL Server

    - by Neo
    Strange situation: in a trigger I assign a column value to a variable, but the insert into another table using that variable fails. For example:

      select @srNO = A.SrNo from A where id = 123;
      insert into B (SRNO) values (@srNo)   -- here it gives null

    If I run the same select in a query pane it works fine, but in the trigger it gives me NULL. Any suggestions?
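
    Without the full trigger body it is hard to say, but a common cause is that the lookup matches no row (or the trigger fires for a multi-row statement) in trigger context. A hedged sketch that stays set-based and reads the key from the inserted pseudo-table; the triggering table name and the join column are assumptions:

      CREATE TRIGGER trg_SomeTable_AI ON dbo.SomeTable   -- triggering table assumed
      AFTER INSERT
      AS
      BEGIN
          SET NOCOUNT ON;

          INSERT INTO dbo.B (SRNO)
          SELECT a.SrNo
          FROM dbo.A AS a
          JOIN inserted AS i ON i.id = a.id;   -- no variable involved, so no NULL insert
      END;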

  • SQL SERVER 2008 Dynamic query problem

    - by priyanka.sarkar
    I have a dynamic query which reads like this:

      ALTER PROCEDURE dbo.mySP
          -- Add the parameters for the stored procedure here
          ( @DBName varchar(50), @tblName varchar(50) )
      AS
      BEGIN
          -- SET NOCOUNT ON added to prevent extra result sets from
          -- interfering with SELECT statements.
          SET NOCOUNT ON;

          -- Insert statements for procedure here
          declare @string as varchar(50)
          declare @string1 as varchar(50)
          set @string1 = '[' + @DBName + ']' + '.[dbo].' + '[' + @tblName + ']'
          set @string = 'select * from ' + @string1
          exec @string
      END

    I am calling it like this:

      dbo.mySP 'dbtest1','tblTest'

    and I get the error "Msg 203, Level 16, State 2, Procedure mySP, Line 27 The name 'select * from [dbtest1].[dbo].[tblTest]' is not a valid identifier." What is wrong, and how do I fix it? Thanks in advance
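
    The usual explanation for that message is that EXEC @string treats the contents of the variable as a procedure name; executing it as a statement needs EXEC (...) or sp_executesql. A minimal sketch, with the sample names taken from the question:

      DECLARE @DBName sysname = N'dbtest1', @tblName sysname = N'tblTest';

      DECLARE @string nvarchar(300) =
          N'select * from ' + QUOTENAME(@DBName) + N'.dbo.' + QUOTENAME(@tblName);

      EXEC sp_executesql @string;   -- or: EXEC (@string);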

  • UNIQUE Constraints in SQL (MS-SQL)

    - by rockbala
    Why are UNIQUE constraints needed in a database? Can you provide any examples? A primary key is UNIQUE by default, which is understandable since it is referenced by foreign keys in other tables and that relation is what connects them in an RDBMS. But why would one declare other columns as UNIQUE, and what is the benefit of doing so?
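
    A small illustration of the usual motivation: the surrogate key stays the primary key, while a UNIQUE constraint enforces a business rule on a natural key and gives the optimizer an index it can use. The table and column names here are made up for the example:

      CREATE TABLE dbo.Users
      (
          UserID int IDENTITY(1,1) PRIMARY KEY,
          Email  nvarchar(255) NOT NULL
                 CONSTRAINT UQ_Users_Email UNIQUE   -- no two users may share an e-mail address
      );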

  • SQL Server combining 2 rows into 1 from the same table

    - by Maton
    Hi, I have a table with JobID (PK), EmployeeID (FK), StartDate and EndDate, containing data such as:

      1, 10, '01-Jan-2010 08:00:00', '01-Jan-2010 08:30:00'
      2, 10, '01-Jan-2010 08:50:00', '01-Jan-2010 09:05:00'
      3, 10, '02-Feb-2010 10:00:00', '02-Feb-2010 10:30:00'

    I want to return a record for each job's EndDate together with the StartDate of the same employee's next immediate job (by date and time). So from the data above the result would be:

      Result 1: 10, 01-Jan-2010 08:30:00, 01-Jan-2010 08:50:00
      Result 2: 10, 01-Jan-2010 09:05:00, 02-Feb-2010 10:00:00

    Greatly appreciate any help!
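
    A hedged sketch, with the table name "Jobs" assumed: pair each job's EndDate with the start of the same employee's next job; jobs with no following job fall out naturally. On SQL Server 2012+ the LEAD() window function would be a simpler alternative.

      SELECT j.EmployeeID,
             j.EndDate     AS GapStart,
             nxt.StartDate AS GapEnd
      FROM dbo.Jobs AS j
      CROSS APPLY (
          SELECT TOP (1) n.StartDate
          FROM dbo.Jobs AS n
          WHERE n.EmployeeID = j.EmployeeID
            AND n.StartDate  > j.EndDate
          ORDER BY n.StartDate
      ) AS nxt;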

  • SQL Operators as text in where clause

    - by suggy1982
    I have the following table, which is used for storing bandings and is maintained via a web frontend:

      CREATE TABLE [dbo].[Banding](
          [BandingID] [int] IDENTITY(1,1) NOT NULL,
          [ValueLowerLimitOperator] [varchar](10) NULL,
          [ValueLowerLimit] [decimal](9, 2) NULL,
          [ValueUpperLimitOperator] [varchar](10) NULL,
          [ValueUpperLimit] [decimal](9, 2) NULL,
          [VolumeLowerLimitOperator] [varchar](10) NULL

    The operator fields store values such as <, = and <=. I want to be able to use the operator values stored in the table in a WHERE clause, something like:

      SELECT *
      FROM table
      WHERE CASE ValueLowerLimitOperator
              WHEN '<' THEN VALUE < X
              WHEN '>' THEN VALUE > X
            END

    rather than having to write multiple CASE or IF statements for each permutation. Does anyone have any suggestions for how I can decode the operator values stored in the table as part of my query and then use them in a CASE/WHERE statement?
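
    T-SQL's CASE returns a value rather than a predicate, so a comparison cannot be spliced in like that. A hedged sketch of one common pattern is to translate the stored operator into plain boolean logic; @x here is a hypothetical value being tested against the lower limit:

      DECLARE @x decimal(9, 2) = 100.00;

      SELECT b.*
      FROM dbo.Banding AS b
      WHERE ( b.ValueLowerLimitOperator = '<'  AND @x <  b.ValueLowerLimit )
         OR ( b.ValueLowerLimitOperator = '<=' AND @x <= b.ValueLowerLimit )
         OR ( b.ValueLowerLimitOperator = '='  AND @x =  b.ValueLowerLimit )
         OR ( b.ValueLowerLimitOperator = '>'  AND @x >  b.ValueLowerLimit )
         OR ( b.ValueLowerLimitOperator = '>=' AND @x >= b.ValueLowerLimit );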

  • How are SQL Server CALs counted?

    - by Sam
    As far as I understand it, when running a SQL Server you need one CAL for every user who connects to the database server. But what happens if the only computer accessing the SQL Server is the server running your business layer? Say, for example, you have 1 SQL Server and 1 business logic server, and 100 clients who all just query and use the business logic server. No client uses the SQL Server directly; none is even allowed to contact it. So, since there is only one computer using the SQL Server, do I need only 1 CAL? I somehow can't believe this would count as only 1 CAL needed for the SQL Server, but I would like to know why not.

  • SQL Query to find duplicates not returning any results

    - by TheDudeAbides
    I know there are duplicate account numbers in this table, but this query returns no results:

      SELECT [CARD NUMBER], [CUSTOMER NAME], [ACCT NBR 1], [ACCT NBR 2],
             COUNT([ACCT NBR 1]) AS NumOccurences
      FROM DebitCardData.dbo.['ATM Checking Accts - Active$']
      GROUP BY [CARD NUMBER], [CUSTOMER NAME], [ACCT NBR 1], [ACCT NBR 2]
      HAVING (COUNT([ACCT NBR 1]) > 1)
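
    A likely reason is that grouping by every column (card number and customer name included) makes each group unique, so no group ever reaches a count above 1. A hedged sketch that groups only on the value expected to repeat:

      SELECT [ACCT NBR 1], COUNT(*) AS NumOccurences
      FROM DebitCardData.dbo.['ATM Checking Accts - Active$']
      GROUP BY [ACCT NBR 1]
      HAVING COUNT(*) > 1;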

  • Where to put conditionals in ANSI-syntax SQL queries

    - by RenderIn
    What's the difference between these two queries? I've been resistant to jumping on the ANSI-syntax bandwagon because I have not been able to unravel various syntactical ambiguities. Is 1) returning the product of the join and only then filtering out those joined records which have weight >= 500? And is 2) filtering those out prior to the join? Is 2) bad syntax? Why might I use it?

      1: SELECT SOMETHING
         FROM FOO
         INNER JOIN BAR ON FOO.NAME = BAR.NAME
         WHERE BAR.WEIGHT < 500

      2: SELECT SOMETHING
         FROM FOO
         INNER JOIN BAR ON FOO.NAME = BAR.NAME
                       AND BAR.WEIGHT < 500
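
    For an INNER JOIN the two forms produce the same rows, and the optimizer generally treats them identically; the placement only changes the result for OUTER joins. A hedged illustration:

      -- filter in the ON clause: every FOO row survives, BAR columns are NULL
      -- where no matching BAR row under 500 exists
      SELECT f.NAME, b.WEIGHT
      FROM FOO AS f
      LEFT JOIN BAR AS b
             ON b.NAME = f.NAME
            AND b.WEIGHT < 500;

      -- filter in the WHERE clause: applied after the join, so FOO rows
      -- without a matching BAR row under 500 are removed entirely
      SELECT f.NAME, b.WEIGHT
      FROM FOO AS f
      LEFT JOIN BAR AS b
             ON b.NAME = f.NAME
      WHERE b.WEIGHT < 500;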

  • Remove and replace multiple chars (spaces, hyphens, brackets, periods) from a string in SQL

    - by Muhammad Kashif Nadeem
    +39 235 6595750 19874624611 +44 (0)181 446 5697 +431 6078115-2730 +1 617 358 5128 +48.40.23755432 +44 1691 872 410 07825 893217 0138 988 1649 (415) 706 2001 00 44 (0) 20 7660 4650 (765) 959-1504 07731 508 486 please reply by email dont have one +447769146971

    Please see the above phone numbers. I need to remove all spaces, hyphens, periods, brackets, leading zeros, etc. from these numbers; I need this format: +447469186974. If a number has a leading plus sign, keep it; otherwise I have to prepend a + sign. For example:

      +39 235 6595750 : just remove the spaces.
      +44 (0)181 446 5697 : remove the spaces, brackets and the 0 in between brackets, i.e. (0).
      07825 893217 : replace the leading 0 with a + sign and remove the spaces.
      (415) 706 2001 : replace '(' with a + sign, remove ')' and the spaces.
      'please reply by email' : this is the entry in the phone number field; I just need to ignore it.
      +48.40.23755432 : remove the periods.
      (765) 959-1504 : remove the brackets, spaces and hyphen and add a + sign in front.
      7798724250 : just add a + sign in front.
      00 44 (0) 20 7660-4650 : remove the leading zeros (the '00'), the spaces, brackets, the 0 in between brackets and the hyphen, and add a + sign in front.

    Only the leading '0' should be replaced, not any other occurrence of '0'. The desired result is +447769146971. Should I use nested REPLACE, CHARINDEX and PATINDEX for each character I want to replace? Thanks.
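
    A hedged sketch of the nested-REPLACE approach for a single value (the sample value is taken from the list above; non-numeric entries are mapped to NULL):

      DECLARE @phone nvarchar(50) = N'+44 (0)181 446 5697';
      DECLARE @clean nvarchar(50) = @phone;

      SET @clean = REPLACE(@clean, '(0)', '');   -- drop the bracketed trunk zero first
      SET @clean = REPLACE(@clean, ' ', '');
      SET @clean = REPLACE(@clean, '-', '');
      SET @clean = REPLACE(@clean, '.', '');
      SET @clean = REPLACE(@clean, '(', '');
      SET @clean = REPLACE(@clean, ')', '');

      SELECT CASE
               WHEN @clean NOT LIKE '[+0-9]%' THEN NULL               -- e.g. 'please reply by email'
               WHEN LEFT(@clean, 1) = '+'  THEN @clean
               WHEN LEFT(@clean, 2) = '00' THEN '+' + SUBSTRING(@clean, 3, 50)
               WHEN LEFT(@clean, 1) = '0'  THEN '+' + SUBSTRING(@clean, 2, 50)
               ELSE '+' + @clean
             END AS CleanedNumber;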

  • SQL Query Help - Return row in table which relates to another table row with max(column)

    - by Seth
    I have two tables:

      Table1 = Schools, columns: id (PK), state (nvarchar(100)), schoolname
      Table2 = Grades,  columns: id (PK), id_schools (FK), Year, Reading, Writing...

    I would like to develop a query to find the schoolname which has the highest grade for Reading. So far I have the following and need help to fill in the blanks:

      SELECT Schools.schoolname, Grades.Reading
      FROM Schools, Grades
      WHERE Schools.id = (* need id_schools for max(Grades.Reading) *)
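
    A hedged sketch using TOP with ORDER BY; TOP (1) WITH TIES would keep every school sharing the top Reading score:

      SELECT TOP (1) s.schoolname, g.Reading
      FROM Schools AS s
      JOIN Grades  AS g ON g.id_schools = s.id
      ORDER BY g.Reading DESC;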

  • Efficient paging with large tables in sql 2008

    - by Kumar
    This is for tables with 1,000,000 rows and possibly many, many more. I haven't done any benchmarking myself, so I wanted to get the experts' opinion. I've looked at some articles on ROW_NUMBER(), but it seems to have performance implications. What are the other choices/alternatives?
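
    One alternative often suggested for SQL Server 2008 (where OFFSET/FETCH is not yet available) is keyset or "seek" paging: remember the last key of the previous page and seek past it, instead of numbering the whole set with ROW_NUMBER(). A hedged sketch with an assumed table and clustered key:

      DECLARE @LastID int = 0, @PageSize int = 50;   -- @LastID = highest key returned by the previous page

      SELECT TOP (@PageSize) t.*
      FROM dbo.BigTable AS t       -- table and key column names assumed
      WHERE t.ID > @LastID
      ORDER BY t.ID;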

  • SQL Query Help Part 2 - Add filter to joined tables and get max value from filter

    - by Seth
    I asked this question on SO, but I wish to extend it further. I would like to find the max value of the Reading column only where the state is a given value, 'XX' for example. So if I join the two tables, how do I get the row with the max(Reading) value from the result set? E.g.:

      SELECT s.*, g1.*
      FROM Schools AS s
      JOIN Grades AS g1 ON g1.id_schools = s.id
      WHERE s.state = 'SA'
      -- how do I get the row with the max(Reading) column from this result set?

    The table details are:

      Table1 = Schools, columns: id (PK), state (nvarchar(100)), schoolname
      Table2 = Grades,  columns: id (PK), id_schools (FK), Year, Reading, Writing...
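
    A hedged sketch along the same lines as the first question, with the state filter applied before taking the top row (WITH TIES keeps all rows sharing the top score):

      SELECT TOP (1) WITH TIES s.*, g1.*
      FROM Schools AS s
      JOIN Grades AS g1 ON g1.id_schools = s.id
      WHERE s.state = 'SA'
      ORDER BY g1.Reading DESC;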

  • Optimal way to convert to date

    - by IMHO
    I have a legacy system where all date fields are maintained in YMD format. Example: 20101123 is the date 11/23/2010. I'm looking for the most optimal way to convert from the number to a date field. Here is what I came up with:

      declare @ymd int
      set @ymd = 20101122
      select @ymd, convert(datetime, cast(@ymd as varchar(100)), 112)

    This is a pretty good solution, but I'm wondering if someone has a better way of doing it.
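
    For what it's worth, a hedged alternative on SQL Server 2012 or later that skips the string round-trip entirely:

      DECLARE @ymd int = 20101122;

      SELECT DATEFROMPARTS(@ymd / 10000, (@ymd / 100) % 100, @ymd % 100) AS AsDate;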

  • SQL Server 2005: When copying table structure to another database, "CONSTRAINT" keywords are lost

    - by StreamT
    Snippet of original table:

      CREATE TABLE [dbo].[Batch](
          [CustomerDepositMade] [money] NOT NULL
              CONSTRAINT [DF_Batch_CustomerDepositMade] DEFAULT (0)

    Snippet of copied table:

      CREATE TABLE [dbo].[Batch](
          [CustomerDepositMade] [money] NOT NULL,

    Code for copy database:

      Server server = new Server(SourceSQLServer);
      Database database = server.Databases[SourceDatabase];
      Transfer transfer = new Transfer(database);
      transfer.CopyAllObjects = true;
      transfer.CopySchema = true;
      transfer.CopyData = false;
      transfer.DropDestinationObjectsFirst = true;
      transfer.DestinationServer = DestinationSQLServer;
      transfer.CreateTargetDatabase = true;
      Database ddatabase = new Database(server, DestinationDatabase);
      ddatabase.Create();
      transfer.DestinationDatabase = DestinationDatabase;
      transfer.Options.IncludeIfNotExists = true;
      transfer.TransferData();

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table as follows, with the following columns:

      * [Key]
      * [Value1]
      * ...
      * [ValueN]
      * [StartDate]
      * [ExpiryDate]

    In this example, let's suppose that [StartDate] is effectively the date in which the values for a given [Key] become known to the system. So our primary key would be composed of both [StartDate] and [Key]. When a new set of values arrives for a given [Key], we assign [ExpiryDate] to some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] that is equal to the [StartDate] of the new value. A simple update based on a join. So if we always wanted to get the most recent records for a given [Key], we know we could create a clustered index that is:

      * [ExpiryDate] ASC
      * [Key] ASC

    Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering them by [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage. However... what if we want to get a point-in-time snapshot of all [Key]s at a given time? Theoretically, the entirety of the keyspace isn't all being updated at the same time. Therefore for a given point-in-time, the window between [StartDate] and [ExpiryDate] is variable, so ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which the [StartDate] is greater than your defined point-in-time. In essence, in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads to retrieve the values for all keys for a given point-in-time? I realize I can at least maximize IO by partitioning the table by [Key], however this certainly isn't ideal. Alternatively, is there a different type of slowly-changing-dimension that solves this problem in a more performant manner?
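
    One commonly discussed layout for this trade-off is to cluster on ([Key], [StartDate]) so that each key's history is physically contiguous and the point-in-time predicate becomes a short range seek per key. A hedged sketch, with the table name assumed, [Value1] standing in for the payload columns, and [StartDate] treated as inclusive / [ExpiryDate] as exclusive:

      CREATE CLUSTERED INDEX CIX_Dim_Key_StartDate
          ON dbo.Dim ([Key], [StartDate]);

      -- snapshot of every key as it looked at @AsOf
      DECLARE @AsOf datetime = '20100601';

      SELECT d.[Key], d.[Value1]
      FROM dbo.Dim AS d
      WHERE d.[StartDate] <= @AsOf
        AND d.[ExpiryDate] >  @AsOf;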

  • Does using "CASE" in a WHERE clause to choose between columns harm performance?

    - by zivgabo
    I have a query which needs to be dynamic in some of its columns, meaning I get a parameter and, according to its value, decide which column to filter on in my WHERE clause. I've implemented this using a "CASE" expression:

      (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END)
          >= DATEADD(mi, -@TZOffsetInMins, @sTime)
      AND (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END)
          <  DATEADD(mi, -@TZOffsetInMins, @fTime)

    If @isArrivalTime = 1 then choose the ArrivalTime column, else choose the PickedupTime column. I have a clustered index on ArrivalTime and a nonclustered index on PickedupTime. I've noticed that when I use this query (with @isArrivalTime = 1), performance is a lot worse compared to only using ArrivalTime. Maybe the query optimizer can't use/choose the index properly in this form? I compared the execution plans and noticed that with the CASE, 32% of the time is spent on the index scan, but when I didn't use the CASE (just used ArrivalTime) only 3% went to that index scan. Anyone know the reason for this?
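
    Wrapping the column in CASE typically makes the predicate non-sargable, so the optimizer falls back to scanning rather than seeking the index. A hedged sketch of the usual workaround, giving it two plain branches instead; the table name and the sample values are assumptions:

      DECLARE @isArrivalTime bit = 1,
              @TZOffsetInMins int = 120,
              @sTime datetime = '20240101',
              @fTime datetime = '20240201';

      IF @isArrivalTime = 1
      BEGIN
          SELECT * FROM dbo.Trips          -- table name assumed
          WHERE ArrivalTime >= DATEADD(mi, -@TZOffsetInMins, @sTime)
            AND ArrivalTime <  DATEADD(mi, -@TZOffsetInMins, @fTime);
      END
      ELSE
      BEGIN
          SELECT * FROM dbo.Trips
          WHERE PickedupTime >= DATEADD(mi, -@TZOffsetInMins, @sTime)
            AND PickedupTime <  DATEADD(mi, -@TZOffsetInMins, @fTime);
      END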

  • SQL Queries SELECT IN and SELECT NOT IN

    - by Sequenzia
    Does anyone know why the results of the following two queries do not add up to the results of the third one?

      SELECT COUNT(leadID)
      FROM leads
      WHERE makeID NOT IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                           WHERE uploadID = 3 AND uploadRowID = 1)
        AND modelID NOT IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                            WHERE uploadID = 3 AND uploadRowID = 2)

      SELECT COUNT(leadID)
      FROM Leads
      WHERE makeID IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                       WHERE uploadID = 3 AND uploadRowID = 1)
         OR modelID IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                        WHERE uploadID = 3 AND uploadRowID = 2)

      SELECT COUNT(leadID)
      FROM Leads

    The first query is the count I need. The second one is to tell the user how many records were suppressed based on the contents of the DG_App.dbo.uploadData table. The third query is just a straight count of all the records. When I run these, the results of query 1 plus the results of query 2 come up about 46K records less than the count of the entire table. I have played with grouping the WHERE statements with () but that did not change the counts at all. This is MS SQL Server 2012. Any input on this would be great. Thanks
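
    The two predicates are logical complements, so the usual suspect is NULL: a row whose makeID or modelID is NULL (or a NULL coming back from the subquery) satisfies neither the IN nor the NOT IN version and is counted by neither query. A hedged diagnostic sketch:

      -- rows that fall out of both counts because of NULL keys
      SELECT COUNT(leadID)
      FROM Leads
      WHERE makeID IS NULL OR modelID IS NULL;

      -- and, if the subquery can return NULLs, exclude them explicitly
      -- (or switch to NOT EXISTS)
      SELECT COUNT(leadID)
      FROM Leads
      WHERE makeID NOT IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                           WHERE uploadID = 3 AND uploadRowID = 1
                             AND uploadDataMapID IS NOT NULL)
        AND modelID NOT IN (SELECT uploadDataMapID FROM DG_App.dbo.uploadData
                            WHERE uploadID = 3 AND uploadRowID = 2
                              AND uploadDataMapID IS NOT NULL);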
