select @x = @x.query('*')
for xml raw,type
The statement above generates the following error:
Msg 6819, Level 16, State 3, Line 2
The FOR XML clause is not allowed in a ASSIGNMENT statement.
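One workaround, as a minimal sketch: wrap the FOR XML query in a parenthesized subquery so it becomes an expression that an assignment can consume (the variable and table names here are hypothetical).
DECLARE @result xml;

-- Wrapping the FOR XML query in a subquery makes it a scalar expression,
-- which can then be assigned to a variable without error 6819.
SET @result = (SELECT * FROM dbo.SomeTable FOR XML RAW, TYPE);

SELECT @result;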
I've created a function to convert minutes (smallint) into a time string (varchar(5)),
e.g. 58 becomes 00:58.
SET QUOTED_IDENTIFIER ON
GO
CREATE FUNCTION [dbo].[IntToMinutes]
(
@m smallint
)
RETURNS nvarchar(5)
AS
BEGIN
DECLARE @c nvarchar(5)
SET @c = CAST((@m / 60) as varchar(2)) + ':' + CAST((@m % 60) as varchar(2))
RETURN @c
END
The problem is when a part of the time is less than 10, like 9:
the result of this function is 0:9, but I want the format to be 00:09.
How can I do that?
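One common fix, as a minimal sketch keeping the original signature: pad each part with RIGHT('0' + ..., 2).
ALTER FUNCTION [dbo].[IntToMinutes]
(
    @m smallint
)
RETURNS nvarchar(5)
AS
BEGIN
    -- RIGHT('0' + value, 2) pads single-digit hours and minutes to two characters
    RETURN RIGHT('0' + CAST(@m / 60 AS varchar(2)), 2)
         + ':'
         + RIGHT('0' + CAST(@m % 60 AS varchar(2)), 2)
END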
I have a table with 600+ columns imported from a CSV, with special characters (%, _, -) in the column names. Is there a way to change the column names to remove these special characters?
The code can be T-SQL.
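A sketch of one T-SQL approach, assuming the table is dbo.ImportedCsv (a hypothetical name): generate one sp_rename statement per affected column with the special characters stripped out, then review and run the output.
-- Builds an sp_rename call for every column containing %, _ or -
SELECT 'EXEC sp_rename ''dbo.ImportedCsv.' + QUOTENAME(c.name) + ''', '''
     + REPLACE(REPLACE(REPLACE(c.name, '%', ''), '_', ''), '-', '')
     + ''', ''COLUMN'';'
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('dbo.ImportedCsv')
  AND (c.name LIKE '%[%]%' OR c.name LIKE '%[_]%' OR c.name LIKE '%-%')
Stripping characters can produce duplicate names, so it's worth reviewing the generated statements before running them.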
Hi
I am wondering how I can do a mass insert and bulk copy at the same time. I have 2 tables that should be affected by the bulk copy, as they both depend on each other.
So I want it so that if a record fails while inserting into table 1, it gets rolled back and table 2 never gets updated. Also, if table 1 inserts successfully and an update to table 2 fails, table 1 gets rolled back.
Can this be done with bulk copy?
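Assuming both loads can be expressed as BULK INSERT statements run from T-SQL (the file paths and table names below are hypothetical), a sketch of wrapping them in one transaction so a failure in either rolls back both:
BEGIN TRY
    BEGIN TRANSACTION;

    BULK INSERT dbo.Table1 FROM 'C:\data\table1.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    BULK INSERT dbo.Table2 FROM 'C:\data\table2.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Any error in either load undoes the work in both tables.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH
If the copy is driven from .NET's SqlBulkCopy instead, a similar effect comes from passing the same external SqlTransaction to both operations.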
Hi all,
In the data warehouse there's a default language for the measures, and I added a translation for German captions. In a Visual Studio Report Server project, when creating a query with my German OS, the cube and its measures are displayed in German. When dragging measures into the MDX query window, the default measure name is used. That's what I want and expect, since when writing MDX queries I would like to use the default measure names. But when executing the query, the column created for each measure is translated to German again. This results in German column names within my dataset, which I don't want; I'd like to have the English column names.
I already tried to change the connection string to: Data Source=server;Initial Catalog=DataWarehouse;LocaleIdentifier=1033
But that doesn't help; I still see the German translations.
Does anyone know how to set a specific translation?
I have the data below:
Name Date
A 2011-01-01 01:00:00.000
A 2011-02-01 02:00:00.000
A 2011-03-01 03:00:00.000
B 2011-04-01 04:00:00.000
A 2011-05-01 07:00:00.000
The desired output being
Name StartDate EndDate
-------------------------------------------------------------------
A 2011-01-01 01:00:00.000 2011-04-01 04:00:00.000
B 2011-04-01 04:00:00.000 2011-05-01 07:00:00.000
A 2011-05-01 07:00:00.000 NULL
How can I achieve this using T-SQL with a set-based approach?
The DDL is as under:
DECLARE @t TABLE(PersonName VARCHAR(32), [Date] DATETIME)
INSERT INTO @t VALUES('A', '2011-01-01 01:00:00')
INSERT INTO @t VALUES('A', '2011-02-01 02:00:00')
INSERT INTO @t VALUES('A', '2011-03-01 03:00:00')
INSERT INTO @t VALUES('B', '2011-04-01 04:00:00')
INSERT INTO @t VALUES('A', '2011-05-01 07:00:00')
Select * from @t
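A sketch of one set-based approach (SQL Server 2005+), using the row-number difference trick to find each run of consecutive rows per person and then taking the next run's start as the EndDate (run it in the same batch as the DECLARE @t above):
;WITH Ordered AS
(
    SELECT PersonName, [Date],
           ROW_NUMBER() OVER (ORDER BY [Date])
         - ROW_NUMBER() OVER (PARTITION BY PersonName ORDER BY [Date]) AS grp
    FROM @t
),
Islands AS
(
    -- One row per consecutive run of the same person
    SELECT PersonName, MIN([Date]) AS StartDate,
           ROW_NUMBER() OVER (ORDER BY MIN([Date])) AS rn
    FROM Ordered
    GROUP BY PersonName, grp
)
SELECT a.PersonName, a.StartDate, b.StartDate AS EndDate
FROM Islands AS a
LEFT JOIN Islands AS b ON b.rn = a.rn + 1
ORDER BY a.StartDate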
Hi all,
I have a database with users, and each user has a record like this.
I would like to run a query on data like
CN=aaa, OU=Domain,OU=User, OU=bbbbbb,OU=Department, OU=cccc, OU=AUTO, DC=dddddd, DC=com
and I need to group all the users that have the same OU=Department.
How can I write the SELECT with a substring to find the department?
My idea for the solution is to create another table like this:
---------------------------------------------------
ldapstring | society | site
---------------------------------------------------
"CN=aaa, OU=Domain,OU=User, OU=bbbbbb,OU=Department, OU=cccc, OU=AUTO, DC=dddddd, DC=com" | societyName1 | societySite1
My idea is to compare the string with the ones in the new table using LIKE, but how can I get the society and site when the LIKE matches?
Please help me
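A sketch under the assumption that the users live in a table such as dbo.Users with an LdapString column, and that each row of the lookup table stores the fragment (or full string) to match (all names here are hypothetical): joining on LIKE makes society and site available on every matched row.
SELECT m.society, m.site, u.LdapString
FROM dbo.Users AS u
INNER JOIN dbo.LdapMapping AS m
    ON u.LdapString LIKE '%' + m.ldapstring + '%'
ORDER BY m.society, m.site
Grouping by department then becomes GROUP BY m.society, m.site over the same join.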
I can't figure out a way to allow more than 4000 bytes to be received at once via a call to a stored procedure. I am storing images in the table that are around 15 - 20 kilobytes each, but upon getting them and displaying them to the page, they are always exactly 3.91 KB in size (or 4000 bytes).
Do stored procedures have a limit on how much data can be sent at once? I double-checked my data, and I am indeed only receiving the first 4000 characters from the varbinary(MAX) field.
Is there a permission setting to allow more than 4k bytes at once?
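There is no permission or stored procedure limit involved; truncation at exactly 4000 usually points to something in the chain typed at 4000 characters or bytes (an nvarchar(4000) variable, a non-MAX varbinary, or a client parameter sized at 4000). A sketch keeping the value varbinary(MAX) end to end (all names here are hypothetical):
CREATE PROCEDURE dbo.GetImage
    @Id int
AS
BEGIN
    DECLARE @img varbinary(MAX);   -- not varbinary(8000) or nvarchar(4000)

    SELECT @img = ImageData
    FROM dbo.Images
    WHERE Id = @Id;

    SELECT @img AS ImageData;
END
If the value is still 4000 bytes on the client, the output parameter or reader column there is likely declared with a fixed size as well.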
I would like to create a stored procedure that takes in a string of comma separated values like this "1,2,3,4", breaks it apart, and uses those numbers to run a query on a different table.
So in the same stored procedure it would do something like:
select somefield from sometable where somefield = 1
select somefield from sometable where somefield = 2
select somefield from sometable where somefield = 3
select somefield from sometable where somefield = 4
Thanks!
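A sketch of one approach that avoids a separate split function: turn the list into XML and use nodes() to get one row per value (sometable/somefield come from the example above; the procedure name and parameter are hypothetical).
CREATE PROCEDURE dbo.GetBySomefieldList
    @Ids varchar(1000)   -- e.g. '1,2,3,4'
AS
BEGIN
    DECLARE @x xml;
    -- '1,2,3,4' becomes <i>1</i><i>2</i><i>3</i><i>4</i>
    SET @x = CAST('<i>' + REPLACE(@Ids, ',', '</i><i>') + '</i>' AS xml);

    SELECT somefield
    FROM sometable
    WHERE somefield IN (SELECT t.n.value('.', 'int')
                        FROM @x.nodes('/i') AS t(n));
END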
For example:
SELECT *
FROM ( SELECT RANK() OVER (ORDER BY stud_mark DESC) AS ranking,
stud_id,
stud_name,
stud_mark
FROM tbl_student ) AS foo
WHERE ranking = 10
Here foo is present. What does it actually do?
I have a simple problem, I think, but I have googled and can't find the solution. I have a cube that has MeasureA, MeasureB and MeasureC. Not all three measures have values for each record; sometimes they can be null, depending on whether they were applicable.
Now for my totals I need to average, but the average must not take nulls into account. Any help will be much appreciated. When I view the measures, the null values show as zeros.
Can someone explain the implications of using "with (nolock)" on queries, and when you should/shouldn't use it?
For example, if you have a banking application with high transaction rates and a lot of data in certain tables, in what types of queries would nolock be okay? Are there cases when you should always use it/never use it?
When I query INFORMATION_SCHEMA.VIEWS it lists all views, but when I query
INFORMATION_SCHEMA.VIEW_TABLE_USAGE it displays only a few views.
How can I rebuild all the view info in INFORMATION_SCHEMA.VIEW_TABLE_USAGE?
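VIEW_TABLE_USAGE is populated from stored dependency information, which can go stale when underlying tables change after a view is created. One commonly suggested remedy, as a sketch, is to refresh the metadata of every non-schema-bound view:
DECLARE @sql nvarchar(max);
SET @sql = N'';

-- Build one sp_refreshview call per non-schema-bound view
SELECT @sql = @sql + N'EXEC sp_refreshview '''
            + QUOTENAME(s.name) + N'.' + QUOTENAME(v.name) + N''';' + CHAR(13)
FROM sys.views AS v
INNER JOIN sys.schemas AS s ON s.schema_id = v.schema_id
INNER JOIN sys.sql_modules AS m ON m.object_id = v.object_id
WHERE m.is_schema_bound = 0;

EXEC sp_executesql @sql;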
I'm trying to write a stored procedure to select employees who have upcoming birthdays.
SELECT * FROM Employees WHERE Birthday > @Today AND Birthday < @Today + @NumDays
This will not work because the birth year is part of Birthday, so if my birthday was '09-18-1983' that will not fall between '09-18-2008' and '09-25-2008'.
Is there a way to ignore the year portion of date fields and just compare month/days?
This will be run every Monday morning to alert managers of upcoming birthdays, so it may span the new year.
Here is the working solution that I ended up creating. Thanks, Kogus.
SELECT * FROM Employees
WHERE Cast(DATEDIFF(dd, birthdt, getDate()) / 365.25 as int)
- Cast(DATEDIFF(dd, birthdt, futureDate) / 365.25 as int)
<> 0
I have a database project that goes through iterations (only one so far) and I need to deploy a testing version to a live server. I'm not sure how to go about this.
I could make all the changes in a copy and then remake those changes in the live version, but that doesn't make sense.
Is there a way to change the server name to point at an existing server? What's the best practice for this scenario?
I am trying to get recursive data.
The following code returns all the parents at the top and then the children.
I would like to get the data as parent 1 and his children, then parent 2 and his children, then parent 3 and his children.
How do I do this?
USE Subscriber
GO
WITH Parent (ParentId, Id, Name,subscriberID)
AS
(
-- Anchor member definition
SELECT A.ParentId,A.id, A.name,A.SubscriberId
FROM Subscriber.Budget.SubscriberCategory AS A
WHERE ParentId IS NULL
UNION ALL
-- Recursive member definition
SELECT B.ParentId, B.id, B.name,B.SubscriberId
FROM Subscriber.Budget.SubscriberCategory AS B
INNER JOIN Parent AS P
ON B.ParentId = P.Id
)
-- Statement that executes the CTE
SELECT parentId, id, name
FROM Parent
where subscriberID = '1C18093B-5031-42E4-9251-CEF69114365F'
GO
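A sketch of one way to get each parent immediately followed by its children: carry a sort path down the recursion and order by it at the end (this assumes ordering by the Name values in the path gives the grouping wanted).
WITH Parent (ParentId, Id, Name, SubscriberId, SortPath)
AS
(
    -- Anchor: root categories start their own path
    SELECT A.ParentId, A.id, A.name, A.SubscriberId,
           CAST(A.name AS varchar(max))
    FROM Subscriber.Budget.SubscriberCategory AS A
    WHERE A.ParentId IS NULL

    UNION ALL

    -- Recursive step: children append their name to the parent's path
    SELECT B.ParentId, B.id, B.name, B.SubscriberId,
           P.SortPath + '/' + CAST(B.name AS varchar(max))
    FROM Subscriber.Budget.SubscriberCategory AS B
    INNER JOIN Parent AS P ON B.ParentId = P.Id
)
SELECT ParentId, Id, Name
FROM Parent
WHERE SubscriberId = '1C18093B-5031-42E4-9251-CEF69114365F'
ORDER BY SortPath;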
I have a table like this
foo(id, parentId)
-- there is a FK constraint from parentId to id
and I need to delete an item with all of its children, the children of the children, etc.
Does anybody know how?
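A sketch using a recursive CTE to collect the item and all of its descendants, then deleting them in a single statement (the starting id value is hypothetical):
DECLARE @id int;
SET @id = 1;   -- the item to delete

WITH ToDelete AS
(
    -- Anchor: the item itself
    SELECT f.id
    FROM foo AS f
    WHERE f.id = @id

    UNION ALL

    -- Recursive step: anything whose parent is already marked for deletion
    SELECT c.id
    FROM foo AS c
    INNER JOIN ToDelete AS d ON c.parentId = d.id
)
DELETE FROM foo
WHERE id IN (SELECT id FROM ToDelete);
If the FK blocks a single-statement delete, the fallback is to delete the deepest level first and work upwards.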
Hi, I am trying to get some percentage data from a stored procedure using code similar to the line below. Obviously this is going to cause a problem (Divide by zero error encountered) when base.[XXX_DataSetB] returns 0, which may happen some of the time.
Does anyone know how I can deal with this in the most efficient manner?
Note: there would be about 20+ lines looking like the one below...
cast((base.[XXX_DataSetB] - base.[XXX_DataSetA]) as decimal) / base.[XXX_DataSetB] as [XXX_Percentage]
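A sketch using NULLIF, which turns a zero divisor into NULL so the division yields NULL instead of an error; ISNULL maps that back to 0 if a numeric result is preferred.
ISNULL(
    CAST((base.[XXX_DataSetB] - base.[XXX_DataSetA]) AS decimal)
        / NULLIF(base.[XXX_DataSetB], 0),   -- NULL when DataSetB is 0
    0) AS [XXX_Percentage]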
I've been banging my head for hours; it seems simple enough, but here goes:
I'd like to create a view using multiple select statements that outputs a single record-set.
Example:
CREATE VIEW dbo.TestDB
AS
SELECT X AS 'First'
FROM The_Table
WHERE The_Value = 'y'
SELECT X AS 'Second'
FROM The_Table
WHERE The_Value = 'z'
I want it to output the following record-set:
Column_1 | Column_2
'First' 'Second'
Any help would be greatly appreciated!
-Thanks.
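A sketch of one way to get both values side by side in one row, using scalar subqueries (this assumes each filter returns at most one value):
CREATE VIEW dbo.TestDB
AS
SELECT
    (SELECT X FROM The_Table WHERE The_Value = 'y') AS [First],
    (SELECT X FROM The_Table WHERE The_Value = 'z') AS [Second];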
I've got a record set that consists of a start and end time in two separate fields:
id - Int
startTime - DateTime
endTime - DateTime
I'd like to find a way to query a record and return it as X records based on the number of 15-minute intervals found between the start and end times.
For example, let's say I have a record like this:
id, StartTime, EndTime
1, 1/1/2010 8:28 AM, 1/1/2010 8:47 AM
It would return 3 records: the first would represent the 8:15 interval, the second the 8:30 interval, and the third the 8:45 interval.
I realize this could be done using logic in a stored procedure, but we are trying to remain db neutral as we support multiple database engines.
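A sketch in SQL Server syntax (so not database neutral), using a small tally CTE: round each start time down to its 15-minute slot and join one row per slot that starts before the end time (the Records table name is hypothetical).
WITH Numbers AS
(
    SELECT 0 AS n
    UNION ALL
    SELECT n + 1 FROM Numbers WHERE n < 95   -- enough slots to cover 24 hours
)
SELECT r.id,
       DATEADD(minute, (DATEDIFF(minute, 0, r.startTime) / 15 + n.n) * 15, 0) AS IntervalStart
FROM Records AS r
INNER JOIN Numbers AS n
    ON DATEADD(minute, (DATEDIFF(minute, 0, r.startTime) / 15 + n.n) * 15, 0) < r.endTime;
For the 8:28 to 8:47 example this returns three rows: 8:15, 8:30 and 8:45.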
I need to find the number of updated rows.
UPDATE Table SET value=2 WHERE value2=1
declare @aaa int
set @aaa = @@ROWCOUNT
It doesn't work. How can I do that?
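A sketch of the usual pattern: declare the variable first and read @@ROWCOUNT in the very next statement after the UPDATE, since later statements can change its value ([Table] stands in for the real table name).
DECLARE @aaa int;

UPDATE [Table] SET value = 2 WHERE value2 = 1;

SET @aaa = @@ROWCOUNT;   -- must be captured immediately after the UPDATE

SELECT @aaa AS UpdatedRows;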
Hi guys,
I have an issue with a query I'm trying to run on some data; I guess the place to start is to describe the data.
OK, so I have a list of email addresses; each email address has a unique ID and an account ID.
Also in my tables I have a set number which auto-increments; this will allow me to target duplicate email addresses.
What I need to do is something like this:
Insert into duplicates
(EMAIL,ACCOUNTID,ID)
SELECT Email,AccountID,ID
FROM EmailAddresses
Group by Email,AccountID
Having Count(email)>1
Order by AccountID, Email
So essentially I want to select all duplicate email addresses and insert them (and their related fields) into a new table, broken down by account ID, so I can run some further queries on it.
I have been battling with this for way too long and could just use a fresh perspective.
Cheers in advance
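A sketch of one way to get there: a GROUP BY on its own cannot return the ID column, so find the duplicated (Email, AccountID) pairs in a derived table and join back to pick up each full row.
INSERT INTO duplicates (EMAIL, ACCOUNTID, ID)
SELECT e.Email, e.AccountID, e.ID
FROM EmailAddresses AS e
INNER JOIN
(
    -- Email/account pairs that appear more than once
    SELECT Email, AccountID
    FROM EmailAddresses
    GROUP BY Email, AccountID
    HAVING COUNT(*) > 1
) AS d
    ON d.Email = e.Email
   AND d.AccountID = e.AccountID;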