I would like to use NEWID() to generate random numbers. Usually you would use it just once, but I might be generating up to 10 random numbers per NEWID() call.
Is it random enough?
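To illustrate the kind of usage I mean, here is a minimal sketch of deriving numbers from NEWID(); the 0-99 range is just an example.
-- A sketch: derive a pseudo-random integer in [0, 99] from NEWID().
-- CHECKSUM of a uniqueidentifier gives an int; ABS and modulo bound the range.
SELECT ABS(CHECKSUM(NEWID())) % 100 AS RandomNumber;

-- Ten values at once (the VALUES row constructor needs SQL Server 2008+):
SELECT ABS(CHECKSUM(NEWID())) % 100 AS RandomNumber
FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)) AS t(n);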
I have a TimeSheet table like this:
CREATE TABLE TimeSheet
(
    timeSheetID     int,       -- column types are assumed; the original post did not specify them
    employeeID      int,
    setDate         datetime,
    timeIn          datetime,
    outToLunch      datetime,
    returnFromLunch datetime,
    timeOut         datetime
);
Employees will fill in their time sheet daily, and I want to ensure they don't cheat. What should I do?
Should I create a column that captures the system date/time whenever an insert or update happens to the table, and then compare that recorded date/time with the time the employee specified? If so, I would have to create a date/time column for each of timeIn, outToLunch, returnFromLunch and timeOut. I don't know; what do you suggest?
Note: I'm concerned with tracking these four columns: timeIn, outToLunch, returnFromLunch and timeOut.
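To make the idea concrete, here is a rough sketch of that approach; the audit column names and the trigger are my own illustration and are not tested.
-- A sketch only: server-recorded timestamps next to each punch column,
-- filled in by a trigger so they cannot be supplied by the client.
ALTER TABLE TimeSheet ADD
    timeInRecordedAt          datetime NULL,
    outToLunchRecordedAt      datetime NULL,
    returnFromLunchRecordedAt datetime NULL,
    timeOutRecordedAt         datetime NULL;
GO
CREATE TRIGGER trg_TimeSheet_ServerStamp ON TimeSheet
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    IF UPDATE(timeIn)
        UPDATE ts
        SET    ts.timeInRecordedAt = GETDATE()
        FROM   TimeSheet AS ts
        JOIN   inserted  AS i ON i.timeSheetID = ts.timeSheetID;

    -- repeat the same pattern for outToLunch, returnFromLunch and timeOut,
    -- then compare e.g. timeIn with timeInRecordedAt in reports
END
GO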
Hi,
Subject: SQL Server 2000 Enterprise Edition, Windows XP
Login fails for the ODBC SQL Server user <computer name>\guest.
I am connecting from several computers with a VB6 application using the following connection string:
"PROVIDER=MSDASQL;driver={SQL Server};server=Computer Name.;uid=;pwd=;database=Test Name;"
But three computers cannot log in. These three (connected to each other) are in a completely separate room.
I have checked every option I can think of. What option am I missing?
Please help me.
I have a table with 600+ columns imported from a CSV, with special characters (%, _, -) in the column names. Is there a way to change the column names to remove these special characters?
The code can be T-SQL.
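A rough sketch of the kind of loop I have in mind, in case it helps frame the question (assumes SQL Server 2005+ for sys.columns; dbo.ImportedTable is an illustrative name, and this is untested):
-- A sketch only: strip %, _ and - from every column name of one table using sp_rename.
DECLARE @old sysname, @new sysname, @qualified nvarchar(500);

DECLARE col_cur CURSOR FAST_FORWARD FOR
    SELECT name
    FROM   sys.columns
    WHERE  object_id = OBJECT_ID('dbo.ImportedTable')
      AND  (name LIKE '%[%]%' OR name LIKE '%[_]%' OR name LIKE '%-%');

OPEN col_cur;
FETCH NEXT FROM col_cur INTO @old;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @new = REPLACE(REPLACE(REPLACE(@old, '%', ''), '_', ''), '-', '');
    SET @qualified = 'dbo.ImportedTable.' + QUOTENAME(@old);

    IF @new <> @old AND @new <> ''
        EXEC sp_rename @objname = @qualified, @newname = @new, @objtype = 'COLUMN';

    FETCH NEXT FROM col_cur INTO @old;
END

CLOSE col_cur;
DEALLOCATE col_cur;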
I have written the stored procedure below to pre-check for duplicate records before inserting into the table,
but it does not let me write an INSERT statement inside the CASE.
How can I write the stored procedure so that it first checks whether @OrderName exists in the table and, if it is not present, inserts it into the database?
CREATE PROCEDURE [Test Procedure]
(
    @section varchar(70),
    @mark varchar(70),
    @qty decimal(18,2),
    @Weight decimal(18,2),
    @dateupdateremark int,
    @OrderName varchar(70)
)
AS
BEGIN
    SET NOCOUNT ON;

    SELECT CASE (@OrderName)
               WHEN (SELECT OrderName FROM dbo.tbl_insertxmldetails
                     WHERE (@OrderName) NOT IN (SELECT OrderName FROM tbl_insertxmldetails))
               THEN
                   INSERT INTO dbo.tbl_insertxmldetails
                       (Section, Mark, QTY, Weight, Dateupdateremark, OrderName, SystemDate)
                   VALUES
                       (@section, @mark, @qty, @Weight, @dateupdateremark, @OrderName, GETDATE())
               ELSE 'File already Exists'
           END
END
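The behaviour I am after is something like this sketch (an IF NOT EXISTS check in place of the CASE, inside the same procedure body; not tested):
    -- A sketch: check first, insert only when the order name is new.
    IF NOT EXISTS (SELECT 1 FROM dbo.tbl_insertxmldetails WHERE OrderName = @OrderName)
    BEGIN
        INSERT INTO dbo.tbl_insertxmldetails
            (Section, Mark, QTY, Weight, Dateupdateremark, OrderName, SystemDate)
        VALUES
            (@section, @mark, @qty, @Weight, @dateupdateremark, @OrderName, GETDATE());
    END
    ELSE
        SELECT 'File already Exists' AS Result;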
I have a large historical transaction table (15-20 million rows, MANY columns) and a table with one row and one column. The one-row table contains a date (the last processing date) which will be used to pull the data from the transaction table ('process_date').
Question: Should I inner join the 'process_date' table to the transaction table, or the transaction table to the 'process_date' table?
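To make the question concrete, the two variants look like this (table names, column names and the join condition are illustrative):
-- Variant 1: transaction table listed first, one-row date table second.
SELECT t.*
FROM   dbo.TransactionHistory AS t
INNER JOIN dbo.ProcessDate AS p
        ON t.transaction_date >= p.process_date;

-- Variant 2: one-row date table first, transaction table second.
SELECT t.*
FROM   dbo.ProcessDate AS p
INNER JOIN dbo.TransactionHistory AS t
        ON t.transaction_date >= p.process_date;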
I have the data below:
Name Date
A 2011-01-01 01:00:00.000
A 2011-02-01 02:00:00.000
A 2011-03-01 03:00:00.000
B 2011-04-01 04:00:00.000
A 2011-05-01 07:00:00.000
The desired output is:
Name StartDate EndDate
-------------------------------------------------------------------
A 2011-01-01 01:00:00.000 2011-04-01 04:00:00.000
B 2011-04-01 04:00:00.000 2011-05-01 07:00:00.000
A 2011-05-01 07:00:00.000 NULL
How can I achieve this using T-SQL with a set-based approach?
The DDL is as follows:
DECLARE @t TABLE(PersonName VARCHAR(32), [Date] DATETIME)
INSERT INTO @t VALUES('A', '2011-01-01 01:00:00')
INSERT INTO @t VALUES('A', '2011-01-02 02:00:00')
INSERT INTO @t VALUES('A', '2011-01-03 03:00:00')
INSERT INTO @t VALUES('B', '2011-01-04 04:00:00')
INSERT INTO @t VALUES('A', '2011-01-05 07:00:00')
Select * from @t
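One set-based direction I have come across is the gaps-and-islands pattern, sketched below against the @t table above (assumes SQL Server 2012+ for LEAD; not verified):
-- A sketch: collapse consecutive rows per person (gaps-and-islands via row-number difference),
-- then take the next group's start as this group's end.
;WITH Runs AS
(
    SELECT PersonName, [Date],
           ROW_NUMBER() OVER (ORDER BY [Date])
         - ROW_NUMBER() OVER (PARTITION BY PersonName ORDER BY [Date]) AS grp
    FROM @t
),
Islands AS
(
    SELECT PersonName, MIN([Date]) AS StartDate
    FROM Runs
    GROUP BY PersonName, grp
)
SELECT PersonName,
       StartDate,
       LEAD(StartDate) OVER (ORDER BY StartDate) AS EndDate
FROM Islands
ORDER BY StartDate;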
select @x = @xmlDoc.query('*')
for xml raw, type
The above statement generates the following error:
Msg 6819, Level 16, State 3, Line 2
The FOR XML clause is not allowed in a ASSIGNMENT statement.
Hi
I am wondering how I can do a mass insert and a bulk copy at the same time. I have two tables that should be affected by the bulk copy, as they both depend on each other.
So if a record fails while inserting into table 1, I want it rolled back so that table 2 never gets updated. Likewise, if table 1 inserts fine but an update to table 2 fails, table 1 should be rolled back.
Can this be done with bulk copy?
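If the load is driven from T-SQL, the all-or-nothing behaviour I am describing would look roughly like this sketch (assumes SQL Server 2005+ for TRY/CATCH and BULK INSERT; file, table and column names are illustrative):
-- A sketch only: one transaction around both steps, so a failure in either rolls both back.
BEGIN TRY
    BEGIN TRANSACTION;

    BULK INSERT dbo.Table1
    FROM 'C:\load\table1.dat'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    UPDATE t2
    SET    t2.SomeColumn = t1.SomeColumn
    FROM   dbo.Table2 AS t2
    JOIN   dbo.Table1 AS t1 ON t1.KeyColumn = t2.KeyColumn;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- re-raise so the caller sees the failure
    DECLARE @msg nvarchar(2048);
    SET @msg = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH;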
Hi all,
I have a database of users where every record contains a string, and I would like to run a query on data like this:
CN=aaa, OU=Domain,OU=User, OU=bbbbbb,OU=Department, OU=cccc, OU=AUTO, DC=dddddd, DC=com
I need to group all users that have the same OU=Department value.
How can I write the SELECT with a substring to search for a department?
My idea for a solution is to create another table like this:
---------------------------------------------------
ldapstring | society | site
---------------------------------------------------
"CN=aaa, OU=Domain,OU=User, OU=bbbbbb,OU=Department, OU=cccc, OU=AUTO, DC=dddddd, DC=com" | societyName1 | societySite1
My idea is to compare the string against the entries in the new table using LIKE, but how can I get the society and site when the LIKE match occurs?
Please help me
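A sketch of that lookup-table idea, in case it clarifies what I am trying to do (table and column names are illustrative and untested):
-- A sketch: match each user's LDAP string against the lookup table with LIKE,
-- so society and site come back for every user that matches, then group.
SELECT m.society,
       m.site,
       COUNT(*) AS UserCount
FROM   dbo.Users   AS u
JOIN   dbo.LdapMap AS m
       ON u.LdapString LIKE '%' + m.ldapstring + '%'   -- or store only the OU=... fragment in LdapMap
GROUP BY m.society, m.site;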
I would like to create a stored procedure that takes in a string of comma-separated values like "1,2,3,4", breaks it apart, and uses those numbers to run a query on a different table.
So, in the same stored procedure, it would do something like:
select somefield from sometable where somefield = 1
select somefield from sometable where somefield = 2
select somefield from sometable where somefield = 3
select somefield from sometable where somefield = 4
Thanks!
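A minimal sketch of one way to do it (assumes SQL Server 2016+ for STRING_SPLIT; sometable and somefield are taken from the example above):
-- A sketch: split the list once and join it to the table,
-- which returns the same rows as the four separate SELECTs above.
CREATE PROCEDURE dbo.GetBySomefieldList
    @ids varchar(1000)                     -- e.g. '1,2,3,4'
AS
BEGIN
    SET NOCOUNT ON;

    SELECT t.somefield
    FROM   sometable AS t
    JOIN   STRING_SPLIT(@ids, ',') AS s
           ON t.somefield = CAST(s.value AS int);
END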
For example:
SELECT *
FROM ( SELECT RANK() OVER (ORDER BY stud_mark DESC) AS ranking,
stud_id,
stud_name,
stud_mark
FROM tbl_student ) AS foo
WHERE ranking = 10
Here foo is present; what does it actually do?
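For comparison, the same query written with a CTE; foo is simply the required alias for the derived table, so the outer query can refer to its computed ranking column:
-- The same query as a CTE; foo names the inner result set
-- so the outer WHERE can filter on the computed ranking column.
WITH foo AS
(
    SELECT RANK() OVER (ORDER BY stud_mark DESC) AS ranking,
           stud_id,
           stud_name,
           stud_mark
    FROM   tbl_student
)
SELECT *
FROM   foo
WHERE  ranking = 10;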
I've been banging my head for hours; it seems simple enough, but here goes:
I'd like to create a view using multiple SELECT statements that outputs a single record set.
Example:
CREATE VIEW dbo.TestDB
AS
SELECT X AS 'First'
FROM The_Table
WHERE The_Value = 'y'
SELECT X AS 'Second'
FROM The_Table
WHERE The_Value = 'z'
I want it to output the following record set:
Column_1 | Column_2
'First' 'Second'
Any help would be greatly appreciated!
-Thanks.
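The shape I am after is roughly this sketch, built from two scalar subqueries (assumes each filter returns at most one value; otherwise something like MAX(X) or a CROSS JOIN of derived tables would be needed):
-- A sketch: each filtered query becomes a scalar subquery, giving one row with two columns.
CREATE VIEW dbo.TestDB
AS
SELECT
    (SELECT X FROM The_Table WHERE The_Value = 'y') AS [First],
    (SELECT X FROM The_Table WHERE The_Value = 'z') AS [Second];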
I'm trying to write a stored procedure to select employees who have birthdays that are upcoming.
SELECT * FROM Employees WHERE Birthday > @Today AND Birthday < @Today + @NumDays
This will not work because the birth year is part of Birthday, so if my birthday was '09-18-1983' that will not fall between '09-18-2008' and '09-25-2008'.
Is there a way to ignore the year portion of date fields and just compare month/days?
This will be run every Monday morning to alert managers of upcoming birthdays, so it may span the new year.
Here is the working solution that I ended up creating; thanks, Kogus.
SELECT * FROM Employees
WHERE Cast(DATEDIFF(dd, birthdt, getDate()) / 365.25 as int)
- Cast(DATEDIFF(dd, birthdt, futureDate) / 365.25 as int)
<> 0
I can't figure out a way to allow more than 4000 bytes to be received at once via a call to a stored procedure. I am storing images in the table that are around 15 - 20 kilobytes each, but upon getting them and displaying them to the page, they are always exactly 3.91 KB in size (or 4000 bytes).
Do stored procedures have a limit on how much data can be sent at once? I double-checked my data, and I am indeed only receiving the first 4000 characters from the varbinary(MAX) field.
Is there a permission setting to allow more than 4k bytes at once?
I have a Cisco 5505 working as a DHCP server, and a server 2008 DNS server running an AD domain.
I am having problems with all XP computers not updating the forward lookup zone. The reverse lookup zone updates are working. Windows vista and 7 computers update just fine. Additionally the DNS server accepts both secure and non-secure updates.
When people are connected through the Cisco's VPN, they cannot resolve to any machines that have reverse lookup zones, but they can resolve entries in the forward lookup zone.
I have tried ipconfig /registerdns, but the forward lookup zone entries for the XP clients are not being populated.
How can I get the XP Dynamic DNS client to make the updates, or what can I do to debug what's going on?
Thanks
I have a simple problem, I think, but I have googled and can't find the solution. I have a cube that has MeasureA, MeasureB and MeasureC. Not all three measures have values for each record; sometimes they can be NULL, depending on whether they were applicable.
Now, for my totals, I need to average, but the average must not take NULLs into account. When I view the measures, the NULL values show as zeros. Any help will be much appreciated.
I am trying to get recursive data.
The following code returns all the parents at the top and then the children.
I would like to get the data as Parent 1 followed by its children, then Parent 2 followed by its children, then Parent 3 followed by its children.
How do I do this?
USE Subscriber
GO
WITH Parent (ParentId, Id, Name,subscriberID)
AS
(
-- Anchor member definition
SELECT A.ParentId,A.id, A.name,A.SubscriberId
FROM Subscriber.Budget.SubscriberCategory AS A
WHERE ParentId IS NULL
UNION ALL
-- Recursive member definition
SELECT B.ParentId, B.id, B.name,B.SubscriberId
FROM Subscriber.Budget.SubscriberCategory AS B
INNER JOIN Parent AS P
ON B.ParentId = P.Id
)
-- Statement that executes the CTE
SELECT parentId, id, name
FROM Parent
where subscriberID = '1C18093B-5031-42E4-9251-CEF69114365F'
GO
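One approach I have seen for this kind of ordering is to carry a sort path down the recursion and ORDER BY it; here is a sketch based on the CTE above (the SortPath column is my addition and is not verified):
-- A sketch: accumulate a path string during recursion and order the final result by it,
-- so each parent is immediately followed by its descendants.
WITH Parent (ParentId, Id, Name, subscriberID, SortPath)
AS
(
    SELECT A.ParentId, A.id, A.name, A.SubscriberId,
           CAST(A.name AS varchar(4000)) AS SortPath
    FROM   Subscriber.Budget.SubscriberCategory AS A
    WHERE  A.ParentId IS NULL

    UNION ALL

    SELECT B.ParentId, B.id, B.name, B.SubscriberId,
           CAST(P.SortPath + ' > ' + B.name AS varchar(4000))
    FROM   Subscriber.Budget.SubscriberCategory AS B
    INNER JOIN Parent AS P
            ON B.ParentId = P.Id
)
SELECT ParentId, Id, Name
FROM   Parent
WHERE  subscriberID = '1C18093B-5031-42E4-9251-CEF69114365F'
ORDER BY SortPath;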
When I query INFORMATION_SCHEMA.VIEWS it lists all the views, but when I query INFORMATION_SCHEMA.VIEW_TABLE_USAGE it displays only a few of them.
How can I rebuild the information for all views in INFORMATION_SCHEMA.VIEW_TABLE_USAGE?
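One thing I have seen suggested for stale dependency information is refreshing each view with sp_refreshview; a sketch, not verified (sp_refreshview cannot be run against schema-bound views, so those may need to be filtered out):
-- A sketch: refresh the metadata of every view so its table dependencies are re-recorded.
DECLARE @view nvarchar(776);

DECLARE view_cur CURSOR FAST_FORWARD FOR
    SELECT QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
    FROM   INFORMATION_SCHEMA.VIEWS;

OPEN view_cur;
FETCH NEXT FROM view_cur INTO @view;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_refreshview @view;
    FETCH NEXT FROM view_cur INTO @view;
END

CLOSE view_cur;
DEALLOCATE view_cur;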
I have a stored procedure that I'm using to populate a table with about 60 columns. I have generated 1,000 EXEC statements that look like this:
exec PopulateCVCSTAdvancement 174, 213, 1, 0, 7365
exec PopulateCVCSTAdvancement 174, 214, 1, 0, 7365
exec PopulateCVCSTAdvancement 175, 213, 0, 0, 7365
Each time, the stored procedure inserts anywhere from 1 to 3,000 records (usually around 2,000). The "server" is desktop hardware with 4 GB of available memory running a server OS. The problem is that after the first 10-15 executes, which average 1-2 seconds each, the next 10-15 seem to never finish. Am I doing this correctly? How should I do this?
Thanks!
Top 10 waiters:
LAZYWRITER_SLEEP
SQLTRACE_INCREMENTAL_FLUSH_SLEEP
REQUEST_FOR_DEADLOCK_SEARCH
XE_TIMER_EVENT
FT_IFTS_SCHEDULER_IDLE_WAIT
CHECKPOINT_QUEUE
LOGMGR_QUEUE
SLEEP_TASK
BROKER_TO_FLUSH
BROKER_TASK_STOP
At work we have a number of databases on which we need to perform the same operations. I would like to write one stored procedure that loops over the operations and sets the database at the start of each iteration (example to follow). I've tried sp_executesql('USE ' + @db_id), but that only sets the database for the scope of that dynamic batch. I don't really want to loop over hard-coded database names, because we need to do similar things in many different places and it's tough to remember everywhere that needs to change if we add another database.
Any thoughts?
Example:
DECLARE zdb_loop CURSOR FAST_FORWARD FOR
SELECT distinct db_id from DBS order by db_id
OPEN zdb_loop
FETCH NEXT FROM zdb_loop INTO @db_id
WHILE @@FETCH_STATUS = 0
BEGIN
USE @db_id
--Do stuff against 3 or 4 different DBs
FETCH NEXT FROM zdb_loop INTO @db_id
END
CLOSE zdb_loop
DEALLOCATE zdb_loop
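One workaround I have been considering, sketched below: since the USE only lasts for the dynamic batch, put the work itself inside that same batch (still driven by the DBS table; db_id is assumed to hold the database name, and the work shown is illustrative):
-- A sketch: build one dynamic batch per database that starts with USE,
-- so the USE and the work share the same scope.
DECLARE @db_id sysname, @sql nvarchar(max);

DECLARE zdb_loop CURSOR FAST_FORWARD FOR
    SELECT DISTINCT db_id FROM DBS ORDER BY db_id;

OPEN zdb_loop;
FETCH NEXT FROM zdb_loop INTO @db_id;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE ' + QUOTENAME(@db_id) + N';
        SELECT DB_NAME() AS CurrentDb;   -- illustrative work; replace with the real operations';
    EXEC sp_executesql @sql;

    FETCH NEXT FROM zdb_loop INTO @db_id;
END
CLOSE zdb_loop;
DEALLOCATE zdb_loop;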
I need to find the number of updated rows.
UPDATE Table SET value=2 WHERE value2=1
declare @aaa int
set @aaa = @@ROWCOUNT
It doesn't work. How can I do that?
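A sketch of the pattern with the declaration moved before the UPDATE, so @@ROWCOUNT is read in the very next statement (run as a single batch):
-- A sketch: declare first, then capture @@ROWCOUNT immediately after the UPDATE,
-- before any other statement runs.
DECLARE @aaa int;

UPDATE [Table]
SET    value = 2
WHERE  value2 = 1;

SET @aaa = @@ROWCOUNT;

SELECT @aaa AS UpdatedRows;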
Can someone explain the implications of using WITH (NOLOCK) on queries, and when you should or shouldn't use it?
For example, if you have a banking application with high transaction rates and a lot of data in certain tables, in what types of queries would NOLOCK be okay? Are there cases where you should always use it, or never use it?
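For reference, the hint as it appears in a query, and the session-level isolation setting it corresponds to (table and column names are illustrative):
-- The table hint form:
SELECT AccountId, Balance
FROM   dbo.Accounts WITH (NOLOCK)
WHERE  AccountId = 42;

-- Equivalent isolation-level form for the whole session:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT AccountId, Balance
FROM   dbo.Accounts
WHERE  AccountId = 42;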