Search Results

Search found 99646 results on 3986 pages for 'sql server 2005 tempdb'.


  • Shredding an XML column

    - by csetzkorn
    Hi, I have an XML column which contains XML like this:

      <Set>
        <Element>
          <ID> 1 </ID>
          <List>
            <ListElement> <Part1> ListElement 1 </Part1> </ListElement>
            <ListElement> <Part1> ListElement2 </Part1> </ListElement>
          </List>
        </Element>
        <Element>
          <ID> 2 </ID>
          <List>
            <ListElement> <Part1> ListElement3 </Part1> </ListElement>
            <ListElement> <Part1> ListElement4 </Part1> </ListElement>
          </List>
        </Element>
      </Set>

    I would like to shred this into a relational table containing this:

      ID, ListElement
      1, ListElement1
      1, ListElement2
      2, ListElement3
      2, ListElement4

    I am able to obtain the content of the Parts using something like this:

      select List.value('(Part1/text())[1]', 'varchar(max)') as test
      from Table
      CROSS APPLY xml.nodes('//Element/List/ListElement') AS List(List)

    but I have not yet managed to keep the 'foreign key' (the ID value). Thanks. Best wishes, Christian
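    One way to keep the ID is to apply nodes() twice, once per Element and once per ListElement beneath it, so the parent ID stays in scope. A minimal sketch, assuming the XML column is called xml and the table is called Table, as in the query above:

        -- Sketch: the outer nodes() call keeps each <Element> in scope so its <ID>
        -- can be read alongside every <ListElement> beneath it.
        SELECT  E.Element.value('(ID/text())[1]', 'int')                  AS ID,
                L.ListElement.value('(Part1/text())[1]', 'varchar(max)')  AS ListElement
        FROM    [Table]
        CROSS APPLY xml.nodes('/Set/Element') AS E(Element)
        CROSS APPLY E.Element.nodes('List/ListElement') AS L(ListElement);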


  • Table index design

    - by Swoosh
    I would like to add indexes to my table and am looking for general ideas on how to choose them, other than the clustered PK. I would like to know what to look for when I am doing this. So, my example: this table (let's call it the TASK table) is going to be the biggest table of the whole application, expecting millions of records. IMPORTANT: massive bulk inserts add data to this table. The table has 27 columns (so far, and counting :D ): 9 int columns (IDs), 10 varchar columns, 2 bit columns, 5 datetime columns.
    INT COLUMNS: all of these are int IDs, but from tables that are usually much smaller than the Task table (10-50 records max), for example a Status table (with values like "open", "closed") or a Priority table (with values like "important", "not so important", "normal"). There is also a column like "parent-ID" (a self-referencing ID). All the "small" tables are joined on their PKs, which are clustered the usual way.
    STRING COLUMNS: there is a Company column (a string!) that is "5 characters long all the time", and every user will be restricted by it. If there are 15 different companies in Task, the logged-in user would only see one, so there is always a filter on this column. Might it be a good idea to add an index to this column?
    DATE COLUMNS: I think these don't get indexed ... right? Or can/should they be?
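    For reference, a minimal sketch of the kind of index the always-on Company filter suggests, plus an example of indexing a datetime column (which is perfectly possible when queries filter on it). The column names other than Company are assumptions, not from the question:

        -- Sketch: nonclustered index for the per-company filter; the INCLUDE list is illustrative.
        CREATE NONCLUSTERED INDEX IX_Task_Company
            ON dbo.Task (Company)
            INCLUDE (StatusId, PriorityId);

        -- Datetime columns can be indexed too, e.g. when queries filter on a date range.
        CREATE NONCLUSTERED INDEX IX_Task_CreatedDate
            ON dbo.Task (CreatedDate);

    With heavy bulk inserts, each extra index adds write cost, so the usual practice is to keep the set small and, for very large loads, to consider disabling and rebuilding indexes around the load.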


  • Multi-table triggers (SQL Server noob)

    - by Chin
    I have a load of tables, all with the same two datetime columns (lastModDate, dateAdded). I am wondering if I can set up a global insert/update trigger for these tables to set the datetime values. Or, if not, what approaches are there? Any pointers much appreciated.
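    SQL Server has no single DML trigger that covers every table, so the usual approach is one small trigger per table (often generated by a script), with a DEFAULT constraint handling dateAdded. A sketch for one table; the table name MyTable and key column Id are assumptions for illustration:

        -- Sketch: per-table trigger keeping lastModDate current; dateAdded is better served by a DEFAULT.
        ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_MyTable_dateAdded DEFAULT (GETDATE()) FOR dateAdded;
        GO
        CREATE TRIGGER trg_MyTable_SetLastMod ON dbo.MyTable
        AFTER INSERT, UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE t
            SET    lastModDate = GETDATE()
            FROM   dbo.MyTable t
            JOIN   inserted i ON i.Id = t.Id;
        END;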


  • Using PIVOT in SQL Server

    - by NoviceToDotNet
    This is the abstract idea of what I want to do; the first SELECT line below shows it, but I cannot get it right. Here category[0], category[1], etc. represent the category column values. I know this kind of syntax does not work, but I want to do something like this:

      SELECT category[0], category[1], category[2], category[3], category[4], category[5]
      FROM (
          select Row_number() OVER(ORDER BY (SELECT 1)) AS 'Serial Number',
                 EP.FirstName, Ep.LastName, Ep.SignUpID,
                 [dbo].[GetBookingRoleName](ES.UserId, EP.BookingRole) as RoleName,
                 (select top 1 convert(varchar(10), eventDate, 103) from [3rdi_EventDates] where EventId = @ItemId) as EventDate,
                 (CASE [dbo].[GetBookingRoleName](ES.UserId, EP.BookingRole)
                      WHEN 'Employee - Marketing' THEN 'DC'
                      WHEN 'Employee - Accounting' THEN 'DC'
                      WHEN 'Coaches' THEN 'DC'
                      WHEN 'Student Client' THEN 'ST'
                      WHEN 'Guest Doctor' THEN 'GDC'
                      ---....more categories here, i just removed a few
                  END) as Category
          from [3rdi_EventParticipants] as EP
          inner join [3rdi_EventSignup] as ES on EP.SignUpId = ES.SignUpId
          WHERE EP.EventId = @ItemId
            AND EP.PlaceStatus IN (0,3,4,8)
            and userid in (select distinct userid from userroles where roleid not in (19,20,21,22) and roleid not in (1,2, 25, 44))
      )
      (my attempt below)
      PIVOT(sum(First_Name+Last_Name)) FOR Category (category[0], category[1], category[2], category[3], category[4], category[5])
      Group by (SignUpID)
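    For reference, the shape a valid PIVOT takes in T-SQL. The category values below ('DC', 'ST', 'GDC') are just the ones visible in the CASE expression above, and because SUM over a string is not allowed, MAX (or COUNT) is used instead. A simplified sketch standing in for the derived query from the question:

        -- Minimal PIVOT sketch: the inner SELECT stands in for the full derived query above.
        SELECT SignUpID, [DC], [ST], [GDC]
        FROM (
            SELECT EP.SignUpID,
                   EP.FirstName + ' ' + EP.LastName AS ParticipantName,
                   CASE [dbo].[GetBookingRoleName](ES.UserId, EP.BookingRole)
                       WHEN 'Student Client' THEN 'ST'
                       WHEN 'Guest Doctor'   THEN 'GDC'
                       ELSE 'DC'
                   END AS Category
            FROM [3rdi_EventParticipants] AS EP
            INNER JOIN [3rdi_EventSignup] AS ES ON EP.SignUpId = ES.SignUpId
        ) AS src
        PIVOT (
            MAX(ParticipantName) FOR Category IN ([DC], [ST], [GDC])
        ) AS p;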


  • Integrity constraints on a table

    - by Dinesh
    See this sample schema:

      Passenger(id PK, Name)
      Plane(id PK, capacity, type)
      Flight(id PK, planeId FK(Plane), flightDate, StartLocation, destination)

      CREATE TABLE Reservation(
          PassengerId, flightId,
          PRIMARY KEY (passengerId, flightId),
          FOREIGN KEY (passengerId) REFERENCES Passenger,
          FOREIGN KEY (flightId) REFERENCES Flight);

    I need to define an integrity constraint that enforces the restriction that the number of passengers on a plane cannot exceed the plane's capacity. What I have tried and achieved so far is this:

      CREATE TABLE Reservation(
          passengerId INTEGER,
          flightId INTEGER,
          PRIMARY KEY (passengerId, flightId),
          FOREIGN KEY (passengerId) REFERENCES Passenger,
          FOREIGN KEY (flightId) REFERENCES Flight,
          Constraint check1 check(Not Exists(
              select * from Flight s,
                   (select count(*) as totalRes from Reservation group by flightId) t
              where t.totalRes > s.capacity)));

    I am not sure whether I am doing this the right way. Any suggestions?
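    SQL Server (like most engines) rejects subqueries inside CHECK constraints, so this rule is usually enforced with a trigger (or a CHECK that calls a scalar UDF). A sketch of the trigger approach, using the schema above:

        -- Sketch: roll back any insert that would push a flight past its plane's capacity.
        CREATE TRIGGER trg_Reservation_Capacity ON Reservation
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            IF EXISTS (
                SELECT 1
                FROM Flight f
                JOIN Plane p ON p.id = f.planeId
                JOIN Reservation r ON r.flightId = f.id
                WHERE f.id IN (SELECT flightId FROM inserted)
                GROUP BY f.id, p.capacity
                HAVING COUNT(*) > p.capacity
            )
            BEGIN
                RAISERROR ('Plane capacity exceeded for this flight.', 16, 1);
                ROLLBACK TRANSACTION;
            END
        END;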


  • Converting a selected date from a datetimepicker into my query, along with subtracting a day

    - by MyHeadHurts
    I am currently using this to get yesterday's date; however, I need to do something similar where the user selects a date with a JavaScript datetimepicker on my ASP.NET page, and I then use the date they select instead of just yesterday's date:

      Declare @dayselection int
      set @dayselection = CONVERT(int,DateAdd(year, @YearToGet - Year(getdate() + 1), DateAdd(day, DateDiff(day, 1, @dayToGet), 1) , DateAdd(month, DateDiff(month, 0, @monthToGet), 0) )

    but it isn't working; I keep getting syntax errors. I want the day and year functions to stay the same; I just need help with the month part. I also need to convert the selected date into an int.
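    A minimal sketch of one way to handle this, assuming the picked date is passed in as a datetime parameter (the parameter name is an assumption): subtract a day with DATEADD and, if an int key is needed, convert via the yyyymmdd style:

        -- Sketch: @selectedDate would be bound from the datetimepicker value on the ASP.NET side.
        DECLARE @selectedDate datetime;
        SET @selectedDate = '20100615';                       -- yyyymmdd literal avoids locale ambiguity

        DECLARE @previousDay datetime;
        SET @previousDay = DATEADD(day, -1, @selectedDate);   -- "subtract a day"

        SELECT CONVERT(int, CONVERT(varchar(8), @previousDay, 112)) AS DaySelection;  -- e.g. 20100614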


  • Cannot resolve collation conflict in Union select

    - by phenevo
    Hi, I've got two queries. The first doesn't work:

      select hotels.TargetCode as TargetCode from hotels
      union all
      select DuplicatedObjects.duplicatetargetCode as TargetCode from DuplicatedObjects where DuplicatedObjects.objectType=4

    because I get the error: Cannot resolve collation conflict for column 1 in SELECT statement.

    The second works:

      select hotels.Code from hotels where hotels.targetcode is not null
      union all
      select DuplicatedObjects.duplicatetargetCode as Code from DuplicatedObjects where DuplicatedObjects.objectType=4

    Structure:
      Hotels.Code - PK, nvarchar(40)
      Hotels.TargetCode - nvarchar(100)
      DuplicatedObjects.duplicatetargetCode - PK, nvarchar(100)
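    The usual fix is to force both sides of the UNION to a common collation. A sketch using DATABASE_DEFAULT (a specific collation name works as well), applied to the failing query:

        -- Sketch: COLLATE on both branches removes the conflict between the two nvarchar columns.
        SELECT hotels.TargetCode COLLATE DATABASE_DEFAULT AS TargetCode
        FROM hotels
        UNION ALL
        SELECT DuplicatedObjects.duplicatetargetCode COLLATE DATABASE_DEFAULT AS TargetCode
        FROM DuplicatedObjects
        WHERE DuplicatedObjects.objectType = 4;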


  • Converting a date data type into varchar

    - by Sheetal Inani
    I have some date fields in a table. These columns contain dates in the following format: mmddyy, for example: 31/12/2010 00:00:00:0000. I need to import these values into a table whose columns are varchar and numeric and which stores dates like this:

      monthName varchar
      Year numeric(4,0)

    Currently I'm using:

      INSERT INTO [School].[dbo].[TeacherAttendenceDet] ([TeacherCode], [MonthName], [Year])
      (SELECT MAX(employeecode),
              Datename(MONTH, dateofjoining) AS MONTH,
              Datepart(YEAR, dateofjoining) AS DATE
       FROM employeedet
       GROUP BY dateofjoining)

    but datename() gives the result in date format, and I have to save it in varchar format. How can I do this?

    This is the employeemast table:
      EmployeeCode numeric(5, 0)
      PayScaleCode numeric(7, 0)
      DesignationCode varchar(50)
      CityCode numeric(5, 0)
      EmployeeName varchar(50)
      FatherName varchar(50)
      BirthDate varchar(50)
      DateOfJoining varchar(50)
      Address varchar(150)

    This is the TeacherAttendenceDet table:
      TeacherCode numeric(5, 0)
      Year numeric(4, 0)
      MonthName varchar(12)

    I have to insert into the TeacherAttendenceDet table the month name and year from employeemast.
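    DATENAME already returns a string, but because DateOfJoining is stored as varchar(50), the safer route is to convert it to datetime explicitly first. A sketch assuming the stored strings are in dd/mm/yyyy form, as in the example above (CONVERT style 103), and keeping the grouping from the question:

        -- Sketch: convert the varchar date once, then take the month name and year from it.
        INSERT INTO [School].[dbo].[TeacherAttendenceDet] ([TeacherCode], [MonthName], [Year])
        SELECT MAX(EmployeeCode),
               DATENAME(MONTH, CONVERT(datetime, DateOfJoining, 103)) AS [MonthName],
               DATEPART(YEAR,  CONVERT(datetime, DateOfJoining, 103)) AS [Year]
        FROM   employeedet
        GROUP BY DateOfJoining;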


  • Stored proc executes >30 secs when called from website, but <1 sec when called from SSMS

    - by Blootac
    I have a stored procedure that is called by a website to display data. Today the web page started timing out, so I got Profiler going and saw the query that was taking too long. I then ran the same query in Management Studio, under the same user login, and it takes less than a second to return. Is there anything obvious that could be causing this? I can't think of a reason why, when ASP calls the stored proc, it takes 30 secs, but when I call it, it's fine. Thanks
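    This fast-in-SSMS, slow-from-the-app pattern is most often parameter sniffing or differing session SET options (SSMS defaults to SET ARITHABORT ON, most clients to OFF), which puts the two callers on different cached plans. Two common mitigations, sketched with placeholder object names that are not from the question:

        -- Option 1: copy parameters into local variables so the cached plan is not built
        -- around one sniffed parameter value.
        CREATE PROCEDURE dbo.usp_GetData @CustomerId int
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @LocalCustomerId int;
            SET @LocalCustomerId = @CustomerId;
            SELECT * FROM dbo.Orders WHERE CustomerId = @LocalCustomerId;  -- table/columns are placeholders
        END;

        -- Option 2: request a fresh plan for the statement each time.
        -- SELECT * FROM dbo.Orders WHERE CustomerId = @CustomerId OPTION (RECOMPILE);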


  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio buttons and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

      CREATE TABLE [dbo].[results](
          [id] [bigint] IDENTITY(1,1) NOT NULL,
          [userid] [int] NULL,
          [variable] [varchar](8) NULL,
          [value] [tinyint] NULL,
          [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page, something like this:

      SELECT t.id, t.variable, t.value
      FROM results t WITH (NOLOCK)
      WHERE t.userid = '2111846'
        AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
        AND t.id IN (SELECT MAX(id) AS id
                     FROM results WITH (NOLOCK)
                     WHERE userid = '2111846'
                       AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
                     GROUP BY variable)

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is, how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance. NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) for using the NOLOCK directive in this case.
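    One commonly tried alternative to the nested MAX(id) subquery is ROW_NUMBER() per variable (available from SQL Server 2005 on); whether it wins depends on indexing, but the shape pairs naturally with a nonclustered index on (userid, variable, id) that includes value. A sketch using the table and values from the question:

        -- Sketch: latest response per variable via ROW_NUMBER instead of a correlated MAX(id).
        SELECT id, variable, value
        FROM (
            SELECT id, variable, value,
                   ROW_NUMBER() OVER (PARTITION BY variable ORDER BY id DESC) AS rn
            FROM   results WITH (NOLOCK)
            WHERE  userid = 2111846
              AND  variable IN ('internat', 'veteran', 'athlete')
        ) AS latest
        WHERE rn = 1;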


  • SQL Server partitioning

    - by durilai
    I have a table that has millions of records, and we are looking at implementing table partitioning. Looking at it, we have a foreign key "GroupID" that we would like to partition on. Is this possible? The Group table will have more entries added to it, so as new GroupIDs are added, can the partitions be created dynamically?
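    Partitioning on an int foreign key is possible; the partition function just needs boundary values over GroupID, and new boundaries can be added later with a SPLIT (note that GroupID generally also has to become part of the table's partitioning/clustered key). A sketch with placeholder boundary values and filegroups:

        -- Sketch: partition function and scheme on GroupID, plus adding a new boundary later.
        CREATE PARTITION FUNCTION pfGroup (int) AS RANGE RIGHT FOR VALUES (100, 200, 300);
        CREATE PARTITION SCHEME psGroup AS PARTITION pfGroup ALL TO ([PRIMARY]);

        -- When a new range of GroupIDs appears, split a new partition in:
        ALTER PARTITION SCHEME psGroup NEXT USED [PRIMARY];
        ALTER PARTITION FUNCTION pfGroup() SPLIT RANGE (400);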


  • How can I substitute a 0 for a NULL value in a SQL query result

    - by Name.IsNullOrEmpty
    SELECT EmployeeMaster.EmpNo, Sum(LeaveApplications.LeaveDaysTaken) AS LeaveDays
      FROM EmployeeMaster
      FULL OUTER JOIN LeaveApplications ON EmployeeMaster.id = LeaveApplications.EmployeeRecordID
      INNER JOIN LeaveMaster ON EmployeeMaster.id = LeaveMaster.EmpRecordID
      GROUP BY EmployeeMaster.EmpNo
      ORDER BY LeaveDays DESC

    With the above query, if an employee has no leave application record in the LeaveApplications table, then their Sum(LeaveApplications.LeaveDaysTaken) AS LeaveDays column returns NULL. What I would like to do is place a value of 0 (zero) instead of NULL. I want to do this because I have a calculated column in the same query whose formula depends on the LeaveDays returned, and when LeaveDays is NULL the formula somehow fails. Is there a way I can put 0 in place of NULL so that I can get my desired result?
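    Wrapping the aggregate in ISNULL (or COALESCE) does exactly this; a sketch applied to the query above:

        -- Sketch: a missing leave record now yields 0 instead of NULL.
        SELECT EmployeeMaster.EmpNo,
               ISNULL(SUM(LeaveApplications.LeaveDaysTaken), 0) AS LeaveDays
        FROM   EmployeeMaster
        FULL OUTER JOIN LeaveApplications ON EmployeeMaster.id = LeaveApplications.EmployeeRecordID
        INNER JOIN LeaveMaster ON EmployeeMaster.id = LeaveMaster.EmpRecordID
        GROUP BY EmployeeMaster.EmpNo
        ORDER BY LeaveDays DESC;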


  • Assign the results of a stored procedure into a variable in another stored procedure

    - by RHPT
    The title of this question is a bit misleading, but I couldn't summarize this very well. I have two stored procedures. The first stored procedure (s_proc1) calls a second stored procedure (s_proc2). I want to assign the value returned from s_proc2 to a variable in s_proc1. Currently, I'm calling s_proc2 (inside s_proc1) in this manner:

      EXEC s_proc2 @SiteID, @count = @PagingCount OUTPUT

    s_proc2 contains a dynamic query statement (for reasons I will not outline here):

      CREATE PROCEDURE dbo.s_proc2 (
          @siteID int,
          @count int OUTPUT
      )
      AS
      DECLARE @sSQL nvarchar(100)
      DECLARE @xCount int
      SELECT @sSQL = 'SELECT COUNT(ID) FROM Authors'
      EXEC sp_ExecuteSQL @sSQL, N'@xCount int output', @xCount output
      SET @count = @xCount
      RETURN @count

    Will this result in @PagingCount having the value of @count? I ask because the result I am getting from s_proc1 is wonky. In fact, what I get is two results: the first being @count, then the result of s_proc1 (which is incorrect). So it makes me wonder if @PagingCount isn't being set properly. Thank you.
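    One thing worth noting about the code above: the dynamic statement never assigns to @xCount, so sp_executesql returns the COUNT as a result set and @xCount stays NULL, which would explain the extra result. A sketch of the assignment form, keeping the OUTPUT parameter and dropping the RETURN (RETURN is conventionally reserved for status codes):

        -- Sketch: assign the count inside the dynamic SQL so only the OUTPUT parameter carries it back.
        CREATE PROCEDURE dbo.s_proc2
            @siteID int,
            @count  int OUTPUT
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @sSQL nvarchar(100);
            SET @sSQL = N'SELECT @xCount = COUNT(ID) FROM Authors';
            EXEC sp_executesql @sSQL, N'@xCount int OUTPUT', @xCount = @count OUTPUT;
        END;

        -- Caller (inside s_proc1):
        -- EXEC dbo.s_proc2 @SiteID, @count = @PagingCount OUTPUT;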


  • Trigger on a database used by a web application and a WinForms application

    - by Michael
    Hello all. Situation: I have a web application which shows errors and where you can accept those error messages. I also have a service which checks errors from a system and sets the error messages in the database. When I accept an error in the web application, I would like the service to know which error message has been accepted, so that it can do some other actions. My guess is that this could be done through some sort of trigger, but I can't figure out how. Can anyone help me with this?
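    One simple pattern is to let the "accept" update fire a trigger that writes into a queue table which the service polls (Service Broker or SqlDependency/query notifications are the push-style alternatives to polling). A sketch; the ErrorMessage table and its ErrorId/IsAccepted columns are assumptions, not from the question:

        -- Sketch: record newly accepted errors in a queue table for the service to pick up.
        CREATE TABLE dbo.AcceptedErrorQueue (
            QueueId    int IDENTITY(1,1) PRIMARY KEY,
            ErrorId    int NOT NULL,
            AcceptedAt datetime NOT NULL DEFAULT (GETDATE()),
            Processed  bit NOT NULL DEFAULT (0)
        );
        GO
        CREATE TRIGGER trg_ErrorMessage_Accepted ON dbo.ErrorMessage
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO dbo.AcceptedErrorQueue (ErrorId)
            SELECT i.ErrorId
            FROM inserted i
            JOIN deleted  d ON d.ErrorId = i.ErrorId
            WHERE i.IsAccepted = 1 AND d.IsAccepted = 0;   -- only rows that have just been accepted
        END;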


  • SQL design question regarding schema and whether a name-value pair is the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project. I am by no means a DBA. The application parses through a file based on user input and enters that data in the database. The number of fields that can be parsed is between 1 and 42 at the current moment. The current design of the database is entirely flat, with 42 columns; some are repeated columns such as address1, address2, address3, etc. That says I should normalize the data. However, data integrity is not needed at this moment, and the way the data is shaped I'm looking at several joins. Not a bad thing, but the data is still in a 1-to-1 relationship and I still see a lot of empty fields per row. So my concern is that this does not allow the database or the application to be very extendable. If they want to add more fields to be parsed (which they do), then I'd need to create another table and add another foreign key to the linking table. The third option is a table where the fields are defined and a table for each record, so what I was thinking is to make a table that stores the value and then links to those two tables. The problem is I can picture the size of that table growing large depending on the input size. If someone gives me a file with 300,000 records, then 300,000 x 40 = 12 million rows, so I have some reservations. However, I think if I get to that point then I should be happy it is being used. This option also allows for more custom display of information, albeit a bit more work but little rework even if you add more fields. So the problem boils down to:
    1. The current design is a flat file, which makes extending it hard, and it is not normalized.
    2. Normalize the tables, although there are no real benefits for the moment, but requirements change.
    3. Normalize it down into the name-value pair and hope size doesn't hurt.
    There are a large number of inserts, updates, and selects against that table, so performance is a worry, but I believe the saying is design now, performance-test later? I'm probably just missing something practical, so any comments would be appreciated, even if it's a quick sanity check. Thank you for your time.
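    For concreteness, a sketch of what the name-value-pair layout (option 3 above) typically looks like; all names here are illustrative, not from the question:

        -- Sketch: a field-definition table, a record header table, and the value table linking them.
        CREATE TABLE dbo.FieldDefinition (
            FieldId   int IDENTITY(1,1) PRIMARY KEY,
            FieldName varchar(50) NOT NULL
        );

        CREATE TABLE dbo.ParsedRecord (
            RecordId  int IDENTITY(1,1) PRIMARY KEY,
            LoadedAt  datetime NOT NULL DEFAULT (GETDATE())
        );

        CREATE TABLE dbo.RecordValue (
            RecordId  int NOT NULL REFERENCES dbo.ParsedRecord (RecordId),
            FieldId   int NOT NULL REFERENCES dbo.FieldDefinition (FieldId),
            Value     varchar(255) NULL,
            PRIMARY KEY (RecordId, FieldId)
        );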


  • SQL Analysis Services - Dimension attributes with a "many" cardinality

    - by MonkeyBrother
    I am creating a cube with the following tables:

      Customer: CustomerID, Name
      Customer Rep: CustomerID, RepID
      Rep: RepID, Name

    The important thing here is that there is a many-to-many relationship between Reps and Customers. I want to be able to ask the question "How much sales for customers working with rep 'A'?" In the data source view I set up the relationships between both CustomerID columns and both RepID columns. I set up the Rep attribute in the dimension builder, and when I try to build the cube I get this error: "Errors in the high-level relationship engine. The 'Rep' table that is required for a join cannot be reached based on the relationships in the data source view."
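    In SSAS this usually means the Customer Rep bridge table needs to come in as an intermediate (fact-less) measure group, with the Rep dimension related to the sales measure group through a Many-to-Many relationship on the Dimension Usage tab. For reference, the relational join path that relationship has to resolve, sketched with an assumed Sales fact table (its Amount and CustomerID columns are not from the question):

        -- The question the cube needs to answer, expressed relationally.
        SELECT SUM(s.Amount) AS SalesForRepA
        FROM Sales s
        JOIN CustomerRep cr ON cr.CustomerID = s.CustomerID
        JOIN Rep r          ON r.RepID = cr.RepID
        WHERE r.Name = 'A';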

