Search Results

Search found 99645 results on 3986 pages for 'sql server 2005'.

Page 133/3986 | < Previous Page | 129 130 131 132 133 134 135 136 137 138 139 140  | Next Page >

  • Converting a date data type to varchar

    - by Sheetal Inani
    I have some date fields in a table. These columns contain dates in the format mmddyy; for example: 31/12/2010 00:00:00:0000. I need to import these values into a table whose columns are varchar and numeric and which stores dates like this:

        MonthName varchar
        Year      numeric(4, 0)

    Currently I'm using:

        INSERT INTO [School].[dbo].[TeacherAttendenceDet] ([TeacherCode], [MonthName], [Year])
        (SELECT MAX(employeecode),
                DATENAME(MONTH, dateofjoining) AS MONTH,
                DATEPART(YEAR, dateofjoining) AS DATE
         FROM employeedet
         GROUP BY dateofjoining)

    but DATENAME() gives its result in date format, and I have to save it in varchar format. How can I do this?

    This is the EmployeeMast table:

        EmployeeCode    numeric(5, 0)
        PayScaleCode    numeric(7, 0)
        DesignationCode varchar(50)
        CityCode        numeric(5, 0)
        EmployeeName    varchar(50)
        FatherName      varchar(50)
        BirthDate       varchar(50)
        DateOfJoining   varchar(50)
        Address         varchar(150)

    This is the TeacherAttendenceDet table:

        TeacherCode numeric(5, 0)
        Year        numeric(4, 0)
        MonthName   varchar(12)

    I have to insert the month name and year from EmployeeMast into the TeacherAttendenceDet table.
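
    For what it's worth, a minimal sketch of one approach, assuming the varchar DateOfJoining text really is dd/mm/yyyy as in the example (CONVERT style 103). DATENAME already returns a string (nvarchar), not a date, so its result inserts into a varchar column directly:

        -- Sketch only: LEFT(..., 10) trims the time portion, and style 103
        -- parses dd/mm/yyyy. Adjust the style if the stored format differs.
        INSERT INTO [School].[dbo].[TeacherAttendenceDet]
                ([TeacherCode], [MonthName], [Year])
        SELECT MAX(EmployeeCode),
               DATENAME(MONTH, CONVERT(datetime, LEFT(DateOfJoining, 10), 103)),
               DATEPART(YEAR,  CONVERT(datetime, LEFT(DateOfJoining, 10), 103))
        FROM EmployeeDet
        GROUP BY DATENAME(MONTH, CONVERT(datetime, LEFT(DateOfJoining, 10), 103)),
                 DATEPART(YEAR,  CONVERT(datetime, LEFT(DateOfJoining, 10), 103))

    Grouping by month name and year (rather than the raw DateOfJoining) also collapses duplicate month/year rows, which GROUP BY dateofjoining would not.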

    Read the article

  • Stored proc executes >30 secs when called from website, but <1 sec when called from SSMS

    - by Blootac
    I have a stored procedure that is called by a website to display data. Today the web page started timing out, so I got Profiler going and saw the query that was taking too long. I then ran the same query in Management Studio, under the same user login, and it takes less than a second to return. Is there anything obvious that could be causing this? I can't think of a reason why the stored proc takes 30 secs when ASP calls it but runs fine when I call it. Thanks
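
    The usual suspects for this exact symptom are parameter sniffing and mismatched SET options: SSMS runs with SET ARITHABORT ON by default while ADO/ASP typically does not, so the two contexts compile and cache different plans. A hedged sketch of the common local-variable workaround, using hypothetical names:

        -- Sketch only: copying the parameter into a local variable stops the
        -- optimizer from building the cached plan around one sniffed value.
        CREATE PROCEDURE dbo.GetOrders          -- hypothetical procedure
            @CustomerID int
        AS
        BEGIN
            DECLARE @LocalCustomerID int
            SET @LocalCustomerID = @CustomerID  -- defeats parameter sniffing

            SELECT OrderID, OrderDate
            FROM dbo.Orders                     -- hypothetical table
            WHERE CustomerID = @LocalCustomerID
        END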

    Read the article

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio buttons and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable.

    Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id] [bigint] IDENTITY(1,1) NOT NULL,
            [userid] [int] NULL,
            [variable] [varchar](8) NULL,
            [value] [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page, something like this:

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable = 'internat' OR t.variable = 'veteran' OR t.variable = 'athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (variable = 'internat' OR variable = 'veteran' OR variable = 'athlete')
                       GROUP BY variable)

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides.

    So the question is: how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance.

    NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) in using the NOLOCK hint here.
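
    One alternative to the nested MAX(id) subquery, sketched on the assumption that the highest id per (userid, variable) is always the latest response: ROW_NUMBER(), available from SQL Server 2005 onward, finds the newest row per variable in a single pass.

        -- Sketch only: rank each response per variable, newest first, and keep
        -- the top row of each group. Note userid is int, so an int literal
        -- avoids an implicit conversion.
        ;WITH ranked AS (
            SELECT id, variable, value,
                   ROW_NUMBER() OVER (PARTITION BY variable ORDER BY id DESC) AS rn
            FROM results WITH (NOLOCK)
            WHERE userid = 2111846
              AND variable IN ('internat', 'veteran', 'athlete')
        )
        SELECT id, variable, value
        FROM ranked
        WHERE rn = 1

    A covering index on (userid, variable, id) that includes value would let either form resolve each variable with a short seek rather than a scan.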

    Read the article

  • How can I substitute a 0 for a NULL value in an SQL query result

    - by Name.IsNullOrEmpty
    SELECT EmployeeMaster.EmpNo,
           SUM(LeaveApplications.LeaveDaysTaken) AS LeaveDays
    FROM EmployeeMaster
    FULL OUTER JOIN LeaveApplications
        ON EmployeeMaster.id = LeaveApplications.EmployeeRecordID
    INNER JOIN LeaveMaster
        ON EmployeeMaster.id = LeaveMaster.EmpRecordID
    GROUP BY EmployeeMaster.EmpNo
    ORDER BY LeaveDays DESC

    With the above query, if an employee has no leave application record in the LeaveApplications table, then their SUM(LeaveApplications.LeaveDaysTaken) AS LeaveDays column returns NULL. What I would like to do is place a value of 0 (zero) there instead of NULL. I want to do this because I have a calculated column in the same query whose formula depends on the LeaveDays returned, and when LeaveDays is NULL the formula somehow fails. Is there a way I can put 0 in place of NULL so that I can get my desired result?
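
    A minimal sketch: wrapping the aggregate in ISNULL (or the ANSI-standard COALESCE) substitutes the 0 before the calculated column ever sees the NULL.

        SELECT EmployeeMaster.EmpNo,
               ISNULL(SUM(LeaveApplications.LeaveDaysTaken), 0) AS LeaveDays
               -- COALESCE(SUM(LeaveApplications.LeaveDaysTaken), 0) is equivalent
        FROM EmployeeMaster
        FULL OUTER JOIN LeaveApplications
            ON EmployeeMaster.id = LeaveApplications.EmployeeRecordID
        INNER JOIN LeaveMaster
            ON EmployeeMaster.id = LeaveMaster.EmpRecordID
        GROUP BY EmployeeMaster.EmpNo
        ORDER BY LeaveDays DESC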

    Read the article

  • SQL Server partitioning

    - by durilai
    I have a table that has millions of records, and we are looking at implementing table partitioning. Looking at it, we have a foreign key, GroupID, that we would like to partition on. Is this possible? The group will have more entries added to it, so as new GroupIDs are added, can the partitions be created dynamically?
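
    Partitioning on an int column such as GroupID is possible (Enterprise Edition only, and SQL Server 2005 caps a table at 1,000 partitions). New boundary values can be added later with ALTER PARTITION FUNCTION ... SPLIT RANGE, so "dynamic" growth is a matter of running a split as each new GroupID arrives. A sketch with hypothetical names:

        -- Sketch only: one partition per GroupID boundary.
        CREATE PARTITION FUNCTION pfGroup (int)
            AS RANGE LEFT FOR VALUES (100, 200, 300)

        CREATE PARTITION SCHEME psGroup
            AS PARTITION pfGroup ALL TO ([PRIMARY])

        -- Later, as new GroupIDs appear, add a boundary:
        ALTER PARTITION SCHEME psGroup NEXT USED [PRIMARY]
        ALTER PARTITION FUNCTION pfGroup() SPLIT RANGE (400)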

    Read the article

  • Assign the result of a stored procedure to a variable in another stored procedure

    - by RHPT
    The title of this question is a bit misleading, but I couldn't summarize this very well. I have two stored procedures. The first stored procedure (s_proc1) calls a second stored procedure (s_proc2). I want to assign the value returned from s_proc2 to a variable in s_proc1. Currently, I'm calling s_proc2 (inside s_proc1) in this manner:

        EXEC s_proc2 @SiteID, @count = @PagingCount OUTPUT

    s_proc2 contains a dynamic query statement (for reasons I will not outline here):

        CREATE PROCEDURE dbo.s_proc2
        (
            @siteID int,
            @count int OUTPUT
        )
        AS
        DECLARE @sSQL nvarchar(100)
        DECLARE @xCount int
        SELECT @sSQL = 'SELECT COUNT(ID) FROM Authors'
        EXEC sp_ExecuteSQL @sSQL, N'@xCount int output', @xCount output
        SET @count = @xCount
        RETURN @count

    Will this result in @PagingCount having the value of @count? I ask because the result I am getting from s_proc1 is wonky. In fact, I get two results: the first being @count, then the result of s_proc1 (which is incorrect). So it makes me wonder if @PagingCount isn't being set properly. Thank you.
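
    One likely culprit, offered as a guess: the dynamic statement never assigns @xCount, so sp_ExecuteSQL returns the COUNT as a stray result set and the output variable is never populated. A sketch of s_proc2 reworked so the value travels only through the OUTPUT parameter:

        CREATE PROCEDURE dbo.s_proc2
        (
            @siteID int,
            @count  int OUTPUT
        )
        AS
        SET NOCOUNT ON

        DECLARE @sSQL nvarchar(100)
        -- Assign the count *inside* the dynamic batch instead of selecting it out.
        SELECT @sSQL = N'SELECT @xCount = COUNT(ID) FROM Authors'

        EXEC sp_executesql @sSQL, N'@xCount int OUTPUT', @xCount = @count OUTPUT

        -- No RETURN @count: RETURN is for integer status codes, and the OUTPUT
        -- parameter already carries the value back into @PagingCount.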

    Read the article

  • Trigger on a database used by a web application and a WinForms application

    - by Michael
    Hello all. The situation: I have a web application which shows errors and where you can accept those error messages. I also have a service, which checks errors from a system and sets the error messages in the database. When I accept an error in the web application, I would like the service to know which error message has been accepted, so that it can perform some other actions. My guess is that this could be done through some sort of trigger, but I can't figure out how. Can anyone help me with this?
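
    One common pattern, sketched here with hypothetical table and column names: an AFTER UPDATE trigger that records newly accepted errors in a queue table the service polls. (Service Broker or SqlDependency would be heavier-weight alternatives for pushing the notification instead of polling.)

        -- Sketch only: when the web app flips Accepted to 1, the trigger
        -- copies the error's key into a work queue for the service.
        CREATE TABLE dbo.AcceptedErrorQueue (
            ErrorID    int      NOT NULL,
            AcceptedAt datetime NOT NULL DEFAULT GETDATE()
        )
        GO
        CREATE TRIGGER trg_ErrorAccepted
        ON dbo.ErrorMessages                        -- hypothetical table
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON
            INSERT INTO dbo.AcceptedErrorQueue (ErrorID)
            SELECT i.ErrorID
            FROM inserted i
            JOIN deleted d ON d.ErrorID = i.ErrorID
            WHERE i.Accepted = 1 AND d.Accepted = 0 -- only newly accepted rows
        END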

    Read the article

  • SQL design question regarding schema and whether a name-value pair is the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project. I am by no means a DBA. The application parses through a file based on user input and enters that data in the database. The number of fields that can be parsed is between 1 and 42 at the current moment.

    The current design of the database is entirely flat, with 42 columns; some are repeated columns such as address1, address2, address3, etc. That says I should normalize the data. However, data integrity is not needed at this moment, and the way the data is shaped I'm looking at several joins. Not a bad thing, but the data is still in a 1-to-1 relationship and I still see a lot of empty fields per row. So my concern is that this does not allow the database or the application to be very extensible. If they want to add more fields to be parsed (which they do), then I'd need to create another table and add another foreign key to the linking table.

    The third option is to have a table where the fields are defined and a table for each record; what I was thinking is to make a table that stores the value and then links to those two tables. The problem is that I can picture the size of that table growing large depending on the input size. If someone gives me a file with 300,000 records, then 300,000 x 40 = 12 million rows, so I have some reservations. However, I think if I get to that point then I should be happy it is being used. This option also allows for more customized display of information, albeit with a bit more work but little rework even if you add more fields.

    So the problem boils down to:

    1. The current design is a flat file, which makes extending it hard, and it is not normalized.
    2. Normalize the tables, although there are no real benefits for the moment, but requirements change.
    3. Normalize it down into the name-value pair design and hope size doesn't hurt.

    There are a large number of inserts, updates, and selects against that table, so performance is a worry, but I believe the saying is "design now, performance-test later"? I'm probably just missing something practical, so any comments would be appreciated, even if it's a quick sanity check. Thank you for your time.
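
    For reference, a minimal sketch of option 3 (the name-value pair, or EAV, layout) under hypothetical names:

        -- Sketch only: field definitions, one row per parsed record, and a
        -- narrow value table linking the two.
        CREATE TABLE dbo.FieldDef (
            FieldID   int IDENTITY(1,1) PRIMARY KEY,
            FieldName varchar(50) NOT NULL
        )

        CREATE TABLE dbo.Record (
            RecordID   int IDENTITY(1,1) PRIMARY KEY,
            SourceFile varchar(255) NOT NULL
        )

        CREATE TABLE dbo.RecordValue (
            RecordID int NOT NULL REFERENCES dbo.Record (RecordID),
            FieldID  int NOT NULL REFERENCES dbo.FieldDef (FieldID),
            Value    varchar(255) NULL,
            PRIMARY KEY (RecordID, FieldID)         -- one value per field per record
        )

    Adding a new parsed field then becomes an insert into FieldDef rather than an ALTER TABLE, at the cost of a pivot whenever the data has to be displayed flat.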

    Read the article

  • SQL Analysis Services - Dimension attributes with a "many" cardinality

    - by MonkeyBrother
    I am creating a cube with the following tables:

        Customer      CustomerID, Name
        Customer Rep  CustomerID, RepID
        Rep           RepID, Name

    The important thing here is that there is a many-to-many relationship between reps and customers. I want to be able to ask the question "How much in sales for customers working with rep 'A'?" In the data source view I set up the relationships between both CustomerID columns and both RepID columns. I set up the rep attribute in the dimension builder, and when I try to build the cube I get this error:

        Errors in the high-level relationship engine. The 'Rep' table that is
        required for a join cannot be reached based on the relationships in
        the data source view.

    Read the article

  • Select records by comparing subsets

    - by devnull
    Given two tables (the rows in each table are distinct):

        1)  x | y        z          2)  x | y        z
            -----        ---            -----        ---
            1 | a        a              1 | a        a
            1 | b        b              1 | b        b
            2 | a                       1 | c
            2 | b                       2 | a
            2 | c                       2 | b
                                        2 | c

    Is there a way to select the values in the x column of the first table for which all the values in the y column (for that x) are found in the z column of the second table?

    In case 1), the expected result is 1. If c is added to the second table, then the expected result is 2. In case 2), the expected result is no record, since neither of the subsets in the first table matches the subset in the second table. If c is added to the second table, then the expected result is 1, 2.

    I've tried using EXCEPT and INTERSECT to compare subsets of the first table with the second table, which works fine, but it takes too long on the INTERSECT part and I can't figure out why (the first table has about 10,000 records and the second has around 10).

    EDIT: I've updated the question to provide an extra scenario.
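
    A set-based sketch, assuming the tables are named t1(x, y) and t2(z) and, going by the examples, that the set of y values for an x must match the set of z values exactly:

        -- Since the rows are distinct, an x qualifies when every y matched a z
        -- (no NULLs from the left join) and the group has as many rows as t2.
        SELECT t1.x
        FROM t1
        LEFT JOIN t2 ON t2.z = t1.y
        GROUP BY t1.x
        HAVING COUNT(t2.z) = COUNT(*)                 -- no unmatched y values
           AND COUNT(*) = (SELECT COUNT(*) FROM t2)   -- same cardinality as t2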

    Read the article

  • Execute a stored procedure with another user's permissions

    - by StuffHappens
    Hello. I am facing the following problem: there's a user who has to execute a stored procedure (spTest). In spTest's body, sp_trace_generateevent is called. sp_trace_generateevent requires the ALTER TRACE permission, and I don't want the user to have it. But I would still like the user to be able to execute spTest. How can I do that? Thank you for your help.
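
    SQL Server 2005's EXECUTE AS clause is built for this: the procedure runs as a proxy principal that holds the permission, and the caller needs only EXECUTE on the procedure. A sketch with hypothetical principal names; note that because ALTER TRACE is a server-level permission, the impersonation only carries it if the module is signed with a certificate or the database is marked TRUSTWORTHY with the proxy user mapped to a suitably privileged login.

        -- Sketch only: TraceProxy's login holds ALTER TRACE.
        ALTER PROCEDURE dbo.spTest
        WITH EXECUTE AS 'TraceProxy'
        AS
        BEGIN
            EXEC sp_trace_generateevent @eventid = 82,  -- UserConfigurable:0
                                        @userinfo = N'spTest fired'
        END
        GO
        GRANT EXECUTE ON dbo.spTest TO SomeUser         -- hypothetical user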

    Read the article

  • Is there a set-based solution for this problem?

    - by NYSystemsAnalyst
    We have a table set up as follows:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|2.0  |
        |2 |2         |2/12/2010|Vacation Earned|3.0  |
        |3 |1         |2/4/2010 |Vacation Used  |1.0  |
        |4 |2         |5/18/2010|Vacation Earned|2.0  |
        |5 |2         |7/23/2010|Vacation Used  |4.0  |

    The business rules are: the vacation balance is calculated as vacation earned minus vacation used, and vacation used is always applied against the oldest vacation earned amount first.

    We need to return the rows for vacation earned that have not been offset by vacation used. If vacation used has only offset part of a vacation earned record, we need to return that record showing the difference. For example, using the above table, the result set would look like:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|1.0  |
        |4 |2         |5/18/2010|Vacation Earned|1.0  |

    Note that record 2 was eliminated because it was completely offset by used time, but records 1 and 4 were only partially used, so they were calculated and returned as such.

    The only way we have thought of to do this is to get all of the vacation earned records into a temporary table, then get the total vacation used and loop through the temporary table, deleting the oldest record and subtracting its value from the total vacation used until the total vacation used is zero. We could clean it up for the case where the remaining vacation used covers only part of the oldest vacation earned record. This would leave us with just the outstanding vacation earned records. It works, but it is very inefficient and performs poorly, and performance will only degrade over time as more and more records are added. Are there any suggestions for a better solution, preferably set-based? If not, we'll just have to go with this.
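
    A set-based sketch, against a hypothetical table named dbo.VacationLedger with the columns shown above, on the assumption that IDs increase chronologically per employee (as in the sample): compare each earned row's running total with the employee's total used hours.

        SELECT e.ID, e.EmployeeID, e.[Date], e.Category,
               CASE WHEN earned.RunningTotal - used.Total >= e.Hours
                    THEN e.Hours                          -- untouched by used time
                    ELSE earned.RunningTotal - used.Total -- partially offset remainder
               END AS Hours
        FROM dbo.VacationLedger AS e
        CROSS APPLY (SELECT SUM(Hours) AS RunningTotal    -- earned hours up to this row
                     FROM dbo.VacationLedger
                     WHERE EmployeeID = e.EmployeeID
                       AND Category = 'Vacation Earned'
                       AND ID <= e.ID) AS earned
        CROSS APPLY (SELECT ISNULL(SUM(Hours), 0) AS Total -- all hours ever used
                     FROM dbo.VacationLedger
                     WHERE EmployeeID = e.EmployeeID
                       AND Category = 'Vacation Used') AS used
        WHERE e.Category = 'Vacation Earned'
          AND earned.RunningTotal > used.Total            -- drop fully offset rows
        ORDER BY e.EmployeeID, e.ID

    Against the sample data this returns rows 1 and 4 with 1.0 hour each, matching the expected result. The triangular running total costs roughly O(n squared) per employee, so it wants an index on (EmployeeID, Category, ID).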

    Read the article

  • XML import: how would you do it?

    - by Rico
    XML is used as one of our main integration points. It comes in from many clients at a time, but too many clients importing at the same time can slow our database to a crawl. Someone has to have solved a problem like this. I am basically using VB to parse through the data and import what I want and skip what I don't. Is there a better way?
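
    One way to cut the per-row chatter, sketched with hypothetical names: hand each client's document to SQL Server whole and shred it server-side with the xml type's nodes() method (new in SQL Server 2005), so an import becomes one set-based insert instead of many small ones.

        -- Sketch only: @doc arrives from the client as a single parameter;
        -- the element names below are assumptions about the feed layout.
        CREATE PROCEDURE dbo.ImportClientFeed
            @doc xml
        AS
        BEGIN
            SET NOCOUNT ON
            INSERT INTO dbo.ImportedItems (ClientCode, ItemValue)  -- hypothetical table
            SELECT n.value('(clientCode)[1]', 'varchar(20)'),
                   n.value('(value)[1]',      'varchar(100)')
            FROM @doc.nodes('/feed/item') AS x(n)
        END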

    Read the article

  • Multi-part identifier and functions

    - by The King
    Here is my query. I'm using a function, Fn_getStagesForProject(), to which I need to pass the SWProjectID from the Projects table. The function takes the ID as a parameter and returns all stages that correspond to the project, from which I need to filter only the rows with a StageLevel of 0.

        SELECT A.SWProjectID, A.ShortTitle,
               C.StageName AS StageName,
               B.ExpectedCompletionDate AS BudgetedReleaseDate
        FROM Projects AS A
        LEFT OUTER JOIN ProjectBudgets AS B
            ON A.SWProjectID = B.SWProjectID
        LEFT OUTER JOIN Fn_getStagesForProject(Projects.SWProjectID) AS C
            ON B.StageID = C.StageID
        WHERE C.StageLevel = 0

    The error is:

        The multi-part identifier "Projects.SWProjectID" could not be bound.

    I tried changing it to A.SWProjectID, but I still get the error. Thanks in advance for your help. Let me know in case you need the table structure. Raja
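
    The likely root cause, for what it's worth: in SQL Server 2005 a table-valued function in a JOIN clause cannot take a column from another table as an argument; the row-by-row construct for that is CROSS/OUTER APPLY. A sketch of the query rewritten that way, with the join condition moved inside the APPLY:

        SELECT A.SWProjectID, A.ShortTitle,
               C.StageName AS StageName,
               B.ExpectedCompletionDate AS BudgetedReleaseDate
        FROM Projects AS A
        LEFT OUTER JOIN ProjectBudgets AS B
            ON A.SWProjectID = B.SWProjectID
        OUTER APPLY (SELECT *
                     FROM Fn_getStagesForProject(A.SWProjectID) f
                     WHERE f.StageID = B.StageID) AS C
        WHERE C.StageLevel = 0   -- note: this filter makes the outer apply
                                 -- behave like an inner one, as in the original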

    Read the article
