Search Results

Search found 31328 results on 1254 pages for 'sql join'.

  • SQL Trigger Trouble

    - by SImon
    Hey guys, I can't get this trigger to work. I've worked on it for an hour or so and can't seem to figure out where I'm going wrong; any help would be appreciated.

        CREATE OR REPLACE TRIGGER allergy BEFORE INSERT ON
        DECLARE
            med VARCHAR2(20);
        BEGIN
            SELECT v.medication RCD.specify INTO med
            FROM visit v, relcondetails RCD
            WHERE :new.medication = v.medication
            AND RCD.specifiy = 'allergies';
            IF med = allergies THEN
                RAISE_APPLICATION_ERROR(-20000, 'Patient Is alergic to this medication');
            END IF;
        END allergy;

    When put into Oracle:

        ERROR at line 6: ORA-04079: invalid trigger specification
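    For reference, ORA-04079 usually means the trigger header itself is incomplete: no table name follows BEFORE INSERT ON, and FOR EACH ROW is missing (without it, :new references are illegal). A minimal corrected sketch, assuming the insert target is a hypothetical prescription table and that relcondetails has a hypothetical medication column to join on:

        CREATE OR REPLACE TRIGGER allergy
            BEFORE INSERT ON prescription                 -- assumption: the table receiving the row
            FOR EACH ROW                                  -- required for :new to be valid
        DECLARE
            med VARCHAR2(20);
        BEGIN
            SELECT RCD.specify
              INTO med
              FROM relcondetails RCD
             WHERE RCD.specify = 'allergies'
               AND RCD.medication = :new.medication       -- hypothetical join column
               AND ROWNUM = 1;                            -- guard against multiple matches
            -- a matching allergy row exists, so block the insert
            RAISE_APPLICATION_ERROR(-20000, 'Patient is allergic to this medication');
        EXCEPTION
            WHEN NO_DATA_FOUND THEN
                NULL;                                     -- no allergy recorded: allow the insert
        END allergy;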

  • How to remove duplicate records in a table?

    - by Mason Wheeler
    I've got a table in a testing DB that someone apparently got a little too trigger-happy on when running INSERT scripts to set it up. The schema looks like this:

        ID            UNIQUEIDENTIFIER
        TYPE_INT      SMALLINT
        SYSTEM_VALUE  SMALLINT
        NAME          VARCHAR
        MAPPED_VALUE  VARCHAR

    It's supposed to have a few dozen rows. It has about 200,000, most of which are duplicates in which TYPE_INT, SYSTEM_VALUE, NAME and MAPPED_VALUE are all identical and ID is not. Now, I could probably make a script to clean this up that creates a temporary table in memory, uses INSERT .. SELECT DISTINCT to grab all the unique values, TRUNCATEs the original table and then copies everything back. But is there a simpler way to do it, like a DELETE query with something special in the WHERE clause?
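    A sketch of the DELETE-based approach (SQL Server syntax, given the UNIQUEIDENTIFIER column; the table name is hypothetical): rank the rows within each duplicate group and delete everything but the first.

        WITH ranked AS (
            SELECT ID,
                   ROW_NUMBER() OVER (
                       PARTITION BY TYPE_INT, SYSTEM_VALUE, NAME, MAPPED_VALUE
                       ORDER BY ID) AS rn
            FROM dbo.MappingTable          -- hypothetical table name
        )
        DELETE FROM ranked                 -- deleting via the CTE removes the underlying rows
        WHERE rn > 1;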

  • Multi-table triggers (SQL Server noob)

    - by Chin
    I have a load of tables, all with the same two datetime columns (lastModDate, dateAdded). I am wondering if I can set up a global INSERT/UPDATE trigger for these tables to set the datetime values. Or if not, what approaches are there? Any pointers much appreciated.
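    As far as I know, SQL Server has no database-wide DML trigger, so a common workaround is a DEFAULT constraint for dateAdded plus a small per-table trigger (often script-generated) for lastModDate. A sketch, with hypothetical table and key names:

        ALTER TABLE dbo.SomeTable
            ADD CONSTRAINT DF_SomeTable_dateAdded DEFAULT (GETDATE()) FOR dateAdded;
        GO
        CREATE TRIGGER trg_SomeTable_touch ON dbo.SomeTable
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE t
               SET lastModDate = GETDATE()
              FROM dbo.SomeTable t
              JOIN inserted i ON i.id = t.id;   -- assumes a key column named id
        END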

  • SQL Server db_owner

    - by andrew007
    Hi, in my SQL 2008 I have a user who is in the "db_datareader", "db_datawriter" and "db_ddladmin" DB roles. However, when he tries to modify a table with SSMS he receives a message saying: "You are not logged in as the database owner or system administrator. You might not be able to save changes to tables that you do not own." Of course, I would like to avoid such a message, but until now I have not found the way... Therefore, I tried to modify the user by adding him to the "db_owner" role, and of course I no longer get the message above. My question is: is it possible to keep the user in the "db_owner" role, but deny some actions like alter user or ? I tried the "alter any user" securable on DB level, but it does not work... THANKS!
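    One caveat worth noting: members of db_owner effectively bypass database-level permission checks, so a DENY issued against them has no effect (which would explain why the "alter any user" securable did nothing). A hedged alternative sketch is to leave the user out of db_owner, grant enough rights on the schema for the SSMS table designer, and deny user management explicitly (the user name is hypothetical):

        GRANT ALTER ON SCHEMA::dbo TO SomeUser;   -- lets him alter dbo tables he does not own
        DENY ALTER ANY USER TO SomeUser;          -- blocks CREATE/ALTER/DROP USER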

  • merging two tables, while applying aggregates on the duplicates (max, min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few million records. Among the fields I have:

        Id       - the record id
        Count    - number of times this Id has been reported
        FirstHit - earliest timestamp with which this Id was reported
        LastHit  - latest timestamp with which this Id was reported

    This table only has one record for any given Id. Every day another table (let's call it feed) receives around half a million records, with these fields among many others:

        Id
        Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is to update log in the following way:

        Count    - the log count value, plus the count() of records for that Id found in feed
        FirstHit - the earliest of the current value in log or the minimum value in feed for that Id
        LastHit  - the latest of the current value in log or the maximum value in feed for that Id

    It should be noticed that many of the Ids in feed are already in log. The simple thing that worked is to create a temporary table and insert into it the union of both, as in:

        SELECT Id, MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit, COUNT(*) AS Count
        FROM feed GROUP BY Id
        UNION ALL
        SELECT Id, FirstHit, LastHit, Count FROM log;

    From that temporary table I do a select that aggregates MIN(FirstHit), MAX(LastHit) and SUM(Count):

        SELECT Id, MIN(FirstHit), MAX(LastHit), SUM(Count) FROM @temp GROUP BY Id;

    and that gives me the end result. I could then delete everything from log and replace it with everything in temp, or craft an update for the common records and insert the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this? Perhaps doing the update in place in the log table?
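    A sketch of an in-place, single-pass alternative (T-SQL MERGE, guessing at the dialect from the @temp variable in the question): aggregate feed once, then either update the matching log row or insert a new one.

        WITH f AS (
            SELECT Id,
                   MIN(Timestamp) AS FirstHit,
                   MAX(Timestamp) AS LastHit,
                   COUNT(*)       AS Cnt
            FROM feed
            GROUP BY Id
        )
        MERGE log AS l
        USING f ON l.Id = f.Id
        WHEN MATCHED THEN UPDATE SET
            l.Count    = l.Count + f.Cnt,
            l.FirstHit = CASE WHEN f.FirstHit < l.FirstHit THEN f.FirstHit ELSE l.FirstHit END,
            l.LastHit  = CASE WHEN f.LastHit  > l.LastHit  THEN f.LastHit  ELSE l.LastHit  END
        WHEN NOT MATCHED THEN
            INSERT (Id, Count, FirstHit, LastHit)
            VALUES (f.Id, f.Cnt, f.FirstHit, f.LastHit);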

  • shredding XML column

    - by csetzkorn
    Hi, I have an XML column which contains XML like this:

        <Set>
          <Element>
            <ID> 1 </ID>
            <List>
              <ListElement>
                <Part1> ListElement1 </Part1>
              </ListElement>
              <ListElement>
                <Part1> ListElement2 </Part1>
              </ListElement>
            </List>
          </Element>
          <Element>
            <ID> 2 </ID>
            <List>
              <ListElement>
                <Part1> ListElement3 </Part1>
              </ListElement>
              <ListElement>
                <Part1> ListElement4 </Part1>
              </ListElement>
            </List>
          </Element>
        </Set>

    I would like to shred this into a relational table containing this:

        ID  ListElement
        1   ListElement1
        1   ListElement2
        2   ListElement3
        2   ListElement4

    I am able to obtain the content of the Parts using something like this:

        select List.value('(Part1/text())[1]', 'varchar(max)') as test
        from Table
        CROSS APPLY xml.nodes('//Element/List/ListElement') AS List(List)

    but I have not yet managed to keep the 'foreign key' (the ID value). Thanks. Best wishes, Christian
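    A sketch of one way to keep the ID: shred in two steps with nested CROSS APPLY, once per Element (where the ID lives) and once per ListElement beneath it. Table and column names are taken from the snippet above and may need adjusting:

        SELECT e.n.value('(ID/text())[1]', 'int')              AS ID,
               le.n.value('(Part1/text())[1]', 'varchar(max)') AS ListElement
        FROM Table t
        CROSS APPLY t.xml.nodes('/Set/Element') AS e(n)
        CROSS APPLY e.n.nodes('List/ListElement') AS le(n);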

  • Forming triangles from points and relations

    - by SiN
    Hello, I want to generate triangles from points and optional relations between them. Not all points form triangles, but many of them do. In the initial structure, I've got a database with the following tables:

        Nodes(id, value)
        Relations(id, nodeA, nodeB, value)
        Triangles(id, relation1_id, relation2_id, relation3_id)

    In order to generate triangles from both the nodes and relations tables, I've used the following query:

        INSERT INTO Triangles
        SELECT t1.id, t2.id, t3.id
        FROM Relations t1, Relations t2, Relations t3
        WHERE t1.id < t2.id AND t3.id > t1.id
        AND (
            t1.nodeA = t2.nodeA
            AND (t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeB
              OR t3.nodeA = t2.nodeB AND t3.nodeB = t1.nodeB)
            OR t1.nodeA = t2.nodeB
            AND (t3.nodeA = t1.nodeB AND t3.nodeB = t2.nodeA
              OR t3.nodeA = t2.nodeA AND t3.nodeB = t1.nodeB)
        )

    It's working perfectly on small-sized data (~< 50 points). In some cases, however, I've got around 100 points all related to each other, which leads to thousands of relations. So when the expected number of triangles is in the hundreds of thousands, or even in the millions, the query might take several hours. My main problem is not in the select query itself: when I watch it execute in Management Studio, the results are returned slowly, around 2000 rows per minute, which is not acceptable for my case. As a matter of fact, the size of the operation grows exponentially, and that is terribly affecting the performance. I've tried doing it as LINQ to Objects from my code, but the performance was even worse. I've also tried using SqlBulkCopy on a reader from C# on the result, also with no luck. So the question is... Any ideas or workarounds?
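    A sketch of one common speed-up, assuming each relation can be stored (or indexed) in canonical order with nodeA < nodeB: the OR branches disappear, every triangle is matched exactly once, and a composite index turns each join into a seek. The index and rewritten query below are illustrative, not requirements of the original schema:

        CREATE INDEX IX_Relations_AB ON Relations (nodeA, nodeB) INCLUDE (id);

        INSERT INTO Triangles (relation1_id, relation2_id, relation3_id)
        SELECT r1.id, r2.id, r3.id
        FROM Relations r1
        JOIN Relations r2 ON r2.nodeA = r1.nodeA AND r2.nodeB > r1.nodeB
        JOIN Relations r3 ON r3.nodeA = r1.nodeB AND r3.nodeB = r2.nodeB;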

  • Can a Snapshot transaction fail and only partially commit in a TransactionScope?

    - by Travis Brooks
    Greetings. I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this:

        using(var tran = MyDataLayer.Transaction())
        {
            MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));
            MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);
            tran.Commit();
        }

    I've simplified this a bit for posting, but what's going on is that MyDataLayer.Transaction() makes a TransactionScope with the IsolationLevel set to Snapshot and TransactionScopeOption set to Required. This code gets called hundreds of times a day, and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by "SprocTheFirst" but no corresponding data from "CallSomethingThatEventuallyDoesLinqToSql". The only way that records should exist in the tables I'm looking at is from SprocTheFirst, and it's only ever called in this one function, so if it's called and succeeded then I would expect CallSomethingThatEventuallyDoesLinqToSql would get called and succeed, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time that the records from SprocTheFirst were created. So, is it possible that a transaction, or more properly a declarative TransactionScope, with Snapshot isolation level can fail somehow and only partially commit?
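    One diagnostic sketch (SQL Server side): if the LINQ-to-SQL call opens a connection that does not enlist in the ambient scope (e.g. a suppressed inner scope or a different connection string), the two statements run in separate transactions even though the C# code looks atomic. Logging the server-side transaction id from inside each call would reveal this; the ids should match when the scope is shared:

        SELECT transaction_id FROM sys.dm_tran_current_transaction;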

  • need help with a simple SQL update statement.

    - by Tony
    There's a field of type varchar. It actually stores a floating-point string, like 2.0, 12.0, 34.5, 67.50... What I need is an update statement that removes the trailing zeros of fields like 2.0 and 12.0, changing them to their integer representation, that is 2, 12, ..., and leaves 34.5 and 67.50 unchanged. How should I do this? I am using Oracle 10.
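    A sketch (Oracle; the table and column names are hypothetical): convert only the rows whose value is numerically an integer, leaving everything else untouched.

        UPDATE mytable
           SET val = TO_CHAR(TRUNC(TO_NUMBER(val)))
         WHERE val LIKE '%.%'                           -- has a decimal point
           AND TO_NUMBER(val) = TRUNC(TO_NUMBER(val));  -- fraction is all zeros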

  • SQL Query Question ROW_CONCAT

    - by DaveC
    Hello guys, I have been stuck on this problem for quite a while... I hope someone out there can give me a hand. The following table is in my database:

        Product_ID  Color   Type
        1           Red     Leather
        1           Silver  Metal
        1           Blue    Leather
        2           Orange  Metal
        2           Purple  Metal

    I am trying to get the following output:

        Product_ID  Type     Color
        1           Leather  Red, Blue
        1           Metal    Silver
        2           Metal    Orange, Purple

    I know it has to do with some kind of double GROUP BY and a GROUP_CONCAT... I have been looking at this for an hour without figuring it out. Any help is much appreciated!!
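    A sketch (MySQL, since GROUP_CONCAT is mentioned; the table name is hypothetical): grouping by both Product_ID and Type, then concatenating the colors within each group, produces exactly that shape.

        SELECT Product_ID,
               Type,
               GROUP_CONCAT(Color SEPARATOR ', ') AS Color
        FROM products
        GROUP BY Product_ID, Type
        ORDER BY Product_ID, Type;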

  • reclaim unused space in SQL 2008

    - by opensas
    I have a table with more than 300,000 records that weighs approximately 1.5 GB. In that table I have three varchar(5000) fields; the rest are small fields. I issue an update setting those three fields to '', and after a shrink (database and files) the database uses almost the same space as before...

        DBCC SHRINKDATABASE(N'DataBase')
        DBCC SHRINKFILE (N'DataBase', 1757)
        DBCC SHRINKFILE (N'DataBase_log', 344)

    Any idea?
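    A hedged sketch of the usual explanation: emptying wide varchar columns frees space inside the data pages, but the pages themselves stay allocated to the table until it is compacted, so the shrink has little to give back. Rebuilding the table's indexes first typically repacks the rows; the table name below is hypothetical:

        ALTER INDEX ALL ON dbo.BigTable REBUILD;
        -- (on SQL 2008, ALTER TABLE dbo.BigTable REBUILD also works for heaps)
        -- then re-run the DBCC SHRINKFILE commands above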

  • SQL ROWLOCK on SELECT statement

    - by David
    I have an ASP.NET web page where the user selects a row for editing. I want to take a row lock on that row, so that only after the user finishes the editing and updates it can another user edit that row. How can I use ROWLOCK so that only one user can edit a row at a time? Thank you
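    A sketch of the pessimistic variant (SQL Server; the table, columns and parameters are hypothetical): take an update lock when the row is read, inside a transaction, so a second editor's identical SELECT blocks until the first commits.

        BEGIN TRAN;
        SELECT title, body
          FROM dbo.Articles WITH (UPDLOCK, ROWLOCK)
         WHERE id = @id;
        -- ... edits happen ...
        UPDATE dbo.Articles SET title = @title, body = @body WHERE id = @id;
        COMMIT;

    Holding a transaction across user think-time is usually avoided in stateless web apps, though; the common alternative is optimistic concurrency with a rowversion column checked at UPDATE time.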

  • Test the sequentiality of a column with a single SQL query

    - by LauriE
    Hey, I have a table that contains sets of sequential datasets, like this:

        ID  set_ID   some_column   n
        1   'set-1'  'aaaaaaaaaa'  1
        2   'set-1'  'bbbbbbbbbb'  2
        3   'set-1'  'cccccccccc'  3
        4   'set-2'  'dddddddddd'  1
        5   'set-2'  'eeeeeeeeee'  2
        6   'set-3'  'ffffffffff'  2
        7   'set-3'  'gggggggggg'  1

    At the end of a transaction that makes several types of modifications to those rows, I would like to ensure that within a single set, all the values of "n" are still sequential (and roll back otherwise). They do not need to be in the same order according to the PK, just sequential, like 1-2-3 or 3-1-2, but not like 1-3-4. Because there might be thousands of rows within a single set, I would prefer to do it in the DB, to avoid the overhead of fetching the data just for verification after making some small changes. There is also the issue of concurrency. The way locking in InnoDB (repeatable read) works (as I understand it) is that if I have an index on "n", then InnoDB also locks the "gaps" between values. If I combine set_ID and n into a single index, would that eliminate the problem of phantom rows appearing? Looks like a common problem. Any brilliant ideas? Thanks! Note: using MySQL + InnoDB
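    A sketch of the in-database check (MySQL; the table name is hypothetical): within a set, the values 1..k are sequential exactly when the minimum is 1, the maximum equals the row count, and there are no duplicates. An empty result means every set is still sequential; any returned set_ID is grounds for rollback.

        SELECT set_ID
        FROM datasets
        GROUP BY set_ID
        HAVING MIN(n) <> 1
            OR MAX(n) <> COUNT(*)
            OR COUNT(*) <> COUNT(DISTINCT n);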

  • SQL Stored Procedure

    - by Nathan
    I am trying to run a stored procedure with a while loop in it using Aqua Data Studio 6.5, and as soon as the SP starts, Aqua Data Studio starts consuming an increasing amount of memory, which makes absolutely no sense to me because everything should be happening on the Sybase server I am working with. I have commented out and tested every piece of the SP and narrowed the issue down to the while loop. Can anyone explain to me what is going on?

        create procedure sp_check_stuff
        as
        begin
            declare
                @counter numeric(9),
                @max_id  numeric(9),
                @exists  numeric(1),
                @rows    numeric(1)

            select @max_id = max(id) from my_table
            set @counter = 0
            set @exists = 0
            set @rows = 0

            while @count <= @max_id
            begin
                -- More logic which doesn't affect memory usage, based
                -- on commenting it out and running the SP
                set @counter = @counter + 1
                set @exists = 0
                set @rows = 0
            end
        end
        return
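    Two observations, offered as a hedged sketch: as transcribed, the loop tests @count while the body increments @counter, so if the real SP does the same the loop never terminates; and each statement in the loop sends a "rows affected" message that the client (Aqua Data Studio) buffers, which would explain memory growth on the client rather than the server. Suppressing those messages and fixing the variable name would look like:

        set nocount on                      -- stop per-statement row-count messages to the client
        while @counter <= @max_id
        begin
            -- ... loop body ...
            set @counter = @counter + 1
        end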

  • Converting a selected date from a datetimepicker into my query, along with subtracting a day

    - by MyHeadHurts
    I am currently using this to get yesterday's date; however, I need to do something similar where the user selects a date with a JavaScript datetimepicker on my ASP.NET page, and I then use the date they select instead of just yesterday's date:

        Declare @dayselection int
        set @dayselection = CONVERT(int,
            DateAdd(year, @YearToGet - Year(getdate() + 1),
            DateAdd(day, DateDiff(day, 1, @dayToGet), 1),
            DateAdd(month, DateDiff(month, 0, @monthToGet), 0)
            )

    but it isn't working; I keep getting syntax errors. I want the day and year functions to stay the same; I just need help with the month part. I need to convert the selected date into an int.
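    A hedged sketch of one way to assemble the picked date (T-SQL; the variable names are borrowed from the question, and the year/month/day values are assumed to be plain ints): build the date by offsetting from the zero date, then step back a day if "yesterday relative to the pick" is wanted. The int conversion is left out, since DATETIME is usually the more useful type here.

        DECLARE @picked DATETIME;
        SET @picked = DATEADD(day,   @dayToGet   - 1,
                      DATEADD(month, @monthToGet - 1,
                      DATEADD(year,  @YearToGet  - 1900, 0)));  -- 0 = 1900-01-01
        SET @picked = DATEADD(day, -1, @picked);                -- optional: the day before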


  • Select records by comparing subsets

    - by devnull
    Given two tables (the rows in each table are distinct):

        1)  x | y        z             2)  x | y        z
            -----        ---               -----        ---
            1 | a        a                 1 | a        a
            1 | b        b                 1 | b        b
            2 | a                          1 | c
            2 | b                          2 | a
            2 | c                          2 | b
                                           2 | c

    Is there a way to select the values in the x column of the first table for which all the values in the y column (for that x) are found in the z column of the second table? In case 1), the expected result is 1. If c is added to the second table, then the expected result is 2. In case 2), the expected result is no record, since neither of the subsets in the first table matches the subset in the second table. If c is added to the second table, then the expected result is 1, 2. I've tried using EXCEPT and INTERSECT to compare subsets of the first table with the second table, which works fine, but it takes too long on the INTERSECT part and I can't figure out why (the first table has about 10,000 records and the second has around 10). EDIT: I've updated the question to provide an extra scenario.
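    A sketch that reproduces all four expected outputs above: taken together, the examples imply the y-set for an x must match the z-column set exactly (not merely be contained in it), which is "exact" relational division. Since rows are stated to be distinct, plain counts suffice; the table names t1 and t2 are placeholders.

        SELECT t1.x
        FROM t1
        JOIN t2 ON t2.z = t1.y
        GROUP BY t1.x
        HAVING COUNT(*) = (SELECT COUNT(*) FROM t2)                          -- every z is covered
           AND COUNT(*) = (SELECT COUNT(*) FROM t1 AS i WHERE i.x = t1.x);   -- every y matched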

  • performing auditing in Java with SQL Server DB - before and/or after do not get audited

    - by Domingos
    When auditing, sometimes the before value does not get audited, other times the after value does not get audited, and other times both values do not get audited at all. After researching, I found out that only values from a specific codes table get audited. The code was:

        compareCodesTableInteger(audit, int, int, objectBefore, objectAfter, stringDescription, stringCodesTable);

    I then changed it to:

        compareCodesTableInteger(audit, int, int, objectBefore, objectAfter, stringDescription, booleanCheck ? stringCodesTableIfTrue : stringCodesTableIfFalse);

    Description: if objectBefore AND objectAfter are both from stringCodesTableIfTrue, OR both from stringCodesTableIfFalse, auditing takes place as expected. The problem is: most of the time, objectBefore is from stringCodesTableIfTrue and objectAfter is from stringCodesTableIfFalse, or vice versa. In this scenario auditing fails. How do I get around this? Please assist.
