Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.

Page 662/1102 | < Previous Page | 658 659 660 661 662 663 664 665 666 667 668 669  | Next Page >

  • running a stored procedure inside a sql trigger

    - by Ying
    Hi all, the business logic in my application requires me to insert a row in table X when a row is inserted in table Y. Furthermore, it should only do it between specific times (which I have to query another table for). I have considered running a script every 5 minutes or so to check this, but I stumbled upon triggers and figured this might be a better way to do it. But I find the syntax for procedures a little bewildering and I keep getting an error I have no idea how to fix. Here is where I start:

        CREATE TRIGGER reservation_auto_reply AFTER INSERT ON reservation
        FOR EACH ROW
        BEGIN
            IF NEW.sent_type = 1 /* In-App */ THEN
                INSERT INTO `messagehistory` (`trip`, `fk`, `sent_time`, `status`, `message_type`, `message`)
                VALUES (NEW.trip, NEW.psk, 'NOW()', 'submitted', 4, 'This is an automated reply to reservation');
        END;

    I get an error in the VALUES part of the statement but I'm not sure where. I still have to query the other table for the information I need, but I can't even get past this part. Any help is appreciated, including links to examples. Thanks
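
    A minimal sketch of a corrected trigger, assuming MySQL and the same table and column names as above: the IF block needs a matching END IF, NOW() should not be quoted (quoting it stores the literal text 'NOW()'), and the client needs a non-default delimiter so the whole BEGIN ... END body is parsed as one statement:

        DELIMITER $$

        CREATE TRIGGER reservation_auto_reply AFTER INSERT ON reservation
        FOR EACH ROW
        BEGIN
            -- Only react to in-app reservations
            IF NEW.sent_type = 1 THEN
                INSERT INTO `messagehistory`
                    (`trip`, `fk`, `sent_time`, `status`, `message_type`, `message`)
                VALUES
                    (NEW.trip, NEW.psk, NOW(), 'submitted', 4,
                     'This is an automated reply to reservation');
            END IF;
        END$$

        DELIMITER ;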

    Read the article

  • Alternative to subqueries

    - by Juanma
    I'm using MySQL 5.1 and have this query. Is there a way to avoid the subqueries and accomplish the same result?

        SELECT oref.affiliate_id, ROUND(SUM(oph.amount) * 0.10, 2) AS tsum
        FROM operators_referer AS oref
        LEFT JOIN operators_payments_history AS oph ON oref.operator_id = oph.operator_id
        WHERE oref.affiliate_id = 28221
          AND ( oph.date_paid > ( SELECT MAX(aph.date_paid)
                                  FROM affiliates_payments_history AS aph
                                  WHERE aph.operator_id = oref.affiliate_id )
                OR ( SELECT MAX(aph.date_paid)
                     FROM affiliates_payments_history AS aph
                     WHERE aph.operator_id = oref.affiliate_id ) IS NULL )
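
    One possible rewrite, sketched on the assumption that the schema is exactly as shown: pre-compute the latest affiliate payment per operator in a derived table and LEFT JOIN to it, so the aggregate is no longer a correlated subquery evaluated per row (it is still a subquery, just an uncorrelated one evaluated once):

        SELECT oref.affiliate_id, ROUND(SUM(oph.amount) * 0.10, 2) AS tsum
        FROM operators_referer AS oref
        LEFT JOIN operators_payments_history AS oph
               ON oref.operator_id = oph.operator_id
        LEFT JOIN ( SELECT operator_id, MAX(date_paid) AS last_paid
                    FROM affiliates_payments_history
                    GROUP BY operator_id ) AS aph
               ON aph.operator_id = oref.affiliate_id
        WHERE oref.affiliate_id = 28221
          AND ( oph.date_paid > aph.last_paid OR aph.last_paid IS NULL );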

    Read the article

  • Accessing both stored procedure output parameters AND the result set in Entity Framework?

    - by MS.
    Is there any way of accessing both a result set and output parameters from a stored procedure added in as a function import in an Entity Framework model? I am finding that if I set the return type to "None", such that the designer-generated code ends up calling base.ExecuteFunction(...), I can access the output parameters fine after calling the function (but of course not the result set). Conversely, if I set the return type in the designer to a collection of complex types, the designer-generated code calls base.ExecuteFunction<T>(...) and the result set is returned as ObjectResult<T>, but then the Value property of the ObjectParameter instances is NULL rather than containing the proper value that I can see being passed back in Profiler. I speculate the second method is perhaps calling a DataReader and not closing it. Is this a known issue? Any workarounds or alternative approaches?

    Edit: My code currently looks like

        public IEnumerable<FooBar> GetFooBars(
            int? param1, string param2,
            DateTime from, DateTime to,
            out DateTime? createdDate, out DateTime? deletedDate)
        {
            var createdDateParam = new ObjectParameter("CreatedDate", typeof(DateTime));
            var deletedDateParam = new ObjectParameter("DeletedDate", typeof(DateTime));

            var fooBars = MyContext.GetFooBars(param1, param2, from, to, createdDateParam, deletedDateParam);

            createdDate = (DateTime?)(createdDateParam.Value == DBNull.Value ? null : createdDateParam.Value);
            deletedDate = (DateTime?)(deletedDateParam.Value == DBNull.Value ? null : deletedDateParam.Value);

            return fooBars;
        }

    Read the article

  • Querying using table-valued parameter

    - by antmx
    I need help please with writing a sproc. It takes a table-valued parameter @Locations, whose type is defined as follows:

        CREATE TYPE [dbo].[tvpLocation] AS TABLE(
            [CountryId] [int] NULL,
            [ResortName] [nvarchar](100) NULL,
            [Ordinal] [int] NOT NULL,
            PRIMARY KEY CLUSTERED ( [Ordinal] ASC ) WITH (IGNORE_DUP_KEY = OFF)
        )

    @Locations will contain at least 1 row. Each row WILL have a non-null CountryId, and MAY have a non-null ResortName. Each row will have a unique Ordinal, the first being 0. The combinations of CountryId and ResortName in @Locations will be unique. The sproc needs to search against the table structure shown in the attached schema image (not reproduced here). Now this is where I'm stuck: the sproc should be able to find Tours where the Tour's 1st TourHotel (Ordinal 0) has the same CountryId (and ResortName, if specified) as the 1st row of @Locations (Ordinal 0), and also, if @Locations has more than 1 row, the Tour must have additional TourHotels, ALL of which must be in the remaining CountryIds (and ResortNames, if specified) of those remaining @Locations rows.

    Edit: This is the code I finally used, based on Anthony Faull's suggestion. Thank you so much Anthony:

        select distinct T.Id
        from tblTour T
        join tblTourHotel TH on TH.TourId = T.Id
        join tblHotel H ON H.Id = TH.HotelId
        JOIN @Locations L
            ON ( ( L.Ordinal = 0 AND TH.Ordinal = 0 ) OR ( L.Ordinal > 0 AND TH.Ordinal > 0 ) )
           AND L.CountryId = H.CountryId
           AND ( L.ResortName = H.ResortName OR L.ResortName IS NULL )
        cross apply ( select COUNT(TH2.Id) AS [Count]
                      FROM tblTourHotel TH2
                      where TH2.TourId = TH.TourId ) TourHotelCount
        where TourHotelCount.[Count] = @LocationCount
        group by T.Id, T.TourRef, T.Description, T.DepartureDate, T.NumNights,
                 T.DepartureAirportId, T.DestinationAirportId, T.AirlineId, T.FEPrice
        having COUNT(distinct TH.Id) = @LocationCount

    Read the article

  • Inexplicably slow query in MySQL

    - by Brandon M.
    Given this result-set:

        mysql> EXPLAIN SELECT c.cust_name, SUM(l.line_subtotal) FROM customer c
            -> JOIN slip s ON s.cust_id = c.cust_id
            -> JOIN line l ON l.slip_id = s.slip_id
            -> JOIN vendor v ON v.vend_id = l.vend_id WHERE v.vend_name = 'blahblah'
            -> GROUP BY c.cust_name
            -> HAVING SUM(l.line_subtotal) > 49999
            -> ORDER BY c.cust_name;
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        | id | select_type | table | type   | possible_keys                   | key           | key_len | ref                  | rows | Extra                                        |
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        |  1 | SIMPLE      | v     | ref    | PRIMARY,idx_vend_name           | idx_vend_name | 12      | const                |    1 | Using where; Using temporary; Using filesort |
        |  1 | SIMPLE      | l     | ref    | idx_vend_id                     | idx_vend_id   | 4       | csv_import.v.vend_id |  446 |                                              |
        |  1 | SIMPLE      | s     | eq_ref | PRIMARY,idx_cust_id,idx_slip_id | PRIMARY       | 4       | csv_import.l.slip_id |    1 |                                              |
        |  1 | SIMPLE      | c     | eq_ref | PRIMARY,cIndex                  | PRIMARY       | 4       | csv_import.s.cust_id |    1 |                                              |
        +----+-------------+-------+--------+---------------------------------+---------------+---------+----------------------+------+----------------------------------------------+
        4 rows in set (0.04 sec)

    I'm a bit baffled as to why the query referenced by this EXPLAIN statement is still taking about a minute to execute. Isn't it true that this query only has to search through 449 rows? Anyone have any idea as to what could be slowing it down so much?
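
    Purely as a hedged sketch of one thing to try (the index name is invented here, not from the post): since the plan estimates 446 probes into line followed by single-row lookups into slip and customer, a covering index on line lets those probes be satisfied from the index alone instead of fetching full rows, which is often where the remaining time goes. The EXPLAIN row counts are also only estimates, so comparing them against an actual COUNT(*) for that vendor is worthwhile.

        ALTER TABLE line
            ADD INDEX idx_vend_slip_subtotal (vend_id, slip_id, line_subtotal);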

    Read the article

  • What is the current status on Microsoft ProClarity?

    - by Ali_Abadani
    I don't really know how to compose this question. My company has been using Microsoft ProClarity for a few years and we have quite a few users publishing books and doing ad-hoc analysis with it. With the new Microsoft BI solutions, it seems like they are completely moving away from ProClarity and replacing the OLAP analysis with Excel. I understand that with SharePoint and its integration with PerformancePoint and Reporting Services, the dashboards and reports would be done in SharePoint, but what about the analysis? Any ideas?

    Read the article

  • asp.net mvc checkbox hierarchy

    - by mazhar
    I want to create a checkbox hierarchy like the one below in MVC 2. How would I be able to achieve this in the simplest manner?

        Administrator
            Manage User
                Add
                Edit
                Delete
                View
            Manage Feature
                Add
                Edit
                Delete
                View
        Moderator
            Manage User
                Add
                Edit
                Delete
                View
            Manage Feature
                Add
                Edit
                Delete
                View

    Read the article

  • How do you write a recursive stored procedure

    - by Grayson Mitchell
    I simply want a stored procedure that calculates a unique id and inserts it. If it fails, it just calls itself to regenerate said id. I have been looking for an example but can't find one, and am not sure how I should get the sp to call itself and set the appropriate output parameter. I would also appreciate someone pointing out how to test this sp.

        ALTER PROCEDURE [dbo].[DataContainer_Insert]
            @SomeData varchar(max),
            @DataContainerId int out
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            BEGIN TRY
                SELECT @UserId = MAX(UserId) From DataContainer
                INSERT INTO DataContainer (UserId, SomeData) VALUES (@UserId, SomeData)
                SELECT @DataContainerId = scope_identity()
            END TRY
            BEGIN CATCH
                --try again
                exec DataContainer_Insert @DataContainerId, @SomeData
            END CATCH
        END
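
    A minimal sketch of how the recursive retry might be wired up, assuming the signature above stays as-is; note that in the posted CATCH block the arguments are passed in the wrong order and without OUTPUT, so the generated id would never reach the caller:

        BEGIN CATCH
            -- try again, propagating the output parameter back up the call chain
            EXEC [dbo].[DataContainer_Insert] @SomeData, @DataContainerId OUTPUT;
        END CATCH

    And a simple way to exercise it from a query window:

        DECLARE @id int;
        EXEC [dbo].[DataContainer_Insert] @SomeData = 'test', @DataContainerId = @id OUTPUT;
        SELECT @id AS NewDataContainerId;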

    Read the article

  • custom sorting or ordering a table without resorting the whole shebang

    - by fuugus
    For ten years we've been using the same custom sorting on our tables. I'm wondering if there is another solution which involves fewer updates, especially since today we'd like to have a replication/publication date and wouldn't like our replication to replicate unnecessary entries. I had a look into nested sets, but it doesn't seem to do the job for us.

    Base table:

        id | a_sort
        ---+-------
         1     10
         2     20
         3     30

    After inserting an entry at the second position with

        insert into table (a_sort) values(15)

    we have:

        id | a_sort
        ---+-------
         1     10
         2     20
         3     30
         4     15

    Ordering the table with

        select * from table order by a_sort

    and resorting all the a_sort entries, updating at least id=(2,3,4), will of course produce the desired output:

        id | a_sort
        ---+-------
         1     10
         4     20
         2     30
         3     40

    The column names, the column count, datatypes, a possible join, possible triggers or the way the resorting is done are irrelevant to the problem. Also, we've found some pretty neat ways to do this task fast. Only: how the heck can we reduce the updates in the db to 1 or 2 max? Seems like an awfully common problem. The captain obvious in me thought once "use an a_sort float(53), insert using a fixed value of ordervaluefirstentry+abs(ordervaluefirstentry-ordervaluenextentry)/2", but this would only allow around 1040 "in between" entries - so never resorting seems a bit problematic ;)
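
    A sketch of two pieces that usually keep the write volume down, assuming SQL Server (the replication/publication wording suggests it) and a stand-in table name sort_demo, neither of which comes from the post: inserts pick a midpoint key so no existing row is touched, and when a gap is finally exhausted the whole table is renumbered in a single UPDATE by writing through a CTE.

        -- 1) Insert between two neighbours (here the rows sorted at 10 and 20):
        --    only the new row is written, nothing else is updated.
        INSERT INTO sort_demo (a_sort) VALUES ( (10 + 20) / 2.0 );

        -- 2) When neighbouring keys become adjacent, re-space everything
        --    with one statement instead of row-by-row updates.
        WITH ordered AS (
            SELECT a_sort,
                   ROW_NUMBER() OVER (ORDER BY a_sort) AS rn
            FROM sort_demo
        )
        UPDATE ordered
        SET a_sort = rn * 10;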

    Read the article

  • Update all table rows but top N in Mysql

    - by arthurprs
    I was trying to run the following query:

        UPDATE blog_post
        SET `thumbnail_present` = 0,
            `thumbnail_size` = 0,
            `thumbnail_data` = ''
        WHERE `blog_post` NOT IN (
            SELECT `blog_post`
            FROM blog_post
            ORDER BY `blog_post` DESC
            LIMIT 10)

    But MySQL doesn't allow 'LIMIT' in an 'IN' subquery. I think I can make a select to count the table rows and then make an ordered update limited by 'COUNT - 10', but I was wondering if there is a better way. Thanks in advance.
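
    A common workaround, sketched with the same table and column names as above: nest the LIMIT query one level deeper so MySQL materializes it as a derived table, which sidesteps both the LIMIT-in-IN restriction and the rule against reading the update target directly in a subquery:

        UPDATE blog_post
        SET `thumbnail_present` = 0,
            `thumbnail_size` = 0,
            `thumbnail_data` = ''
        WHERE `blog_post` NOT IN (
            SELECT `blog_post` FROM (
                SELECT `blog_post`
                FROM blog_post
                ORDER BY `blog_post` DESC
                LIMIT 10
            ) AS latest_ten
        );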

    Read the article

  • Return multiple IDs from a function and use the result in a query

    - by NewK
    I have this function that returns all children of a tree node:

        CREATE OR REPLACE FUNCTION fn_category_get_childs_v2(id_pai integer)
          RETURNS integer[] AS
        $BODY$
        DECLARE
            ids_filhos integer array;
        BEGIN
            SELECT array (
                SELECT category_id
                FROM category
                WHERE category_id IN (
                    (WITH RECURSIVE parent AS (
                        SELECT category_id, parent_id
                        FROM category
                        WHERE category_id = id_pai
                        UNION ALL
                        SELECT t.category_id, t.parent_id
                        FROM parent
                        INNER JOIN category t ON parent.category_id = t.parent_id
                    )
                    SELECT category_id FROM parent WHERE category_id <> id_pai
                    )
                )
            ) into ids_filhos;
            return ids_filhos;
        END;

    and I would like to use it in a select statement like this:

        select * from teste1_elements
        where category_id in (select * from fn_category_get_childs_v2(12))

    I've also tried this way, with the same result:

        select * from teste1_elements
        where category_id = any(select * from fn_category_get_childs_v2(12))

    But I get the following error:

        ERROR: operator does not exist: integer = integer[]
        LINE 1: select * from teste1_elements where category_id in (select *...
                                                                    ^
        HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.

    The function returns an integer array; is that the problem? SELECT * from fn_category_get_childs_v2(12) retrieves the following array (integer[]):

        '{30,32,34,20,19,18,17,16,15,14}'
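
    Two forms that normally work in PostgreSQL when a function returns integer[] (names taken from the question): hand the array itself to ANY without wrapping it in a SELECT, or unnest it into rows so IN compares integer to integer:

        -- Compare against the array directly
        SELECT *
        FROM teste1_elements
        WHERE category_id = ANY (fn_category_get_childs_v2(12));

        -- Or expand the array into a row set and keep the IN form
        SELECT *
        FROM teste1_elements
        WHERE category_id IN (SELECT unnest(fn_category_get_childs_v2(12)));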

    Read the article

  • Some help needed with a SQL query

    - by Psyche
    Hello, I need some help with a MySQL query. I have two tables, one with offers and one with statuses. An offer can have one or more statuses. What I would like to do is get all the offers and their latest status. For each status there's a table field named 'added' which can be used for sorting. I know this can be easily done with two queries, but I need to do it with only one because I also have to apply some filters later in the project. Here's my setup:

        CREATE TABLE `test`.`offers` (
            `id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            `client` TEXT NOT NULL,
            `products` TEXT NOT NULL,
            `contact` TEXT NOT NULL
        ) ENGINE = MYISAM;

        CREATE TABLE `statuses` (
            `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
            `offer_id` int(11) NOT NULL,
            `options` text NOT NULL,
            `deadline` date NOT NULL,
            `added` datetime NOT NULL,
            PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1
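
    One single-query shape for "each offer with its latest status", sketched against the two tables above; it assumes `added` is unique per offer (if two statuses can share the same timestamp, add `statuses`.`id` as a tie-breaker):

        SELECT o.*, s.*
        FROM offers o
        LEFT JOIN statuses s
               ON s.offer_id = o.id
        LEFT JOIN statuses later
               ON later.offer_id = o.id
              AND later.added > s.added
        WHERE later.id IS NULL;

    Each offer is joined to the status row for which no later status exists; offers with no status at all still come back with NULL status columns, and extra filters can simply be appended to the WHERE clause.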

    Read the article

  • Best way to run multiple queries per second on database, performance wise?

    - by Michael Joell
    I am currently using Java to insert and update data multiple times per second. Never having used databases with Java, I am not sure what is required and how to get the best performance. I currently have a method for each type of query I need to do (for example, update a row in a database). I also have a method to create the database connection. Below is my simplified code.

        public static void addOneForUserInChannel(String channel, String username) throws SQLException {
            Connection dbConnection = null;
            PreparedStatement ps = null;
            String updateSQL = "UPDATE " + channel + "_count SET messages = messages + 1 WHERE username = ?";
            try {
                dbConnection = getDBConnection();
                ps = dbConnection.prepareStatement(updateSQL);
                ps.setString(1, username);
                ps.executeUpdate();
            } catch(SQLException e) {
                System.out.println(e.getMessage());
            } finally {
                if(ps != null) {
                    ps.close();
                }
                if(dbConnection != null) {
                    dbConnection.close();
                }
            }
        }

    And my DB connection:

        private static Connection getDBConnection() {
            Connection dbConnection = null;
            try {
                Class.forName(DB_DRIVER);
            } catch (ClassNotFoundException e) {
                System.out.println(e.getMessage());
            }
            try {
                dbConnection = DriverManager.getConnection(DB_CONNECTION, DB_USER, DB_PASSWORD);
                return dbConnection;
            } catch (SQLException e) {
                System.out.println(e.getMessage());
            }
            return dbConnection;
        }

    This seems to be working fine for now, with about 1-2 queries per second, but I am worried that once I expand and it is running many more, I might have some issues. My questions: Is there a way to have a persistent database connection throughout the entire run time of the process? If so, should I do this? Are there any other optimizations that I should do to help with performance? Thanks

    Read the article

  • Question on different ways to link tables

    - by dotnetdev
    What is the difference between linking two tables so that the PK of one is an FK in the other, where the FK column does not also have the primary key option (so it does not have the gold key), and having the PK of one table also be the PK in another table? Am I right to think that the second option is for a many-to-many relationship? Thanks
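
    A small DDL sketch of the two shapes being asked about, with hypothetical table names that are not from the post. In the first, the child carries a plain foreign key, giving a one-to-many relationship; in the second, the child's primary key is itself the foreign key, which is the usual one-to-one "extension table" pattern rather than many-to-many. A many-to-many relationship normally uses a separate junction table whose primary key spans both foreign keys, as in the third block.

        -- Shape 1: FK that is not part of the PK (one-to-many)
        CREATE TABLE OrderLine (
            OrderLineId int PRIMARY KEY,
            OrderId     int NOT NULL REFERENCES Orders(OrderId)
        );

        -- Shape 2: PK that is also an FK (one-to-one extension of Orders)
        CREATE TABLE OrderShippingDetail (
            OrderId int PRIMARY KEY REFERENCES Orders(OrderId)
        );

        -- Many-to-many: a junction table with a composite PK of two FKs
        CREATE TABLE OrderProduct (
            OrderId   int NOT NULL REFERENCES Orders(OrderId),
            ProductId int NOT NULL REFERENCES Product(ProductId),
            PRIMARY KEY (OrderId, ProductId)
        );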

    Read the article

  • Help me choose between XML and SQLite on Android

    - by Ngetha
    I have an Android app that periodically, say once a week, downloads content from a server in XML. The content is used by the app; different Activities use different parts of it. My question is a design one: should I save the data in SQLite or just keep it as an XML file, and which one would be faster to read? The app can only use one content piece at a time, which means subsequent XML content downloads replace the old one.

    Read the article

  • ASP.MVC ModelBinding Behaviour

    - by OldBoy
    This one has me stumped, despite the numerous posts on here. The scenario is a basic MVC(2) web application with simple CRUD operations. Whenever the edit form is submitted and UpdateModel() is called, an exception is thrown:

        System.Data.Linq.ForeignKeyReferenceAlreadyHasValueException was unhandled by user code

    This occurs against a DropDownList value which is a foreign key on the entity table. However, there is another DropDownList on the form, representing another foreign key, which does not throw the error (unsurprisingly). Changing the property values manually inside the Edit action:

        Recipe recipe = repository.GetRecipe(int.Parse(formValues["recipeid"]));
        recipe.CategoryId = Convert.ToInt32(formValues["CategoryId"].ToString());
        recipe.Page = int.Parse(formValues["Page"].ToString());
        recipe.PublicationId = Convert.ToInt32(formValues["PublicationId"].ToString());

    allows the CategoryId and Page properties to be updated, and then the error is thrown on the PublicationId. All of the referential integrity is checked and the same in the db and the dbml. Any light shed on this would be most welcome.

    Read the article

  • Getting "too many rows" error inside a "for" cursor loop

    - by Will
    I have a trigger that contains two cursor loops, one nested inside the other, like this:

        FOR outer_rec IN outer_cursor
        LOOP
            FOR inner_rec IN inner_cursor
            LOOP
                -- Do some calculations
            END LOOP;
        END LOOP;

    Somewhere in this it is throwing the following error:

        ORA-01422: exact fetch returns more than requested number of rows

    I've been trying to determine where it's coming from for an hour or so, but shouldn't this error never happen? Also, I am assuming the inner loop automatically closes and opens itself again every time the outer loop moves to the next record; I hope this is correct.
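
    For what it's worth, a cursor FOR loop does implicitly reopen and close on every outer iteration, and it never raises ORA-01422 by itself; that error comes from a SELECT ... INTO somewhere in the trigger body that matches more than one row. A hedged sketch of the usual culprit and one way to pin it down (table and variable names are placeholders, not from the trigger):

        -- Typical culprit: a lookup inside the loop that assumes exactly one match
        SELECT price
          INTO v_price
          FROM price_list
         WHERE item_id = inner_rec.item_id;   -- ORA-01422 if several rows match

        -- Wrapping each lookup localizes the failure so the offending query is obvious
        BEGIN
            SELECT price
              INTO v_price
              FROM price_list
             WHERE item_id = inner_rec.item_id;
        EXCEPTION
            WHEN TOO_MANY_ROWS THEN
                raise_application_error(-20001,
                    'duplicate price_list rows for item ' || inner_rec.item_id);
        END;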

    Read the article

  • INSERT 0..n records into table 'A' based on content of table 'B' in MySql 5

    - by Robert Gowland
    Using MySQL 5, I have a task where I need to update one table based on the contents of another table. For example, I need to add 'A1' to table 'A' if table 'B' contains 'B1'. I need to add 'A2a' and 'A2b' to table 'A' if table 'B' contains 'B2', etc. In our case, the value in table 'B' we're interested in is an enum. Right now I have a stored procedure containing a series of statements like:

        INSERT INTO A
        SELECT 'A1'
        FROM B
        WHERE B.Value = 'B1';
        -- Repeat for 'B2' -> 'A2a'; 'B2' -> 'A2b'; 'B3' -> 'A3', etc...

    Is there a nicer, more DRY way of accomplishing this?

    Edit: There may be values in table 'B' that have no equivalent value for table 'A'.
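
    One DRYer shape, sketched with a hypothetical mapping table (none of these names come from the question): keep the B-value-to-A-value pairs in one place and join against it, so adding a new pair means inserting a mapping row rather than writing another INSERT statement. Values in 'B' that have no mapping simply produce no rows, which matches the edit above.

        CREATE TABLE ab_map (
            b_value VARCHAR(10) NOT NULL,
            a_value VARCHAR(10) NOT NULL
        );

        INSERT INTO ab_map (b_value, a_value) VALUES
            ('B1', 'A1'),
            ('B2', 'A2a'),
            ('B2', 'A2b'),
            ('B3', 'A3');

        INSERT INTO A
        SELECT m.a_value
        FROM B
        JOIN ab_map m ON m.b_value = B.Value;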

    Read the article

  • Crash when checking BOF property of pessimistic locked ADO recordset

    - by Patrick
    Bit of an odd one for you: I've got two connections to a database, and on one I've opened a _RecordsetPtr with a pessimistic lock. I can no longer send an UPDATE command on the other connection, though I can send a SELECT command on the second connection and data is returned. If I use a read-only lock then there are no problems. However, when I use a pessimistic lock on the second connection as well, I can check that State == adStateOpen, but the program hangs when I test the BOF property! If I don't test the BOF property and try to call moveNext on the second connection, the software hangs. If I do neither of these, I am able to access the data via the second connection, but trying to access the data from the first connection causes the software to hang. Anyone seen anything similar? I'm a bit stuck.

    EDIT: it wasn't hanging; someone had put a 30-minute timeout on the connection and I wasn't waiting that long while testing...

    Read the article

  • Load Empty Database table

    - by john White
    I am using SQL Express and VS2008. I have a DB with a table named "A", which has an IdentitySpecification column named ID. The ID is auto-incremented. Even if a row is deleted, the ID still increases. After several data manipulations, the current ID has reached 15, for example. When I run the application, if there's at least 1 row and I add a new row, the new ID is 16 and everything is fine. If the table is empty (no rows) and I add a new row, the new ID is 0, which is an error (I think), and further data manipulation (e.g. delete or update) will result in an unhandled exception. Has anyone encountered this?

    PS. In my table definition, the ID has been set as follows: Identity Increment = 1; Identity Seed = 1.

    The DB load code is:

        dataSet = gcnew DataSet();
        dataAdapter->Fill(dataSet, "A");
        dataTable = dataSet->Tables["A"];
        dbConnection->Open();

    The Update button method:

        dataAdapter->Update(dataSet, "tblInFlow");
        dataSet->AcceptChanges();
        dataTable = dataSet->Tables["tblInFlow"];
        dataGrid->DataSource = dataTable;

    If I press Update: if there's at least a row, the datagrid view updates and shows the table correctly; if there's nothing in the table (no data row), the Add method will add a new row, but starting from ID 0. If I close the program and restart it again, the ID would be 16, which is correct. This is the Add method:

        row = dataTable->NewRow();
        row["column1"] = "something";
        dataTable->Rows->Add(row);
        dataAdapter->Update(dataSet, "A");
        dataSet->AcceptChanges();
        dataTable = dataSet->Tables["A"];

    Read the article

  • Using SQL Server Views with NHibernate

    - by colinramsay
    I have a site that sells cars. On the frontend, I want to show only cars that are published, and on the backend I want to show all cars. Whether a car is published or not depends on a number of factors, so I wanted to create a view to simplify this. My question is: can I reduce duplication by dynamically telling NHibernate to sometimes use the "PublishedCar" view and sometimes the "AllCar" view when querying/fetching Car entities?

    Read the article
