Search Results

Search found 27353 results on 1095 pages for 'teradata sql'.

  • MySQL - Limit a left join to the first date-time that occurs?

    - by John M
    Simplified table structure (the tables can't be merged at this time):

        TableA: dts_received (datetime), dts_completed (datetime), task_a (varchar)
        TableB: dts_started (datetime), task_b (varchar)

    What I would like to do is determine how long a task took to complete. The join condition would be something like ON task_a = task_b AND dts_completed < dts_started. The issue is that there may be multiple date-times that occur after dts_completed. How do I create a join that only returns the first TableB datetime that occurs after the TableA datetime?
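
    One way to keep only the earliest matching TableB row per task is to aggregate on the join. The sketch below is untested and only illustrative: it reuses the table and column names from the question, assumes each (task_a, dts_completed) pair identifies one TableA row, and the TIMESTAMPDIFF column is just an example of deriving a duration.

        -- Hedged sketch: earliest TableB start after each TableA completion.
        SELECT a.task_a,
               a.dts_completed,
               MIN(b.dts_started) AS first_dts_started,
               TIMESTAMPDIFF(MINUTE, a.dts_completed, MIN(b.dts_started)) AS minutes_to_start
        FROM   TableA a
        LEFT JOIN TableB b
               ON  b.task_b      = a.task_a
               AND b.dts_started > a.dts_completed
        GROUP BY a.task_a, a.dts_completed;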

  • Determining child count of path

    - by sqlnewbie
    I have a table whose 'path' column has values and I would like to update the table's 'child_count' column so that I get the following output:

        path    | child_count
        --------+-------------
                | 5
        /a      | 3
        /a/a    | 0
        /a/b    | 1
        /a/b/c  | 0
        /b      | 0

    My present solution - which is way too inefficient - uses a stored procedure as follows:

        CREATE FUNCTION child_count() RETURNS VOID AS $$
        DECLARE
            parent VARCHAR;
        BEGIN
            FOR parent IN SELECT path FROM my_table LOOP
                DECLARE
                    tokens VARCHAR[] := REGEXP_SPLIT_TO_ARRAY(parent, '/');
                    str    VARCHAR   := '';
                BEGIN
                    FOR i IN 2..ARRAY_LENGTH(tokens, 1) LOOP
                        UPDATE my_table SET child_count = child_count + 1 WHERE path = str;
                        str := str || '/' || tokens[i];
                    END LOOP;
                END;
            END LOOP;
        END;
        $$ LANGUAGE plpgsql;

    Does anyone know of a single UPDATE statement that does the same thing?
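
    One candidate single-statement rewrite - offered only as an untested sketch against the names in the question - is a correlated subquery that counts descendants by path prefix. It assumes paths contain no LIKE wildcard characters such as % or _.

        -- Hedged sketch: count every row whose path starts with this row's path plus '/'.
        UPDATE my_table AS t
        SET    child_count = (
                   SELECT COUNT(*)
                   FROM   my_table AS c
                   WHERE  c.path LIKE t.path || '/%'
               );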

  • Getting #Error value from SSRS reporting

    - by deepa
    I have created a dataset with fields "LastRunBuild" and "project". The LastRunBuild field contains a string of data separated by commas for each project, but some projects have no value in the LastRunBuild field. When I use the expression iif(Fields!LastRunBuild.Value = nothing, nothing, Split(Fields!LastRunBuild.Value, ",").GetValue(3)), a #Error value is returned every time. Please reply...

  • How can I find days between different paired rows?

    - by Anthony
    I've been racking my brain about how to do this in one query, without PHP code. In a nutshell, I have a table that records email activity. For the sake of this example, here is the data:

        recipient_id   activity    date
        1              delivered   2011-08-30
        1              open        2011-08-31
        2              delivered   2011-08-30
        3              delivered   2011-08-24
        3              open        2011-08-30
        3              open        2011-08-31

    The goal: I want to display to users a single number that tells how many recipients opened their email within 24 hours, e.g. "Users that opened their email within 24 hours: 13 readers". In the case of the sample data above, the value would be 1. (Recipient 1 was delivered an email and opened it the next day; recipient 2 never opened it; and recipient 3 waited six days.) Can anyone think of a way to express the goal in a single query? Reminder: in order to count, a person must have a 'delivered' row and at least one 'open' row, and each recipient counts only once no matter how many 'open' rows they have.
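
    One possible shape for such a query - a hedged sketch only, with email_activity standing in for the real table name and MySQL-style date arithmetic assumed - is a self-join between each 'delivered' row and any 'open' row within a day of it:

        -- Count recipients with at least one 'open' no later than 24 hours after delivery.
        SELECT COUNT(DISTINCT d.recipient_id) AS opened_within_24h
        FROM   email_activity d
        JOIN   email_activity o
               ON  o.recipient_id = d.recipient_id
               AND o.activity     = 'open'
               AND o.date        <= d.date + INTERVAL 1 DAY
        WHERE  d.activity = 'delivered';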

  • PHP and SQL_CALC_FOUND_ROWS

    - by Lizard
    I am trying to add SQL_CALC_FOUND_ROWS to a query (please note this isn't for pagination), and I am trying to add it to a CakePHP query. The code I currently have is below:

        return $this->find('all', array(
            'conditions' => $conditions,
            'fields'     => array('SQL_CALC_FOUND_ROWS', 'Category.*', 'COUNT(`Entity`.`id`) as `entity_count`'),
            'joins'      => array('LEFT JOIN `entities` AS Entity ON `Entity`.`category_id` = `Category`.`id`'),
            'group'      => '`Category`.`id`',
            'order'      => $sort,
            'limit'      => $params['limit'],
            'offset'     => $params['start'],
            'contain'    => array('Domain' => array('fields' => array('title')))
        ));

    Note the 'fields' => array('SQL_CALC_FOUND_ROWS', ... part: this obviously doesn't work, as CakePHP tries to apply SQL_CALC_FOUND_ROWS to the table, e.g. SELECT `Category`.`SQL_CALC_FOUND_ROWS`. Is there any way of doing this? Any help would be greatly appreciated, thanks.

  • Improving performance for WRITE operation on Oracle DB in Java

    - by Lucky
    I have a typical scenario and need to understand the best possible way to handle it, so here it goes. I'm developing a solution that will retrieve data from a remote SOAP-based web service and then push this data to an Oracle database on the network. It will be a scheduled task that executes every 15 minutes. I have event queues on the remote service that contain the INSERT/UPDATE/DELETE operations done since the last retrieval; once I retrieve the events for the last 15 minutes, it again adds events for the next retrieval. Since it is just pushing data to Oracle, all my interactions are INSERT and UPDATE statements. There are around 60 tables in Oracle, some of them with 100+ columns. Moreover, each 15-minute cycle involves around 60-70 inserts, 100+ updates and 10-20 deletes. This will be an executable jar file that terminates after the operation and starts again on the next 15-minute cycle. So, I need to understand how I should handle the WRITE operations (best practices) to improve performance for this application as a whole. Current test code (on every cycle):

        1. Connects to the remote service to get events.
        2. Creates a connection to the DB (a single connection object).
        3. Identifies the type of operation (INSERT/UPDATE/DELETE) and the table on which it is done.
        4. Calls the respective method based on the type of operation and table.
        5. Uses a PreparedStatement with positional parameters, retrieves each column value from the remote service and assigns it to the statement parameters.
        6. Commits the statement, then returns to the event-retrieval class to process the next event.

    The above is repeated until all the retrieved events are processed, after which the program closes and then starts again on the next cycle. Thanks for help!

  • SSRS Performance Mystery

    - by user101654
    I have a stored procedure that returns about 50,000 records in 10 seconds, using at most 2 cores, in SSMS. The SSRS report using the stored procedure was taking 20 minutes and would max out the processor on an 8-core server for the entire time. The report was relatively simple (i.e. no graphs or calculations). The report did not appear to be the issue, as I wrote the 50K rows to a temp table and the report could display the data in a few seconds. I tried many different ideas for testing, altering the stored procedure each time but keeping the original code in a separate window to revert back to. After one ALTER of the stored procedure, going back to the original code, the report and server utilization started running fast, comparable to the performance of the stored procedure alone. Everything is fine for now, but I would like to get to the bottom of what caused this in case it happens again. Any ideas?
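
    A commonly suspected culprit for this exact pattern (fast in SSMS, slow from SSRS, "cured" by an ALTER PROCEDURE) is a stale or parameter-sniffed cached plan, since the ALTER invalidates the cached plan and forces a recompile. That is only a guess here, not a diagnosis; if it recurs, one hedged way to test the theory is to force a fresh plan and compare timings. The procedure and parameter names below are placeholders.

        -- Hypothetical test: execute the report's procedure with a freshly compiled plan.
        EXEC dbo.MyReportProc @SomeParam = N'value used by the report' WITH RECOMPILE;

        -- Or, inside the procedure, pin the plan choice per execution of the key query:
        -- SELECT ... FROM ... WHERE col = @SomeParam OPTION (RECOMPILE);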

  • Oracle - truncating a global temporary table

    - by superdario
    I am processing large amounts of data in iterations; each iteration processes around 10,000-50,000 records. Because of the large number of records, I am inserting them into a global temporary table first and then processing it. Usually, each iteration takes 5-10 seconds. Would it be wise to truncate the global temporary table after each iteration so that each iteration can start off with an empty table? There are around 5000 iterations.
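
    As a hedged aside (not part of the question), Oracle also lets a global temporary table empty itself at each COMMIT, which removes the need for an explicit TRUNCATE between iterations; and a TRUNCATE on a global temporary table only touches the current session's rows. The names below are made up for illustration.

        -- Variant 1: rows disappear automatically at every COMMIT.
        CREATE GLOBAL TEMPORARY TABLE staging_rows (
            id      NUMBER,
            payload VARCHAR2(4000)
        ) ON COMMIT DELETE ROWS;

        -- Variant 2: keep rows across commits, but clear them explicitly per iteration.
        TRUNCATE TABLE staging_rows;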

  • How efficient is a details table?

    - by Jeffrey Lott
    At my job, we have a pseudo-standard of creating one table to hold the "standard" information for an entity, and a second table, named like 'TableNameDetails', which holds optional data elements. On average, every row in the main table will have about 8-10 detail rows. My question is: what kind of performance impact does this have compared with adding those details as additional nullable columns on the main table?

  • Join with table and subquery in Oracle

    - by Amandeep
    I don't understand what is wrong with this query; it is giving me a compile-time error of "command not properly ended". The inner query returns 4 records. Can anybody help me out?

        select WGN3EVENTPARTICIPANT.EVENTUID
        from (Select WGN_V_ADDRESS_1.ADDRESSUID1 as add1,
                     WGN_V_ADDRESS_1.ADDRESSUID2 as add2
              from WGN3USER
              inner join WGN_V_ADDRESS_1 on WGN_V_ADDRESS_1.USERID = wgn3user.USERID
              where WGN3USER.USERNAME = 'FIRMWIDE\khuraj') as ta,
             WGN3EVENTPARTICIPANT
        where (ta.ADDRESSUID1 = WGN3EVENTPARTICIPANT.ADDRESSUID1)
          AND (ta.ADDRESSUID2 = WGN3EVENTPARTICIPANT.ADDRESSUID2)

    I am running it in Oracle. Thanks, Amandeep
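
    For reference, Oracle does not accept the AS keyword in front of a table (inline-view) alias, which is a common cause of "ORA-00933: SQL command not properly ended". A hedged rewrite is sketched below; note it also refers to the columns as add1/add2, since those are the names the inline view actually exposes.

        SELECT WGN3EVENTPARTICIPANT.EVENTUID
        FROM (
                 SELECT WGN_V_ADDRESS_1.ADDRESSUID1 AS add1,
                        WGN_V_ADDRESS_1.ADDRESSUID2 AS add2
                 FROM   WGN3USER
                 INNER JOIN WGN_V_ADDRESS_1
                        ON WGN_V_ADDRESS_1.USERID = WGN3USER.USERID
                 WHERE  WGN3USER.USERNAME = 'FIRMWIDE\khuraj'
             ) ta,   -- no AS before the alias
             WGN3EVENTPARTICIPANT
        WHERE ta.add1 = WGN3EVENTPARTICIPANT.ADDRESSUID1
          AND ta.add2 = WGN3EVENTPARTICIPANT.ADDRESSUID2;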

  • How to structure data... Sequential or Hierarchical?

    - by Ryan
    I'm going through the exercise of building a CMS that will organize a lot of the common documents that my employer generates each time we get a new sales order. Each new sales order gets a 5-digit number (12222, 12223, 12224, etc.), but internally we have applied a hierarchy to these numbers:

        + 121XX
          |-- 01
          |-- 02
        + 122XX
          |-- 22
          |-- 23
          |-- 24

    In my table for sales orders, is it better to use the 5-digit number as an ID and populate up, or would it be better to use the hierarchical structure that we use when referring to jobs in regular conversation? The only benefit to not populating sequentially seems to be formatting the data later on in my view, but that doesn't sound like a good enough reason to go through the extra work. Thanks

  • How do I get the earliest DateTime of a set, where there are a few conditions?

    - by radbyx
    Create script for Product:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        SET ANSI_PADDING ON
        GO
        CREATE TABLE [dbo].[Product](
            [ProductID] [int] IDENTITY(1,1) NOT NULL,
            [ProductName] [varchar](50) NOT NULL,
            CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED
            (
                [ProductID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        SET ANSI_PADDING OFF
        GO

    Create script for StateLog:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[StateLog](
            [StateLogID] [int] IDENTITY(1,1) NOT NULL,
            [ProductID] [int] NOT NULL,
            [Status] [bit] NOT NULL,
            [TimeStamp] [datetime] NOT NULL,
            CONSTRAINT [PK_Uptime] PRIMARY KEY CLUSTERED
            (
                [StateLogID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[StateLog] WITH CHECK ADD CONSTRAINT [FK_Uptime_Products] FOREIGN KEY([ProductID])
        REFERENCES [dbo].[Product] ([ProductID])
        GO
        ALTER TABLE [dbo].[StateLog] CHECK CONSTRAINT [FK_Uptime_Products]
        GO

    I have this, and it's not enough:

        select top 5 [ProductName], [TimeStamp]
        from [Product]
        inner join StateLog on [Product].ProductID = [StateLog].ProductID
        where [Status] = 0
        order by TimeStamp desc;

    (My query gives the 5 latest TimeStamps where Status is 0 (false).) But I need one thing more: where there is a set of latest TimeStamps for a product where Status is 0, I only want the earliest of them (not the latest). Example: let's say for product X I have:

        TimeStamp1 (status = 0)
        TimeStamp2 (status = 1)
        TimeStamp3 (status = 0)
        TimeStamp4 (status = 0)
        TimeStamp5 (status = 1)
        TimeStamp6 (status = 0)
        TimeStamp7 (status = 0)
        TimeStamp8 (status = 0)

    The correct answer would then be TimeStamp6, because it's the first of the latest timestamps.
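
    One way to express "the earliest TimeStamp of the trailing Status = 0 run" is, per product, the minimum Status = 0 TimeStamp that is later than that product's latest Status = 1 TimeStamp. The query below is only an untested sketch; in particular it assumes the TOP 5 and ordering should now apply to that derived timestamp.

        SELECT TOP 5
               p.ProductName,
               MIN(s.[TimeStamp]) AS DownSince
        FROM   [Product] p
        INNER JOIN [StateLog] s
               ON  s.ProductID = p.ProductID
               AND s.[Status]  = 0
        WHERE  s.[TimeStamp] > COALESCE(
                   (SELECT MAX(s1.[TimeStamp])
                    FROM   [StateLog] s1
                    WHERE  s1.ProductID = p.ProductID
                      AND  s1.[Status]  = 1),
                   '19000101')
        GROUP BY p.ProductName
        ORDER BY DownSince DESC;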

  • What is the maximum length of a string parameter to a stored procedure?

    - by padmavathi
    I have a string of length 144,000 characters which has to be passed as a parameter to a stored procedure that runs a select query on a table. When I build the query directly (in C#) it works fine, but when I pass the string as a parameter to the stored procedure it doesn't work. Here is my stored procedure, where I have declared this parameter as NVARCHAR(MAX):

        set ANSI_NULLS ON
        set QUOTED_IDENTIFIER ON
        go
        CREATE PROCEDURE [dbo].[ReadItemData](@ItemNames NVARCHAR(MAX), @TimeStamp as DATETIME)
        AS
        select * from ItemData where ItemName in (@ItemNames) AND TimeStamp = @TimeStamp

    Here the parameter @ItemNames is a string concatenated from different names such as 'Item1','Item2','Item3', etc. Can anyone tell me what went wrong here? Thanks & Regards, Padma
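
    For what it's worth, IN (@ItemNames) compares ItemName against the whole parameter as one single value rather than expanding it into a list, whatever the string's length. One hedged workaround, assuming SQL Server 2016 or later and that the list is passed as plain comma-separated names without surrounding quotes (e.g. Item1,Item2,Item3), is to split the parameter; the procedure name below is just a placeholder.

        CREATE PROCEDURE [dbo].[ReadItemData_Split]
            (@ItemNames NVARCHAR(MAX), @TimeStamp DATETIME)
        AS
        SELECT d.*
        FROM   ItemData d
        WHERE  d.[TimeStamp] = @TimeStamp
          AND  d.ItemName IN (
                   SELECT LTRIM(RTRIM(value))
                   FROM   STRING_SPLIT(@ItemNames, ',')
               );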

  • Search by nvarchar

    - by ziks
    Hi all. I have this problem: in a table I have a column of nvarchar type, and the rows in this column look like this:

        row1 = 1;6
        row2 = 12
        row3 = 6;5;67

    and so on. I am trying to search this column; for example, when I send 1 I want to get only row1. I use LIKE, but in the result set I get row1 and row2. How can I achieve this? Any help is appreciated. Tnx...
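
    One common trick - sketched here with SQL Server syntax and a hypothetical table dbo.Items with the column Codes - is to pad both the column and the search value with the delimiter, so that LIKE can only match whole items:

        DECLARE @search NVARCHAR(10) = N'1';

        SELECT *
        FROM   dbo.Items
        WHERE  ';' + Codes + ';' LIKE '%;' + @search + ';%';

    With the sample rows, ';1;6;' contains ';1;' but ';12;' does not, so only row1 is returned.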

  • Help with query in Microsoft Access

    - by Gold
    I have 2 tables:

        Table A: code | name
        Table B: barcode | name

    Table B has the full barcode and name; Table A has only the code. I need to run an update query that fills in the name in Table A. I tried something like:

        update A set name = (select top 1 Name from B where B.Code = mid(A.Barcode,1,8))

    but it doesn't work.
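
    As a hedged aside, Access typically rejects a SELECT subquery in the SET clause of an UPDATE (the query becomes non-updatable), which may be why this attempt fails. One Access-flavoured alternative, written against the column names used in the attempted query and assuming the code column is text, is the domain lookup function DLookUp:

        UPDATE A
        SET A.name = DLookUp("Name", "B", "Code = '" & Mid(A.Barcode, 1, 8) & "'");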

  • Create an index only on certain rows in MySQL

    - by dhruvbird
    So, I have this funny requirement of creating an index on a table only on a certain set of rows. This is what my table looks like:

        USER: userid, friendid, created, blah0, blah1, ..., blahN

    Now, I'd like to create an index on (userid, friendid, created), but only on those rows where userid = friendid. The reason is that this index is only going to be used to satisfy queries where the WHERE clause contains "userid = friendid". There will be many rows where this is NOT the case, and I really don't want to waste all that extra space on the index. Another option would be to create a table (query table) which is populated on insert/update of this table via a trigger, but again I am guessing an index on that table would mean that the data would be stored twice. How does MySQL store primary keys? I mean, is the table ordered on the primary key, or is it ordered by insert order with the PK acting like a normal unique index? I checked up on clustered indexes (http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html), but it seems only InnoDB supports them. I am using MyISAM (I mention this because otherwise I could have created a clustered index on these 3 fields in the query table). I am basically looking for something like this:

        ALTER TABLE USERS ADD INDEX (userid, friendid, created) WHERE userid=friendid
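
    MySQL (certainly the MyISAM-era versions referenced here) has no partial or filtered indexes, so that last statement will not parse. One workaround in the spirit of the "query table" idea - an untested sketch with made-up names - is a narrow side table that only ever contains the userid = friendid rows, kept in sync by triggers; the duplication is then limited to those few rows rather than the whole table.

        CREATE TABLE user_self (
            userid  INT      NOT NULL,   -- friendid omitted: it always equals userid here
            created DATETIME NOT NULL,
            PRIMARY KEY (userid, created)
        ) ENGINE=MyISAM;

        DELIMITER //
        CREATE TRIGGER user_self_ai AFTER INSERT ON `USER`
        FOR EACH ROW
        BEGIN
            IF NEW.userid = NEW.friendid THEN
                INSERT INTO user_self (userid, created)
                VALUES (NEW.userid, NEW.created);
            END IF;
        END//
        DELIMITER ;

    Corresponding UPDATE and DELETE triggers would be needed to keep the side table honest.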

  • How to get only one record for each group of duplicate ids in Oracle?

    - by Psychocryo
    Suppose I have this table:

        group_id | image | image_id
        ---------+-------+---------
        23       | blob  | 1
        23       | blob  | 2
        23       | blob  | 3
        21       | blob  | 4
        21       | blob  | 5
        25       | blob  | 6
        25       | blob  | 7

    How do I get results with only one row for each group_id? In this case there may be multiple images for one group_id, and I just want one result per group_id. I tried DISTINCT, but then I only get group_id; MAX on the image column also would not work.
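
    One standard Oracle approach - sketched with a hypothetical table name group_images - is to number the rows within each group and keep only the first:

        SELECT group_id, image, image_id
        FROM (
            SELECT g.group_id,
                   g.image,
                   g.image_id,
                   ROW_NUMBER() OVER (PARTITION BY g.group_id
                                      ORDER BY g.image_id) AS rn
            FROM   group_images g
        )
        WHERE rn = 1;

    The ORDER BY inside ROW_NUMBER decides which image "wins" per group_id; order by image_id DESC to keep the newest instead.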

  • Sports league database design

    - by John
    Hello, I'm developing a database to store statistics for a sports league. I'd like to show several tables:

        - a league table that indicates the position of each team in the current and previous fixture
        - a table that shows the position of a team in every fixture of the championship

    I have a matches table:

        Matches (IdMatch, IdTeam1, IdTeam2, GoalsTeam1, GoalsTeam2)

    With this table I can calculate the total points of every team based on the matches the team has played, but every time I want to show the league table I have to recalculate the points. I also have a problem calculating the position a team finished in over each of the last 10 fixtures, because I have to run 10 queries. Storing the league table for every fixture in a database table is another approach, but every time I change a match already played I have to recalculate every fixture from there... Is there a better approach for this problem? Thanks
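
    To make the trade-off concrete, here is a hedged sketch of computing points on the fly from the Matches table alone; it assumes the usual 3 points for a win and 1 for a draw, which the question does not actually state.

        SELECT team, SUM(points) AS total_points
        FROM (
            SELECT IdTeam1 AS team,
                   CASE WHEN GoalsTeam1 > GoalsTeam2 THEN 3
                        WHEN GoalsTeam1 = GoalsTeam2 THEN 1
                        ELSE 0 END AS points
            FROM   Matches
            UNION ALL
            SELECT IdTeam2,
                   CASE WHEN GoalsTeam2 > GoalsTeam1 THEN 3
                        WHEN GoalsTeam2 = GoalsTeam1 THEN 1
                        ELSE 0 END
            FROM   Matches
        ) AS per_match
        GROUP BY team
        ORDER BY total_points DESC;

    Whether to also materialise a per-fixture standings table is then mostly a question of how often already-played matches get corrected.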
