Search Results

Search found 27337 results on 1094 pages for 'trv sql'.

  • How to clean sys.conversation_endpoints

    - by Manjoor
    I have a table with a trigger on it implemented using Service Broker. More than half a million records are inserted into the table daily. An asynchronous stored procedure checks several conditions against the inserted data and updates other tables. This ran fine for the last month, with the procedure executing within 2-3 seconds of a record being inserted, but now it takes more than 90 minutes. At present sys.conversation_endpoints holds far too many records. (Note that all the tables are truncated daily, as I do not need those records the day after.) Other database activity is normal (average 60% CPU utilization). Where do I need to look? I can re-create the database without any problem, but I don't think that is a good way to resolve the problem.
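
    For reference, sys.conversation_endpoints usually fills up when conversations are never ended. One way to drain it is to end each conversation explicitly; a minimal T-SQL sketch, assuming the lingering conversations are genuinely dead and safe to discard:

        -- Iterate over every endpoint and end its conversation.
        DECLARE @handle UNIQUEIDENTIFIER;
        DECLARE conv CURSOR FAST_FORWARD FOR
            SELECT conversation_handle FROM sys.conversation_endpoints;
        OPEN conv;
        FETCH NEXT FROM conv INTO @handle;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- WITH CLEANUP drops the endpoint without notifying the far end;
            -- use a plain END CONVERSATION if the other side must be told.
            END CONVERSATION @handle WITH CLEANUP;
            FETCH NEXT FROM conv INTO @handle;
        END
        CLOSE conv;
        DEALLOCATE conv;

    The longer-term fix is for whichever side finishes last to call END CONVERSATION as part of normal processing, so endpoints do not accumulate in the first place.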

  • MySQL Stored Procedures not working with SELECT (basic question)

    - by TMG
    Hello, I am using a platform (PerfectForms) that requires me to use stored procedures for most of my queries, and having never used stored procedures, I can't figure out what I'm doing wrong. The following statement executes without error:

        DELIMITER //
        DROP PROCEDURE IF EXISTS test_db.test_proc//
        CREATE PROCEDURE test_db.test_proc()
        SELECT 'foo';
        //
        DELIMITER ;

    But when I try to call it using:

        CALL test_proc();

    I get the following error:

        #1312 - PROCEDURE test_db.test_proc can't return a result set in the given context

    I am executing these statements from within phpMyAdmin 3.2.4, PHP version 5.2.12, and MySQL server version 5.0.89-community. When I write a stored procedure that returns a parameter, and then select it, things work fine, e.g.:

        DELIMITER //
        DROP PROCEDURE IF EXISTS test_db.get_sum//
        CREATE PROCEDURE test_db.get_sum(OUT total INT)
        BEGIN
            SELECT SUM(field1) INTO total FROM test_db.test_table;
        END //
        DELIMITER ;

    and when I call it:

        CALL get_sum(@t);
        SELECT @t;

    I get the sum, no problem. Ultimately, what I need to do is have a fancy SELECT statement wrapped up in a stored procedure, so I can call it and return multiple rows of multiple fields. For now I'm just trying to get any SELECT working. Any help is greatly appreciated.
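
    For what it's worth, the procedures themselves look valid; error #1312 is raised when the connecting client does not support multiple result sets (the CLIENT_MULTI_RESULTS flag), which older PHP MySQL drivers, and phpMyAdmin builds on top of them, do not set. The same call from the mysql command-line client should succeed:

        CALL test_db.test_proc();
        -- +-----+
        -- | foo |
        -- +-----+
        -- | foo |
        -- +-----+

    If it does, the procedure is fine and the fix is on the client side (e.g. connecting via mysqli, which supports result sets from stored-procedure calls).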

  • File handling in .NET

    - by Indranil Mutsuddy
    I have developed an application in VB.NET 2008 with a SQL Server database. Now I want to drop the database (it has one table, customer(name, password, hour, minute)), as I don't want my client to have to install SQL Server separately or take on other overhead. I am planning to do the whole thing using file handling in VB.NET (manipulating the data in files directly, e.g. changing the username, password, etc.). As I am new to this, I don't know the proper way and would appreciate assistance.

  • Recursive SQL giving ORA-01790

    - by PenFold
    Using Oracle 11g Release 2, the following query gives ORA-01790: expression must have same datatype as corresponding expression:

        WITH intervals(time_interval) AS (
            SELECT trunc(systimestamp) FROM dual
            UNION ALL
            SELECT (time_interval + numtodsinterval(10, 'Minute'))
            FROM intervals
            WHERE time_interval < systimestamp
        )
        SELECT time_interval FROM intervals;

    The error suggests that the two subqueries of the UNION ALL are returning different datatypes. Even if I cast to TIMESTAMP in each of the subqueries, I get the same error. What am I missing?
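
    For context: trunc(systimestamp) returns a DATE, and Oracle's recursive WITH clause requires the recursive branch's datatype to match the anchor's exactly. One workaround is to stay in DATE arithmetic throughout, sidestepping the INTERVAL promotion entirely; a sketch, worth verifying on 11gR2:

        WITH intervals(time_interval) AS (
            SELECT trunc(systimestamp) FROM dual      -- TRUNC yields a DATE
            UNION ALL
            SELECT time_interval + 10 / (24 * 60)     -- DATE + days stays a DATE
            FROM intervals
            WHERE time_interval < systimestamp
        )
        SELECT time_interval FROM intervals;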

  • Is the RESTORE process dependent on schema?

    - by Martin Aatmaa
    Let's say I have two database instances: InstanceA, the production server, and InstanceB, the test server. My workflow is to deploy new schema changes to InstanceB first, test them, and then deploy them to InstanceA. So, at any one time, the instance/schema relationship looks like this:

        InstanceA - Schema Version 1.5
        InstanceB - Schema Version 1.6 (new version being tested)

    An additional part of my workflow is to keep the data in InstanceB as fresh as possible. To do this, I take database backups of InstanceA and restore them to InstanceB. My question is: how does the schema version affect the restore process? I know I can do this:

        Backup InstanceA - Schema Version 1.5
        Restore to InstanceB - Schema Version 1.5

    But can I do this?

        Backup InstanceA - Schema Version 1.5
        Restore to InstanceB - Schema Version 1.6 (new version being tested)

    If no, what would the failure look like? If yes, would the type of schema change matter? For example, if Schema Version 1.6 differed from Schema Version 1.5 only by an altered stored proc, I imagine that this type of schema change shouldn't affect the restore process. On the other hand, if Schema Version 1.6 differed from Schema Version 1.5 by a different table definition (say, an additional column), I imagine this would affect the restore process. I hope I've made this clear enough. Thanks in advance for any input!
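
    Worth noting: a full database RESTORE replaces the target database wholesale, schema included, so after the restore InstanceB would be back at Schema Version 1.5 regardless of what was there before. A sketch (database, file names and paths are hypothetical):

        -- Overwrites InstanceB's copy entirely; the 1.6 schema does not survive.
        RESTORE DATABASE MyDb
        FROM DISK = N'C:\Backups\InstanceA_MyDb.bak'
        WITH REPLACE,
             MOVE N'MyDb_Data' TO N'C:\Data\MyDb.mdf',
             MOVE N'MyDb_Log'  TO N'C:\Data\MyDb_log.ldf';

    To keep the 1.6 changes, the usual pattern is to restore the 1.5 backup and then re-run the 1.6 upgrade scripts on top of it.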

  • Stop the user entering ' char

    - by Phil
    I have a search page where I would like to stop the user entering a ' into the textboxes, or replace it with a suitable character. Can anyone help me achieve this in ASP.NET (VB)? For example, if a user searches for O'Reilly, the search crashes with the error:

        Line 1: Incorrect syntax near 'Reilly'. Unclosed quotation mark before the character string ' '.

    Thanks!
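
    The error comes from concatenating user input straight into the SQL string, which also leaves the page open to SQL injection; rather than banning apostrophes, the usual fix is to parameterize the query. A hedged T-SQL illustration using sp_executesql (the table and column names are made up):

        DECLARE @search NVARCHAR(100);
        SET @search = N'O''Reilly';
        EXEC sp_executesql
            N'SELECT * FROM Customers WHERE LastName = @name',
            N'@name NVARCHAR(100)',
            @name = @search;

    In ADO.NET the same idea is SqlCommand.Parameters.AddWithValue, which handles O'Reilly correctly without any manual escaping.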

  • How to calculate the sum of a column in an MS Access table for a given date (a single day, month or year)

    - by cMinor
    I have a table in Access with dates stored in the custom format dd/MM/yyyy hh:mm:ss tt, and a form in VB.NET 2010. I can get a specific day, month, and year with no problem, but the problem comes when I want to query the sum of a column named value for a specific month, day, or year. The table looks like:

        +-----+-----------+-------------------------+
        | id  | value     | date                    |
        +-----+-----------+-------------------------+
        | id1 | 1499      | 01/01/2012 07:30:11 p.m.|
        | id2 | 1509      | 11/02/2012 07:30:11 p.m.|
        | id3 | 1611      | 21/10/2012 07:30:11 p.m.|
        | id1 | 1115      | 11/10/2012 07:30:11 p.m.|
        | id1 | 1499      | 17/05/2012 07:30:11 p.m.|
        | id2 | 1709      | 11/06/2012 07:30:11 p.m.|
        | id3 | 1911      | 30/07/2012 07:30:11 p.m.|
        | id1 | 1015      | 01/08/2012 07:30:11 p.m.|
        | id1 | 1000      | 11/05/2012 07:30:11 p.m.|
        +-----+-----------+-------------------------+

    I know the query

        SELECT SUM(value) FROM mytable WHERE date = '01/05/2012 00:00:00'

    but how do I tell the query I want the month of May, so I would get 1499 + 1000 = 2499? Or how do I say I want the year 2012, so I would get the sum of the whole table? What would be the correct syntax?
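
    In Access SQL, the usual approach is to filter on date parts with the built-in Month() and Year() functions rather than comparing formatted strings. A sketch (date and value are bracketed because they clash with reserved words in Access):

        -- Sum for May 2012: 1499 + 1000 = 2499
        SELECT SUM([value]) AS total
        FROM mytable
        WHERE Month([date]) = 5 AND Year([date]) = 2012;

        -- Sum for all of 2012
        SELECT SUM([value]) AS total
        FROM mytable
        WHERE Year([date]) = 2012;

    Day([date]) works the same way for restricting to a single day.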

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, trans_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id:

        - There are multiple users (distinct usr_id's).
        - time_stamp is not a unique identifier: sometimes user events (one per row in the table) will occur with the same time_stamp.
        - trans_id is unique only for very small time ranges: over time it repeats.
        - lives_remaining (for a given user) can both increase and decrease over time.

    Example:

        time_stamp|lives_remaining|usr_id|trans_id
        -----------------------------------------
        07:00     | 1             | 1    | 1
        09:00     | 4             | 2    | 2
        10:00     | 2             | 3    | 3
        10:00     | 1             | 2    | 4
        11:00     | 4             | 1    | 5
        11:00     | 3             | 1    | 6
        13:00     | 3             | 3    | 1

    As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this:

        time_stamp|lives_remaining|usr_id|trans_id
        -----------------------------------------
        11:00     | 3             | 1    | 6
        10:00     | 1             | 2    | 4
        13:00     | 3             | 3    | 1

    As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id, max(time_stamp) AS max_timestamp
              FROM lives GROUP BY usr_id ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp = b.time_stamp

    Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row, and then pass that information from the subquery to the main query that provides the data for the other columns of the appropriate rows. This is the hacked-up query that I've gotten to work:

        SELECT b.time_stamp, b.lives_remaining, b.usr_id, b.trans_id
        FROM (SELECT usr_id,
                     max(time_stamp || '*' || trans_id) AS max_timestamp_transid
              FROM lives GROUP BY usr_id ORDER BY usr_id) a
        JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id
        ORDER BY b.usr_id

    Okay, so this works, but I don't like it. It requires a query within a query and a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBMSs and Postgres in particular, so I know that I need to make effective use of the proper indexes, but I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and writing better queries would be much appreciated!

    P.S. You can use the following to create my example case:

        CREATE TABLE lives (time_stamp timestamp, lives_remaining integer,
                            usr_id integer, trans_id integer);
        INSERT INTO lives VALUES ('2000-01-01 07:00', 1, 1, 1);
        INSERT INTO lives VALUES ('2000-01-01 09:00', 4, 2, 2);
        INSERT INTO lives VALUES ('2000-01-01 10:00', 2, 3, 3);
        INSERT INTO lives VALUES ('2000-01-01 10:00', 1, 2, 4);
        INSERT INTO lives VALUES ('2000-01-01 11:00', 4, 1, 5);
        INSERT INTO lives VALUES ('2000-01-01 11:00', 3, 1, 6);
        INSERT INTO lives VALUES ('2000-01-01 13:00', 3, 3, 1);
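
    For the record, PostgreSQL has a purpose-built shortcut for exactly this greatest-row-per-group pattern: DISTINCT ON. A sketch against the schema above, with a matching index to keep it fast:

        -- One row per usr_id: the one with the greatest (time_stamp, trans_id).
        SELECT DISTINCT ON (usr_id)
               time_stamp, lives_remaining, usr_id, trans_id
        FROM lives
        ORDER BY usr_id, time_stamp DESC, trans_id DESC;

        -- Supporting index so the scan follows the sort order:
        CREATE INDEX lives_usr_latest_idx
            ON lives (usr_id, time_stamp DESC, trans_id DESC);

    The window-function equivalent (row_number() OVER (PARTITION BY usr_id ORDER BY time_stamp DESC, trans_id DESC)) is the closest analogue to Oracle's analytic functions, but DISTINCT ON is the idiomatic Postgres spelling.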

  • PostgreSQL: Fast way to update the latest inserted row

    - by Anonymous
    What is the best way to modify the latest added row without using a temporary table? E.g. the table structure is:

        id | text | date

    My current approach would be an INSERT with the PostgreSQL-specific clause RETURNING id, so that I can update the table afterwards with:

        UPDATE myTable SET date = '2013-11-11' WHERE id = lastRow

    However, I have the feeling that PostgreSQL is not simply using the last row but is iterating through millions of entries until "id = lastRow" is found. How can I directly access the last added row?
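
    A reassurance, assuming id is the primary key (or otherwise indexed): WHERE id = ... is resolved through the index as a single probe, not a scan over millions of rows, so the RETURNING approach is already about as direct as it gets. EXPLAIN confirms it:

        INSERT INTO myTable (text, date) VALUES ('hello', now()) RETURNING id;
        -- suppose it returned 42

        EXPLAIN UPDATE myTable SET date = DATE '2013-11-11' WHERE id = 42;
        -- should report an Index Scan on the primary key, not a Seq Scan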

  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes. I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know this). But I can't seem to get a particular query working right now: each row contains NULL when it should contain a constructed JSON object with the values of multiple JOINs. Here's the general idea:

        SELECT CONCAT(
            "{",
            "\"some_list\":[", GROUP_CONCAT( DISTINCT t1.id ), "],",
            "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],",
            "}"
        ) cool_json
        FROM table_name tn
        INNER JOIN ( some_table st ) ON st.some_id = tn.id
        LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 )
            ON st.id = at.some_id
            AND at.different_id = ao.different_id
            AND ao.different_id = t1.id
        LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 )
            ON st.id = at2.some_id
            AND at2.different_id = ao2.different_id
            AND ao2.different_id = t2.id
        GROUP BY tn.id
        ORDER BY tn.name

    Anybody know the problem here? Am I missing something I should be grouping by? It was working when I was only doing one LEFT JOIN and GROUP_CONCAT, but now with multiple JOINs / GROUP_CONCATs it's messing up. When I move the GROUP_CONCATs out of the "cool_json" field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.
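
    A likely culprit: CONCAT returns NULL as soon as any single argument is NULL, and GROUP_CONCAT over a LEFT JOIN that matched no rows yields NULL. Wrapping each aggregate is one fix (a sketch of just the SELECT list; note also the trailing comma after the second list in the original, which produces invalid JSON):

        SELECT CONCAT(
            "{",
            "\"some_list\":[",  IFNULL(GROUP_CONCAT( DISTINCT t1.id ), ""), "],",
            "\"other_list\":[", IFNULL(GROUP_CONCAT( DISTINCT t2.id ), ""), "]",
            "}"
        ) cool_json
        ...

    COALESCE works equally well in place of IFNULL.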

  • SQLce Select query problem

    - by DieHard
    I wrote a truck show contest voting app (financials etc.) using SQLite, then decided to write a backup app for show day using SQL Server CE 3.5. I created the database, moved it to the data directory, created the tables, and configured the data grid views; all is well. I entered some test data, started Management Studio 2008, ran a SELECT query against the table, and got null results. I started the app from Visual Studio and found that the test data was gone. I re-entered the data, ran the query in Management Studio, and the data was gone again. If I only use Visual Studio, I can start the app, enter data, close, restart, and the data is still there; the data seems to disappear only when an outside tool runs a SELECT query against it. I don't know CE that well, but this cannot be right. Does select * from votes = delete * from votes??

  • MySQL Column Value Pivot

    - by manyxcxi
    I have a MySQL InnoDB table laid out like so:

        CREATE TABLE `MyDB`.`MyTable` (
            `id` bigint(20) NOT NULL,
            `run_id` int(11) NOT NULL,
            `element_name` varchar(255) NOT NULL,
            `value` text,
            `line_order` int(11) default NULL,
            `column_order` int(11) default NULL
        );

    It is used to store data generated by a Java program that used to output this in CSV format, hence the line_order and column_order. Let's say I have two rows:

        1, 1, 'ELEMENT 1', 'A', 0, 0
        2, 1, 'ELEMENT 2', 'B', 0, 1

    I want to pivot this data in a view for reporting so that it would look more like the CSV did:

        ---------------------
        |ELEMENT 1|ELEMENT 2|
        ---------------------
        |    A    |    B    |
        ---------------------

    The data coming in is extremely dynamic; it can be in any order, can be any of over 900 different elements, and the value could be anything. The run_id ties them all together, and the line and column order basically tell me where the user wants that data to come back in order.
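
    The standard MySQL pivot is conditional aggregation: one MAX(CASE ...) per element, grouped by run_id. A sketch for the two example elements (with 900+ dynamic elements, the column list would have to be generated, e.g. via a prepared statement built from SELECT DISTINCT element_name):

        SELECT run_id,
               MAX(CASE WHEN element_name = 'ELEMENT 1' THEN value END) AS `ELEMENT 1`,
               MAX(CASE WHEN element_name = 'ELEMENT 2' THEN value END) AS `ELEMENT 2`
        FROM MyTable
        GROUP BY run_id;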

  • Error: The conversion of a nvarchar data type to a datetime data type resulted in an out-of-range value

    - by CPM
    I know that there are similar questions to this on the forum; however, I am still having problems updating a datetime field in the database. I don't get any problems when inserting, but I do get problems when updating, even though I am formatting the value the same way, like this:

        e.Values.Item("SelectionStartDate") = Format(startdate, "yyyy-MM-dd") + " " + startTime1 + ".000"

    startTime1 is of type String. I have tried different solutions that I came across on the internet but still get this error. Please help. Thanks in advance.
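
    One common cause worth checking: SQL Server can parse 'yyyy-MM-dd hh:mm:ss' as year-day-month under some language/dateformat settings, which throws exactly this out-of-range error, whereas the ISO-8601 'T' form and 'yyyyMMdd' are unambiguous. A quick T-SQL demonstration:

        SET LANGUAGE british;
        SELECT CAST('2003-02-28 10:00:00' AS datetime);   -- read as year-day-month: out-of-range error
        SELECT CAST('2003-02-28T10:00:00' AS datetime);   -- always year-month-day
        SELECT CAST('20030228 10:00:00' AS datetime);     -- also unambiguous

    Better still is to pass the value as a typed DateTime parameter instead of a formatted string, which removes string parsing from the picture entirely.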

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about "data sources" coming into my application. These data sources can be in the form of a document (e.g. PDF), audio (e.g. MP3) or video (e.g. AVI). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table:

        DataSource
            Id (PK)
            Filename

    For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages"; for audio, "bit rate"; for video, "duration". Each DataSource will have different requirements for the attributes that need to be stored. So, I have modeled "data source attribute" this way:

        DataSourceAttribute
            Id (PK)
            DataSourceId (FK)
            Name
            Value

    Thus, I would have records like these:

        DataSource:          (1, 'mydoc.pdf'), (2, 'mysong.mp3'), (3, 'myvideo.avi')

        DataSourceAttribute: (1, 1, 'TotalPages', '10'),
                             (2, 2, 'BitRate', '16'),
                             (3, 3, 'Duration', '1:32')

    My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages:

        Filename, TotalPages
        'mydoc.pdf', '10'
        'myotherdoc.pdf', '23'
        ...

    The JOINs needed to produce the above result are just too costly. How should I address this problem?
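
    One common alternative to this entity-attribute-value shape is class-table inheritance: keep the shared columns in DataSource and add one subtype table per media kind, so fetching an attribute is a single key join instead of one join per attribute. A sketch (table and column names invented to match the example):

        CREATE TABLE DataSource (
            Id INT PRIMARY KEY,
            Filename VARCHAR(255) NOT NULL
        );

        CREATE TABLE PdfSource (
            DataSourceId INT PRIMARY KEY REFERENCES DataSource(Id),
            TotalPages INT
        );

        -- All PDFs with their page counts: one join, no per-attribute pivoting.
        SELECT d.Filename, p.TotalPages
        FROM DataSource d
        JOIN PdfSource p ON p.DataSourceId = d.Id;

    The trade-off is a schema change whenever a new attribute or media type appears, which is exactly the flexibility EAV buys at query-time cost.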

  • MS Access: How can I count distinct values using an Access query?

    - by Sadat
    Here is the current complex query:

        SELECT DISTINCT Evaluation.ETCode, Training.TTitle, Training.Tcomponent,
               Training.TImpliment_Partner, Training.TVenue, Training.TStartDate,
               Training.TEndDate, Evaluation.EDate, Answer.QCode, Answer.Answer,
               Count(Answer.Answer) AS [Count], Questions.SL, Questions.Question
        FROM ((Evaluation
        INNER JOIN Training ON Evaluation.ETCode = Training.TCode)
        INNER JOIN Answer ON Evaluation.ECode = Answer.ECode)
        INNER JOIN Questions ON Answer.QCode = Questions.QCode
        GROUP BY Evaluation.ETCode, Answer.QCode, Training.TTitle,
                 Training.Tcomponent, Training.TImpliment_Partner, Training.TVenue,
                 Answer.Answer, Questions.Question, Training.TStartDate,
                 Training.TEndDate, Evaluation.EDate, Questions.SL
        ORDER BY Answer.QCode, Answer.Answer;

    There is another column, Training.TCode. I need to count distinct Training.TCode values; can anybody help me? If you need more information, please let me know.
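
    A note on the Access-specific wrinkle: Access SQL does not support COUNT(DISTINCT ...), so the distinct values have to be collapsed in a subquery first. A minimal sketch of the idea in isolation:

        SELECT COUNT(*) AS DistinctTCodes
        FROM (SELECT DISTINCT TCode FROM Training) AS T;

    (Very old Access versions may need the bracket-and-dot derived-table form, [SELECT DISTINCT TCode FROM Training]. AS T.) Folding this into the larger query generally means joining the main query to a small aggregate subquery rather than adding another COUNT() column directly.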

  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem

    72 child tables, each having a year index and a station index, are defined as follows:

        CREATE TABLE climate.measurement_12_013 (
            -- Columns inherited from climate.measurement:
            id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass),
            station_id integer NOT NULL,
            taken date NOT NULL,
            amount numeric(8,2) NOT NULL,
            category_id smallint NOT NULL,
            flag character varying(1) NOT NULL DEFAULT ' '::character varying,
            CONSTRAINT measurement_12_013_category_id_check
                CHECK (category_id = 7),
            CONSTRAINT measurement_12_013_taken_check
                CHECK (date_part('month'::text, taken)::integer = 12)
        ) INHERITS (climate.measurement);

        CREATE INDEX measurement_12_013_s_idx
            ON climate.measurement_12_013 USING btree (station_id);

        CREATE INDEX measurement_12_013_y_idx
            ON climate.measurement_12_013 USING btree (date_part('year'::text, taken));

    (Foreign key constraints to be added later.) The following query runs abysmally slowly due to a full table scan:

        SELECT count(1) AS measurements, avg(m.amount) AS amount
        FROM climate.measurement m
        WHERE m.station_id IN (
            SELECT s.id
            FROM climate.station s, climate.city c
            WHERE
                -- For one city ...
                c.id = 5182 AND
                -- Where stations are within an elevation range ...
                s.elevation BETWEEN 0 AND 3000 AND
                6371.009 * SQRT(
                    POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) +
                    (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) *
                     POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2))
                ) <= 50
        ) AND
        -- The data before 1900 is shaky; insufficient after 2009.
        extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND
        -- Whittled down by category ...
        m.category_id = 1 AND
        m.taken BETWEEN
            -- Start date.
            (extract( YEAR FROM m.taken )||'-01-01')::date AND
            -- End date. Calculated by checking to see if the end date wraps
            -- into the next year. If it does, then add 1 to the current year.
            (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign(
                (extract( YEAR FROM m.taken )||'-12-31')::date -
                (extract( YEAR FROM m.taken )||'-01-01')::date
            ), 0 ) AS text)||'-12-31')::date
        GROUP BY extract( YEAR FROM m.taken )

    The sluggishness comes from this part of the query:

        m.taken BETWEEN
            /* Start date. */
            (extract( YEAR FROM m.taken )||'-01-01')::date AND
            /* End date. Calculated by checking to see if the end date wraps
               into the next year. If it does, then add 1 to the current year. */
            (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign(
                (extract( YEAR FROM m.taken )||'-12-31')::date -
                (extract( YEAR FROM m.taken )||'-01-01')::date
            ), 0 ) AS text)||'-12-31')::date

    The HashAggregate in the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables.

    Question

    What is the proper way to index the dates to avoid full table scans? Options I have considered:

        - GIN
        - GiST
        - Rewrite the WHERE clause
        - Separate year_taken, month_taken, and day_taken columns in the tables

    What are your thoughts? Thank you!
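
    For plain date filtering, a btree index is usually the right tool; GIN and GiST target composite values such as arrays, ranges, and full-text documents rather than scalar dates. Just as important is making the predicate sargable, i.e. comparing the bare column against constants instead of wrapping it in extract(), which defeats any index on the column. A sketch for one child table:

        CREATE INDEX measurement_12_013_taken_idx
            ON climate.measurement_12_013 USING btree (taken);

        -- Sargable form: the planner can use the index (and constraint
        -- exclusion on the child tables) because "taken" is unwrapped.
        SELECT extract(YEAR FROM m.taken) AS year_taken,
               count(1) AS measurements,
               avg(m.amount) AS amount
        FROM climate.measurement m
        WHERE m.category_id = 1
          AND m.taken >= DATE '1900-01-01'
          AND m.taken <  DATE '2010-01-01'
        GROUP BY extract(YEAR FROM m.taken);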

  • Modeling a Generic Relationship (expressed in C#) in a Database

    - by StevenH
    This is most likely one for all you sexy DBAs out there: how would I efficiently model a relational database whereby I have a field in an "Event" table which defines a "SportType"? This "SportType" field can link to different sport tables, e.g. "FootballEvent", "RugbyEvent", "CricketEvent" and "F1Event". Each of these sport tables has different fields specific to that sport. My goal is to be able to generically add sport types in the future as required, yet hold sport-specific event data (fields) as part of my Event entity. Is it possible to use an ORM such as NHibernate / Entity Framework / DataObjects.NET to reflect such a relationship? I have thrown together a quick C# example to express my intent at a higher level:

        public class Event<T> where T : new()
        {
            public T Fields { get; set; }

            public Event()
            {
                Fields = new T();
            }
        }

        public class FootballEvent
        {
            public Team CompetitorA { get; set; }
            public Team CompetitorB { get; set; }
        }

        public class TennisEvent
        {
            public Player CompetitorA { get; set; }
            public Player CompetitorB { get; set; }
        }

        public class F1RacingEvent
        {
            public List<Player> Drivers { get; set; }
            public List<Team> Teams { get; set; }
        }

        public class Team
        {
            public IEnumerable<Player> Squad { get; set; }
        }

        public class Player
        {
            public string Name { get; set; }
            public DateTime DOB { get; set; }
        }
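
    On the database side this maps naturally to table-per-subclass inheritance (NHibernate's joined-subclass, Entity Framework's Table-per-Type): a base Event table carrying the shared fields and a discriminator, plus one table per sport sharing the Event's key. A sketch with invented column names:

        CREATE TABLE Event (
            Id INT PRIMARY KEY,
            SportType VARCHAR(50) NOT NULL    -- discriminator: 'Football', 'Tennis', ...
        );

        CREATE TABLE FootballEvent (
            EventId INT PRIMARY KEY REFERENCES Event(Id),
            CompetitorAId INT NOT NULL,       -- FK to a Team table
            CompetitorBId INT NOT NULL
        );

        CREATE TABLE TennisEvent (
            EventId INT PRIMARY KEY REFERENCES Event(Id),
            CompetitorAId INT NOT NULL,       -- FK to a Player table
            CompetitorBId INT NOT NULL
        );

    Adding a new sport then means adding one table (and one mapped subclass) without touching Event.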

  • Best practices for querying an entire row in a database table? (MySQL / CodeIgniter)

    - by Walker
    Sorry for the novice question! I have a table called cities in which I have fields called id, name, xpos, and ypos. I'm trying to use the data from each row to set a div's position and name. What I'm wondering is: what's the best practice for dynamically querying an unknown number of rows (I don't know how many cities there might be; I want to pull the information from all of them) and then passing the variables from the model into the view and setting attributes with them? Right now I've hacked together a solution where I run a separate function for each field, each of which pulls one column with a query ('SELECT id FROM cities;'), stores it in a global array variable, and passes it into the view. I do this for each field, so I have arrays called city_idVar, city_nameVar, city_xposVar, and city_yposVar, and I know that city_nameVar[0] matches up with city_xposVar[0], etc. Is there a better way?
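
    For what it's worth, the usual pattern is a single query that keeps each row's fields together, so there are no parallel arrays to keep in sync:

        -- One round trip; each result row already pairs id, name, xpos, ypos.
        SELECT id, name, xpos, ypos
        FROM cities;

    In CodeIgniter that is one model call returning a result array of row objects, which the view can loop over to emit each positioned div.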

  • SQL2008 merge replication fails to update dependent items when table is added

    - by Dan Puzey
    Setup: an existing SQL 2008 merge replication scenario; a large server database, including views and stored procs, being replicated to client machines.

    What I'm doing:

        - adding a new table to the database
        - marking the new table for replication (using sp_addmergearticle)
        - altering a view (which is already part of the replicated content) to include fields from this new table (which is joined to the tables in the existing view); a stored procedure is similarly updated

    The problem: the table gets replicated to client machines, but the view is not updated. The stored procedure is also not updated.

    Non-useful workaround: if I run the snapshot agent after calling sp_addmergearticle and before updating the view/SP, both the view and the stored procedure changes correctly replicate to the client.

    The bigger problem: I'm running a list of database scripts in a transaction, as part of a larger process. The snapshot agent can't be run during a transaction, and if I interrupt the transaction (e.g. by running the scripts in multiple transactions), I lose the ability to roll back the changes should something fail.

    Does anyone have any suggestions? It seems like I must be missing something obvious, because I don't see why the changes to the view/sproc wouldn't be replicating anyway, regardless of what's going on with the new table.
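
    One thing worth double-checking: views and stored procedures only flow to subscribers if they are themselves published as schema-only articles. A hedged sketch of publishing the view alongside the table (publication and object names are placeholders):

        EXEC sp_addmergearticle
            @publication   = N'MyPublication',
            @article       = N'MyView',
            @source_object = N'MyView',
            @type          = N'view schema only',
            @force_invalidate_snapshot = 1;

    @force_invalidate_snapshot = 1 is what lets the call succeed against an already-snapshotted publication, at the cost of requiring a new snapshot for the changed articles.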

  • Round date to fiscal year

    - by Dave Jarvis
    The following database view rounds the date back to the closest fiscal year (April 1st):

        CREATE OR REPLACE VIEW FISCAL_YEAR_VW AS
        SELECT
            CASE
                WHEN to_number(to_char( SYSDATE, 'MM' )) < 4 THEN
                    to_date('1-APR-'||to_char(add_months(SYSDATE, -12), 'YYYY'),
                            'dd-MON-yyyy')
                ELSE
                    to_date('1-APR-'||to_char(SYSDATE, 'YYYY'), 'dd-MON-yyyy')
            END AS fiscal_year
        FROM dual;

    This allows us to calculate the current fiscal year based on today's date. How can this calculation be simplified or optimized?
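
    One candidate simplification: shift the date back three months and truncate to the calendar year, which folds the CASE into a single expression with no string round-trips (a sketch; worth sanity-checking around the April 1 boundary):

        CREATE OR REPLACE VIEW FISCAL_YEAR_VW AS
        SELECT ADD_MONTHS(TRUNC(ADD_MONTHS(SYSDATE, -3), 'YYYY'), 3) AS fiscal_year
        FROM dual;

    For example, March 15th 2024 shifts to December 15th 2023, truncates to January 1st 2023, and lands on April 1st 2023; May 10th 2024 lands on April 1st 2024, matching the CASE version.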

  • How to define a query on an n-m table

    - by user559889
    Hi, I am having some trouble defining a query. I have a Product table and a Category table. A product can belong to multiple categories and vice versa, so there is also a Product-Category table. Now I want to select all products that belong to a certain category, but if the user does not provide a category, I want all products. I tried to create a query using a join, but this results in a product being selected multiple times if it belongs to multiple categories (in the case where no specific category is queried). What kind of query do I have to create? Thanks
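
    One shape that handles both cases without duplicates: an EXISTS test, which never multiplies rows the way a join does, plus a NULL check that disables the filter when no category is supplied (table and column names assumed):

        SELECT p.*
        FROM Product p
        WHERE :category_id IS NULL
           OR EXISTS (SELECT 1
                      FROM ProductCategory pc
                      WHERE pc.product_id  = p.id
                        AND pc.category_id = :category_id);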

  • C# : DBConnection.Open() timeout is too long.

    - by leo
    Hi, I'm trying to connect to a server that the user inputs. When the server doesn't exist, I'd like to give quick feedback to the end user so he can correct what he's typed. Is there any way to test whether a server exists before trying to connect? Thanks
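
    One lever here: the Connect Timeout connection-string keyword caps how long Open() blocks before throwing, so a bad server name fails fast. A sketch (values are placeholders; the default is 15 seconds):

        Server=userTypedServer;Database=myDb;Integrated Security=true;Connect Timeout=5

    Opening the connection inside a try/catch with a short timeout like this gives the user feedback within a few seconds rather than the full default wait.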
