Search Results

Search found 37457 results on 1499 pages for 'sql 2008 r2'.


  • What is your reporting tool of choice?

    - by jms
    Every project invariably needs some type of reporting functionality, anything from a foreach loop in your language of choice up to a full-blown BI platform. To get the job done, which tools, widgets, and platforms has the group used with success, frustration, or failure?

    Read the article

  • How to select a subset of results from a select statement

    - by Ankur
    I have a table that stores RDF triples:

        triples(triple_id, sub_id, pre_id, obj_id)

    The method I need to write will receive an array of numbers that correspond to pre_id values. I want to select all sub_id values that have a corresponding pre_id for all the pre_ids in the array that is passed in. For example, if I had a single pre_id value passed in (let's call it preId), I would do:

        select sub_id from triples where pre_id = preId;

    However, since I have multiple pre_id values, I want to keep iterating through the pre_id values and only keep the sub_id values corresponding to the "triples" records that have both. For example, imagine there are five records:

        triples(1, 34, 65, 23)
        triples(2, 31, 35, 28)
        triples(3, 32, 32, 19)
        triples(4, 12, 65, 28)
        triples(5, 76, 32, 34)

    If I pass in an array of pre_id values [65, 32], then I want to select the first, third, fourth and fifth records. What would I do for that?
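
    A common way to express this without iterating, shown as a sketch that assumes the question's triples table and that the caller expands the array into an IN list:

        select sub_id
        from triples
        where pre_id in (65, 32);

        -- or, if only sub_ids that have *every* pre_id in the array are wanted
        -- (the classic relational-division pattern):
        select sub_id
        from triples
        where pre_id in (65, 32)
        group by sub_id
        having count(distinct pre_id) = 2;   -- 2 = number of values passed in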

    Read the article

  • Query with multiple IN-statements but without the cartesian product

    - by Janne
    How could I make this kind of query, e.g. in MySQL:

        SELECT * FROM Table t
        WHERE t.a IN (1,2,3)
          AND t.b IN (4,5,6)
          AND t.c IN (7,8,9)
        ...

    so that the result would contain only the three rows:

        t.a | t.b | t.c
        ----+-----+----
          1 |   4 |   7
          2 |   5 |   8
          3 |   6 |   9

    The above query of course returns all the combinations of the values in the IN clauses, but I would like to get just the ones where the first elements of each tuple match, the second elements of each tuple match, and so on. Is there any efficient way to do this? By the way, is there some common term for this kind of query or concept? I'm having a hard time coming up with the question's title because I can't put this into words.
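
    One way to express this, as a sketch that assumes MySQL's row-constructor support; a single IN over tuples replaces the three separate IN clauses:

        SELECT *
        FROM Table t
        WHERE (t.a, t.b, t.c) IN ((1, 4, 7), (2, 5, 8), (3, 6, 9));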

    Read the article

  • Is it possible to modify the value of a record's primary key in Oracle when child records exist?

    - by Chris Farmer
    I have some Oracle tables that represent a parent-child relationship. They look something like this:

        create table Parent (
            parent_id varchar2(20) not null primary key
        );

        create table Child (
            child_id number not null primary key,
            parent_id varchar2(20) not null,
            constraint fk_parent_id foreign key (parent_id) references Parent (parent_id)
        );

    This is a live database and its schema was designed long ago under the assumption that the parent_id field would be static and unchanging for a given record. Now the rules have changed and we really would like to change the value of parent_id for some records. For example, I have these records:

        Parent:
        parent_id
        ---------
        ABC123

        Child:
        child_id  parent_id
        --------  ---------
        1         ABC123
        2         ABC123

    I want to modify ABC123 in both tables to something else. It's my understanding that one cannot write an Oracle update statement that will update both parent and child tables simultaneously, and given the FK constraint, I'm not sure how best to update my database. I am currently disabling the fk_parent_id constraint, updating each table independently, and then re-enabling the constraint. Is there a better, single-step way to update this content?
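
    One alternative to toggling the constraint, sketched under the assumption that the FK can be recreated as deferrable and using a hypothetical new key value 'XYZ789':

        -- one-time change: recreate the FK as deferrable
        alter table Child drop constraint fk_parent_id;
        alter table Child add constraint fk_parent_id
            foreign key (parent_id) references Parent (parent_id)
            deferrable initially deferred;

        -- then both updates can run inside a single transaction;
        -- the constraint is only checked at commit time
        update Parent set parent_id = 'XYZ789' where parent_id = 'ABC123';
        update Child  set parent_id = 'XYZ789' where parent_id = 'ABC123';
        commit;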

    Read the article

  • Optimize INSERT / UPDATE / DELETE operation

    - by clime
    I wonder if the following script can be optimized somehow. It writes a lot to disk because it deletes possibly up-to-date rows and reinserts them. I was thinking about applying something like "insert ... on duplicate key update" and found some possibilities for single-row updates, but I don't know how to apply it in the context of an INSERT INTO ... SELECT query.

        CREATE OR REPLACE FUNCTION update_member_search_index() RETURNS VOID AS $$
        DECLARE
            member_content_type_id INTEGER;
        BEGIN
            member_content_type_id := (SELECT id FROM django_content_type
                                       WHERE app_label = 'web' AND model = 'member');

            DELETE FROM watson_searchentry WHERE content_type_id = member_content_type_id;

            INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                            object_id_int, title, description, content,
                                            url, meta_encoded)
            SELECT 'default',
                   member_content_type_id,
                   web_member.id,
                   web_member.id,
                   web_member.name,
                   '',
                   web_user.email || ' ' || web_member.normalized_name || ' ' || web_country.name,
                   '',
                   '{}'
            FROM web_member
            INNER JOIN web_user ON (web_member.user_id = web_user.id)
            INNER JOIN web_country ON (web_member.country_id = web_country.id)
            WHERE web_user.is_active = TRUE;
        END;
        $$ LANGUAGE plpgsql;

    EDIT: Schemas of web_member, watson_searchentry, web_user, web_country: http://pastebin.com/3tRVPPVi. (content_type_id, object_id_int) is a unique pair in watson_searchentry, but at the moment the index is not present (there is no use for it). This script should be run at most once a day for full rebuilds of the search index.
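
    A sketch of the upsert route on PostgreSQL 9.5 or later; it assumes a unique index on (content_type_id, object_id_int) is added first, which the question notes does not exist yet, and it would replace the DELETE plus INSERT inside the same function:

        -- hypothetical unique index required by ON CONFLICT
        CREATE UNIQUE INDEX watson_searchentry_ct_obj
            ON watson_searchentry (content_type_id, object_id_int);

        INSERT INTO watson_searchentry (engine_slug, content_type_id, object_id,
                                        object_id_int, title, description, content,
                                        url, meta_encoded)
        SELECT 'default', member_content_type_id, m.id, m.id, m.name, '',
               u.email || ' ' || m.normalized_name || ' ' || c.name, '', '{}'
        FROM web_member m
        JOIN web_user u ON m.user_id = u.id
        JOIN web_country c ON m.country_id = c.id
        WHERE u.is_active = TRUE
        ON CONFLICT (content_type_id, object_id_int)
        DO UPDATE SET title   = EXCLUDED.title,
                      content = EXCLUDED.content;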

    Read the article

  • Custom sorting or ordering a table without resorting the whole shebang

    - by fuugus
    For ten years we've been using the same custom sorting on our tables. I'm wondering if there is another solution which involves fewer updates, especially since today we'd like to have a replication/publication date and wouldn't like to have our replication replicate unnecessary entries. I had a look into nested sets, but it doesn't seem to do the job for us.

    Base table:

        id | a_sort
        ---+-------
         1 |     10
         2 |     20
         3 |     30

    After inserting an entry at the second position with

        insert into table (a_sort) values (15)

    the table looks like:

        id | a_sort
        ---+-------
         1 |     10
         2 |     20
         3 |     30
         4 |     15

    Ordering the table with

        select * from table order by a_sort

    and resorting all the a_sort entries, updating at least id = (2, 3, 4), will of course produce the desired output:

        id | a_sort
        ---+-------
         1 |     10
         4 |     20
         2 |     30
         3 |     40

    The column names, the column count, datatypes, a possible join, possible triggers, and the way the resorting is done are irrelevant to the problem. Also, we've found some pretty neat ways to do this task fast. The only question is: how the heck can we reduce the updates in the db to 1 or 2 max? It seems like an awfully common problem. The captain obvious in me thought once "use an a_sort float(53) and insert using a fixed value of ordervaluefirstentry + abs(ordervaluefirstentry - ordervaluenextentry)/2", but this would only allow around 1040 "in between" entries, so never resorting seems a bit problematic ;)
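
    One direction worth sketching (an assumption-laden sketch with hypothetical names and MySQL-style syntax, not the questioner's actual setup): keep the midpoint idea but use a wide DECIMAL instead of float(53), so only the new row is written on insert and a renumbering pass runs only when two neighbours get too close:

        -- hypothetical schema change
        alter table base_table modify a_sort decimal(38,18) not null;

        -- inserting "between" the rows sorted at 10 and 20 touches exactly one row
        insert into base_table (a_sort) values ((10 + 20) / 2);   -- 15.000...

        -- occasional maintenance, only when the gap between two neighbours
        -- is exhausted: renumber the table once with evenly spaced values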

    Read the article

  • Broken count(*) after adding LEFT JOIN

    - by Iain Urquhart
    Since adding the LEFT JOIN to the query below, count(*) has been returning some strange values; it seems to have added the total rows returned in the query to the 'level':

        SELECT `n`.*, exp_channel_titles.*,
               round((`n`.`rgt` - `n`.`lft` - 1) / 2, 0) AS childs,
               count(*) - 1 + (`n`.`lft` > 1) + 1 AS level,
               ((min(`p`.`rgt`) - `n`.`rgt` - (`n`.`lft` > 1)) / 2) > 0 AS lower,
               (((`n`.`lft` - max(`p`.`lft`) > 1))) AS upper
        FROM `exp_node_tree_6` `n`
        LEFT JOIN `exp_channel_titles`
               ON (`n`.`entry_id` = `exp_channel_titles`.`entry_id`),
             `exp_node_tree_6` `p`,
             `exp_node_tree_6`
        WHERE `n`.`lft` BETWEEN `p`.`lft` AND `p`.`rgt`
          AND (`p`.`node_id` != `n`.`node_id` OR `n`.`lft` = 1)
        GROUP BY `n`.`node_id`
        ORDER BY `n`.`lft`

    I'm totally stumped... Thank you!
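
    A hedged guess at a fix, not a confirmed one: count only the ancestor rows from the `p` alias instead of every joined row, so the LEFT JOIN (and the extra, unconstrained `exp_node_tree_6` in the FROM list) cannot inflate the level. Only the level expression would change:

        count(DISTINCT `p`.`node_id`) - 1 + (`n`.`lft` > 1) + 1 AS level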

    Read the article

  • Database: Pipelined Functions

    - by Rachel
    I am new to the concept of pipelined functions. I have some questions from a database point of view: What actually is a pipelined function? What is the advantage of using one? What challenges are solved by using pipelined functions? Are there any optimization advantages to using them? Thanks.
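
    For illustration, a minimal Oracle sketch of a pipelined table function (hypothetical names); rows are streamed to the caller as they are piped instead of being built into a full collection first:

        CREATE TYPE num_tab AS TABLE OF NUMBER;
        /
        CREATE OR REPLACE FUNCTION first_n(p_n IN NUMBER)
            RETURN num_tab PIPELINED IS
        BEGIN
            FOR i IN 1 .. p_n LOOP
                PIPE ROW (i);   -- each row is handed back to the caller immediately
            END LOOP;
            RETURN;
        END;
        /
        -- consumed like a table:
        SELECT * FROM TABLE(first_n(5));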

    Read the article

  • How to copy from one column to another but with a different format

    - by Bob
    I have a table like this:

        Item    Model       Remarks
        -----------------------------------------
        A       10022009
        B       10032006
        C       05081997

    I need to copy the info from "Model" to "Remarks" with the following format:

        Item    Model       Remarks
        -----------------------------------------
        A       10022009    20090210
        B       10032006    20060310
        C       05081997    19970805

    Thanks
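
    A sketch of one way to do it, assuming Model is stored as a DDMMYYYY string, SQL Server-style string functions, and a hypothetical table name (the question names neither the table nor the RDBMS):

        UPDATE the_table
        SET Remarks = RIGHT(Model, 4) + SUBSTRING(Model, 3, 2) + LEFT(Model, 2);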

    Read the article

  • Question on different ways to link tables

    - by dotnetdev
    What is the difference between linking two tables so that the PK of one table is an FK in the other table, but that FK is not itself part of the primary key (so it does not have the gold key icon), and having the PK of one table also be a PK in the other table? Am I right to think that the second option is for a many-to-many relationship? Thanks
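
    For reference, a minimal SQL sketch with hypothetical table names: a plain FK column that is not part of the PK models a one-to-many relationship, while the many-to-many case the question asks about is normally modelled with a link table whose primary key is composed of two FKs:

        -- one-to-many: Orders references Customer, the FK is not part of the PK
        CREATE TABLE Orders (
            order_id    INT PRIMARY KEY,
            customer_id INT NOT NULL REFERENCES Customer (customer_id)
        );

        -- many-to-many: the PK is made up of two FKs
        CREATE TABLE StudentCourse (
            student_id INT NOT NULL REFERENCES Student (student_id),
            course_id  INT NOT NULL REFERENCES Course (course_id),
            PRIMARY KEY (student_id, course_id)
        );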

    Read the article

  • Hibernate Criteria API: get n random rows

    - by hadrien
    I can't figure out how to fetch n random rows from a criteria instance:

        Criteria criteria = session.createCriteria(Table.class);
        criteria.add(Restrictions.eq("fieldVariable", anyValue));
        ...

    Then what? I can't find any doc with the Criteria API. Does it mean I should use HQL instead? Thanx!

    EDIT: I get the number of rows by:

        int max = criteria.setProjection(Projections.rowCount()).uniqueResult();

    How do I fetch n random rows with indexes between 0 and max? Thx again!

    Read the article

  • How do I do a table join on two fields in my second table?

    - by Cannonade
    I have two tables:

        Messages - amongst other things, has a to_id and a from_id field.
        People   - has a corresponding person_id.

    I am trying to figure out how to do the following in a single LINQ query: give me all messages that have been sent to and from person x (idself). I have had a couple of cracks at this.

    Not quite right:

        MsgPeople = (from p in db.people
                     join m in db.messages on p.person_id equals m.from_id
                     where (m.from_id == idself || m.to_id == idself)
                     orderby p.name descending
                     select p).Distinct();

    This almost works, except I think it misses one case: "people who have never received a message, just sent one to me".

    How this works in my head: what I really need is something like

        join m in db.messages on (p.people_id equals m.from_id or p.people_id equals m.to_id)

    which would get me the subset of the people I am after, but it seems you can't do that. I have tried a few other options, like doing two joins:

        MsgPeople = (from p in db.people
                     join m in AllMessages on p.person_id equals m.from_id
                     join m2 in AllMessages on p.person_id equals m2.to_id
                     where (m2.from_id == idself || m.to_id == idself)
                     orderby p.name descending
                     select p).Distinct();

    but this gives me a subset of the results I need, I guess something to do with the order the joins are resolved in. My understanding of LINQ (and perhaps even database theory) is embarrassingly superficial, and I look forward to having some light shed on my problem.

    Read the article

  • Table in DB for generating primary keys?

    - by Sapphire
    Do you ever use a separate table for "generating" artificial primary keys for the DB (and why)? What I mean is having a table with two columns, table name and current ID, with which you could get a new "ID" for some table by simply locking the row with that table name, getting the current value of the key, incrementing it by one, and unlocking the row. Why would you prefer this over a standard integer identity column? P.S. The "idea" is from Fowler's Patterns of Enterprise Application Architecture, btw...
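
    For concreteness, a sketch of the key table being described (hypothetical names, assuming a database that supports SELECT ... FOR UPDATE); the row is locked, read, incremented and released in one transaction:

        CREATE TABLE key_generator (
            table_name VARCHAR(64) PRIMARY KEY,
            next_id    BIGINT NOT NULL
        );

        BEGIN;
        SELECT next_id FROM key_generator WHERE table_name = 'orders' FOR UPDATE;
        UPDATE key_generator SET next_id = next_id + 1 WHERE table_name = 'orders';
        COMMIT;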

    Read the article

  • How do I put data from multiple records into different columns?

    - by Bryan
    My two tables are titled analyzed and analyzedCopy3. I'm trying to put information from analyzedCopy3 into multiple columns in analyzed. Sample data from analyzedCopy3:

        readings_miu_id    OriginalCol       ColRSSIz
        110001366          Frederick Road    -108
        110001366          Steel Street
        110001366          Fifth Ave.
        110001508          Steel Street      -104

    What I want to do is put the top 3 OriginalCol, ColRSSIz combinations into columns that I have in the table analyzed. In analyzed there is only one record for each unique readings_miu_id. Any ideas? Thanks in advance.

    Additional info: by "top 3 OriginalCol, ColRSSIz combinations" I mean the first 3 combinations with the highest value in the ColRSSIz column. For any readings_miu_id there could be anywhere from 1 row of information to 6 rows of information, so at most I'm only wanting the top 3. If there are fewer than 3 rows for the readings_miu_id then the other columns need to be blank.

    Query that generates the table "analyzed":

        strSql4 = " SELECT readings_miu_id, Count(readings_miu_id) as NumberOfReads, First(PercentSuccessz) as PercentSuccess, First(Readingz) as Reading, First(MIUwindowz) as MIUwindow, First(SNz) as SN, First(Noisez) as Noise, First(RSSIz) as RSSI, First(ColRSSIz) as ColRSSI, First(MIURSSIz) as MIURSSI, First(Col1z) as Col1, First(Col1RSSIz) as Col1RSSI, First(Col2z) as Col2, First(Col2RSSIz) as Col2RSSI, First(Col3z) as Col3, First(Col3RSSIz) as Col3RSSI, First(Firmwarez) as Firmware, First(CFGDatez) as CFGDate, First(FreqCorrz) as FreqCorr, First(Activez) as Active, First(MeterTypez) as MeterType, First(OriginColz) as OriginCol, First(ColIDz) as ColID, First(Ownagez) as Ownage, First(SiteIDz) as SiteID, First(PremIDz) as PremID, First(prem_group1z) as prem_group1, First(prem_group2z) as prem_group2, First(ReadIDz) as ReadID, First(prem_addr1z) as prem_addr1 " & _
                  "INTO analyzed " & _
                  "FROM analyzedCopy2 " & _
                  "GROUP BY readings_miu_id, PremIDz; "
        DoCmd.SetWarnings False
        DoCmd.RunSQL strSql4
        DoCmd.SetWarnings True
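
    A sketch of how the top 3 rows per readings_miu_id could be pulled with a correlated subquery in Access/Jet-style SQL (hypothetical aliases; the three rows would still need a crosstab or ranking step to land in separate columns of analyzed, and TOP can return extra rows when ColRSSIz values tie):

        SELECT a.readings_miu_id, a.OriginalCol, a.ColRSSIz
        FROM analyzedCopy3 AS a
        WHERE a.ColRSSIz IN (
            SELECT TOP 3 b.ColRSSIz
            FROM analyzedCopy3 AS b
            WHERE b.readings_miu_id = a.readings_miu_id
            ORDER BY b.ColRSSIz DESC
        );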

    Read the article

  • Crash when checking BOF property of pessimistic locked ADO recordset

    - by Patrick
    Bit of an odd one for you: I've got two connections to a database, and on one I've opened a _RecordsetPtr with a pessimistic lock. I can no longer send an UPDATE command on the other connection, although I can send a SELECT command on the second connection and data is returned. If I use a read-only lock there are no problems. However, when I use a pessimistic lock on the second connection as well, I can check that State == adStateOpen, but the program hangs when I test the BOF property! If I don't test the BOF property and try to call moveNext on the second connection, the software hangs. If I do neither of these, I am able to access the data via the second connection, but trying to access the data from the first connection causes the software to hang. Has anyone seen anything similar, as I'm a bit stuck?

    EDIT: it wasn't hanging; someone had put a 30 minute timeout on the connection and I wasn't waiting that long while testing...

    Read the article

  • Postgres: Find table foreign keys (Faster alternative)

    - by Najera
    Is there a faster alternative to this? It takes almost 1 minute on our server.

        SELECT
            tc.constraint_name, tc.table_name, kcu.column_name,
            ccu.table_name AS foreign_table_name,
            ccu.column_name AS foreign_column_name
        FROM information_schema.table_constraints AS tc
        JOIN information_schema.key_column_usage AS kcu
          ON tc.constraint_name = kcu.constraint_name
        JOIN information_schema.constraint_column_usage AS ccu
          ON ccu.constraint_name = tc.constraint_name
        WHERE constraint_type = 'FOREIGN KEY'
          AND tc.table_name = 'mytable';

    Maybe using pg_class metadata? Thanks.
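
    A sketch of the pg_catalog route, which skips the information_schema views entirely (assumes PostgreSQL; written for single-column foreign keys, since multi-column keys would need the conkey/confkey positions paired up):

        SELECT con.conname   AS constraint_name,
               rel.relname   AS table_name,
               att.attname   AS column_name,
               frel.relname  AS foreign_table_name,
               fatt.attname  AS foreign_column_name
        FROM pg_constraint con
        JOIN pg_class rel      ON rel.oid  = con.conrelid
        JOIN pg_class frel     ON frel.oid = con.confrelid
        JOIN pg_attribute att  ON att.attrelid  = con.conrelid
                              AND att.attnum  = ANY (con.conkey)
        JOIN pg_attribute fatt ON fatt.attrelid = con.confrelid
                              AND fatt.attnum = ANY (con.confkey)
        WHERE con.contype = 'f'
          AND rel.relname = 'mytable';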

    Read the article

  • Replace always replacing null values

    - by Mike
    Why does

        left(FIELD, replace(nullif(charindex('-', FIELD), 0), null, len(FIELD)))

    always return null? The idea behind the query is that if charindex() returns 0, the result is converted into null, and the null is then converted into the length of the field. So if '-' does not exist, show the whole string. For some reason it makes every row equal null. Thank you.
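
    The reason is that REPLACE returns NULL whenever any of its arguments is NULL, so the len(FIELD) fallback never kicks in. A sketch of the usual pattern with COALESCE instead (assuming SQL Server, which the charindex/len functions suggest, and a hypothetical table name):

        SELECT LEFT(FIELD, COALESCE(NULLIF(CHARINDEX('-', FIELD), 0), LEN(FIELD)))
        FROM the_table;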

    Read the article

  • When running UPDATE ... datetime = NOW(); will all rows updated have the same date/time?

    - by Darryl Hein
    When you run something similar to:

        UPDATE table SET datetime = NOW();

    on a table with 1,000,000,000 records, and the query takes 10 seconds to run, will all the rows have the exact same time (minutes and seconds), or will they have different times? In other words, will the time be when the query started or when each row is updated? I'm running MySQL, but I'm thinking this applies to all DBs.
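
    For MySQL specifically, a quick sketch that shows the behaviour: NOW() is fixed at statement start, while SYSDATE() is evaluated as the statement runs:

        SELECT NOW(), SYSDATE(), SLEEP(2), NOW(), SYSDATE();
        -- the two NOW() values match; the second SYSDATE() is about 2 seconds later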

    Read the article

  • ORA-30926 error

    - by user1331181
    I am trying to execute the following merge statement, but it is showing me an ORA-30926 error:

        merge into test_output target_table
        USING (SELECT c.test_code, c.v_report_id, upper_score,
                      CASE WHEN c.test_code = 1 THEN b.mean_diff
                           WHEN c.test_code = 2 THEN b.norm_dist
                           WHEN c.test_code = 3 THEN b.ks_stats
                           WHEN c.test_code = 4 THEN b.ginni
                           WHEN c.test_code = 5 THEN b.auroc
                           WHEN c.test_code = 6 THEN b.info_stats
                           WHEN c.test_code = 7 THEN b.kl_stats
                      END val1
               FROM combined_approach b
               INNER JOIN test_output c
                       ON b.v_report_id = c.v_report_id
                      AND c.upper_score = b.band_code
               WHERE c.v_report_id = lv_report_id
               ORDER BY c.test_code) source_table
        ON (target_table.v_report_id = source_table.v_report_id
            AND target_table.v_report_id = lv_report_id)
        WHEN MATCHED THEN
            UPDATE SET target_table.upper_value = source_table.val1;
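
    ORA-30926 ("unable to get a stable set of rows in the source tables") usually means the USING query returns more than one source row per target row, which is likely here because the ON clause only matches on v_report_id. A hedged sketch of a tighter ON clause, assuming upper_score and test_code together identify the target row (the ORDER BY in the source query can also be dropped); only the ON clause changes:

        ON (    target_table.v_report_id = source_table.v_report_id
            AND target_table.upper_score = source_table.upper_score
            AND target_table.test_code   = source_table.test_code
            AND target_table.v_report_id = lv_report_id)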

    Read the article

  • Fetch multiple rows from SQL in PHP foreach item in array

    - by TrySpace
    I am trying to request an array of IDs, return each row with that ID, and push each row into an array $finalArray. But only the first result from the query is output, and on the second foreach iteration it skips the while loop. I have this working in another script, so I don't understand where it's going wrong. The $arrayItems array contains: "home, info".

        $finalArray = array();
        foreach ($arrayItems as $UID_get) {
            $Query = "SELECT * FROM items WHERE (uid = '" . cleanQuery($UID_get) . "' ) ORDER BY uid";
            if ($Result = $mysqli->query($Query)) {
                print_r($UID_get);
                echo "<BR><-><BR>";
                while ($Row = $Result->fetch_assoc()) {
                    array_push($finalArray, $Row);
                    print_r($finalArray);
                    echo "<BR><><BR>";
                }
            } else {
                echo '{ "returned" : "FAIL" }'; //. mysqli_connect_errno() . ' ' . mysqli_connect_error() . "<BR>";
            }
        }

    (cleanQuery is there to escape and strip slashes.) What I'm trying to get is an array of multiple rows (after I json_encode it), like:

        {"finalArray" : {
            "home": {"id":"1","created":"0000-00-00 00:00:00","css":"{ \"background-color\" : \"red\" }"} },
            { "info": {"id":"2","created":"0000-00-00 00:00:00","css":"{ \"background-color\" : \"blue\" }"} }
        }

    But that's after I get both, or more, results from the db. The print_r($UID_get); does print info, but then nothing... So why am I not getting the second row for info? I am essentially re-querying for each $arrayItem, right?

    Read the article
