Search Results

Search found 35019 results on 1401 pages for 'sql documentation'.

Page 609 of 1401

  • How to avoid Cartesian product in an INNER JOIN query?

    - by flhe
    I have 6 tables; let's call them a, b, c, d, e, f. I want to search all the columns (except the ID columns) of all tables for a certain word, let's say 'Joe'. What I did was INNER JOIN all the tables together and then use pattern matching (~*) to search the columns:

        INNER JOIN ... ON ... INNER JOIN ... ON ... etc.
        WHERE a.firstname ~* 'Joe' OR a.lastname ~* 'Joe'
           OR b.favorite_food ~* 'Joe' OR c.job ~* 'Joe' ... etc.

    The results are correct; I get all the columns I was looking for. But I also get a kind of Cartesian product: two or more rows with almost the same results. How can I avoid this? I want to have each row only once, since the results should appear in a web search.
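
    The duplicate rows come from the one-to-many joins, not from the search itself, so collapsing the per-table searches with UNION (which de-duplicates) is one way out. A minimal sketch, assuming hypothetical child tables b and c that reference a through an a_id column:

        SELECT a.id, a.firstname, a.lastname
        FROM a
        WHERE a.id IN (
            SELECT id   FROM a WHERE firstname ~* 'Joe' OR lastname ~* 'Joe'
            UNION
            SELECT a_id FROM b WHERE favorite_food ~* 'Joe'
            UNION
            SELECT a_id FROM c WHERE job ~* 'Joe'
            -- ... one branch per remaining table ...
        );

    Each branch scans a single table, UNION removes duplicate ids, and the outer query returns each matching person exactly once. SELECT DISTINCT over the original join is the quick fix, but it hides the duplication rather than avoiding it.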

    Read the article

  • Random Page Cost and Planning

    - by Dave Jarvis
    The query below extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively:

        CREATE UNIQUE INDEX measurement_001_stc_idx
          ON climate.measurement_001
          USING btree (station_id, taken, category_id);

    Reducing the server's random_page_cost setting from 2.0 to 1.1 produced a massive performance improvement for the given range (nearly an order of magnitude), because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), one problem remains: bumping the query's end date by a single year causes a full table scan:

        sc.taken_start >= '1900-01-01'::date AND
        sc.taken_end   <= '1997-12-31'::date AND

    How do I persuade PostgreSQL to use the indexes regardless of the number of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) The EXPLAIN ANALYSE results follow the query. Thank you!

    Query:

        SELECT
          extract(YEAR FROM m.taken) AS year,
          avg(m.amount) AS amount
        FROM
          climate.city c,
          climate.station s,
          climate.station_category sc,
          climate.measurement m
        WHERE
          c.id = 5182 AND
          earth_distance(
            ll_to_earth(c.latitude_decimal, c.longitude_decimal),
            ll_to_earth(s.latitude_decimal, s.longitude_decimal)) / 1000 <= 30 AND
          s.elevation BETWEEN 0 AND 3000 AND
          s.applicable = TRUE AND
          sc.station_id = s.id AND
          sc.category_id = 1 AND
          sc.taken_start >= '1900-01-01'::date AND
          sc.taken_end <= '1996-12-31'::date AND
          m.station_id = s.id AND
          m.taken BETWEEN sc.taken_start AND sc.taken_end AND
          m.category_id = sc.category_id
        GROUP BY extract(YEAR FROM m.taken)
        ORDER BY extract(YEAR FROM m.taken)

    1900 to 1996: Index

        Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort Memory: 32kB
          -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)
             -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)
                Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))
                -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)
                   Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                   -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)
                      Index Cond: (id = 5182)
                   -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)
                      -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)
                         Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))
                      -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)
                         Index Cond: (s.id = sc.station_id)
                         Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))
                -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)
                   -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)
                      Filter: (m.category_id = 1)
                   -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)
                      Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
                      -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)
                         Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
        Total runtime: 2269.264 ms

    1900 to 1997: Full Table Scan

        Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort Memory: 32kB
          -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)
             -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)
                Hash Cond: (m.station_id = sc.station_id)
                Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))
                -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)
                   -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)
                      Filter: (category_id = 1)
                   -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)
                      Filter: (category_id = 1)
                -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)
                   -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)
                      Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                      -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)
                         Index Cond: (id = 5182)
                      -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)
                         Hash Cond: (s.id = sc.station_id)
                         -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)
                            Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))
                         -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)
                            -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)
                               Recheck Cond: (category_id = 1)
                               Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))
                               -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)
                                  Index Cond: (category_id = 1)
        Total runtime: 86165.936 ms

    Read the article

  • Which Database to choose?

    - by Sundar
    I have the following criteria:
    - The database should be protected with a username and password. It should not be possible to copy the database file and use it elsewhere, as you can with MS Access.
    - There will be no central database server. Each machine will run its own database server locally and the user will initiate synchronization. The concept is inspired by distributed version control systems like Git, so it should have good replication support.
    - Strong consistency is not needed. Users will synchronize each other's databases when they need to.
    - In case of conflicts it should be possible to detect the conflict and present it (from the application) to the user for fixing.
    - Revisions of data would be nice if available, e.g. the entire history of changes to an invoice.
    I explored document-oriented databases and am inclined toward one, but I don't know which to choose. The database is small; it will not reach even 1 GB in the next few years (say 3 years). Please feel free to suggest any database you think might be suitable. Any pointers are highly appreciated. Thanks in advance.

    Read the article

  • Making a DateTime field in a database automatic?

    - by Mike
    I'm putting together a simple test database to learn MVC with. I want to add a DateTime field that shows when the record was CREATED:

        ID          = int
        Name        = char
        DateCreated = (datetime, datetime2..?)

    I have a feeling that this kind of DateTime capture can be done automatically, but that's all I have, a feeling. Can it be done? And if so, how? While we're on the subject: if I wanted to include another field that captured the DateTime of when the record was LAST UPDATED, how would I do that? I'm hoping not to do this manually. Many thanks, Mike
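
    It can be done: a DEFAULT constraint stamps the creation time on INSERT with no application code at all, while "last updated" has no declarative equivalent and usually gets an AFTER UPDATE trigger. A minimal sketch in SQL Server syntax (table and column names are hypothetical):

        CREATE TABLE Widget (
            ID          int IDENTITY(1,1) PRIMARY KEY,
            Name        varchar(50) NOT NULL,
            DateCreated datetime2 NOT NULL DEFAULT GETDATE(),  -- filled in automatically on INSERT
            DateUpdated datetime2 NULL
        );
        GO
        CREATE TRIGGER trg_Widget_Touch ON Widget AFTER UPDATE AS
        BEGIN
            SET NOCOUNT ON;
            -- stamp every row touched by the UPDATE; recursive triggers are
            -- off by default, so this UPDATE does not re-fire the trigger
            UPDATE w SET DateUpdated = GETDATE()
            FROM Widget w JOIN inserted i ON w.ID = i.ID;
        END;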

    Read the article

  • DataSet not getting all the resultant tables, i.e. multiple tables are not being filled in the DataSet

    - by Shantanu Gupta
    How do I fill multiple tables in a DataSet? I'm using a query that returns four tables, and at the front end I am trying to fill all four resultant tables into the DataSet. Here is my query; it is not complete, but it is just a reference for my question:

        SELECT * FROM tblxyz COMPUTE SUM(col1)

    Suppose this query returns more than one table; I want to fill all of them into my DataSet. I am filling the result like this:

        con.Open();
        adp.Fill(dset);
        con.Close();

    Now when I check this DataSet, it shows me that it has four tables, but only the first table's data is displayed in it; the other three don't even have a schema. What do I need to do to get the desired output?

    Read the article

  • T-SQL: replace a value on SELECT

    - by Zelter Ady
    I have a column (SERVICE_ID) in my table that can hold only 3 values: 0, 1 and 2. On SELECT, in the displayed result table, I'd like to replace those values with English words.

        SELECT client, SERVICE_ID FROM customers

    currently displays:

        | John    | 1 |
        | Mike    | 0 |
        | Jordan  | 1 |
        | Oren    | 2 |

    I'd like to change the query to get:

        | John    | QA          |
        | Mike    | development |
        | Jordan  | QA          |
        | Oren    | management  |

    Is there any way to do this using only the SELECT?
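
    A CASE expression does this in the SELECT itself; a minimal sketch, with the value-to-word mapping read off the sample output above:

        SELECT client,
               CASE SERVICE_ID
                    WHEN 0 THEN 'development'
                    WHEN 1 THEN 'QA'
                    WHEN 2 THEN 'management'
               END AS SERVICE_ID
        FROM customers;

    A small lookup table joined to customers is the longer-term alternative if the set of values ever grows.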

    Read the article

  • WebSQL to Google Maps markers

    - by Roy van Neden
    I am busy with my web application for a school project. It has two pages. The first page uploads the location (latitude and longitude), price, date and kind of fuel. It works, and I saved it with WebSQL (see screenshot). Now I want to get everything out of the web database and put each row as a marker on my Google Maps map. I already have my own location, but I don't know how to get everything from the database onto the map as markers. I'm using jQuery Mobile/HTML5/CSS/JavaScript only. Here is my attempt to collect the rows into an array:

        db.transaction(function (tx) {
            tx.executeSql(
                'SELECT brandstofsoort, literprijs, datum, latitude, longitude FROM brandstofstatus', [],
                function (tx, results) {
                    var locations = [];
                    for (var i = 0; i < results.rows.length; i++) {
                        var row = results.rows.item(i);
                        locations.push([row.brandstofsoort, row.literprijs, row.datum,
                                        row.latitude, row.longitude]);
                    }
                    // locations now holds one entry per row, ready to become markers
                }); // /tx.executeSql
        });     // /db.transaction

    Thanks in advance!

    Read the article

  • Formatting datetime values returned in a SELECT..FOR XML statement

    - by TelJanini
    Consider the following table:

        Orders
        OrderId  Date                     CustomerId
        1000     2012-06-05 20:03:12.000  51
        1001     2012-06-16 12:02:31.170  48
        1002     2012-06-18 19:45:16.000  33

    When I extract the Order data using FOR XML:

        SELECT OrderId    AS 'Order/@Order-Id',
               Date       AS 'Order/ShipDate',
               CustomerId AS 'Order/Customer'
        FROM Orders
        WHERE OrderId = 1000
        FOR XML PATH ('')

    I get the following result:

        <Order Order-Id="1000">
          <ShipDate>2010-02-20T16:03:12</ShipDate>
          <Customer>51</Customer>
        </Order>

    The problem is, the ShipDate value in the XML file needs to be in the format M/DD/YYYY H:mm:ss PM. How can I change the output of the ShipDate in the XML file to the desired format? Any help would be greatly appreciated!
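
    Formatting the date before FOR XML serializes it is one route. A minimal sketch, assuming SQL Server 2012 or later for FORMAT() (on older versions, CONVERT(varchar, Date, 22) comes close, though with a two-digit year):

        SELECT OrderId                              AS 'Order/@Order-Id',
               FORMAT(Date, 'M/dd/yyyy h:mm:ss tt') AS 'Order/ShipDate',  -- e.g. 6/05/2012 8:03:12 PM
               CustomerId                           AS 'Order/Customer'
        FROM Orders
        WHERE OrderId = 1000
        FOR XML PATH ('')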

    Read the article

  • Product Catalog Schema design

    - by FlySwat
    I'm building a proof-of-concept schema for a product catalog to possibly replace a very aged and crufty one we use. In our business, we sell both physical materials and services (one-time and recurring charges). The current catalog schema has each distinct category broken out into individual tables; while this is nicely normalized and performs well, it is fairly difficult to extend. Adding a new attribute to a particular product involves changing the table schema and back-populating old data. An idea I've been toying with is a base set of entity tables in third normal form containing the facts that are common among ALL products. On top of that, I'd build an Entity-Attribute-Value (EAV) schema that allows each entity type to be extended flexibly using just data, with no schema changes. Finally, I'd denormalize this data model into materialized views, one per entity type; these views are what the application would access. We also have many tables that contain business rules and compatibility rules. These would join against the base entity tables instead of the views. My big concerns here are: Performance - EAV schemas are flexible, but typically perform poorly; should I be concerned? More performance - denormalizing using materialized views may have some risks; I'm not positive on this yet. Complexity - while this schema is flexible and maintainable using just data, I worry that the complexity of the design might make future schema changes difficult. For those who have designed product catalogs for large-scale enterprises: am I going down the totally wrong path? Is there any good best-practice schema-design reading available for product catalogs?
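
    For concreteness, a minimal sketch of the proposed layout (all names hypothetical): a 3NF base table for the shared facts, plus an EAV side table for the per-type extensions.

        CREATE TABLE product (            -- 3NF base entity: facts common to ALL products
            product_id int PRIMARY KEY,
            sku        varchar(32)  NOT NULL,
            name       varchar(128) NOT NULL
        );

        CREATE TABLE product_attribute (  -- EAV extension: one row per attribute value
            product_id int REFERENCES product (product_id),
            attribute  varchar(64),
            value      varchar(256),
            PRIMARY KEY (product_id, attribute)
        );

    The per-type materialized view would then pivot the EAV rows back into ordinary columns; that pivot is where the performance risk concentrates, so it is the piece to prototype and measure first.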

    Read the article

  • How to ignore blank elements in linq query

    - by Maestro1024
    I have a LINQ query:

        var usersInDatabase = from user in licenseUserTable
                              where user.FirstName == first_name &&
                                    user.LastName == last_name
                              select user;

    But if first_name or last_name is blank when I get here, I still want to evaluate the other data item.

    Read the article

  • Trigger Code on a table in my ERP Database

    - by David Stein
    My ERP Vendor has the following trigger on a table:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TRIGGER [dbo].[SOItem_DeleteCheck]
           ON [dbo].[soitem]
           FOR DELETE
        AS
        BEGIN
            DECLARE @RecCnt int, @LogInfo varchar(256)
            SET @RecCnt = (SELECT COUNT(*) FROM deleted)
            IF @RecCnt > 150
            BEGIN
                RAISERROR (54010, 18, 1, 'SOItem') WITH LOG
                ROLLBACK TRANSACTION
            END
            SET @LogInfo = 'Deleting ' + LTRIM(STR(@RecCnt)) + ' Rows From SOItem'
            EXEC LogDeletes @LogInfo
        END
        GO

    This seems very inefficient to me. Doesn't SELECT COUNT(*) take longer than COUNT(specific field)?

    Read the article

  • Isn't INT more efficient than UNIQUEIDENTIFIER?

    - by ck
    I have a parent table and a child table where the columns that join them together are of the UNIQUEIDENTIFIER type. The child table has a clustered index on the column that joins it to the parent table (its PK, which is also clustered). I have created a copy of both of these tables but changed the relationship columns to INTs instead, and have rebuilt the indexes so that the tables are essentially the same structure and can be queried in the same way. When I query for a known 20 records from the parent table, pulling in all the related records from the child tables, I get identical query costs across both, i.e. a 50/50 cost split between the two batches. If this is true, then my giant project to change all of the tables like this appears to be pointless, other than speeding up inserts. Can anyone shed any light on the situation?

    Read the article

  • Optimize this MySQL query?

    - by HipHop-opatamus
    The following query takes FOREVER to execute (30+ hours on a MacBook with 4 GB of RAM). I'm looking for ways to make it run more efficiently; any thoughts are appreciated!

        CREATE TABLE fc AS
        SELECT threadid, title, body, date, userlogin
        FROM f
        WHERE pid NOT IN (SELECT pid FROM ft)
        ORDER BY date;

    (Table "f" is ~1 GB / 1,843,000 rows; table "ft" is 168 MB / 216,000 rows.)
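
    On MySQL of this vintage, NOT IN (subquery) is frequently executed as a dependent subquery that is re-evaluated per row of f; rewriting it as a LEFT JOIN anti-join is the usual cure. A minimal sketch, assuming ft.pid is (or can be) indexed:

        -- An index on the join column makes the anti-join cheap:
        -- CREATE INDEX idx_ft_pid ON ft (pid);
        CREATE TABLE fc AS
        SELECT f.threadid, f.title, f.body, f.date, f.userlogin
        FROM f
        LEFT JOIN ft ON ft.pid = f.pid
        WHERE ft.pid IS NULL      -- keep only rows of f with no match in ft
        ORDER BY f.date;

    One caveat: the two forms treat NULLs differently; NOT IN returns no rows at all if ft.pid ever contains a NULL, which can itself be a source of surprises.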

    Read the article

  • Unique Key in MySql

    - by Vinodtiru
    I have a table with four columns: Col1, Col2, Col3 and Col4. Col1, Col2 and Col3 are strings, and Col4 is an integer primary key with auto-increment. Now my requirement is a unique combination of Col2 and Col3. That is, given:

        INSERT INTO table (Col1, Col2, Col3) VALUES ('val1', 'val2', 'val3');
        INSERT INTO table (Col1, Col2, Col3) VALUES ('val4', 'val2', 'val3');

    the second statement has to throw an error, as the same combination of 'val2', 'val3' is already present in the table. But I can't make (Col2, Col3) the primary key, because I need an auto-increment column, and for that Col4 has to be primary. Please let me know an approach by which I can have both in my table. Any kind of help is appreciated. Thanks.
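
    An AUTO_INCREMENT primary key and a composite UNIQUE constraint coexist happily in MySQL; the second INSERT above would then fail with a duplicate-key error. A minimal sketch (string lengths assumed):

        CREATE TABLE t (
            Col4 INT AUTO_INCREMENT PRIMARY KEY,
            Col1 VARCHAR(50),
            Col2 VARCHAR(50),
            Col3 VARCHAR(50),
            UNIQUE KEY uq_col2_col3 (Col2, Col3)  -- enforces the (Col2, Col3) combination
        );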

    Read the article

  • Best Practices - Stored Procedure Logging

    - by hgulyan
    If you have a long-running stored procedure, do you somehow log its actions, or just wait for the message "Command(s) completed successfully."? I assume there can be plenty of solutions on this subject, but is there a best practice, i.e. a simple solution that is frequently used? EDIT: I've found an interesting link on this subject: http://weblogs.sqlteam.com/brettk/archive/2006/09/21/12391.aspx. The article describes using a log table, but there's an issue: the logging procedure must be executed outside of any transaction. I can't call that insert outside, because of the cursor I use, which inserts a line into that table on every row. Any ideas?
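
    One workaround for the inside-a-transaction problem, sketched below: in SQL Server, table variables are unaffected by ROLLBACK, so log rows collected in one survive the transaction and can be copied to the permanent log table afterwards (table and column names hypothetical):

        DECLARE @log TABLE (logged_at datetime DEFAULT GETDATE(), msg varchar(256));

        BEGIN TRAN;
            INSERT INTO @log (msg) VALUES ('step 1 done');
            -- ... long-running work, one INSERT INTO @log per step or cursor row ...
        ROLLBACK;   -- @log keeps its rows even though the transaction rolled back

        INSERT INTO dbo.ProcLog (logged_at, msg)
        SELECT logged_at, msg FROM @log;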

    Read the article

  • Override delete behaviour in NHibernate

    - by David
    Hi all. In my application users cannot truly delete records; rather, the record's Deleted field gets set to 1, which hides it from selects. I need to maintain this behaviour, and I'm looking into whether NHibernate is appropriate for my app. Can I override NHibernate's delete behaviour so that instead of issuing DELETE statements, it issues UPDATEs, as described above? I would obviously also need to override its SELECT behaviour to include an 'AND Deleted = 0' clause, or read from a view instead. I don't know. TIA for your advice. David

    Read the article

  • Complex MySQL ORDER BY not working

    - by Les Reynolds
    Here is the SELECT statement I'm using. The problem is with the sorting: as written below, it only sorts by t2.userdb_user_first_name, no matter whether I put that first or second. When I remove the first-name column, it sorts fine by the displayorder field/value pair, so I know that part works; somehow the combination of the two lets first_name override it. What I want is for the records to be sorted by displayorder first, and then by first_name within that.

        SELECT t1.userdb_id
        FROM default_en_userdbelements AS t1
        INNER JOIN default_en_userdb AS t2 ON t1.userdb_id = t2.userdb_id
        WHERE t1.userdbelements_field_name = 'newproject'
          AND t1.userdbelements_field_value = 'no'
          AND t2.userdb_user_first_name != 'Default'
        ORDER BY (t1.userdbelements_field_name = 'displayorder'
                  AND t1.userdbelements_field_value),
                 t2.userdb_user_first_name;

    Edit: here is what I want to accomplish. I want to list the users (that are not new projects) from the userdb table, along with the details about each user stored in userdbelements, sorted first by userdbelements.displayorder, then by userdb.first_name. I hope that makes sense. Thanks for the really quick help! Edit: Sorry for disappearing; here is some sample data.

        userdbelements (userdbelements_id, userdbelements_field_name, userdbelements_field_value, userdb_id)
        647  heat          1
        648  displayorder  1 - Sponsored  1
        645  condofees     1

        userdb (userdb_id, userdb_user_name, userdb_emailaddress, userdb_user_first_name, userdb_user_last_name)
        10  harbourlights  [email protected]  Harbourlights  1237 Northshore Blvd, Burlington
        11  harbourview    [email protected]    Harbourview    415 Locust Street, Burlington
        12  thebalmoral    [email protected]    The Balmoral   2075 & 2085 Amherst Heights Drive, Burlington
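
    A likely explanation: the WHERE clause pins t1 to the 'newproject' rows, so the boolean expression in the ORDER BY never sees a 'displayorder' row at all and evaluates the same for every result. A common fix in this name/value layout is to join the displayorder row in separately so it can be sorted on directly; a sketch, with the extra alias d being hypothetical:

        SELECT t1.userdb_id
        FROM default_en_userdbelements AS t1
        INNER JOIN default_en_userdb AS t2
                ON t1.userdb_id = t2.userdb_id
        LEFT JOIN default_en_userdbelements AS d          -- the displayorder row, per user
               ON d.userdb_id = t1.userdb_id
              AND d.userdbelements_field_name = 'displayorder'
        WHERE t1.userdbelements_field_name = 'newproject'
          AND t1.userdbelements_field_value = 'no'
          AND t2.userdb_user_first_name != 'Default'
        ORDER BY d.userdbelements_field_value,
                 t2.userdb_user_first_name;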

    Read the article

  • Use of HAVING in MySQL

    - by KBrian
    I have a table from which I need to select all persons whose first name is not unique, and that set should be selected only if, among the persons sharing a first name, all have different last names. Example:

        FirstN  LastN
        Bill    Clinton
        Bill    Cosby
        Bill    Maher
        Elvis   Presley
        Elvis   Presley
        Largo   Winch

    I want to obtain:

        FirstN  LastN
        Bill    Clinton

    or:

        FirstN  LastN
        Bill    Clinton
        Bill    Cosby
        Bill    Maher

    I tried this, but it does not return what I want:

        SELECT *
        FROM Ids
        GROUP BY FirstN, LastN
        HAVING (COUNT(FirstN) > 1 AND COUNT(LastN) = 1)

    [Edited my post after Aleandre P. Lavasseur remark]
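
    Grouping by first name alone makes both conditions expressible in one HAVING clause; a minimal sketch that returns the qualifying first names:

        SELECT FirstN
        FROM Ids
        GROUP BY FirstN
        HAVING COUNT(*) > 1                       -- the first name is not unique
           AND COUNT(*) = COUNT(DISTINCT LastN);  -- and every last name differs

    Joining that result back to Ids on FirstN yields the full rows (the second desired output); selecting MIN(LastN) per group instead gives the one-row-per-name variant.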

    Read the article

  • Normalization in plain English

    - by Yada
    I sort of understand the concept of database normalization but always have a hard time explaining it in plain English, especially in a job interview. I have read the Wikipedia post, but still find it hard to explain the concept to non-developers. "Design a database in a way that avoids duplicated data" is the first thing that comes to mind. Does anyone have a nice way to explain the concept of database normalization in plain English? And what are some nice examples to show the differences between first, second and third normal form? Say you go to a job interview and the interviewer asks: explain the concept of normalization and how you would go about designing a normalized database. What key points is the interviewer looking for?
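
    For the examples part of the question, a minimal before/after sketch (hypothetical tables) often lands better in an interview than the formal definitions:

        -- Unnormalized: the customer's city is repeated on every order,
        -- so it can be updated in one row and missed in another.
        CREATE TABLE orders_flat (
            order_id      int,
            customer_name varchar(50),
            customer_city varchar(50),
            product_name  varchar(50),
            product_price decimal(10,2)
        );

        -- Normalized (3NF): each fact is stored once, keyed by the thing it describes.
        CREATE TABLE customer (customer_id int PRIMARY KEY, name varchar(50), city varchar(50));
        CREATE TABLE product  (product_id  int PRIMARY KEY, name varchar(50), price decimal(10,2));
        CREATE TABLE orders   (order_id    int PRIMARY KEY,
                               customer_id int REFERENCES customer (customer_id),
                               product_id  int REFERENCES product (product_id));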

    Read the article

  • How do You Get a Specific Value From a System.Data.DataTable Object?

    - by Giffyguy
    I'm a low-level algorithm programmer, and databases are not really my thing, so this will be a n00b question if ever there was one. I'm running a simple SELECT query through our development team's DAO. The DAO returns a System.Data.DataTable object containing the results of the query. This is all working fine so far. The problem I have run into now: I need to pull a value out of one of the fields of the first row in the resulting DataTable, and I have no idea where to even start. Microsoft is so confusing about this! Arrrg! Any advice would be appreciated. I'm not providing any code samples, because I believe that context is unnecessary here. I'm assuming that all DataTable objects work the same way, no matter how you run your queries, and therefore any additional information would just make this more confusing for everyone.

    Read the article

  • Relational database data explorer / visualization?

    - by Ian Boyd
    Is there a tool that can let one browse relational data as a graph of connected nodes? For example, I'm faced with trying to cleanse some anomalous data, and I can start with two offending rows. In this particular example, the TransactionID should, by business rules, be unique in the table, but I find a transaction that violates that rule:

        SELECT * FROM LCTTrans WHERE TransactionID = 1075048

        LCTID      TransactionID
        =========  =============
        4358       1075048
        4359       1075048
        (2 row(s) affected)

    What I really want is to hunt down all the related data, to try to see which row is right. So this hypothetical software would start by showing me those two rows. Next, I want to see the transaction that is linked into this table. Now that transaction points to an MAL, so show me that. Now let's add the two LCTs that the transaction is "on"; a transaction can be on only one LCT, yet this one points to two. Okay computer, both of those LCTs point to an MAL and to the transaction that created them; show me those. Those last two transactions also point at an MAL, and they themselves point to an LCT; show me those. Okay, now are there any entries in LCTTrans that point to LCTs 4358 or 4359? ... And so on, and so on. Now, I did all this manually, running single SELECTs, copying and pasting uniqueidentifier keys and converting them into friendly id numbers so I could easily see the relationships. Is there software that can do this?

    Read the article
