Search Results

  • Optimizing Oracle query

    - by Omnipresent
    SELECT MAX(verification_id)
    FROM verification_table
    WHERE head = 687422
      AND mbr = 23102
      AND RTRIM(LTRIM(lname)) = '.iq bzw'
      AND TO_CHAR(dob, 'MM/DD/YYYY') = '08/10/2004'
      AND system_code = 'M';

    This query takes 153 seconds to run, and there are millions of rows in VERIFICATION_TABLE. I think the query is slow because of the functions in the WHERE clause. However, I need to LTRIM/RTRIM the column, and the date has to be matched in MM/DD/YYYY format. How can I optimize this query?
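
    (One common fix, sketched here rather than taken from the original thread: a function-based index whose expressions match the WHERE clause exactly. Table and column names come from the question; the index name is invented, and a reasonably recent Oracle version is assumed.)

    CREATE INDEX idx_verification_lookup ON verification_table
      (head, mbr, system_code, RTRIM(LTRIM(lname)), TO_CHAR(dob, 'MM/DD/YYYY'));

    Oracle can only use such an index when the query's expressions match the indexed expressions character for character, which the original statement already does, so no query change should be needed.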

    Read the article

  • How to determine a database server's IP via UDP in Java

    - by Wolfgang Lenhard
    Hi, I am writing clients in Java that store data on a database server. So far, the IP and port of the server have to be specified manually in the client's settings. I have heard that it is possible to automatically determine the IP of database servers via broadcast/multicast/UDP (I am not familiar with these concepts). Question: Is there a way to retrieve the IP addresses of all available database servers in the local network? I am working with the H2 database system so far. Bye, Wolfgang

    Read the article

  • Trigger Code on a table in my ERP Database

    - by David Stein
    My ERP vendor has the following trigger on a table:

    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TRIGGER [dbo].[SOItem_DeleteCheck] ON [dbo].[soitem]
    FOR DELETE
    AS
    BEGIN
        DECLARE @RecCnt int, @LogInfo varchar(256)
        SET @RecCnt = (SELECT COUNT(*) FROM deleted)
        IF @RecCnt > 150
        BEGIN
            RAISERROR (54010, 18, 1, 'SOItem') WITH LOG
            ROLLBACK TRANSACTION
        END
        SET @LogInfo = 'Deleting ' + LTRIM(STR(@RecCnt)) + ' Rows From SOItem'
        EXEC LogDeletes @LogInfo
    END
    GO

    This seems very inefficient to me. Doesn't SELECT COUNT(*) take longer than COUNT(specific field)?

    Read the article

  • Which Database to choose?

    - by Sundar
    I have the following criteria:

    1. The database should be protected with a username and password. It should not be possible to copy the database file and use it elsewhere, as one can with MS Access.
    2. There will be no central database server. Each machine will run its own database server locally, and the user will initiate synchronization. The concept is inspired by distributed version control systems like Git, so it should have good replication support. Strong consistency is not needed; users will synchronize each other's databases when they need to.
    3. In case of conflicts, it should be possible to find the conflict and present it (from the application) to the user for fixing.
    4. Revisions of data, if available, would be good, e.g. the entire history of changes to an invoice.

    I explored document-oriented databases and am inclined towards the same, but I don't know what to choose. The database is small; it will not reach even 1 GB in the next few years (say 3 years). Please feel free to suggest any database which you think might be suitable. Any pointers are highly appreciated. Thanks in advance.

    Read the article

  • Random Page Cost and Planning

    - by Dave Jarvis
    A query (see below) extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively:

    CREATE UNIQUE INDEX measurement_001_stc_idx
      ON climate.measurement_001
      USING btree (station_id, taken, category_id);

    Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 gave a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan:

    sc.taken_start >= '1900-01-01'::date AND
    sc.taken_end <= '1997-12-31'::date AND

    How do I persuade PostgreSQL to use the indexes regardless of the number of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you!

    Query:

    SELECT
      extract(YEAR FROM m.taken) AS year,
      avg(m.amount) AS amount
    FROM
      climate.city c,
      climate.station s,
      climate.station_category sc,
      climate.measurement m
    WHERE
      c.id = 5182 AND
      earth_distance(
        ll_to_earth(c.latitude_decimal, c.longitude_decimal),
        ll_to_earth(s.latitude_decimal, s.longitude_decimal)) / 1000 <= 30 AND
      s.elevation BETWEEN 0 AND 3000 AND
      s.applicable = TRUE AND
      sc.station_id = s.id AND
      sc.category_id = 1 AND
      sc.taken_start >= '1900-01-01'::date AND
      sc.taken_end <= '1996-12-31'::date AND
      m.station_id = s.id AND
      m.taken BETWEEN sc.taken_start AND sc.taken_end AND
      m.category_id = sc.category_id
    GROUP BY
      extract(YEAR FROM m.taken)
    ORDER BY
      extract(YEAR FROM m.taken)

    1900 to 1996: Index

    "Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)"
    " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))"
    " Sort Method: quicksort Memory: 32kB"
    " -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)"
    " -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)"
    " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))"
    " -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)"
    " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)"
    " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)"
    " Index Cond: (id = 5182)"
    " -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)"
    " -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)"
    " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))"
    " -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)"
    " Index Cond: (s.id = sc.station_id)"
    " Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))"
    " -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)"
    " -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)"
    " Filter: (m.category_id = 1)"
    " -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)"
    " Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))"
    " -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)"
    " Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))"
    "Total runtime: 2269.264 ms"

    1900 to 1997: Full Table Scan

    "Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)"
    " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))"
    " Sort Method: quicksort Memory: 32kB"
    " -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)"
    " -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)"
    " Hash Cond: (m.station_id = sc.station_id)"
    " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))"
    " -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)"
    " -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)"
    " Filter: (category_id = 1)"
    " -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)"
    " Filter: (category_id = 1)"
    " -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)"
    " -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)"
    " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)"
    " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)"
    " Index Cond: (id = 5182)"
    " -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)"
    " Hash Cond: (s.id = sc.station_id)"
    " -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)"
    " Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))"
    " -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)"
    " -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)"
    " Recheck Cond: (category_id = 1)"
    " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))"
    " -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)"
    " Index Cond: (category_id = 1)"
    "Total runtime: 86165.936 ms"

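    (Not from the original post, but one hedged way to tilt the planner back toward the index: the misestimate usually comes from the date-range selectivity, so raising the statistics target on those columns and re-analyzing can help. Column names are taken from the query above; the SET line is a diagnostic, not a production setting.)

    ALTER TABLE climate.station_category ALTER COLUMN taken_start SET STATISTICS 1000;
    ALTER TABLE climate.station_category ALTER COLUMN taken_end SET STATISTICS 1000;
    ANALYZE climate.station_category;

    -- Diagnostic only: confirms whether the index plan is actually cheaper.
    SET enable_seqscan = off;
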
    Read the article

  • How do You Get a Specific Value From a System.Data.DataTable Object?

    - by Giffyguy
    I'm a low-level algorithm programmer, and databases are not really my thing - so this'll be a n00b question if ever there was one. I'm running a simple SELECT query through our development team's DAO. The DAO returns a System.Data.DataTable object containing the results of the query. This is all working fine so far. The problem I have run into now: I need to pull a value out of one of the fields of the first row in the resulting DataTable - and I have no idea where to even start. Microsoft is so confusing about this! Arrrg! Any advice would be appreciated. I'm not providing any code samples, because I believe that context is unnecessary here. I'm assuming that all DataTable objects work the same way, no matter how you run your queries - and therefore any additional information would just make this more confusing for everyone.

    Read the article

  • MYSQL TRIGGER LOOP

    - by Lee
    Hey all, I am going through the painstaking process of sorting out someone else's code, so I decided to create a new database to sit alongside the old one and use triggers to transfer data between the two tables. Now I have an issue with looping: there is a trigger on each table to update the other, so once one table updates, it updates the other, but as both tables have triggers it will just loop, which will cause an issue. Is there a way to stop this from happening? Hope this makes sense and hope you can advise.
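
    (A sketch of a common workaround, not from the original thread: guard each trigger with a session variable so the mirror trigger recognizes a synchronization in progress and does nothing. Table and column names here are invented; the matching trigger on new_table checks the same @sync_guard before writing back to old_table.)

    DELIMITER $$

    CREATE TRIGGER old_table_sync AFTER UPDATE ON old_table
    FOR EACH ROW
    BEGIN
        -- Skip if the other table's trigger is already propagating a change.
        IF @sync_guard IS NULL THEN
            SET @sync_guard = 1;
            UPDATE new_table SET some_col = NEW.some_col WHERE id = NEW.id;
            SET @sync_guard = NULL;
        END IF;
    END$$

    DELIMITER ;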

    Read the article

  • dataset not getting all the resultant tables, i.e. multiple tables are not being displayed in the dataset

    - by Shantanu Gupta
    How do I fill multiple tables in a DataSet? I'm using a query that returns four tables. At the front end I am trying to fill all four resultant tables into the DataSet. Here is my query. The query is not complete, but it is just a reference for my question:

    Select * from tblxyz compute sum(col1)

    Suppose this query returns more than one table; I want to fill all the tables into my dataset. I am filling the result like this:

    con.open();
    adp.fill(dset);
    con.close();

    Now when I check this dataset, it shows me that it has four tables, but only the first table's data is displayed in it. The other three don't even have a schema. What do I need to do to get the desired output?
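
    (One hedged possibility: COMPUTE emits its summary rows in a nonstandard shape that data adapters often cannot map to DataTables. Replacing the COMPUTE clause with a batch of separate SELECT statements gives one ordinary result set per statement, and a single Fill call then maps them to Table, Table1, Table2, and so on. A sketch against the table named in the question:)

    SELECT * FROM tblxyz;
    SELECT SUM(col1) AS col1_total FROM tblxyz;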

    Read the article

  • Speed up a SQL query to MySQL?

    - by fayer
    In my MySQL database I've got the geonames database, containing all countries, states and cities. I am using this to create a cascading menu so the user can select where he is from: country - state - county - city. But the main problem is that the query searches through all 7 million rows in that table each time I want to get the list of child rows, and that takes a while, 10-15 seconds. I wonder how I could speed this up: caching? table views? reorganizing the table structure somehow? And most important, how do I do these things? Are there good tutorials you could link me to? I appreciate all help and feedback discussing smart ways of handling this issue!
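
    (The post doesn't show the schema, so this is an assumption-laden sketch: if a self-referencing parent_id column links each row to its parent region, an index matching the cascading lookup turns each menu level into a small index range scan instead of a 7-million-row scan. Column names are hypothetical.)

    ALTER TABLE geonames ADD INDEX idx_parent_name (parent_id, name);

    -- One cascading-menu level: children of the region the user just picked.
    SELECT id, name
    FROM geonames
    WHERE parent_id = 123
    ORDER BY name;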

    Read the article

  • WebSQL to Google Maps markers

    - by Roy van Neden
    I am busy with my web application for a school project. It has two pages. The first page uploads the location (latitude and longitude), price, date and kind of fuel. It works, and I saved it with WebSQL (see screenshot). Now I want to get everything out of the web database and put each row as a marker on my Google Maps card. I already have my own location, but I don't know how to get everything from the database onto the map as markers. I'm using jQuery Mobile/HTML5/CSS/JavaScript only. Code to put it in an array or something else that will work:

    db.transaction(function (tx) {
        tx.executeSql('SELECT brandstofsoort, literprijs, datum, latitude, longitude FROM brandstofstatus', [],
            function (tx, results) {
                var lengte = results.rows.length, i, row, locations = [];
                for (i = 0; i < lengte; i++) {
                    row = results.rows.item(i);
                    // collect one [fuel, price, date, lat, lng] entry per stored row
                    locations.push([row.brandstofsoort, row.literprijs, row.datum, row.latitude, row.longitude]);
                } // /for loop
            }); // /tx.executeSql
    }); // /db.transaction

    Thanks in advance!

    Read the article

  • Filter entities that match all pairs

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. For example, my table structures look like this:

    Person:
    id | name
    1  | John Doe
    2  | Jane Roe
    3  | John Smith

    Attribute:
    id | attr_name
    1  | Sex
    2  | Eye Color

    ValidValue:
    id | attr_id | value_name
    1  | 1       | Male
    2  | 1       | Female
    3  | 2       | Blue
    4  | 2       | Green
    5  | 2       | Brown

    PersonAttributes:
    id | person_id | attr_id | value_id
    1  | 1         | 1       | 1
    2  | 1         | 2       | 3
    3  | 2         | 1       | 2
    4  | 2         | 2       | 4
    5  | 3         | 1       | 1
    6  | 3         | 2       | 4

    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there?
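
    (In SQL terms this is relational division, and since each value_id already implies its attr_id, a grouped IN list is enough. A sketch in plain SQL rather than the JPA form, though it translates directly to JPQL or a criteria query:)

    -- Persons matching ALL requested values, e.g. Female (2) and Green (4).
    -- The HAVING count must equal the number of values requested.
    SELECT p.id, p.name
    FROM Person p
    JOIN PersonAttributes pa ON pa.person_id = p.id
    WHERE pa.value_id IN (2, 4)
    GROUP BY p.id, p.name
    HAVING COUNT(DISTINCT pa.value_id) = 2;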

    Read the article

  • Normalization in plain English

    - by Yada
    I sort of understand the concept of database normalization but always have a hard time explaining it in plain English, especially for a job interview. I have read the Wikipedia post, but still find it hard to explain the concept to non-developers. "Design a database in a way not to get duplicated data" is the first thing that comes to mind. Does anyone have a nice way to explain the concept of database normalization in plain English? And what are some nice examples to show the differences between first, second and third normal forms? Say you go to a job interview and the person asks: explain the concept of normalization and how you would go about designing a normalized database. What key points are interviewers looking for?

    Read the article

  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ):

    SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind,
           mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt,
           mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code,
           mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level,
           mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin,
           mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name,
           mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired,
           mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type,
           mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name,
           mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time,
           mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id,
           mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt,
           mbr_user_idn, mbr_wgt, mbr_work_status_idn
    FROM (SELECT /*+ FIRST_ROWS(1) */
                 mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind,
                 mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt,
                 mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code,
                 mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level,
                 mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin,
                 mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name,
                 mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired,
                 mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type,
                 mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name,
                 mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time,
                 mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id,
                 mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt,
                 mbr_user_idn, mbr_wgt, mbr_work_status_idn,
                 ROWNUM AS ora_rn
          FROM (SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt,
                       mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind,
                       mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member, mbr.employment_start_dt AS mbr_employment_start_dt,
                       mbr.employment_term_dt AS mbr_employment_term_dt, mbr.entity_active AS mbr_entity_active,
                       mbr.ethnicity_idn AS mbr_ethnicity_idn, mbr.general_health_status_code AS mbr_general_health_status_code,
                       mbr.hand_dominant_code AS mbr_hand_dominant_code, mbr.hgt_feet AS mbr_hgt_feet,
                       mbr.hgt_inches AS mbr_hgt_inches, mbr.highest_edu_level AS mbr_highest_edu_level,
                       mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id,
                       mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin,
                       mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip,
                       mbr.lmbr_first_name AS mbr_lmbr_first_name, mbr.lmbr_last_name AS mbr_lmbr_last_name,
                       mbr.marital_status_cd AS mbr_marital_status_cd, mbr.mbr_birth_dt AS mbr_mbr_birth_dt,
                       mbr.mbr_death_dt AS mbr_mbr_death_dt, mbr.mbr_expired AS mbr_mbr_expired,
                       mbr.mbr_first_name AS mbr_mbr_first_name, mbr.mbr_gender_cd AS mbr_mbr_gender_cd,
                       mbr.mbr_idn AS mbr_mbr_idn, mbr.mbr_ins_type AS mbr_mbr_ins_type,
                       mbr.mbr_isreadonly AS mbr_mbr_isreadonly, mbr.mbr_last_name AS mbr_mbr_last_name,
                       mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name,
                       mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id,
                       mbr.preferred_am_pm AS mbr_preferred_am_pm, mbr.preferred_time AS mbr_preferred_time,
                       mbr.prv_innetwork AS mbr_prv_innetwork, mbr.rep_addr_idn AS mbr_rep_addr_idn,
                       mbr.rep_name AS mbr_rep_name, mbr.rp_mbr_id AS mbr_rp_mbr_id,
                       mbr.same_mbr_ins AS mbr_same_mbr_ins, mbr.special_needs_cd AS mbr_special_needs_cd,
                       mbr.timezone AS mbr_timezone, mbr.upd_dt AS mbr_upd_dt,
                       mbr.user_idn AS mbr_user_idn, mbr.wgt AS mbr_wgt,
                       mbr.work_status_idn AS mbr_work_status_idn
                FROM mbr
                JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn
                WHERE mbr_identfn.mbr_idn = mbr.mbr_idn
                  AND mbr_identfn.identfd_type = :identfd_type_1
                  AND mbr_identfn.identfd_number = :identfd_number_1
                  AND mbr_identfn.entity_active = :entity_active_1)
          WHERE ROWNUM <= :ROWNUM_1)
    WHERE ora_rn > :ora_rn_1

    call     count    cpu      elapsed    disk   query       current  rows
    -------  -------  -------  ---------  -----  ----------  -------  -----
    Parse    9936     0.46     0.49       0      0           0        0
    Execute  9936     0.60     0.59       0      0           0        0
    Fetch    9936     329.87   404.00     0      136966922   0        0
    -------  -------  -------  ---------  -----  ----------  -------  -----
    total    29808    330.94   405.09     0      136966922   0        0

    Misses in library cache during parse: 0
    Optimizer mode: FIRST_ROWS
    Parsing user id: 36 (JIVA_DEV)

    Rows     Row Source Operation
    -------  ---------------------------------------------------
          0  VIEW (cr=102 pr=0 pw=0 time=2180 us)
          0   COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us)
          0    NESTED LOOPS (cr=102 pr=0 pw=0 time=2152 us)
          0     INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us)(object id 341053)
          0     TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us)
          0      INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us)(object id 334044)

    Rows     Execution Plan
    -------  ---------------------------------------------------
          0  SELECT STATEMENT MODE: HINT: FIRST_ROWS
          0   VIEW
          0    COUNT (STOPKEY)
          0     NESTED LOOPS
          0      INDEX MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE))
          0      TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE)
          0       INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE))

    Based on my reading of Oracle's documentation on skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first column of this index is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query?

    EDIT: I should also point out that the query's WHERE clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.
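
    (A hedged sketch of the usual remedy, assuming IDX_MBR_IDENTFN leads with mbr_idn, which the posted plan suggests but the post doesn't confirm: a skip scan generally means the predicates don't constrain the index's leading column, so an index that leads with the equality-filtered columns lets Oracle do an ordinary range scan instead. The index name below is invented.)

    CREATE INDEX idx_mbr_identfn_lookup
        ON mbr_identfn (identfd_type, identfd_number, entity_active, mbr_idn);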

    Read the article

  • performance issue in a select query from a single table

    - by daedlus
    Hi, I have a table as below:

    dbo.UserLogs
    --------------------------------------
    Id | UserId | Date | Name | P1 | Dirty
    --------------------------------------

    There can be several records per UserId (even in the millions). I have a clustered index on the Date column and query this table very frequently over time ranges. The column 'Dirty' is non-nullable and can take only 0 or 1, so I have no indexes on 'Dirty'. I have several million records in this table, and in one particular case in my application I need to query it to get all UserId values that have at least one record marked dirty. I tried this query:

    select distinct(UserId) from UserLogs where Dirty=1

    I have 10 million records in total, and this takes about 10 minutes to run; I want it to run much faster than this. (I am able to query this table on the Date column in less than a minute.) Any comments/suggestions are welcome. My environment: 64-bit, Sybase 15.0.3, Linux.
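
    (A hedged sketch of the standard fix: an index with Dirty leading restricts the scan to the dirty slice, and UserId as the second key makes it covering for this query, so the base table is never touched. Names come from the question; worth verifying against the Sybase query plan.)

    CREATE INDEX idx_userlogs_dirty_user ON dbo.UserLogs (Dirty, UserId)

    SELECT DISTINCT UserId
    FROM dbo.UserLogs
    WHERE Dirty = 1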

    Read the article

  • Where is the future of databases?

    - by Danny
    I'm a bit frustrated with my MySQL database at the moment, so I've been thinking about all the things I'd like to see in the database of the future. But I thought it would be fun to hear other people's thoughts too--I'm not a pro by any means.

    Read the article

  • How to avoid Cartesian product in an INNER JOIN query?

    - by flhe
    I have 6 tables; let's call them a, b, c, d, e, f. Now I want to search all the columns (except the ID columns) of all tables for a certain word, let's say 'Joe'. What I did was make INNER JOINs over all the tables and then use LIKE to search the columns:

    INNER JOIN ... ON
    INNER JOIN ... ON
    .......etc.
    WHERE a.firstname ~* 'Joe'
       OR a.lastname ~* 'Joe'
       OR b.favorite_food ~* 'Joe'
       OR c.job ~* 'Joe'
    .......etc.

    The results are correct; I get all the columns I was looking for. But I also get a kind of Cartesian product: I get 2 or more lines with almost the same results. How can I avoid this? I want to have each line only once, since the results should appear in a web search.
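
    (One hedged way around the row multiplication, with invented join columns since the post doesn't show the schema: keep table a as the driving table and probe the related tables with EXISTS, so multiple matching child rows cannot multiply the output rows.)

    SELECT a.id, a.firstname, a.lastname
    FROM a
    WHERE a.firstname ~* 'Joe'
       OR a.lastname ~* 'Joe'
       OR EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id AND b.favorite_food ~* 'Joe')
       OR EXISTS (SELECT 1 FROM c WHERE c.a_id = a.id AND c.job ~* 'Joe');
    -- ...one EXISTS per remaining table (d, e, f), each probing its own columns.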

    Read the article

  • view to select specific period or latest when null

    - by edosoft
    Hi, I have a product table which simplifies to this:

    create table product(
        id int primary key identity,
        productid int,
        year int,
        quarter int,
        price money
    )

    and some sample data:

    insert into product select 11, 2010, 1, 1.11
    insert into product select 11, 2010, 2, 2.11
    insert into product select 11, 2010, 3, 3.11
    insert into product select 12, 2010, 1, 1.12
    insert into product select 12, 2010, 2, 2.12
    insert into product select 13, 2010, 1, 1.13

    Prices can change each quarter, but not all products get a new price each quarter. Now I could duplicate the data each quarter, keeping the price the same, but I'd rather use a view. How can I create a view that can be used to return prices for (for example) quarter 2? I've written this to return the current (= latest) price:

    CREATE VIEW vwCurrentPrices AS
    SELECT * FROM (
        SELECT *,
            ROW_NUMBER() OVER (PARTITION BY productid ORDER BY year DESC, quarter DESC) AS Ranking
        FROM product
    ) p
    WHERE p.Ranking = 1

    I'd like to create a view so I can use queries like:

    select * from vwProduct where quarter = 2
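
    (A view can't take the quarter as a parameter, so here is a hedged sketch using an inline table-valued function instead; the function name is invented, and the logic mirrors the ROW_NUMBER view above: the price in effect for a quarter is the latest row at or before it.)

    CREATE FUNCTION dbo.fnPricesAsOf (@year int, @quarter int)
    RETURNS TABLE
    AS RETURN
    (
        SELECT id, productid, year, quarter, price
        FROM (
            SELECT *,
                ROW_NUMBER() OVER (PARTITION BY productid
                                   ORDER BY year DESC, quarter DESC) AS Ranking
            FROM product
            WHERE year < @year OR (year = @year AND quarter <= @quarter)
        ) p
        WHERE p.Ranking = 1
    )
    GO

    -- Usage: prices in effect in 2010 Q2; product 13 falls back to its Q1 price.
    SELECT * FROM dbo.fnPricesAsOf(2010, 2)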

    Read the article

  • What is a columnar database?

    - by Raj More
    I have been working with warehousing for a while now. I am intrigued by columnar databases and the speed they have to offer for data retrieval. I have a multi-part question:

    1. How do columnar databases work?
    2. How do they differ from relational databases?
    3. Is there a trial version of a columnar database I can install to play around with? (I am on Windows 7)

    Read the article

  • complex MySQL Order by not working

    - by Les Reynolds
    Here is the SELECT statement I'm using. The problem happens with the sorting. As written below, it only sorts by t2.userdb_user_first_name; it doesn't matter if I put that first or second. When I remove that, it sorts just fine by the displayorder field/value pair. So I know that part is working, but somehow the combination of the two causes first_name to override it. What I want is for the records to be sorted by displayorder first, and then by first_name within that.

    SELECT t1.userdb_id
    FROM default_en_userdbelements as t1
    INNER JOIN default_en_userdb AS t2 ON t1.userdb_id = t2.userdb_id
    WHERE t1.userdbelements_field_name = 'newproject'
      AND t1.userdbelements_field_value = 'no'
      AND t2.userdb_user_first_name != 'Default'
    ORDER BY (t1.userdbelements_field_name = 'displayorder' AND t1.userdbelements_field_value),
             t2.userdb_user_first_name;

    Edit: here is what I want to accomplish. I want to list the users (that are not new projects) from the userdb table, along with the details about the users stored in userdbelements. And I want that to be sorted first by userdbelements.displayorder, then by userdb.first_name. I hope that makes sense? Thanks for the really quick help!

    Edit: Sorry for disappearing, here is some sample data.

    userdbelements:
    userdbelements_id | userdbelements_field_name | userdbelements_field_value | userdb_id
    647               | heat                      | 1                          |
    648               | displayorder              | 1 - Sponsored              | 1
    645               | condofees                 | 1                          |

    userdb:
    userdb_id | userdb_user_name | userdb_emailaddress      | userdb_user_first_name | userdb_user_last_name
    10        | harbourlights    | [email protected]  | Harbourlights          | 1237 Northshore Blvd, Burlington
    11        | harbourview      | [email protected]    | Harbourview            | 415 Locust Street, Burlington
    12        | thebalmoral      | [email protected]     | The Balmoral           | 2075 & 2085 Amherst Heights Drive, Burlington
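
    (The ORDER BY as written evaluates a boolean per row, so within a single result row there is nothing for displayorder to sort on. A hedged sketch of the usual EAV fix, using only column names from the question: join the displayorder row in separately and sort on its value.)

    SELECT t1.userdb_id
    FROM default_en_userdbelements AS t1
    INNER JOIN default_en_userdb AS t2
            ON t1.userdb_id = t2.userdb_id
    LEFT JOIN default_en_userdbelements AS d
            ON d.userdb_id = t2.userdb_id
           AND d.userdbelements_field_name = 'displayorder'
    WHERE t1.userdbelements_field_name = 'newproject'
      AND t1.userdbelements_field_value = 'no'
      AND t2.userdb_user_first_name != 'Default'
    ORDER BY d.userdbelements_field_value, t2.userdb_user_first_name;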

    Read the article

  • [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified - works in the IDE, fails from a runnable jar

    - by Matt
    Hello, I am developing a Java app (with the ODBC bridge - forgive me - the only Paradox driver I have been able to obtain is the Microsoft ODBC driver) which works fine in Eclipse (and NetBeans), connecting to and obtaining data from an ancient Paradox 5.x database. So long as it is run from inside my IDE, it compiles and runs flawlessly. When I export it to a runnable jar, suddenly

    [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

    occurs. The jar is being run on the same box as my development IDE, so I am confused about the cause. It is being run via console from a user account, as per the IDE. My connection string is:

    jdbc:odbc:Driver={Microsoft Paradox Driver (*.db )};DriverID=538;Fil=Paradox 5.X;DefaultDir=C:\paradox\database\location\

    obtained from connectionstrings.com - and, as mentioned before, working fine when run from the IDE. The above seems to 'magically' create its own connection, avoiding the setup of a DSN - I am unsure quite how it does this - but it works. The only other thing I can think of that might be pertinent is that my PC runs a 64-bit OS (Windows Server 2008). Please help; any suggestions or comments will be greatly appreciated. Thanks, Matt

    Read the article