Search Results

Search found 31931 results on 1278 pages for 'sql statement'.

Page 624/1278 | < Previous Page | 620 621 622 623 624 625 626 627 628 629 630 631  | Next Page >

  • Which Database to choose?

    - by Sundar
    I have the following criteria: The database should be protected with a username and password; it should not be possible to copy the database file and use it elsewhere, as one can with MS Access. There will be no central database server: each machine will run its own database server locally, and users will initiate synchronization. The concept is inspired by distributed version control systems like Git, so it should have good replication support. Strong consistency is not needed; users will synchronize each other's databases when they need to. In case of conflicts, it should be possible to detect the conflict and present it (from the application) to the user for fixing. Revision history, if available, would be good, e.g. the entire history of changes to an invoice. I have explored document-oriented databases and am inclined toward them, but I don't know which to choose. The database is small; it will not reach even 1 GB in the next few years (say three years). Please feel free to suggest any database which you think might be suitable. Any pointers are highly appreciated. Thanks in advance.

    Read the article

  • Random Page Cost and Planning

    - by Dave Jarvis
    The query below extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively:

      CREATE UNIQUE INDEX measurement_001_stc_idx
        ON climate.measurement_001
        USING btree (station_id, taken, category_id);

    Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 gave a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan:

      sc.taken_start >= '1900-01-01'::date AND
      sc.taken_end <= '1997-12-31'::date AND

    How do I persuade PostgreSQL to use the indexes regardless of the number of years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you!

    Query:

      SELECT
        extract(YEAR FROM m.taken) AS year,
        avg(m.amount) AS amount
      FROM
        climate.city c,
        climate.station s,
        climate.station_category sc,
        climate.measurement m
      WHERE
        c.id = 5182 AND
        earth_distance(
          ll_to_earth(c.latitude_decimal, c.longitude_decimal),
          ll_to_earth(s.latitude_decimal, s.longitude_decimal)) / 1000 <= 30 AND
        s.elevation BETWEEN 0 AND 3000 AND
        s.applicable = TRUE AND
        sc.station_id = s.id AND
        sc.category_id = 1 AND
        sc.taken_start >= '1900-01-01'::date AND
        sc.taken_end <= '1996-12-31'::date AND
        m.station_id = s.id AND
        m.taken BETWEEN sc.taken_start AND sc.taken_end AND
        m.category_id = sc.category_id
      GROUP BY
        extract(YEAR FROM m.taken)
      ORDER BY
        extract(YEAR FROM m.taken)

    1900 to 1996: Index

      "Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)"
      " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))"
      " Sort Method: quicksort Memory: 32kB"
      " -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)"
      " -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)"
      " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))"
      " -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)"
      " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)"
      " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)"
      " Index Cond: (id = 5182)"
      " -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)"
      " -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)"
      " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))"
      " -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)"
      " Index Cond: (s.id = sc.station_id)"
      " Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))"
      " -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)"
      " -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)"
      " Filter: (m.category_id = 1)"
      " -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)"
      " Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))"
      " -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)"
      " Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))"
      "Total runtime: 2269.264 ms"

    1900 to 1997: Full Table Scan

      "Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)"
      " Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))"
      " Sort Method: quicksort Memory: 32kB"
      " -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)"
      " -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)"
      " Hash Cond: (m.station_id = sc.station_id)"
      " Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))"
      " -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)"
      " -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)"
      " Filter: (category_id = 1)"
      " -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)"
      " Filter: (category_id = 1)"
      " -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)"
      " -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)"
      " Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)"
      " -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)"
      " Index Cond: (id = 5182)"
      " -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)"
      " Hash Cond: (s.id = sc.station_id)"
      " -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)"
      " Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))"
      " -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)"
      " -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)"
      " Recheck Cond: (category_id = 1)"
      " Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))"
      " -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)"
      " Index Cond: (category_id = 1)"
      "Total runtime: 86165.936 ms"

    Read the article

  • Where is the future of databases?

    - by Danny
    I'm a bit frustrated with my MySQL database at the moment, so I've been thinking about all the things I'd like to see in the database of the future. But I thought it would be fun to hear other people's thoughts too--I'm not a pro by any means.

    Read the article

  • speed up a sql query to mysql?

    - by fayer
    In my MySQL database I've got the GeoNames database, containing all countries, states and cities. I am using this to create a cascading menu so the user can select where he is from: country - state - county - city. But the main problem is that the query searches through all 7 million rows in that table each time I want to get the list of child rows, and that takes a while: 10-15 seconds. I wonder how I can speed this up: caching? table views? reorganizing the table structure somehow? And most important, how do I do these things? Are there good tutorials you could link me to? I appreciate all help and feedback discussing smart ways of handling this issue!
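    A sketch of the usual first step, assuming the menu filters on a parent-reference column: index that column so fetching one node's children is a lookup, not a scan. The table and column names below are placeholders, not the actual GeoNames schema.

      -- Hypothetical names; adjust to the real GeoNames import.
      ALTER TABLE geoname ADD INDEX idx_parent (parent_id);

      -- With the index in place, listing one node's children no longer
      -- touches all 7 million rows:
      SELECT id, name
      FROM geoname
      WHERE parent_id = 12345
      ORDER BY name;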

    Read the article

  • dataset not getting all the resultant tables i.e multiple tables are not being displayed in dataset

    - by Shantanu Gupta
    How do I fill multiple tables in a DataSet? I am using a query that returns four tables, and at the front end I am trying to fill all four resultant tables into the DataSet. Here is my query; it is not complete, but it is just a reference for my question:

      Select * from tblxyz compute sum(col1)

    Suppose this query returns more than one table; I want to fill all of them into my DataSet. I am filling the result like this:

      con.Open();
      adp.Fill(dset);
      con.Close();

    Now when I check this DataSet, it shows that it has four tables, but only the first table's data is present; the other three don't even have a schema. What do I need to do to get the desired output?

    Read the article

  • MYSQL TRIGGER LOOP

    - by Lee
    Hey all, I am going through the painstaking process of sorting out someone else's code, so I decided to create a new database to sit alongside the old one and use triggers to transfer data between the two tables. Now I have an issue with looping: there is a trigger on each table to update the other, so once one table is updated it updates the other, but because both tables have triggers this will just loop, which will cause a problem. Is there a way to stop this from happening? Hope this makes sense and hope you can advise.
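    A common workaround, sketched below with made-up table and column names (old_t, new_t, id, col): guard each trigger with a session variable, so a change applied by the other table's trigger is not propagated back.

      DELIMITER //
      CREATE TRIGGER old_t_au AFTER UPDATE ON old_t
      FOR EACH ROW
      BEGIN
        -- Skip propagation when this change was itself made by the
        -- other table's trigger.
        IF @syncing IS NULL THEN
          SET @syncing = 1;
          UPDATE new_t SET col = NEW.col WHERE id = NEW.id;
          SET @syncing = NULL;
        END IF;
      END//
      DELIMITER ;

      -- The mirror trigger on new_t uses the same @syncing guard, so
      -- neither trigger re-fires the other.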

    Read the article

  • SqlCeResultSet re-use

    - by pdiddy
    Can I reuse an existing result set to filter? Let's say I have a result set that contains all the data, and then I want to filter that result set. Can I just execute the SqlCommand against the same result set? I tried, and it didn't seem to work; does someone know the reason? The only way I could make it work was to create a new result set and execute the command on that one. Since this means I'm always creating a new result set, could that cause problems?

    Read the article

  • Teradata group by time interval

    - by matt123
    Can anyone help with Teradata? I want to create a query that is a standard select count(*) from Table where Column = Something, but with a group by over time periods in 5-minute intervals. The time column is in TIME format. Any ideas?
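    One way this is often written (a sketch; MyTable, time_col and the filter value are placeholders): derive minutes since midnight from the TIME column and integer-divide into 5-minute buckets.

      SELECT
        (EXTRACT(HOUR FROM time_col) * 60
         + EXTRACT(MINUTE FROM time_col)) / 5 AS bucket_5min,
        COUNT(*) AS row_count
      FROM MyTable
      WHERE some_col = 'Something'
      GROUP BY 1
      ORDER BY 1;

      -- bucket_5min = 0 covers 00:00-00:04, 1 covers 00:05-00:09, and so on;
      -- integer division truncates, which is what forms the buckets.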

    Read the article

  • Trigger Code on a table in my ERP Database

    - by David Stein
    My ERP vendor has the following trigger on a table:

      SET ANSI_NULLS ON
      GO
      SET QUOTED_IDENTIFIER ON
      GO
      CREATE TRIGGER [dbo].[SOItem_DeleteCheck]
         ON [dbo].[soitem]
         FOR DELETE
      AS
      BEGIN
          DECLARE @RecCnt int, @LogInfo varchar(256)
          SET @RecCnt = (SELECT COUNT(*) FROM deleted)
          IF @RecCnt > 150
          BEGIN
              RAISERROR (54010, 18, 1, 'SOItem') WITH LOG
              ROLLBACK TRANSACTION
          END
          SET @LogInfo = 'Deleting ' + LTRIM(STR(@RecCnt)) + ' Rows From SOItem'
          EXEC LogDeletes @LogInfo
      END
      GO

    This seems very inefficient to me. Doesn't SELECT COUNT(*) take longer than COUNT(specific field)?
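    On the closing question: COUNT(*) is not typically slower than COUNT(column); the two differ in meaning, since COUNT(column) skips NULLs, and against the small virtual deleted table the difference is negligible anyway. A small sketch (the temp table is invented for illustration):

      CREATE TABLE #demo (a int NULL)
      INSERT INTO #demo (a) VALUES (1), (NULL), (3)

      SELECT COUNT(*) AS all_rows,   -- 3: counts every row
             COUNT(a) AS non_null_a  -- 2: NULLs in column a are skipped
      FROM #demo

      DROP TABLE #demo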

    Read the article

  • view to select specific period or latest when null

    - by edosoft
    Hi, I have a product table which simplifies to this:

      create table product(id int primary key identity, productid int, year int, quarter int, price money)

    and some sample data:

      insert into product select 11, 2010, 1, 1.11
      insert into product select 11, 2010, 2, 2.11
      insert into product select 11, 2010, 3, 3.11
      insert into product select 12, 2010, 1, 1.12
      insert into product select 12, 2010, 2, 2.12
      insert into product select 13, 2010, 1, 1.13

    Prices can change each quarter, but not all products get a new price each quarter. Now I could duplicate the data each quarter, keeping the price the same, but I'd rather use a view. How can I create a view that can be used to return prices for (for example) quarter 2? I've written this to return the current (= latest) price:

      CREATE VIEW vwCurrentPrices AS
      SELECT * FROM (
          SELECT *,
              ROW_NUMBER() OVER (PARTITION BY productid ORDER BY year DESC, quarter DESC) AS Ranking
          FROM product
      ) p
      WHERE p.Ranking = 1

    I'd like to create a view so I can use queries like:

      select * from vwProduct where quarter = 2
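    One possible shape for vwProduct, sketched against the posted schema (untested; CROSS APPLY assumes SQL Server 2005 or later): expand each product across every quarter present in the table, picking the latest price at or before that quarter.

      CREATE VIEW vwProduct AS
      SELECT q.year, q.quarter, ids.productid, pr.price
      FROM (SELECT DISTINCT year, quarter FROM product) AS q
      CROSS JOIN (SELECT DISTINCT productid FROM product) AS ids
      CROSS APPLY (
          -- Latest price for this product at or before (q.year, q.quarter);
          -- products with no price yet in that quarter drop out.
          SELECT TOP 1 p.price
          FROM product p
          WHERE p.productid = ids.productid
            AND (p.year < q.year
                 OR (p.year = q.year AND p.quarter <= q.quarter))
          ORDER BY p.year DESC, p.quarter DESC
      ) AS pr

      -- Then: select * from vwProduct where quarter = 2
      -- returns each product's effective price in quarter 2, carried
      -- forward from an earlier quarter where necessary.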

    Read the article

  • How determine database server IP via UDP in Java

    - by Wolfgang Lenhard
    Hi, I am writing clients in Java that store their data on a database server. So far, the IP and port of the server have to be specified manually in the client's settings. I have heard that it is possible to automatically determine the IP of database servers via broadcast / multicast / UDP (I am not familiar with these concepts). Question: Is there a way to retrieve the IP addresses of all available database servers on the local network? I am working with the H2 database system so far. Bye, Wolfgang

    Read the article

  • performance issue in a select query from a single table

    - by daedlus
    Hi, I have a table as below:

      dbo.UserLogs
      --------------------------------------
      Id | UserId | Date | Name | P1 | Dirty
      --------------------------------------

    There can be several records per UserId [even in the millions]. I have a clustered index on the Date column and query this table very frequently over time ranges. The column 'Dirty' is non-nullable and can take only 0 or 1, so I have no indexes on 'Dirty'. I have several million records in this table, and in one particular case in my application I need to query it for all UserIds that have at least one record marked dirty. I tried this query:

      select distinct(UserId) from UserLogs where Dirty=1

    I have 10 million records in total, and this takes about 10 minutes to run; I want it to run much faster than that. [I am able to query this table on the Date column in less than a minute.] Any comments/suggestions are welcome. My environment: 64-bit, Sybase 15.0.3, Linux.
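    A suggestion worth testing here (a sketch; the index name is invented): a composite index that covers the whole query lets the server answer it from index pages alone, even though Dirty by itself has near-zero selectivity.

      create index UserLogs_dirty_user_idx
          on dbo.UserLogs (Dirty, UserId)

      -- The query can now be satisfied entirely from the index leaf pages,
      -- never touching the much wider table rows:
      select distinct UserId
      from dbo.UserLogs
      where Dirty = 1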

    Read the article

  • [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified - works

    - by Matt
    Hello, I am developing a Java app (with the JDBC-ODBC bridge - forgive me, the only Paradox driver I have been able to obtain is the Microsoft ODBC driver) which works fine in Eclipse (and NetBeans), connecting to and obtaining data from an ancient Paradox 5.x database. As long as it is run from inside my IDE, it compiles and runs flawlessly. When I export it to a runnable jar, suddenly

      [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

    occurs. The jar is being run on the same box as my development IDE, so I am confused about the cause. It is being run via console from a user account, as with the IDE. My connection string is:

      jdbc:odbc:Driver={Microsoft Paradox Driver (*.db )};DriverID=538; Fil=Paradox 5.X; DefaultDir=C:\paradox\database\location\

    obtained from connectionstrings.com and, as mentioned before, working fine when run from the IDE. The string above seems to 'magically' create its own connection, avoiding the setup of a DSN - I am unsure quite how, but it works. The only other thing I can think of that might be pertinent is that my PC runs a 64-bit OS (Windows Server 2008). Please help; any suggestions or comments will be greatly appreciated. Thanks, Matt

    Read the article

  • Row as XML in SP

    - by user171523
    From an SP I need to get a row as XML (including all fields). Is there any way I can get it like below?

      Declare @xmlMsg varchar(4000)
      select * into #tempTable from dbo.order for xml raw
      select @xmlMsg = 1 from #tempTable
      print '@xmlMsg' + @xmlMsg

    I would like to get the row as XML output.
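    A sketch of how this is commonly written in T-SQL (dbo.[order] is the table from the post; the WHERE filter is a made-up example): FOR XML RAW can run as a scalar subquery and be assigned directly to the variable, with no temp table.

      DECLARE @xmlMsg varchar(4000)

      -- The subquery returns the selected rows as a single XML string.
      SET @xmlMsg = (
          SELECT *
          FROM dbo.[order]
          WHERE id = 42   -- hypothetical filter; omit it to serialize every row
          FOR XML RAW
      )

      PRINT '@xmlMsg: ' + @xmlMsg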

    Read the article

  • What is C# equivalent of PHP's mysql_fetch_array function?

    - by Mike Biff
    I am learning C#/ASP.NET and I am wondering what the C# equivalent of the following PHP code is. I know the user ID, and I want to fetch the rows from this table into the variable "row", so I can then use them as row["name"] and row["email"]:

      $result = mysql_query("SELECT email, name FROM mytable WHERE id=7");
      while ($row = mysql_fetch_array($result, MYSQL_ASSOC)) {
          printf("Email: %s Name: %s", $row["email"], $row["name"]);
      }

    Thanks.

    Read the article

  • Newbie database index question

    - by RenderIn
    I have a table with multiple indexes, several of which duplicate the same columns:

      Index 1 columns: X, B, C, D
      Index 2 columns: Y, B, C, D
      Index 3 columns: Z, B, C, D

    I'm not very knowledgeable about indexing in practice, so I'm wondering if somebody can explain why X, Y and Z were paired with these same columns. B is an effective date, C is a semi-unique key ID for this table for a specific effective date B, and D is a sequence that identifies the priority of this record for the identifier C. Why not just create six indexes, one for each of X, Y, Z, B, C, D? I want to add an index on another column T, but in some contexts I'll be querying on T alone, while in others I will also specify the B, C and D columns... so should I create just one index like the ones above, or should I create one for T and one for (T, B, C, D)? I've not had as much luck as expected when googling for comprehensive coverage of indexing. Any resources where I can get a thorough explanation and lots of examples of B-tree indexing?
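    On the final question, a sketch of the usual rule (MyTable and the literal values are placeholders): a composite B-tree index can satisfy any query that filters on a leftmost prefix of its column list, so a single (T, B, C, D) index typically serves both query shapes, making a separate index on T alone redundant.

      CREATE INDEX ix_t_b_c_d ON MyTable (T, B, C, D);

      -- Served by the index: T is its leftmost column.
      SELECT * FROM MyTable WHERE T = 'x';

      -- Also served: (T, B, C, D) is the full prefix.
      SELECT * FROM MyTable
      WHERE T = 'x' AND B = '2010-01-01' AND C = 7 AND D = 1;

      -- Not served efficiently: B alone is not a leftmost prefix.
      SELECT * FROM MyTable WHERE B = '2010-01-01';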

    Read the article

  • How to connect to SQLServer 2k5 using Ruby 1.8.7 over W2k3 with active record 2.3.5

    - by Luke
    Hi all, sorry for the blast. I'm trying to connect to SQL Server 2005 using Ruby 1.8.7 on Windows Server 2003 with ActiveRecord 2.3.5, but when I run 'rake migrate' it throws the following:

      rake migrate --trace
      Hoe.new {...} deprecated. Switch to Hoe.spec.
      Invoke migrate (first_time)
      Invoke environment (first_time)
      Execute environment
      Execute migrate
      rake aborted!
      no such file to load -- odbc
      (...)
      C:/Program Files/test/Rakefile:146
      (...)

    My Rakefile at line 146 says:

      ActiveRecord::Migrator.migrate('db/migrate', ENV["VERSION"] ? ENV["VERSION"].to_i : nil)

    The database.yml has been configured in many ways without success: I've tried setting the adapter mode to odbc, configuring a system DSN, and using ActiveRecord's sqlserver support directly, but no success at all. The same Rakefile works fine against Postgres and Oracle with the proper gems installed, of course, but I can't get this to work. Any help will be appreciated. Thanks in advance!

    Read the article

  • Validate Linq2Sql before SubmitChanges()

    - by Nick Gotch
    Can anyone tell me if/how you can validate the changes in a Linq2Sql data context before calling SubmitChanges()? The situation I have is that I create a context, perform multiple operations, and add many inserts alongside other processing tasks, then roll back if the submit fails. What I'd prefer is some kind of "Validate()" call after certain tasks are done, so that I can handle problems before submitting the entire job.

    Read the article

  • What is a columnar database?

    - by Raj More
    I have been working with warehousing for a while now, and I am intrigued by columnar databases and the speed they offer for data retrieval. I have a multi-part question: How do columnar databases work? How do they differ from conventional row-oriented relational databases? Is there a trial version of a columnar database I can install to play around with? (I am on Windows 7.)

    Read the article

  • Help with Linked Server Error

    - by Randy Minder
    In SSMS 2008, I am trying to execute a stored procedure in a database on another server. The call looks something like the following:

      EXEC [RemoteServer].Database.Schema.StoredProcedureName @param1, @param2

    The linked server is set up correctly, and has both RPC and RPC OUT set to true. Security on the linked server is set to "Be made using the login's current security context". When I attempt to execute the stored procedure, I get the following error:

      Msg 18483, Level 14, State 1, Line 1
      Could not connect to server 'RemoteServer' because '' is not defined as a remote login at the server. Verify that you have specified the correct login name.

    I am connected to the local server using Windows Authentication. Anyone know why I would be getting this error?
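    One thing worth checking (a sketch; 'RemoteServer' is the name from the post, the other names are placeholders): the error text suggests the remote side cannot map the incoming login, so an explicit mapping via sp_addlinkedsrvlogin on the local server may help. Note also that forwarding the current Windows security context to another server can require Kerberos delegation (the "double hop" problem) to be configured.

      -- Pass each caller's own credentials to the linked server:
      EXEC sp_addlinkedsrvlogin
          @rmtsrvname = N'RemoteServer',
          @useself    = N'TRUE',
          @locallogin = NULL      -- NULL = applies to all local logins

      -- Or map one local login to a specific remote SQL login (placeholders):
      -- EXEC sp_addlinkedsrvlogin
      --     @rmtsrvname  = N'RemoteServer',
      --     @useself     = N'FALSE',
      --     @locallogin  = N'DOMAIN\SomeUser',
      --     @rmtuser     = N'remoteLogin',
      --     @rmtpassword = N'remotePassword'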

    Read the article
