Search Results

Search found 38336 results on 1534 pages for 'sql wait types'.

Page 632/1534 | < Previous Page | 628 629 630 631 632 633 634 635 636 637 638 639  | Next Page >

  • How to dynamically choose two fields from a Linq query as a result

    - by Dr. Zim
    If you have a simple LINQ query like:

        var result = from record in db.Customer
                     select new { Text = record.Name, Value = record.ID.ToString() };

    which returns an object that can be mapped to a drop-down list, is it possible to dynamically specify which fields map to Text and Value? Of course, you could do a big case (switch) statement, then code each LINQ query separately, but this isn't very elegant. What would be nice would be something like this (pseudocode):

        var myTextField = db.Customer["Name"];  // Could be an enumeration??
        var myValueField = db.Customer["ID"];   // Idea: choose the field outside the query

        var result = from record in db.Customer
                     select new { Text = myTextField, Value = myValueField };

    Read the article

  • Which Database to choose?

    - by Sundar
    I have the following criteria:

    - The database should be protected with a username and password. It should not be possible to copy the database file and use it elsewhere, as you can with MS Access.
    - There will be no central database server. Each machine will run its own database server locally and the user will initiate synchronization. The concept is inspired by distributed version control systems like Git, so it should have good replication support.
    - Strong consistency is not needed; users will synchronize each other's databases when they need to.
    - In case of conflicts it should be possible to detect the conflict and present it (from the application) to the user for fixing.
    - Revisions of data, if available, would be good, e.g. the entire history of changes to an invoice.

    I explored document-oriented databases and am inclined towards one, but I don't know which to choose. The database is small; it will not reach even 1 GB in the next few years (say 3 years). Please feel free to suggest any database you think might be suitable. Any pointers are highly appreciated. Thanks in advance.

    Read the article

  • speed up a sql query to mysql?

    - by fayer
    In my MySQL database I've got the GeoNames dataset, containing all countries, states and cities. I am using this to create a cascading menu so the user can select where he is from: country - state - county - city. But the main problem is that the query searches through all 7 million rows in that table each time I want to get the list of child rows, and that takes a while: 10-15 seconds. I wonder how I could speed this up: caching? table views? reorganizing the table structure somehow? And most important, how do I do these things? Are there good tutorials you could link me to? I appreciate all help and feedback discussing smart ways of handling this issue!
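
    A likely first step is an index that matches the parent-to-children lookup, so each menu level becomes an index seek instead of a scan. A minimal sketch, assuming hypothetical GeoNames-style column names (the actual names depend on the imported schema):

        -- Hypothetical column names; adjust to the actual geonames table.
        CREATE INDEX idx_geoname_parent
            ON geoname (country_code, admin1_code, admin2_code);

        -- A children lookup can then seek the index instead of scanning 7M rows:
        SELECT geonameid, name
        FROM geoname
        WHERE country_code = 'US'
          AND admin1_code = 'CA';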

    Read the article

  • sorting two tables (full join)

    - by Ruslan
    I'm joining tables like:

        select * from tableA a full join tableB b on a.id = b.id

    But the output should be ordered as: rows without null fields first, then rows with null fields from tableB, then rows with null fields from tableA. Like:

        a.id  a.name  b.id  b.name
        5     Peter   5     Jones
        2     Steven  2     Pareker
        6     Paul    null  null
        4     Ivan    null  null
        null  null    1     Smith
        null  null    3     Parker
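
    A minimal ordering sketch using standard SQL: derive the match class from which side's id is null, then sort on it (table and column names are taken from the question):

        SELECT *
        FROM tableA a
        FULL JOIN tableB b ON a.id = b.id
        ORDER BY
            CASE
                WHEN a.id IS NOT NULL AND b.id IS NOT NULL THEN 1  -- matched rows first
                WHEN b.id IS NULL THEN 2  -- rows with nulls on the tableB side
                ELSE 3                    -- rows with nulls on the tableA side
            END,
            COALESCE(a.id, b.id);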

    Read the article

  • Firing trigger for bulk insert

    - by Deepa
        ALTER TRIGGER [dbo].[TR_O_SALESMAN_INS]
        ON [dbo].[O_SALESMAN]
        AFTER INSERT
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for trigger here
            DECLARE @SLSMAN_CD NVARCHAR(20)
            DECLARE @SLSMAN_NAME NVARCHAR(20)

            SELECT @SLSMAN_CD = SLSMAN_CD, @SLSMAN_NAME = SLSMAN_NAME FROM INSERTED

            IF NOT EXISTS (SELECT * FROM O_SALESMAN_USER WHERE SLSMAN_CD = @SLSMAN_CD)
            BEGIN
                INSERT INTO O_SALESMAN_USER (SLSMAN_CD, PASSWORD, USER_CD)
                VALUES (@SLSMAN_CD, @SLSMAN_CD, @SLSMAN_NAME)
            END
        END

    This trigger on the O_SALESMAN table fetches a few columns from it and inserts them into another table (O_SALESMAN_USER). Presently, bulk data is inserted into O_SALESMAN through a stored procedure, but the trigger fires only once, and O_SALESMAN_USER gets only one record inserted each time the stored procedure is executed. I want the trigger to run for each and every record that gets inserted into O_SALESMAN, so that both tables end up with the same count, which is not happening. Please let me know what can be modified in this trigger to achieve that.
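
    SQL Server fires an AFTER INSERT trigger once per statement, not once per row, so the scalar variables above capture only one of the inserted rows. The usual fix is to make the trigger set-based and read every row from the inserted pseudo-table. A minimal sketch using the table and column names above:

        ALTER TRIGGER [dbo].[TR_O_SALESMAN_INS]
        ON [dbo].[O_SALESMAN]
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            -- Handle multi-row inserts in one set-based statement:
            -- copy every new salesman that is not already present.
            INSERT INTO O_SALESMAN_USER (SLSMAN_CD, PASSWORD, USER_CD)
            SELECT i.SLSMAN_CD, i.SLSMAN_CD, i.SLSMAN_NAME
            FROM inserted i
            WHERE NOT EXISTS (
                SELECT 1 FROM O_SALESMAN_USER u
                WHERE u.SLSMAN_CD = i.SLSMAN_CD
            );
        END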

    Read the article

  • dataset not getting all the resultant tables, i.e. multiple tables are not being displayed in the dataset

    - by Shantanu Gupta
    How do I fill multiple tables in a DataSet? I'm using a query that returns four tables, and at the front end I am trying to fill all four resultant tables into the DataSet. Here is my query. The query is not complete, but it is just a reference for my question:

        Select * from tblxyz compute sum(col1)

    Suppose this query returns more than one table; I want to fill all the tables into my DataSet. I am filling the result like this:

        con.Open();
        adp.Fill(dset);
        con.Close();

    Now when I check this DataSet, it shows me that it has four tables, but only the first table's data is displayed in it. The other three don't even have a schema. What do I need to do to get the desired output?
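
    One possible workaround sketch: COMPUTE produces summary result sets that data adapters often cannot map, so returning ordinary result sets, which a DataAdapter fills as Tables[0], Tables[1], and so on, is assumed here to behave better. Table and column names are taken from the question:

        -- Instead of: SELECT * FROM tblxyz COMPUTE SUM(col1)
        -- return ordinary result sets that a DataAdapter can fill
        -- as separate tables in the DataSet:
        SELECT * FROM tblxyz;
        SELECT SUM(col1) AS col1_total FROM tblxyz;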

    Read the article

  • [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified - works in the IDE but not from a runnable jar

    - by Matt
    Hello, I am developing a Java app (with the ODBC bridge - forgive me - the only Paradox driver I have been able to obtain is the Microsoft ODBC driver) which works fine in Eclipse (and NetBeans), connecting to and obtaining data from an ancient Paradox 5.x database. So long as it is run from inside my IDE, it compiles and runs flawlessly. When I export it to a runnable jar, suddenly:

        [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

    occurs. The jar is being run on the same box as my development IDE, so I am confused about the cause. It is being run via console from a user account, as per the IDE. My connection string is:

        jdbc:odbc:Driver={Microsoft Paradox Driver (*.db )};DriverID=538; Fil=Paradox 5.X; DefaultDir=C:\paradox\database\location\

    obtained from connectionstrings.com, and as mentioned before, working fine when run from the IDE. The above seems to 'magically' create its own connection, avoiding the setup of a DSN - I am unsure quite how it does this, but it works. The only other thing I can think might be pertinent is that my PC is a 64-bit OS (Windows Server 2008). Please help; any suggestions or comments will be greatly appreciated. Thanks, Matt

    Read the article

  • SQL using with clause not working

    - by user1290467
    My question is why this query does not work:

        Cursor c = db.rawQuery("SELECT * FROM tbl_staff WHERE PMajor = '%"
                + spin.getSelectedItem().toString() + "%'", null);

    Here, c is the cursor handling my query, tbl_staff is my table consisting of PName, PMajor, PCert, and spin is a spinner holding the values I need for the database query. When I use:

        if (c.moveToNext()) {
            // ...
        } else {
            Log.d("error query", "couldn't do the query!");
        }

    it goes to the else branch; moveToNext() doesn't work.
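
    One hedged reading: with '=', the '%' characters are compared literally, so no row ever matches; '%' acts as a wildcard only with LIKE. A sketch of the corrected SQL, using the same table and column (the search term stands in for spin.getSelectedItem()):

        -- '=' treats '%' as a literal character; LIKE interprets it as a wildcard:
        SELECT * FROM tbl_staff WHERE PMajor LIKE '%Computer Science%';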

    Read the article

  • Trigger Code on a table in my ERP Database

    - by David Stein
    My ERP Vendor has the following trigger on a table: SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TRIGGER [dbo].[SOItem_DeleteCheck] ON [dbo].[soitem] FOR DELETE AS BEGIN DECLARE @RecCnt int, @LogInfo varchar(256) SET @RecCnt = (SELECT COUNT(*) FROM deleted) IF @RecCnt > 150 BEGIN RAISERROR (54010, 18, 1, 'SOItem') WITH LOG ROLLBACK TRANSACTION END SET @LogInfo = 'Deleting ' + LTRIM(STR(@RecCnt)) + ' Rows From SOItem' EXEC LogDeletes @LogInfo END GO This seems very inefficient to me. Doesn't select count(*) take longer than Count(specific field)?

    Read the article

  • Unique Key in MySql

    - by Vinodtiru
    I have a table with four columns: Col1, Col2, Col3, and Col4. Col1, Col2, and Col3 are strings, and Col4 is an integer primary key with auto-increment. Now my requirement is to have a unique combination of Col2 and Col3. I mean to say, given:

        Insert into table(Col1,Col2,Col3) Values('val1','val2','val3');
        Insert into table(Col1,Col2,Col3) Values('val4','val2','val3');

    the second statement has to throw an error, as the same combination of 'val2','val3' is already present in the table. But I can't make that pair the primary key, as I need an auto-increment column, and for that Col4 has to be the primary key. Please let me know an approach by which I can have both in my table. Any kind of help is appreciated. Thanks.
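
    MySQL allows a UNIQUE key alongside the auto-increment primary key, which appears to satisfy both requirements. A minimal sketch with assumed column types and a hypothetical table name:

        CREATE TABLE my_table (
            Col1 VARCHAR(50),
            Col2 VARCHAR(50),
            Col3 VARCHAR(50),
            Col4 INT NOT NULL AUTO_INCREMENT,
            PRIMARY KEY (Col4),
            UNIQUE KEY uq_col2_col3 (Col2, Col3)  -- rejects duplicate (Col2, Col3) pairs
        );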

    Read the article

  • How to compare sqlite TIMESTAMP values

    - by Roel
    I have an SQLite database in which I want to select rows whose value in a TIMESTAMP column is before a certain date. I would think this is simple, but I can't get it done. I have tried this:

        SELECT * FROM logged_event WHERE logged_event.CREATED_AT < '2010-05-28 16:20:55'

    and various variations on it, like with the date functions. I've read http://sqlite.org/lang_datefunc.html and http://www.sqlite.org/datatypes.html and I would expect that the column would be a numeric type, and that the comparison would be done on the Unix timestamp value. Apparently not. Anyone who can help? If it matters, I'm trying this out in Sqlite Expert Personal.
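
    A hedged sketch: SQLite has no native timestamp type, so the comparison follows whatever was actually stored (text, integer, or real). Normalizing both sides with datetime() makes the formats comparable:

        -- datetime() renders both sides as ISO-8601 text, so the string
        -- comparison matches chronological order:
        SELECT *
        FROM logged_event
        WHERE datetime(CREATED_AT) < datetime('2010-05-28 16:20:55');

        -- If the column actually holds Unix timestamps, add the modifier:
        -- WHERE datetime(CREATED_AT, 'unixepoch') < datetime('2010-05-28 16:20:55')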

    Read the article

  • Newbie database index question

    - by RenderIn
    I have a table with multiple indexes, several of which duplicate the same columns:

        Index 1 columns: X, B, C, D
        Index 2 columns: Y, B, C, D
        Index 3 columns: Z, B, C, D

    I'm not very knowledgeable about indexing in practice, so I'm wondering if somebody can explain why X, Y, and Z were paired with these same columns. B is an effective date. C is a semi-unique key ID for this table for a specific effective date B. D is a sequence that identifies the priority of this record for the identifier C. Why not just create 6 indexes, one for each of X, Y, Z, B, C, D? I want to add an index to another column T, but in some contexts I'll only be querying on T alone, while in others I will also be specifying the B, C, and D columns... so should I create just one index like above, or should I create one for T and one for (T, B, C, D)? I've not had as much luck as expected when googling for comprehensive coverage of indexing. Any resources where I can get a thorough explanation and lots of examples of B-tree indexing?
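
    On the last point, a single composite index can serve both query shapes when T is the leading column, because a B-tree index supports seeks on any leading prefix of its key. A sketch with a hypothetical table name:

        -- One index covers both "WHERE T = ?" (leading-prefix seek) and
        -- "WHERE T = ? AND B = ? AND C = ? AND D = ?" (full-key seek):
        CREATE INDEX idx_t_b_c_d ON my_table (T, B, C, D);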

    Read the article

  • Random Page Cost and Planning

    - by Dave Jarvis
    The query below extracts climate data from weather stations within a given radius of a city, using the dates for which those weather stations actually have data. The query uses the table's only index, rather effectively:

        CREATE UNIQUE INDEX measurement_001_stc_idx
            ON climate.measurement_001
            USING btree (station_id, taken, category_id);

    Reducing the server's configuration value for random_page_cost from 2.0 to 1.1 gave a massive performance improvement for the given range (nearly an order of magnitude) because it suggested to PostgreSQL that it should use the index. While the results now return in 5 seconds (down from ~85 seconds), problematic lines remain. Bumping the query's end date by a single year causes a full table scan:

        sc.taken_start >= '1900-01-01'::date AND
        sc.taken_end <= '1997-12-31'::date AND

    How do I persuade PostgreSQL to use the indexes regardless of the years between the two dates? (A full table scan against 43 million rows is probably not the best plan.) Find the EXPLAIN ANALYSE results below the query. Thank you!

    Query:

        SELECT
            extract(YEAR FROM m.taken) AS year,
            avg(m.amount) AS amount
        FROM
            climate.city c,
            climate.station s,
            climate.station_category sc,
            climate.measurement m
        WHERE
            c.id = 5182 AND
            earth_distance(
                ll_to_earth(c.latitude_decimal, c.longitude_decimal),
                ll_to_earth(s.latitude_decimal, s.longitude_decimal)) / 1000 <= 30 AND
            s.elevation BETWEEN 0 AND 3000 AND
            s.applicable = TRUE AND
            sc.station_id = s.id AND
            sc.category_id = 1 AND
            sc.taken_start >= '1900-01-01'::date AND
            sc.taken_end <= '1996-12-31'::date AND
            m.station_id = s.id AND
            m.taken BETWEEN sc.taken_start AND sc.taken_end AND
            m.category_id = sc.category_id
        GROUP BY
            extract(YEAR FROM m.taken)
        ORDER BY
            extract(YEAR FROM m.taken)

    1900 to 1996: Index

        Sort (cost=1348597.71..1348598.21 rows=200 width=12) (actual time=2268.929..2268.935 rows=92 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort Memory: 32kB
          -> HashAggregate (cost=1348586.56..1348590.06 rows=200 width=12) (actual time=2268.829..2268.886 rows=92 loops=1)
             -> Nested Loop (cost=0.00..1344864.01 rows=744510 width=12) (actual time=0.807..2084.206 rows=134893 loops=1)
                Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (sc.station_id = m.station_id))
                -> Nested Loop (cost=0.00..12755.07 rows=1220 width=18) (actual time=0.502..521.937 rows=23 loops=1)
                   Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                   -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.014..0.015 rows=1 loops=1)
                      Index Cond: (id = 5182)
                   -> Nested Loop (cost=0.00..9907.73 rows=3659 width=34) (actual time=0.014..28.937 rows=3458 loops=1)
                      -> Seq Scan on station_category sc (cost=0.00..970.20 rows=3659 width=14) (actual time=0.008..10.947 rows=3458 loops=1)
                         Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1996-12-31'::date) AND (category_id = 1))
                      -> Index Scan using station_pkey1 on station s (cost=0.00..2.43 rows=1 width=20) (actual time=0.004..0.004 rows=1 loops=3458)
                         Index Cond: (s.id = sc.station_id)
                         Filter: (s.applicable AND (s.elevation >= 0) AND (s.elevation <= 3000))
                -> Append (cost=0.00..1072.27 rows=947 width=18) (actual time=6.996..63.199 rows=5865 loops=23)
                   -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.000..0.000 rows=0 loops=23)
                      Filter: (m.category_id = 1)
                   -> Bitmap Heap Scan on measurement_001 m (cost=20.79..1047.27 rows=941 width=18) (actual time=6.995..62.390 rows=5865 loops=23)
                      Recheck Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
                      -> Bitmap Index Scan on measurement_001_stc_idx (cost=0.00..20.55 rows=941 width=0) (actual time=5.775..5.775 rows=5865 loops=23)
                         Index Cond: ((m.station_id = sc.station_id) AND (m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end) AND (m.category_id = 1))
        Total runtime: 2269.264 ms

    1900 to 1997: Full Table Scan

        Sort (cost=1370192.26..1370192.76 rows=200 width=12) (actual time=86165.797..86165.809 rows=94 loops=1)
          Sort Key: (date_part('year'::text, (m.taken)::timestamp without time zone))
          Sort Method: quicksort Memory: 32kB
          -> HashAggregate (cost=1370181.12..1370184.62 rows=200 width=12) (actual time=86165.654..86165.736 rows=94 loops=1)
             -> Hash Join (cost=4293.60..1366355.81 rows=765061 width=12) (actual time=534.786..85920.007 rows=139721 loops=1)
                Hash Cond: (m.station_id = sc.station_id)
                Join Filter: ((m.taken >= sc.taken_start) AND (m.taken <= sc.taken_end))
                -> Append (cost=0.00..867005.80 rows=43670150 width=18) (actual time=0.009..79202.329 rows=43670079 loops=1)
                   -> Seq Scan on measurement m (cost=0.00..25.00 rows=6 width=22) (actual time=0.001..0.001 rows=0 loops=1)
                      Filter: (category_id = 1)
                   -> Seq Scan on measurement_001 m (cost=0.00..866980.80 rows=43670144 width=18) (actual time=0.008..73312.008 rows=43670079 loops=1)
                      Filter: (category_id = 1)
                -> Hash (cost=4277.93..4277.93 rows=1253 width=18) (actual time=534.704..534.704 rows=25 loops=1)
                   -> Nested Loop (cost=847.87..4277.93 rows=1253 width=18) (actual time=415.837..534.682 rows=25 loops=1)
                      Join Filter: ((sec_to_gc(cube_distance((ll_to_earth((c.latitude_decimal)::double precision, (c.longitude_decimal)::double precision))::cube, (ll_to_earth((s.latitude_decimal)::double precision, (s.longitude_decimal)::double precision))::cube)) / 1000::double precision) <= 30::double precision)
                      -> Index Scan using city_pkey1 on city c (cost=0.00..2.47 rows=1 width=16) (actual time=0.012..0.014 rows=1 loops=1)
                         Index Cond: (id = 5182)
                      -> Hash Join (cost=847.87..1352.07 rows=3760 width=34) (actual time=6.427..35.107 rows=3552 loops=1)
                         Hash Cond: (s.id = sc.station_id)
                         -> Seq Scan on station s (cost=0.00..367.25 rows=7948 width=20) (actual time=0.004..23.529 rows=7949 loops=1)
                            Filter: (applicable AND (elevation >= 0) AND (elevation <= 3000))
                         -> Hash (cost=800.87..800.87 rows=3760 width=14) (actual time=6.416..6.416 rows=3552 loops=1)
                            -> Bitmap Heap Scan on station_category sc (cost=430.29..800.87 rows=3760 width=14) (actual time=2.316..5.353 rows=3552 loops=1)
                               Recheck Cond: (category_id = 1)
                               Filter: ((taken_start >= '1900-01-01'::date) AND (taken_end <= '1997-12-31'::date))
                               -> Bitmap Index Scan on station_category_station_category_idx (cost=0.00..429.35 rows=6376 width=0) (actual time=2.268..2.268 rows=6339 loops=1)
                                  Index Cond: (category_id = 1)
        Total runtime: 86165.936 ms

    Read the article

  • Help with Linked Server Error

    - by Randy Minder
    In SSMS 2008, I am trying to execute a stored procedure in a database on another server. The call looks something like the following:

        EXEC [RemoteServer].Database.Schema.StoredProcedureName @param1, @param2

    The linked server is set up correctly and has both RPC and RPC OUT set to true. Security on the linked server is set to "Be made using the login's current security context". When I attempt to execute the stored procedure, I get the following error:

        Msg 18483, Level 14, State 1, Line 1
        Could not connect to server 'RemoteServer' because '' is not defined as a remote login at the server. Verify that you have specified the correct login name.

    I am connected to the local server using Windows Authentication. Does anyone know why I would be getting this error?
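
    The empty login name in the error suggests no remote login mapping is being found for the current Windows login. One thing worth checking, as a sketch rather than a guaranteed fix: the mapping can be defined explicitly with sp_addlinkedsrvlogin, the standard procedure for linked-server login mappings:

        -- Map all local logins to pass through their own credentials:
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname = N'RemoteServer',
            @useself    = N'True',   -- use the caller's own security context
            @locallogin = NULL;      -- applies to every local login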

    Read the article

  • excel:mysql: rs.Update crashes

    - by every_answer_gets_a_point
    I am connecting to MySQL from Excel and updating a table. As soon as I get to .Update (rs.Update), Excel crashes. Am I doing something wrong?

        Option Explicit

        Dim oConn As ADODB.Connection

        Private Sub ConnectDB()
            Set oConn = New ADODB.Connection
            oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _
                       "SERVER=localhost;" & _
                       "DATABASE=employees;" & _
                       "USER=root;" & _
                       "PASSWORD=pas;" & _
                       "Option=3"
        End Sub

        Function esc(txt As String)
            esc = Trim(Replace(txt, "'", "\'"))
        End Function

        Private Sub InsertData()
            Dim dpath, atime, rtime, lcalib, aname, rname, bstate, instrument As String
            Dim rs As ADODB.Recordset
            Set rs = New ADODB.Recordset
            ConnectDB
            With wsBooks
                rs.Open "batchinfo", oConn, adOpenKeyset, adLockOptimistic, adCmdTable
                Worksheets.Item("Report 1").Select
                dpath = Trim(Range("B2").Text)
                atime = Trim(Range("B3").Text)
                rtime = Trim(Range("B4").Text)
                lcalib = Trim(Range("B5").Text)
                aname = Trim(Range("B6").Text)
                rname = Trim(Range("B7").Text)
                bstate = Trim(Range("B8").Text)
                ' instrument = GetInstrFromXML(wbBook.FullName)
                With rs
                    .AddNew ' create a new record
                    ' add values to each field in the record
                    .Fields("datapath") = dpath
                    .Fields("analysistime") = atime
                    .Fields("reporttime") = rtime
                    .Fields("lastcalib") = lcalib
                    .Fields("analystname") = aname
                    .Fields("reportname") = rname
                    .Fields("batchstate") = bstate
                    ' .Fields("instrument") = "NA"
                    .Update ' stores the new record
                End With
                ' get the last id
                Set rs = oConn.Execute("SELECT @@identity", , adCmdText)
                'MsgBox capture_id
                rs.Close
                Set rs = Nothing
            End With
        End Sub

    Read the article

  • How to avoid Cartesian product in an INNER JOIN query?

    - by flhe
    I have 6 tables; let's call them a, b, c, d, e, f. Now I want to search all the columns (except the ID columns) of all tables for a certain word, let's say 'Joe'. What I did was make INNER JOINs over all the tables and then use LIKE to search the columns:

        INNER JOIN ... ON ...
        INNER JOIN ... ON ...
        ...
        WHERE a.firstname ~* 'Joe'
           OR a.lastname ~* 'Joe'
           OR b.favorite_food ~* 'Joe'
           OR c.job ~* 'Joe'
           ...

    The results are correct; I get all the columns I was looking for. But I also get a kind of Cartesian product: 2 or more lines with almost the same results. How can I avoid this? I want to have each line only once, since the results should appear in a web search.
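
    One hedged way to keep each driving row single: test the related tables with EXISTS instead of joining them, so matches in b or c cannot multiply the rows of a. The join columns (b.a_id, c.a_id) are hypothetical; adjust them to the real foreign keys:

        SELECT a.*
        FROM a
        WHERE a.firstname ~* 'Joe'
           OR a.lastname  ~* 'Joe'
           -- EXISTS yields at most one hit per row of "a", so no row multiplication:
           OR EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id AND b.favorite_food ~* 'Joe')
           OR EXISTS (SELECT 1 FROM c WHERE c.a_id = a.id AND c.job ~* 'Joe');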

    Read the article

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve any deeper into the abyss of Microsoft documentation, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional audit trail setup: an audit trail table (a copy of the 'real' table with all of its fields, plus date/time, user ID, and DML action fields) populated by triggers, which is all manual work. The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when they happened, and who made them. These changes are commonly provided to an end user chronologically via an audit trail report. Which raises another question: if Change Data Capture or Change Tracking is the solution, can this data be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture works off the transaction logs, so its retention sounds finite to me.
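
    For reference, a minimal sketch of enabling CDC (SQL Server 2008 syntax; the audited table name is hypothetical). Captured changes land in a generated change table that can be queried like any other table:

        -- Enable CDC for the database, then for one table:
        EXEC sys.sp_cdc_enable_db;

        EXEC sys.sp_cdc_enable_table
            @source_schema = N'dbo',
            @source_name   = N'Invoice',  -- hypothetical audited table
            @role_name     = NULL;        -- no gating role

        -- Changes are then exposed through a queryable change table:
        SELECT * FROM cdc.dbo_Invoice_CT;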

    Read the article

  • Sensible Doctrine Expression and Zend_Auth setCredentialTreatment()

    - by takeshin
    How do I create a reasonable expression to store passwords in the database using Doctrine and Zend_Auth::setCredentialTreatment()? I don't want to use md5(), and the code must be portable, and salted. I would call this one not easy to guess:

        setCredentialTreatment("SHA1(CONCAT(username, SHA1(CONCAT(username, ?)))");

    but it is not portable to all databases. It seems that Doctrine_Expression only offers md5 expression portability.

    Read the article

  • performance issue in a select query from a single table

    - by daedlus
    Hi, I have a table as below:

        dbo.UserLogs
        --------------------------------------
        Id | UserId | Date | Name | P1 | Dirty
        --------------------------------------

    There can be several records per UserId (even in the millions). I have a clustered index on the Date column and query this table very frequently over time ranges. The column Dirty is non-nullable and can take only 0 or 1, so I have no indexes on Dirty. I have several million records in this table, and in one particular case in my application I need to query this table to get all UserIds that have at least one record marked dirty. I tried this query:

        select distinct(UserId) from UserLogs where Dirty=1

    I have 10 million records in total, and this takes about 10 minutes to run; I want it to run much faster than that. (I am able to query this table on the Date column in less than a minute.) Any comments/suggestions are welcome. My environment: 64-bit, Sybase 15.0.3, Linux.
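
    A hedged sketch: even though Dirty alone is low-cardinality, a composite index leading on it matches this exact predicate, and including UserId lets the DISTINCT be answered from the index without touching the table rows (whether the optimizer uses it depends on the ASE version and statistics):

        -- Covering index for "all users with at least one dirty record":
        CREATE INDEX idx_userlogs_dirty_user ON UserLogs (Dirty, UserId);

        SELECT DISTINCT UserId FROM UserLogs WHERE Dirty = 1;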

    Read the article

  • how to evaluate query by DBMS?

    - by Kevinniceguy
    How does a DBMS evaluate the database query below? The query is something like:

        SELECT SUM(price)
        FROM Room r, Hotel h
        WHERE r.hotelNo = h.hotelNo
          AND hotelName = 'Paris Hilton'
          AND roomNo NOT IN
              (SELECT roomNo
               FROM Booking b, Hotel h
               WHERE (dateFrom <= CURRENT_DATE AND dateTo = CURRENT_DATE)
                 AND b.hotelNo = h.hotelNo
                 AND hotelName = 'Paris Hilton');

    Read the article

  • ASP.NET MVC 2, re-use of SQL connection string

    - by cc0
    Hi. I'm very far from an expert on MVC or ASP.NET; I just want to make a few simple controllers in C# at the moment, so I have the following question. Right now the connection string used by a controller lives inside the controller itself, which is kind of silly when there are multiple controllers using the same string. I'd like to be able to change the connection string in just one place and have it affect all controllers. Not knowing a lot about ASP.NET or the 'M' and 'V' parts of MVC, what would be the best (and simplest) way of accomplishing this? I'd appreciate any input on this; examples would be great too.

    Read the article
