Search Results

Search found 31931 results on 1278 pages for 'sql statement'.

Page 639/1278

  • DataContext Refresh and PropertyChanging & PropertyChanged Events

    - by Scott
    I'm in a situation where I am being informed by an outside source that a particular entity has been altered outside my current DataContext. I'm able to find the entity and call Refresh like so: MyDataContext.Refresh(RefreshMode.OverwriteCurrentValues, myEntity); and the properties which have been altered on the entity are updated correctly. However, neither the INotifyPropertyChanging nor the INotifyPropertyChanged event appears to be raised when the refresh occurs, and this leaves my UI displaying incorrect information. I'm aware that Refresh() fails to use the property getters and setters on the entity that would raise the change notification events, but perhaps there is another way to accomplish the same thing? Am I doing something wrong? Is there a better method than Refresh? If Refresh is the only option, does anyone have a workaround?

    Read the article

  • Can't find which row is causing conversion error

    - by Marwan
    I have the following table: CREATE TABLE [dbo].[Accounts1]( [AccountId] [nvarchar](50) NULL, [ExpiryDate] [nvarchar](50) NULL ) I am trying to convert nvarchar to datetime using this query: select convert(datetime, expirydate) from accounts I get this error: Conversion failed when converting datetime from character string. The status bar says "2390 rows". I go to rows 2390, 2391 and 2392. There is nothing wrong with the data there. I even try to convert those particular rows and it works. How can I find out which row(s) is causing the conversion error?
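
    Since the table definition is T-SQL, a hedged sketch: ISDATE() returns 0 for values that cannot be converted, so the offending rows can be listed directly instead of hunted by position (the row count in the status bar reflects an unordered result, not a physical row number).

        -- List the rows whose ExpiryDate cannot be converted to datetime.
        SELECT AccountId, ExpiryDate
        FROM dbo.Accounts1
        WHERE ExpiryDate IS NOT NULL
          AND ISDATE(ExpiryDate) = 0;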

    Read the article

  • Transactional isolation level needed for safely incrementing ids

    - by Knut Arne Vedaa
    I'm writing a small piece of software that is to insert records into a database used by a commercial application. The unique primary keys (ids) in the relevant table(s) are sequential, but do not seem to be set to "auto increment". Thus, I assume, I will have to find the largest id, increment it and use that value for the record I'm inserting. In pseudo-code for brevity: id = select max(id) from some_table; id++; insert into some_table values(id, othervalues...) Now, if another thread started the same transaction before the first one finished its insert, you would get two identical ids and a failure when trying to insert the last one. You could check for that failure and retry, but a simpler solution might be setting an isolation level on the transaction. For this, would I need SERIALIZABLE or a lower level? Additionally, is this, generally, a sound way of solving the problem? Are there any other ways of doing it?
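
    One other way, sketched here under the assumption of SQL Server-style lock hints (the question doesn't name the engine, and the column names are hypothetical): take the max and insert in a single statement, so there is no read-then-write window to race on.

        -- UPDLOCK/HOLDLOCK keep the max stable until the insert commits.
        BEGIN TRANSACTION;
        INSERT INTO some_table (id, othervalue)
        SELECT COALESCE(MAX(id), 0) + 1, @othervalue
        FROM some_table WITH (UPDLOCK, HOLDLOCK);
        COMMIT TRANSACTION;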

    Read the article

  • Should I include user_id in multiple tables?

    - by Drarok
    I'm at the planning stages of a multi-user application where each user will only have access to their own data. There'll be a few tables that relate to each other, so I could use JOINs to ensure they're accessing only their own data, but should I include user_id in each table? Would this be faster? It would certainly make some of the queries easier in the long run. Thanks!
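
    To make the trade-off concrete, a sketch with hypothetical tables: in the normalized form the ownership check rides through the parent; denormalizing user_id onto the child drops the join at the cost of keeping the copies consistent.

        -- Normalized: user_id lives only on the parent table.
        SELECT i.*
        FROM order_items i
        JOIN orders o ON o.id = i.order_id
        WHERE o.user_id = 42;

        -- Denormalized: user_id repeated on the child, no join needed.
        SELECT * FROM order_items WHERE user_id = 42;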

    Read the article

  • comparing data via a function

    - by tigermain
    I have two sets of data (locations) in separate tables and I need to compare whether they match or not. I have a UDF which performs a calculation based upon 5 values from each table. How do I perform a select with a join using this UDF? My UDF is basically defined by:

        ALTER FUNCTION [dbo].[MatchRanking]
        (
            @Latitude FLOAT, @Longitude FLOAT, @Postcode VARCHAR(16),
            @CompanyName VARCHAR(256), @TelephoneNumber VARCHAR(32),
            @Latitude2 FLOAT, @Longitude2 FLOAT, @Postcode2 VARCHAR(16),
            @CompanyName2 VARCHAR(256), @TelephoneNumber2 VARCHAR(32)
        )
        RETURNS INT
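
    A hedged sketch (the table names and match threshold are assumptions): CROSS APPLY evaluates the scalar UDF once per candidate pair, and the WHERE clause keeps the pairs that rank as matches. Note that a scalar UDF over a full cross join can get expensive on large tables.

        SELECT a.Id, b.Id, r.Ranking
        FROM LocationsA a
        CROSS JOIN LocationsB b
        CROSS APPLY (SELECT dbo.MatchRanking(
                         a.Latitude, a.Longitude, a.Postcode, a.CompanyName, a.TelephoneNumber,
                         b.Latitude, b.Longitude, b.Postcode, b.CompanyName, b.TelephoneNumber) AS Ranking) r
        WHERE r.Ranking >= 3;  -- hypothetical match threshold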

    Read the article

  • Data Modeling of Entity with Attributes

    - by StackOverflowNewbie
    I'm storing some very basic information about "data sources" coming into my application. These data sources can be in the form of a document (e.g. PDF, etc.), audio (e.g. MP3, etc.) or video (e.g. AVI, etc.). Say, for example, I am only interested in the filename of the data source. Thus, I have the following table:

        DataSource
            Id (PK)
            Filename

    For each data source, I also need to store some of its attributes. An example for a PDF would be "number of pages"; an example for audio would be "bit rate"; an example for video would be "duration". Each DataSource will have different requirements for the attributes that need to be stored. So, I have modeled "data source attribute" this way:

        DataSourceAttribute
            Id (PK)
            DataSourceId (FK)
            Name
            Value

    Thus, I would have records like these:

        DataSource->Id = 1; DataSource->Filename = 'mydoc.pdf'
        DataSource->Id = 2; DataSource->Filename = 'mysong.mp3'
        DataSource->Id = 3; DataSource->Filename = 'myvideo.avi'

        DataSourceAttribute->Id = 1; DataSourceAttribute->DataSourceId = 1; DataSourceAttribute->Name = 'TotalPages'; DataSourceAttribute->Value = '10'
        DataSourceAttribute->Id = 2; DataSourceAttribute->DataSourceId = 2; DataSourceAttribute->Name = 'BitRate'; DataSourceAttribute->Value = '16'
        DataSourceAttribute->Id = 3; DataSourceAttribute->DataSourceId = 3; DataSourceAttribute->Name = 'Duration'; DataSourceAttribute->Value = '1:32'

    My problem is that this doesn't seem to scale. For example, say I need to query for all the PDF documents along with their total number of pages:

        Filename, TotalPages
        'mydoc.pdf', '10'
        'myotherdoc.pdf', '23'
        ...

    The JOINs needed to produce the above result are just too costly. How should I address this problem?
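
    One hedged alternative, sketched with assumed names: keep the generic DataSource table for the shared columns, and add one typed subtype table per media kind, so common attributes become real columns and the per-attribute joins disappear.

        CREATE TABLE PdfDataSource (
            DataSourceId INT PRIMARY KEY REFERENCES DataSource(Id),
            TotalPages   INT
        );

        -- The query from the question then needs a single join:
        SELECT ds.Filename, p.TotalPages
        FROM DataSource ds
        JOIN PdfDataSource p ON p.DataSourceId = ds.Id;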

    Read the article

  • PIVOT / UNPIVOT IN SQL 2008

    - by Nev_Rahd
    Hello, I have parent/child tables as below. MasterTable: MasterID, Description. ChildTable: ChildID, MasterID, Description. Using PIVOT / UNPIVOT, how can I get a result like the one below in a single row? If MasterID 1 has x child records: MasterID, ChildID1, Description1, ChildID2, Description2, ..., ChildIDx, Descriptionx. Thanks
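
    A hedged sketch for a fixed maximum number of children: number each master's children with ROW_NUMBER(), then flatten with conditional aggregation (PIVOT handles one value column at a time, so this form is often simpler for ChildID/Description pairs).

        SELECT m.MasterID,
               MAX(CASE WHEN c.rn = 1 THEN c.ChildID END)     AS ChildID1,
               MAX(CASE WHEN c.rn = 1 THEN c.Description END) AS Description1,
               MAX(CASE WHEN c.rn = 2 THEN c.ChildID END)     AS ChildID2,
               MAX(CASE WHEN c.rn = 2 THEN c.Description END) AS Description2
        FROM MasterTable m
        JOIN (SELECT ChildID, MasterID, Description,
                     ROW_NUMBER() OVER (PARTITION BY MasterID ORDER BY ChildID) AS rn
              FROM ChildTable) c ON c.MasterID = m.MasterID
        GROUP BY m.MasterID;

    An unbounded number of children would need dynamic SQL to build the column list.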

    Read the article

  • SQLite long to wide formats?

    - by Stephen
    Hi, I wonder if there is a canonical way to convert data from long to wide format in SQLite (is that operation usually in the domain of relational databases?). I tried to follow this example for MySQL but I guess SQLite does not have the same IF construct... Thanks!
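
    A hedged sketch: SQLite lacks MySQL's IF(), but CASE expressions do the same conditional-aggregation pivot (the long table's layout here is an assumption).

        -- long_table(id, key, value) -> one row per id, one column per key
        SELECT id,
               MAX(CASE WHEN key = 'height' THEN value END) AS height,
               MAX(CASE WHEN key = 'weight' THEN value END) AS weight
        FROM long_table
        GROUP BY id;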

    Read the article

  • Two different tables or just one with bool column?

    - by Aidas
    We have two tables: OriginalDocument and ProcessedDocument. In the first one we put an original, not-yet-processed document. After it's validated and processed (converted to our XML format and parsed), it's put into the Document table. A processed document can be valid or invalid. Which makes more sense: having two different tables for valid and invalid documents, or just one with a 'Valid' column? Some of the columns (~5-7) are irrelevant for invalid documents. Storing both invalid and valid documents would also fill the Document table with NULL columns (if a document is invalid, information like document number and receiver can be unknown). What else should we consider and weigh when making this decision?

    Read the article

  • Access 2007 file picker, replaces all rows with the same choice.

    - by SqlStruggle
    This code is from an Access 2007 project I've been struggling with. The really mean part is the spot where I should put something like "update only the current form's record": DoCmd.RunSQL "Update Korut Set [PikkuKuva]=('" & varFile & "') ;" Could someone please help me with this? If I use it now, it updates all the rows with the same file picked. Here's the whole code:

        ' This requires a reference to the Microsoft Office 11.0 Object Library.
        Dim fDialog As Office.FileDialog
        Dim varFile As Variant
        Dim filePath As String

        ' Set up the File dialog box.
        Set fDialog = Application.FileDialog(msoFileDialogFilePicker)
        With fDialog
            ' Allow the user to make multiple selections in the dialog box.
            .AllowMultiSelect = False
            ' Set the title of the dialog box.
            .Title = "Valitse Tiedosto"
            ' Clear out the current filters, and then add your own.
            .Filters.Clear
            .Filters.Add "All Files", "*.*"
            ' The user picked at least one file. If the .Show method returns
            ' False, the user clicked Cancel.
            If .Show = True Then
                ' Loop through each file that is selected and then add it to the list box.
                For Each varFile In .SelectedItems
                    DoCmd.SetWarnings True
                    DoCmd.RunSQL "Update Korut Set [PikkuKuva]=('" & varFile & "') ;"
                Next
            Else
                MsgBox "You clicked Cancel in the file dialog box."
            End If
        End With
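
    A hedged sketch of the missing piece: the UPDATE needs a WHERE clause tied to the current record's key. Assuming Korut has a numeric primary key column [ID] and the form exposes it as Me.ID (both assumptions), the SQL being built should end up looking like:

        UPDATE Korut
        SET [PikkuKuva] = 'C:\pictures\ring.jpg'  -- varFile is concatenated here
        WHERE [ID] = 123;                         -- Me.ID is concatenated here

    In the VBA, that means appending " WHERE [ID]=" & Me.ID to the RunSQL string, so only the row bound to the open form is touched.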

    Read the article

  • Strange: Planner takes decision with lower cost, but (very) query long runtime

    - by S38
    Facts: PostgreSQL 8.4.2 on Linux; I make use of table inheritance; each table contains 3 million rows; indexes on joining columns are set; table statistics (ANALYZE, VACUUM ANALYZE) are up to date; the only table used is "node", with various partitioned sub-tables; recursive query (pg >= 8.4). Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        QUERY PLAN
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to choose hash joins (good) but no indexes (bad).
    Now after doing the following: SET enable_hashjoin TO false; the explained query looks like this:

        QUERY PLAN
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    Incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.
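
    A hedged aside on planner knobs: rather than disabling hash joins globally, lowering the planner's random-I/O estimate for the session can let the index-driven plan win on cost by itself (values are illustrative, not tuned for this box):

        SET random_page_cost = 2.0;        -- default 4.0; assumes reasonably cached data
        SET effective_cache_size = '2GB';  -- tell the planner how much cache is realistically available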

    Read the article

  • mysql result set joining existing table

    - by Yang
    Is there any way to avoid using a temp table? I am using a query with an aggregate function (SUM) to generate the sum for each product. The result looks like this:

        product_name | sum(qty)
        product_1    | 100
        product_2    | 200
        product_5    | 300

    Now I want to join the above result to another table called products, so that I will have a summary like this:

        product_name | sum(qty)
        product_1    | 100
        product_2    | 200
        product_3    | 0
        product_4    | 0
        product_5    | 300

    I know one way of doing this is to dump the first query's result into a temp table and then join it with the products table. Is there a better way?
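
    A hedged sketch: LEFT JOIN from products to the grouped result as a derived table, which removes the temp-table step (the sales table's name and columns are assumptions).

        SELECT p.product_name, COALESCE(s.total_qty, 0) AS total_qty
        FROM products p
        LEFT JOIN (SELECT product_name, SUM(qty) AS total_qty
                   FROM sales
                   GROUP BY product_name) s
          ON s.product_name = p.product_name;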

    Read the article

  • How To Find Reasons of Why Site Goes Online/Offline

    - by HollerTrain
    Seems today a website I manage has been going online and offline throughout the entire day. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a Wordpress-based site. So here is what I DO know: I use a program that pings the server every minute, and when the server is not responding it emails me, so I know exactly when the site is online and offline. The site went up and down between 8pm and 12am on 12.28, and around the 1am hour early in the morning of 12.29 (New York City timezone, and all times below are in the same timezone). At the times of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site then went offline/online again, so restarting http didn't help much. When the site was going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site is going up/down, this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing this issue. ANY HELP is greatly appreciated.

    Read the article

  • Why would I do an inner join on a non-distinct field?

    - by froadie
    I just came across a query that does an inner join on a non-distinct field. I've never seen this before and I'm a little confused about this usage. Something like: SELECT distinct all, my, stuff FROM myTable INNER JOIN myOtherTable ON myTable.nonDistinctField = myOtherTable.nonDistinctField (WHERE some filters here...) I'm not quite sure what my question is or how to phrase it, or why exactly this confuses me, but I was wondering if anyone could explain why someone would need to do an inner join on a non-distinct field and then select only distinct values...? Is there ever a legitimate use of an inner join on a non-distinct field? What would be the purpose? And if there's is a legitimate reason for such a query, can you give examples of where it would be used?
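
    A hedged illustration with hypothetical tables: joining on a non-unique column is an ordinary many-to-many join, and DISTINCT then collapses the multiplied rows; the join exists to filter, not to line up unique keys.

        -- region is unique in neither table, so each order can match many shipments
        SELECT DISTINCT o.customer_name
        FROM orders o
        INNER JOIN shipments s ON s.region = o.region
        WHERE s.status = 'delayed';  -- keep customers in regions with delayed shipments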

    Read the article

  • SQL query to get latest record for all distinct items in a table

    - by David Buckley
    I have a table of all sales defined like:

        mysql> describe saledata;
        +----------+---------------------+------+-----+---------+-------+
        | Field    | Type                | Null | Key | Default | Extra |
        +----------+---------------------+------+-----+---------+-------+
        | SaleDate | datetime            | NO   |     | NULL    |       |
        | StoreID  | bigint(20) unsigned | NO   |     | NULL    |       |
        | Quantity | int(10) unsigned    | NO   |     | NULL    |       |
        | Price    | decimal(19,4)       | NO   |     | NULL    |       |
        | ItemID   | bigint(20) unsigned | NO   |     | NULL    |       |
        +----------+---------------------+------+-----+---------+-------+

    I need to get the last sale price for all items (as the price may change). I know I can run a query like:

        SELECT price FROM saledata
        WHERE itemID = 1234 AND storeID = 111
        ORDER BY saledate DESC LIMIT 1

    However, I want to be able to get the last sale price for all items (the ItemIDs are stored in a separate item table) and insert them into a separate table. How can I get this data? I've tried queries like this:

        SELECT storeID, itemID, price FROM saledata
        WHERE itemID IN (SELECT itemID from itemmap)
        ORDER BY saledate DESC LIMIT 1

    and then wrap that into an insert, but it's not getting the proper data. Is there one query I can run to get the last price for each item and insert that into a table defined like:

        mysql> describe lastsale;
        +---------+---------------------+------+-----+---------+-------+
        | Field   | Type                | Null | Key | Default | Extra |
        +---------+---------------------+------+-----+---------+-------+
        | StoreID | bigint(20) unsigned | NO   |     | NULL    |       |
        | Price   | decimal(19,4)       | NO   |     | NULL    |       |
        | ItemID  | bigint(20) unsigned | NO   |     | NULL    |       |
        +---------+---------------------+------+-----+---------+-------+
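
    A hedged sketch of the standard latest-row-per-group pattern: compute each (store, item) pair's last sale date, then join back to pick up the price at that date (note that ties on SaleDate would produce duplicate rows).

        INSERT INTO lastsale (StoreID, Price, ItemID)
        SELECT s.StoreID, s.Price, s.ItemID
        FROM saledata s
        JOIN (SELECT StoreID, ItemID, MAX(SaleDate) AS LastDate
              FROM saledata
              GROUP BY StoreID, ItemID) m
          ON  m.StoreID  = s.StoreID
          AND m.ItemID   = s.ItemID
          AND m.LastDate = s.SaleDate;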

    Read the article

  • mysql query for getting all messages that belong to user's contacts

    - by aharon
    So I have a database that is set up sort of like this (simplified, and in terms of the tables, all are InnoDBs): Users: contains basic user authentication information (uid, username, encrypted password, et cetera). Contacts: contains two rows per relationship that exists between users, as (uid1, uid2), (uid2, uid1), to allow for a good 1:1 relationship (must be mutual) between users. Messages: has messages that consist of a blob, owner-id, message-id (auto_increment). So my question is, what's the best MySQL query to get all messages that belong to all the contacts of a specific user? Is there an efficient way to do this?
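
    Since contact rows are stored in both directions, a single join should suffice; a hedged sketch with assumed snake_case column names:

        SELECT m.*
        FROM contacts c
        JOIN messages m ON m.owner_id = c.uid2
        WHERE c.uid1 = 42;  -- the specific user's uid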

    Read the article

  • Linq2SQL vs NHibernate performance (have I gone mad?)

    - by HeavyWave
    I have written the following tests to compare the performance of Linq2SQL and NHibernate, and I find the results to be somewhat strange. Mappings are straightforward and identical for both. Both are running against a live DB. I'm not deleting Campaigns in the case of Linq, but that shouldn't affect performance by more than 10 ms. Linq:

        [Test]
        public void Test1000ReadsWritesToAgentStateLinqPrecompiled()
        {
            Stopwatch sw = new Stopwatch();
            Stopwatch swIn = new Stopwatch();
            sw.Start();
            for (int i = 0; i < 1000; i++)
            {
                swIn.Reset();
                swIn.Start();
                ReadWriteAndDeleteAgentStateWithLinqPrecompiled();
                swIn.Stop();
                Console.WriteLine("Run ReadWriteAndDeleteAgentState: " + swIn.ElapsedMilliseconds + " ms");
            }
            sw.Stop();
            Console.WriteLine("Total Time: " + sw.ElapsedMilliseconds + " ms");
            Console.WriteLine("Average time to execute queries: " + sw.ElapsedMilliseconds / 1000 + " ms");
        }

        private static readonly Func<AgentDesktop3DataContext, int, EntityModel.CampaignDetail> GetCampaignById =
            CompiledQuery.Compile<AgentDesktop3DataContext, int, EntityModel.CampaignDetail>(
                (ctx, sessionId) => (from cd in ctx.CampaignDetails
                                     join a in ctx.AgentCampaigns on cd.CampaignDetailId equals a.CampaignDetailId
                                     where a.AgentStateId == sessionId
                                     select cd).FirstOrDefault());

        private void ReadWriteAndDeleteAgentStateWithLinqPrecompiled()
        {
            int id = 0;
            using (var ctx = new AgentDesktop3DataContext())
            {
                EntityModel.AgentState agentState = new EntityModel.AgentState();
                var campaign = new EntityModel.CampaignDetail { CampaignName = "Test" };
                var campaignDisposition = new EntityModel.CampaignDisposition { Code = "123" };
                campaignDisposition.Description = "abc";
                campaign.CampaignDispositions.Add(campaignDisposition);
                agentState.CallState = 3;
                campaign.AgentCampaigns.Add(new AgentCampaign { AgentState = agentState });
                ctx.CampaignDetails.InsertOnSubmit(campaign);
                ctx.AgentStates.InsertOnSubmit(agentState);
                ctx.SubmitChanges();
                id = agentState.AgentStateId;
            }
            using (var ctx = new AgentDesktop3DataContext())
            {
                var dbAgentState = ctx.GetAgentStateById(id);
                Assert.IsNotNull(dbAgentState);
                Assert.AreEqual(dbAgentState.CallState, 3);
                var campaignDetails = GetCampaignById(ctx, id);
                Assert.AreEqual(campaignDetails.CampaignDispositions[0].Description, "abc");
            }
            using (var ctx = new AgentDesktop3DataContext())
            {
                ctx.DeleteSessionById(id);
            }
        }

    NHibernate (the loop is the same):

        private void ReadWriteAndDeleteAgentState()
        {
            var id = WriteAgentState().Id;
            StartNewTransaction();
            var dbAgentState = agentStateRepository.Get(id);
            Assert.IsNotNull(dbAgentState);
            Assert.AreEqual(dbAgentState.CallState, 3);
            Assert.AreEqual(dbAgentState.Campaigns[0].Dispositions[0].Description, "abc");
            var campaignId = dbAgentState.Campaigns[0].Id;
            agentStateRepository.Delete(dbAgentState);
            NHibernateSession.Current.Transaction.Commit();
            Cleanup(campaignId);
            NHibernateSession.Current.BeginTransaction();
        }

    Results:

        NHibernate: Total Time: 9469 ms. Average time to execute 13 queries: 9 ms.
        Linq: Total Time: 127200 ms. Average time to execute 13 queries: 127 ms.

    Linq lost by 13.5 times! Even with precompiled queries (both read queries are precompiled). This can't be right; although I expected NHibernate to be faster, this is just too big a difference, considering the mappings are identical and NHibernate actually executes more queries against the DB.

    Read the article

  • many-to-many query

    - by kofto4ka
    Hello, guys! I have a problem and I don't know what the better solution is. Okay, I have 2 tables: posts(id, title) and posts_tags(post_id, tag_id). I have the following task: I must select posts with tag ids, for example, 4, 10 and 11. Not exactly those only: a post could have any other tags at the same time. So, how can I do this in an optimized way? Creating a temporary table in each query? Or maybe some kind of stored procedure? In the future, a user could ask the script to select posts with any number of tags (it could be 1 tag only, or 10 at the same time), and I must be sure that the method I choose will be the best method for my problem. Sorry for my English, thanks for your attention.
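
    A hedged sketch of the usual relational-division form, which extends to any number of tags without a temp table (pass the tag list and its count):

        SELECT p.id, p.title
        FROM posts p
        JOIN posts_tags pt ON pt.post_id = p.id
        WHERE pt.tag_id IN (4, 10, 11)
        GROUP BY p.id, p.title
        HAVING COUNT(DISTINCT pt.tag_id) = 3;  -- 3 = number of tags requested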

    Read the article

  • Help on understanding multiple columns on an index?

    - by Xaisoft
    Assume I have a table called "table" with 3 columns: a, b, and c. What does it mean to have a non-clustered index on columns (a, b)? Is a nonclustered index on (a, b) the same as a nonclustered index on (b, a)? (Note the order.) Also, is a nonclustered index on column a the same as a nonclustered index on (a, c)? I was looking at the website SQL Server Performance, and they had these DMV scripts which would tell you if you had overlapping indexes; I believe it was saying that having an index on a is the same as (a, b), so it is redundant. Is this true about indexes? One last question: why is the clustered index put on the primary key? Most of the time the primary key is not queried against, so shouldn't the clustered index be on the most-queried column? I am probably missing something here, like having it on the primary key speeds up joins? Great explanations. Should I turn this into a wiki and change the title to "index explanation"?
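
    A hedged illustration of why column order matters: the leading column of a composite index determines which predicates it can seek on.

        CREATE NONCLUSTERED INDEX ix_t_a_b ON dbo.t (a, b);

        -- Can seek:    WHERE a = 1           (leading column alone)
        -- Can seek:    WHERE a = 1 AND b = 2
        -- Cannot seek: WHERE b = 2           (b is not the leading column;
        --              an index on (b, a) would be needed for this)

    This is also why an index on (a) alone is redundant next to one on (a, b): the composite index already serves every seek the single-column index could.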

    Read the article

  • Stored Procedure in Entity Framework

    - by kamal
    Hi, I added a stored procedure to my Entity Framework model and imported the functions in the edmx. Is it a must to add all three functions (insert, update, and delete) to a table? I tried with insert alone, and also with all of them, but why can't I get the name of the stored procedure in the connection string? Let me explain what I did: I added the SP, I imported the functions in the model browser, and I mapped the insert, update and delete functions to the table, with a return type only for insert and update. Still I can't get the name of the SP in the connection string. Please let me know how I can resolve this issue. Thanks in advance, Kamal.

    Read the article

  • php using wamp server start up error

    - by mathirengasamy
    I'm trying to install the Moodle web software. I'm using WampServer and SQL Server 2005. I installed the PHP driver for the PHP 5.3.0 thread-safe version: I pasted the php_sqlsrv_ts.dll driver file into my PHP ext directory and included the line extension=php_sqlsrv_ts.dll in my php.ini file. Now when I restart my WampServer I get this error: PHP Startup: sqlsrv: Unable to initialize module. Module compiled with module API=20060613. PHP compiled with module API=20090626. These options need to match. I also get this error in my Apache log file: ADODB Error: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified. Please, anybody, help me...

    Read the article
