Search Results

Search found 25547 results on 1022 pages for 'table locking'.

Page 472/1022

  • Nested WPF DataGrids

    - by jvberg
    I have a DataGrid (from the toolkit) and I want to nest another DataGrid in the DataGrid.RowDetailsTemplate. The trick is I want to bring back the data from one table in the main grid and then, based on row selection, go and get additional detail from a different table and show it in the DataGrid in the detail template. This is easy enough to do in 2 separate DataGrids, but I am having trouble getting it to work with the nested version. Is this even possible? If so, could someone point me in the right direction? I should note I am using LinqToSql classes to populate the data. Thanks for your consideration. -Joel

    Read the article

  • SQL: many-to-many relationship, IN condition

    - by Maarten
    I have a table called transactions with a many-to-many relationship to items through the items_transactions table. I want to do something like this:

        SELECT "transactions".*
        FROM "transactions"
        INNER JOIN "items_transactions" ON "items_transactions".transaction_id = "transactions".id
        INNER JOIN "items" ON "items".id = "items_transactions".item_id
        WHERE (items.id IN (<list of items>))

    But this gives me all transactions that have one or more of the items in the list associated with them, and I only want the transactions that are associated with all of those items. Any help would be appreciated.
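    A minimal sketch of one common approach (relational division), assuming the ids 1, 2, 3 stand in for <list of items>:

        -- Keep only transactions linked to ALL of the listed items.
        SELECT t.id
        FROM transactions t
        INNER JOIN items_transactions it ON it.transaction_id = t.id
        WHERE it.item_id IN (1, 2, 3)
        GROUP BY t.id
        HAVING COUNT(DISTINCT it.item_id) = 3;  -- must equal the number of ids in the list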

    Read the article

  • Repeating fields in similar database tables

    - by user1738833
    I have been tasked with working on a database that I have never seen before, and I'm looking at the DB structure. Some of the central, most heavily queried and joined tables look like virtual duplicates of each other. Here's a massively simplified representation of the situation, with business-sensitive information changed, listing hypothetical table names and fields:

        TopLevelGroup: PK_TLGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
        SubGroup:      PK_SubGroupId, FK_ParentTopLevelGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK
        SubSubGroup:   PK_SubSubGroupId, FK_ParentSubGroupId, DisplaysXOnBill, DisplaysYOnBill, IsInvoicedForJ, IsInvoicedForK

    I haven't listed the types of the fields, as I don't think it's particularly important to the situation. In addition, it's worth saying that rather than the four repeated fields in the example above, I'm looking at 86 repeated fields.

    For the most part, those fields genuinely do represent "facts" about the primary table entity, so it's not automatically wrong for that reason. In addition, the "groups" represented here have a property inheritance relationship: if DisplaysXOnBill is NULL in the SubSubGroup, it takes the value of DisplaysXOnBill from its parent, the SubGroup, and so on up to the TopLevelGroup. Further, the requirements will never require that the model extends beyond three levels, so there is no need for flexibility in that area.

    Is there a design smell from several tables which describe very similar entities having almost identical fields? If so, what might be a better design of the example above? I'm using the phrase "design smell" to indicate a possible problem; of course, in any given situation, a particular design might well be the best solution. I'm looking for a more general answer - wondering what might be wrong with this design and what might be the better design were that the case.

    Possibly related, but not primary, questions:

    - Is this database schema in a reasonably normal form (e.g. to 3NF), insofar as can be told from the information I've provided? I can't see a problem with the requirements of 2NF and 3NF, except in their inheriting the requirements of 1NF. Is 1NF satisfied, though? Are repeating groups allowed in different tables?
    - Is there a best-practice method for implementing the inheritance relationship in a database as I require? The method above feels clunky to me, because any query on the SubSubGroup necessarily needs to join onto the SubGroup and TopLevelGroup tables to collect inherited facts, which can make even trivial queries requiring facts from the SubSubGroup rather long-winded (see the sketch below).

    There are, of course, political considerations to making a relatively large change like this. For the purpose of this question, I'm happy to ignore that in the interests of keeping the answers ring-fenced to the technical problem.
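    For the inheritance lookup specifically, a minimal sketch of the COALESCE pattern, using the table and column names above (a query of this shape is what every SubSubGroup read currently implies):

        -- Resolve the effective DisplaysXOnBill for each SubSubGroup,
        -- walking up the hierarchy until a non-NULL value is found.
        SELECT ssg.PK_SubSubGroupId,
               COALESCE(ssg.DisplaysXOnBill,
                        sg.DisplaysXOnBill,
                        tlg.DisplaysXOnBill) AS EffectiveDisplaysXOnBill
        FROM SubSubGroup ssg
        JOIN SubGroup sg       ON sg.PK_SubGroupId = ssg.FK_ParentSubGroupId
        JOIN TopLevelGroup tlg ON tlg.PK_TLGroupId = sg.FK_ParentTopLevelGroupId;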

    Read the article

  • Using memtables in SQL. When is it reasonable and is it safe?

    - by Spiros
    I was just reading an update from a friend's project, mentioning the use of memtables to store data temporarily and then flush it to a table on disk. Up to now, I have never faced a situation where I would use a memtable, or where I would think the use of one would be beneficial; so I wonder: when would someone use memtables? What makes a memtable (apart from access speed) a reasonable choice? And how safe is it, even for temp data? There is always the limitation of available physical memory.
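    As a concrete illustration, a minimal sketch using MySQL's MEMORY engine (one common meaning of "memtable"); the table and column names are hypothetical:

        -- Staging table held entirely in RAM; its contents are lost on server restart.
        CREATE TABLE staging_events (
            id      INT NOT NULL,
            payload VARCHAR(255)
        ) ENGINE = MEMORY;

        -- Periodically flush to a durable table, then clear the staging area.
        INSERT INTO events_archive (id, payload)
        SELECT id, payload FROM staging_events;
        TRUNCATE TABLE staging_events;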

    Read the article

  • Sorting/Paginating/Filtering Complex Multi-AR Object Tables in Rails

    - by Matt Rogish
    I have a complex table pulled from a multi-ActiveRecord object array. This listing is a combined display of all of a particular user's "favorite" items (songs, messages, blog postings, whatever). Each of these items is a full-fledged AR object. My goal is to present the user with a simplified search, sort, and pagination interface. The user need not know that the Song has a singer and that the Message has an author -- to the end user both entries in the table will be displayed as "User". Thus, the search box will simply be a dropdown list asking them which field to search on (user name, created at, etc.). Internally, I would need to convert that to the appropriate object search, combine the results, and display. I can, separately, do pagination (mislav will_paginate), sorting, and filtering, but I'm having some problems combining them.

    For example, if I paginate the combined list of items, the pagination plugin handles it just fine. It is not efficient, since the pagination is happening in the app vs. the DB, but let's assume the intended use-case indicates the vast majority of users will have fewer than 30 favorited items, and all other behavior, server capabilities, etc. indicate this will not be a bottleneck.

    However, if I wish to sort the list, I cannot sort it via the pagination plugin, because it relies on the assumption that the result set is derived from a single SQL query and that the field name is consistent throughout. Thus, I must sort the merged array via Ruby, e.g. @items.sort_by { |i| i.whatever }. But since the items do not share common names, I must first interrogate the object and then call the correct sort key. For example, if the user wishes to sort by user name: if the sorted object is a message, I sort by author, but if the object is a song, I sort by singer. This is all very gross and feels quite un-Ruby-like.

    This same problem comes into play with the filter. If the user filters on the "parent item" (the message's thread, the song's album), I must translate that to the appropriate collection object method. Also gross.

    This is not the exact set-up but is close enough. Note that this is a legacy app, so changing it is quite difficult, although not impossible. Also, yes, there is some DRY that can be done, but don't focus on the style or elegance of the following code. Style/elegance of the SOLUTION is important, however! :D

    models:

        class User < ActiveRecord::Base
          ...
          has_and_belongs_to_many :favorite_messages, :class_name => "Message"
          has_and_belongs_to_many :favorite_songs, :class_name => "Song"
          has_many :authored_messages, :class_name => "Message"
          has_many :sung_songs, :class_name => "Song"
        end

        class Message < ActiveRecord::Base
          has_and_belongs_to_many :favorite_messages
          belongs_to :author, :class_name => "User"
          belongs_to :thread
        end

        class Song < ActiveRecord::Base
          has_and_belongs_to_many :favorite_songs
          belongs_to :singer, :class_name => "User"
          belongs_to :album
        end

    controller:

        def show
          u = User.find 123
          @items = Array.new
          @items << u.favorite_messages
          @items << u.favorite_songs
          # etc. etc.
          @items.flatten!
          @items = @items.sort_by { |i| i.created_at }
          @items = @items.paginate :page => params[:page], :per_page => 20
        end

        def search
          # Assume user is searching for username like 'Bob'
          u = User.find 123
          @items = Array.new
          @items << u.favorite_messages.find(:all, :conditions => "LOWER( author ) LIKE LOWER('%bob%')")
          @items << u.favorite_songs.find(:all, :conditions => "LOWER( singer ) LIKE ... ")
          # etc. etc.
          @items.flatten!
          @items = @items.sort_by { |i| ... } # determine appropriate sorting based on user selection
          @items = @items.paginate :page => params[:page], :per_page => 20
        end

    view:

        #index.html.erb
        ...
        <table>
          <tr>
            <th>Title (sort ASC/DESC links)</th>
            <th>Created By (sort ASC/DESC links)</th>
            <th>Collection Title (sort ASC/DESC links)</th>
            <th>Created At (sort ASC/DESC links)</th>
          </tr>
          <% @items.each do |item| %>
            <%= render :partial => "message", :locals => { :item => item } if item.is_a? Message %>
            <%= render :partial => "song", :locals => { :item => item } if item.is_a? Song %>
          <% end %>
        ...
        </table>

        #message.html.erb
        # shorthand, not real ruby
        # print out message title, author name, thread title, message created at

        #song.html.erb
        # shorthand
        # print out song title, singer name, album title, song created at

    Read the article

  • LINQ DataLoadOptions - Retrieval of data via Fulltext/Broken Foreign Key Relationship

    - by Alex
    I've hit a brick wall. I have a SQL function:

        FUNCTION [dbo].[ContactsFTS] (@searchtext nvarchar(4000))
        RETURNS TABLE
        AS
        RETURN
            SELECT *
            FROM Contacts
            INNER JOIN CONTAINSTABLE(Contacts, *, @searchtext) AS KEY_TBL
                ON Contacts.Id = KEY_TBL.[KEY]

    which I am calling via LINQ:

        public IQueryable<ContactsFTSResult> SearchByFullText(String searchText)
        {
            return db.ContactsFTS(searchText);
        }

    I am projecting the ContactsFTSResult collection into a List<Contact>, which is then given to my viewmodel. Here is the problem: my Contacts table (and therefore the Contact object created via LINQ to SQL) has multiple FK relationships to other information; for example, Contact.BillingAddressId is an FK to an Address.Id. That information is missing after I do the fulltext search (e.g. if I try to access Contact.BillingAddress it is null). Can I add this information somehow via DataLoadOptions? I tried LoadWith<Contact>(c => c.BillingAddress), but this doesn't work, I assume because I'm calling the function instead of doing the whole query via LINQ.

    Read the article

  • SQL: Find difference between dates with grouping

    - by ajbeaven
    I have a problem that seems similar to this fellow - I just want to display the data slightly differently. I'm pretty terrible with SQL so can't modify it to suit, but perhaps someone else can. My table looks similar to this (date format is dd/mm/yyyy):

        ID  User  Date_start  Role
        1   Andy  01/04/2010  A
        2   Andy  10/04/2010  B
        3   Andy  20/04/2010  A
        4   John  02/05/2010  A

    I want to show the total number of days that anyone was in a certain role. Users stay in the role until there is another entry into the table. Users can only be in one role at a time. So the summary data would look like this (assuming that the date is 04/05/2010):

        A: 26 days
        B: 10 days

    Thanks for any help :)
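    For reference, a minimal sketch of one approach (MySQL-flavoured, assuming the table is named role_history and '2010-05-04' stands in for the current date); on engines with window functions, LEAD() does the same job more directly:

        -- A stint lasts from its start date until that user's next entry (or today).
        SELECT r.Role,
               SUM(DATEDIFF(
                   COALESCE((SELECT MIN(n.Date_start)
                             FROM role_history n
                             WHERE n.User = r.User
                               AND n.Date_start > r.Date_start),
                            '2010-05-04'),
                   r.Date_start)) AS days_in_role
        FROM role_history r
        GROUP BY r.Role;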

    Read the article

  • SQL select statement

    - by kwokwai
    Hi all, I have a table with two fields, Point and Level, with some sample data as follows:

        Point | Level
        ------+--------
        10    | Level 1
        20    | Level 2
        30    | Level 3
        40    | Level 4

    Suppose there is a user who has 25 points. To find the Level this user is in, the SELECT statement I used was:

        SELECT Level FROM Table WHERE Point < 30 AND Point > 20;

    But this SELECT statement is hard-coded: the points 30 and 20 are fixed in the query. I want to alter the statement so that it can be applied to all users with different point totals, but I don't know how to do it.
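    A minimal sketch of one common pattern, assuming the table is named levels and the user variable @user_points holds the user's score (LIMIT is MySQL/PostgreSQL syntax):

        -- Highest level whose point threshold the user has reached.
        SELECT Level
        FROM levels
        WHERE Point <= @user_points
        ORDER BY Point DESC
        LIMIT 1;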

    Read the article

  • How to access stdClass variables: stdClass Object ( [max(id)] => 64 )

    - by Theopile
    I need the very last valid entry in a database table, which would be the row with the greatest primary key. So using mysqli, my query is "SELECT MAX(id) FROM table LIMIT 1". This query returns the correct number (using print_r()), but I cannot figure out how to access it. Here is the main code; note that $this->link refers to a class with a mysqli connection:

        $q = "select max(id) from stones limit 1";
        $qed = $this->link->query($q) or die(mysqli_error());
        if ($qed) {
            $row = $qed->fetch_object();
            print_r($row);
            echo $lastid = $row;  // here is the problem
        }

    The valid line print_r($row) echoes out "stdClass Object ( [max(id)] => 68 )".
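    One minimal fix on the SQL side is to alias the aggregate so the result object exposes a plain property name (the alias max_id below is illustrative):

        -- The object returned by fetch_object() then has $row->max_id.
        SELECT MAX(id) AS max_id FROM stones LIMIT 1;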

    Read the article

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (10 x 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (along with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write with them in order to speed up my data handling.

    As far as the PostgreSQL table is concerned, the overall record count is 140 million, and I have 5 tables referring to it via primary/foreign keys. I am not using joins, as that is not scalable; so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into the table and its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1    (primary key & indexed)
        select q_t from x.qt where q_id=2      (non-primary key but indexed)

    (and similarly, five such queries). Each computer writes two HDF5 files, so the total count comes to around 20 files.

    Some calculations and statistics:

        Total number of records:               143,700,000
        Total number of records per file:      143,700,000 / 20 = 7,185,000
        Total number of entries in each file:  7,185,000 * 5 = 35,925,000

    Current PostgreSQL database config: my current machine is 8 GB RAM with an i7 2nd-generation processor. I made changes to the following in the postgresql configuration file: shared_buffers: 2 GB, effective_cache_size: 4 GB.

    Note on current performance: I have run it for about ten hours, and the total number of records written for each file is about 621,000 * 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed the job will take about 11 days, which is too long for my experiments. Please suggest how I can improve this. Questions:

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my postgresql configuration file and increase the RAM, will it enhance the process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V
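    One commonly suggested first step is to batch the per-record lookups instead of issuing six single-row queries per record. A minimal PostgreSQL sketch, assuming the client collects ids in batches (the q_id join column on x.train is an assumption inferred from the description, not confirmed by it):

        -- Fetch a whole batch of training rows in one round trip.
        SELECT * FROM x.train WHERE tr_id = ANY(ARRAY[1, 2, 3, 4, 5]);

        -- Or let the planner walk both indexes once per batch instead of once per row.
        SELECT t.*, q.q_t
        FROM x.train t
        JOIN x.qt q ON q.q_id = t.q_id
        WHERE t.tr_id BETWEEN 1 AND 10000;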

    Read the article

  • MySQL: updating a row and deleting the original in case it becomes a duplicate

    - by Silvio Donnini
    I have a simple table made up of two columns: col_A and col_B. The primary key is defined over both. I need to update some rows and assign to col_A values that may generate duplicates, for example:

        UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70

    This statement sometimes yields a duplicate key error. I don't want to simply ignore the error with UPDATE IGNORE, because then the rows that generate the error would remain unchanged. Instead, I want them to be deleted when they would conflict with another row after they have been updated. I'd like to write something like:

        UPDATE `table` SET `col_A` = 66 WHERE `col_B` = 70 ON DUPLICATE KEY REPLACE

    which unfortunately isn't legal SQL, so I need help finding another way around it. Also, I'm using PHP and could consider a hybrid solution (i.e. part query, part PHP code), but keep in mind that I have to perform this updating operation many millions of times. Thanks for your attention, Silvio

    Reminder: UPDATE's syntax has problems with joins with the same table that is being updated.
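    A minimal sketch of one workaround in MySQL: inside a transaction, first delete the rows whose updated key would collide with an already-existing row (a multi-table DELETE with a self-join is allowed, unlike the UPDATE case noted in the reminder), then run the plain UPDATE. It assumes updated rows only ever collide with pre-existing rows, not with each other:

        START TRANSACTION;

        -- Drop rows whose new key (66, col_B) would duplicate an existing row.
        DELETE t1
        FROM `table` t1
        JOIN `table` t2 ON t2.col_A = 66 AND t2.col_B = t1.col_B
        WHERE t1.col_B = 70 AND t1.col_A <> 66;

        -- The surviving rows can now be updated without a duplicate key error.
        UPDATE `table` SET col_A = 66 WHERE col_B = 70;

        COMMIT;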

    Read the article

  • Future-proof primary key design in PostgreSQL

    - by John P
    I've always used either auto-generated columns or sequences in the past for my primary keys. With the current system I'm working on, there is the possibility of having to eventually partition the data, which has never been a requirement in the past. Knowing that I may need to partition the data in the future, is there any advantage to using UUIDs for PKs instead of the database's built-in sequences? If so, is there a design pattern that can safely generate relatively short keys (say 6 characters instead of the usual long form, e.g. e6709870-5cbc-11df-a08a-0800200c9a66)? 36^6 keys per table is more than sufficient for any table I could imagine. I will be using the keys in URLs, so conciseness is important.
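    On the conciseness point, a hypothetical PostgreSQL sketch that base36-encodes a plain sequence into short keys; note it still relies on a central sequence, so it addresses key length, not distributed generation (where UUIDs shine):

        CREATE SEQUENCE short_key_seq;

        CREATE OR REPLACE FUNCTION next_short_key() RETURNS text AS $$
        DECLARE
            chars  text := '0123456789abcdefghijklmnopqrstuvwxyz';
            n      bigint := nextval('short_key_seq');
            result text := '';
        BEGIN
            -- Repeatedly take the least significant base36 digit.
            LOOP
                result := substr(chars, (n % 36)::int + 1, 1) || result;
                n := n / 36;
                EXIT WHEN n = 0;
            END LOOP;
            RETURN lpad(result, 6, '0');  -- fixed 6-character width, e.g. '00001z'
        END;
        $$ LANGUAGE plpgsql;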

    Read the article

  • How to create a backup from SqlAlchemy?

    - by swilliams
    I'm writing a Pylons app, and am trying to create a simple backup system where every table is serialized and tarred up into a single file for an administrator to download, and used to restore the app should something bad happen. I can serialize my table data just fine using the SqlAlchemy serializer, and I can deserialize it fine as well, but I can't figure out how to commit those changes back to the database. In order to serialize my data I am doing this:

        from myproject.model.meta import Session
        from sqlalchemy.ext.serializer import loads, dumps

        q = Session.query(MyTable)
        serialized_data = dumps(q.all())

    In order to test things out, I go ahead and truncate MyTable, and then attempt to restore using serialized_data:

        from myproject.model import meta

        restore_q = loads(serialized_data, meta.metadata, Session)

    This doesn't seem to do anything... I've tried calling Session.commit() after the fact, and individually walking through all the objects in restore_q and adding them, but nothing seems to work. What am I missing? Or is there a better way to do what I'm aiming for? I don't want to shell out and directly touch the database, since SqlAlchemy supports different database engines.

    Read the article

  • PHP: Strange behaviour while calling custom php functions

    - by baltusaj
    I am facing strange behaviour while coding in PHP with Flex. Let me explain the situation. I have two functions, let's say:

        populateTable()  // puts some data in a table made with Flex
        createXML()      // creates an xml file which is used by Fusion Charts to create a chart

    Now, if I call populateTable() alone, the table gets populated with data, but if I call it together with createXML(), the table doesn't get populated while createXML() does its work, i.e. creates an xml file. Even if I run the following code, only the xml file gets generated and the table remains empty, even though I call populateTable() before createXML(). Any idea what may be going wrong?

    MXML Part

        <mx:HTTPService id="userRequest" url="request.php" method="POST" resultFormat="e4x">
            <mx:request xmlns="">
                <getResult>send</getResult>
            </mx:request>
        </mx:HTTPService>

    and

        <mx:DataGrid id="dgUserRequest" dataProvider="{userRequest.lastResult.user}"
                     x="28.5" y="36" width="525" height="250">
            <mx:columns>
                <mx:DataGridColumn headerText="No." dataField="no"/>
                <mx:DataGridColumn headerText="Name" dataField="name"/>
                <mx:DataGridColumn headerText="Age" dataField="age"/>
            </mx:columns>
        </mx:DataGrid>

    PHP Part

        <?php
        //--------------------------------------------------------------------------
        function initialize($username, $password, $database)
        //--------------------------------------------------------------------------
        {
            # Connect to the database
            $link = mysql_connect("localhost", $username, $password);
            if (!$link) {
                die('Could not connect to the database : ' . mysql_error());
            }

            # Select the database
            $db_selected = mysql_select_db($database, $link);
            if (!$db_selected) {
                die('Could not select the DB : ' . mysql_error());
            }

            populateTable();
            createXML();

            # Close database connection
        }

        //--------------------------------------------------------------------------
        function populateTable()
        //--------------------------------------------------------------------------
        {
            if ($_POST['getResult'] == 'send') {
                $Result = mysql_query("SELECT * FROM session");
                $Return = "<Users>";
                $no = 1;
                while ($row = mysql_fetch_object($Result)) {
                    $Return .= "<user><no>" . $no . "</no><name>" . $row->name . "</name><age>" . $row->age . "</age><salary>" . $row->salary . "</salary></user>";
                    $no = $no + 1;
                }
                $Return .= "</Users>";
                mysql_free_result($Result);
                print($Return);
            }
        }

        //--------------------------------------------------------------------------
        function createXML()
        //--------------------------------------------------------------------------
        {
            $users = array(
                "0" => array("", 0),
                "1" => array("Obama", 0),
                "2" => array("Zardari", 0),
                "3" => array("Imran Khan", 0),
                "4" => array("Ahmadenijad", 0)
            );
            // only Obama and Ahmadenijad are selected; the xml file will contain info related to them only
            $selectedUsers = array(1, 4);

            // Extracting salaries of the selected users
            $size = count($selectedUsers);
            for ($i = 0; $i < $size; $i++) {
                // holds the salary for each selected user separately
                $salary = 0;
                $result = mysql_query("SELECT salary FROM userInfo WHERE name='{$users[$selectedUsers[$i]][0]}'");
                if ($row = mysql_fetch_array($result)) {
                    $salary = $row['salary'];
                }
                $users[$selectedUsers[$i]][1] = $salary;
            }

            // creating the XML string
            $chartContent = "<chart caption=\"Users Vs Salaries\" formatNumberScale=\"0\" pieSliceDepth=\"30\" startingAngle=\"125\">";
            for ($i = 0; $i < $size; $i++) {
                $chartContent .= "<set label=\"" . $users[$selectedUsers[$i]][0] . "\" value=\"" . $users[$selectedUsers[$i]][1] . "\"/>";
            }
            $chartContent .= "<styles>"
                . "<definition>"
                . "<style type=\"font\" name=\"CaptionFont\" size=\"16\" color=\"666666\"/>"
                . "<style type=\"font\" name=\"SubCaptionFont\" bold=\"0\"/>"
                . "</definition>"
                . "<application>"
                . "<apply toObject=\"caption\" styles=\"CaptionFont\"/>"
                . "<apply toObject=\"SubCaption\" styles=\"SubCaptionFont\"/>"
                . "</application>"
                . "</styles>"
                . "</chart>";

            $file_handle = fopen('ChartData.xml', 'w');
            fwrite($file_handle, $chartContent);
            fclose($file_handle);
        }

        initialize("root", "", "hiddenpeak");
        ?>

    Read the article

  • Create Sum of calculated rows in Microsoft Reporting Services

    - by kd7iwp
    This seems like it should be simple, but I can't find anything yet. In Reporting Services I have a table with up to 6 rows that all have calculated values and dynamic visibility. I would like to sum these rows. Basically, I have a number of invoice items and want to show a total. I can't change anything on the DB side, since my stored procedures are used elsewhere in the system. Each row pulls data from a different dataset as well, so I can't do a sum over a single dataset. Can I sum all the rows in a table footer, similar to totaling a number of rows in Excel? It seems very redundant to copy my visibility expression from each row into the footer row just to calculate the sum.

    Read the article

  • Hybrid Graphics on Ubuntu 12.04 switching to discrete

    - by cfstras
    I have a Sony Vaio VPCCB-27FX with hybrid graphics. Using vgaswitcheroo enables me to switch my discrete card off to save power. Now, when I want to switch to the discrete card for performance, my system freezes. I already tried logging out and killing X with service lightdm stop, but still, it freezes as soon as I echo DIS > switch.

    Typing blindly, echo IGD > switch returns me to my console, where it reads [ 179.555171] i915: switched off, but it seems the discrete card never gets switched on... Running echo DDIS > switch gives me the following:

        [540....] [drm:atop_op_jump] *ERROR* atombios stuck in loop for more than 5secs aborting
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing CEE2 (len 62, WS 0, PS 0) @ 0xCEFE
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BBF6 (len 1036, WS 4, PS 0) @ 0xBCF3
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BB8C (len 76, WS 0, PS 0) @ 0xBB94
        [541....] [drm:r600_RING_TEST] *ERROR* radeon: ring test failed (scratch(0x8504)=0xFFFFFFFF)
        [541....] [drm:evergreen_resume] *ERROR* evergreen startup failed on resume

    After that, the atombios part repeats a few times. Also, the terminal locks up again, and sysrq+REISUB is my only rescue. Has anybody an idea how I can switch to my discrete card without the system locking up?

        # uname -srvmpio
        Linux 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        # lsb_release -r
        Description: Ubuntu 12.04 LTS

    Read the article

  • Remove a certain value from string which keeps on changing

    - by user2971375
    I'm trying to make a utility to generate an INSERT script for SQL tables along with their related tables. I have all the values in C#. Now I want to remove one column name and its value from the script - most probably the identity column. E.g. given a string like the one I have (which keeps changing with the table name and varies):

        INSERT Core.Customers ([customerId],[customername],[customeradress],[ordernumber])
        VALUES (123, N'Rahul', N'244 LIZ MORN', 2334)

    now I know I have to remove customerId (sometimes it needs to be replaced with @somevariable). Please give me an efficient way to retrieve the customerId value and delete the column name and value. Conditions:

    - The insert script length and column names keep changing.
    - The method should return the deleted value.
    - At some point in time the customer id can be the same as the order id; in that case my string.Remove method fails.

    Read the article

  • Unidirectional OneToMany in Doctrine 2

    - by darja
    I have two Doctrine entities looking like that:

        /**
         * @Entity
         * @Table(name = "Locales")
         */
        class Locale
        {
            /**
             * @Id @Column(type="integer")
             * @GeneratedValue(strategy="IDENTITY")
             */
            private $id;

            /** @Column(length=2, name="Name", type="string") */
            private $code;
        }

        /**
         * @Entity
         * @Table(name = "localized")
         */
        class LocalizedStrings
        {
            /**
             * @Id @Column(type="integer")
             * @GeneratedValue(strategy="IDENTITY")
             */
            private $id;

            /** @Column(name="Locale", type="integer") */
            private $locale;

            /** @Column(name="SomeText", type="string", length=300) */
            private $someText;
        }

    I'd like to create a reference between these entities. LocalizedStrings needs a reference to Locale, but Locale doesn't need a reference to LocalizedStrings. How to write such a mapping via Doctrine 2?

    Read the article

  • Multiple Context menus in PyQt based on mouse location

    - by Nader
    I have a window with multiple tables using QTableWidget (PyQt). I created a popup menu on the right mouse click and it works fine. However, I need to create a different popup menu based on which table the mouse is hovering over at the time the right mouse button is clicked. How can I find out which table the mouse is over - or, put another way, how do I implement a method so as to have a specific context menu based on mouse location? I am using Python and PyQt. My popup menu is developed similar to this code (PedroMorgan's answer from "Qt and context menu"):

        class Foo(QtGui.QWidget):
            def __init__(self):
                QtGui.QWidget.__init__(self, None)

                # Toolbar
                toolbar = QtGui.QToolBar()

                # Actions
                self.actionAdd = toolbar.addAction("New", self.on_action_add)
                self.actionEdit = toolbar.addAction("Edit", self.on_action_edit)
                self.actionDelete = toolbar.addAction("Delete", self.on_action_delete)

                # Tree
                self.tree = QtGui.QTreeView()
                self.tree.setContextMenuPolicy(Qt.CustomContextMenu)
                self.connect(self.tree,
                             QtCore.SIGNAL('customContextMenuRequested(const QPoint&)'),
                             self.on_context_menu)

                # Popup menu
                self.popMenu = QtGui.QMenu(self)
                self.popMenu.addAction(self.actionEdit)
                self.popMenu.addAction(self.actionDelete)
                self.popMenu.addSeparator()
                self.popMenu.addAction(self.actionAdd)

            def on_context_menu(self, point):
                self.popMenu.exec_(self.tree.mapToGlobal(point))

    Read the article

  • Dynamic data-entry value store

    - by simendsjo
    I'm creating a data-entry application where users are allowed to create the entry schema.

    My first version just created a single table per entry schema, with each entry spanning a single column or multiple columns (for complex types) of the appropriate data type. This allowed for "fast" querying (on small datasets, as I didn't index all columns) and simple synchronization where the data entry was distributed over several databases. I'm not quite happy with this solution, though; the only positive thing is the simplicity. I can only store a fixed number of columns, I need to create indexes on all columns, and I need to recreate the table on schema changes.

    Some of my key design criteria:

    - Very fast querying (using a simple domain-specific query language)
    - Writes don't have to be fast
    - Many concurrent users
    - Schemas will change often
    - Schemas might contain many thousands of columns
    - The data entries might be distributed and need synchronization
    - Preferably MySQL and SQLite; databases like DB2 and Oracle are out of the question
    - Using .Net/Mono

    I've been thinking of a couple of possible designs, but none of them seems like a good choice.

    Solution 1: A union-like table containing a Type column and one nullable column per type. This avoids joins, but will definitely use a lot of space.

    Solution 2: A key/value store. All values are stored as strings and converted when needed. This also uses a lot of space, and of course, I hate having to convert everything to string.

    Solution 3: Use an xml database or store values as xml. Without any experience I would think this is quite slow (at least for the relational model, unless there is some very good xpath support). I would also like to avoid an xml database, as other parts of the application fit better as a relational model, and being able to join the data is helpful.

    I cannot help but think that someone has solved (some of) this already, but I'm unable to find anything. Not quite sure what to search for either... I know market research is doing something like this for their questionnaires, but there are few open source implementations, and the ones I've found don't quite fit the bill. PSPP has much of the logic I'm thinking of: primitive column types, many columns, many rows, fast querying and merging. Too bad it doesn't work against a database. And of course, I don't need 99% of the provided functionality, but there is a lot of stuff it doesn't include.

    I'm not sure this is the right place to ask such a design-related question, but I hope someone here has some tips, knows of any existing work, or can point me to a better place to ask. Thanks in advance!
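    For reference, a minimal sketch of one reading of the union-like layout from Solution 1, with hypothetical names (the DDL works in both MySQL and SQLite):

        -- One row per (entry, field); exactly one value column is non-NULL,
        -- selected by value_type, so reads need no string conversion.
        CREATE TABLE entry_values (
            entry_id   INTEGER NOT NULL,
            field_id   INTEGER NOT NULL,
            value_type CHAR(1) NOT NULL,   -- 'i' = int, 't' = text, 'd' = date, ...
            int_val    INTEGER,
            text_val   VARCHAR(255),
            date_val   DATE,
            PRIMARY KEY (entry_id, field_id)
        );

        -- Per-type indexes keep the domain-specific query language fast
        -- without indexing every user-defined column individually.
        CREATE INDEX ix_entry_values_int  ON entry_values (field_id, int_val);
        CREATE INDEX ix_entry_values_text ON entry_values (field_id, text_val);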

    Read the article

  • ado.net slow updating large tables

    - by brett
    The problem: 100,000+ name & address records in an Access table (2003). I need to iterate through the table and update the detail with the output from a third-party DLL. I currently use ADO, and it works at an acceptable speed (less than 5 minutes on a network share). We will soon need to update to Access 2007 and its 'non-Jet' accdb format to maintain compatibility with clients. I've tried using ADO.NET datasets, but updating the records takes hours! We process 5-10 of these tables per day, so this cannot be a solution. Any ideas on the fastest way to update individual records using ADO.NET? Surely we didn't take such a huge backward step with ADO.NET? Any help would be appreciated.

    Read the article

  • Two different tables or just one with bool column?

    - by Aidas
    We have two tables: OriginalDocument and ProcessedDocument. In the first one we put an original, not-yet-processed document. After it's validated and processed (converted to our xml format and parsed), it's put into the ProcessedDocument table. A processed document can be valid or invalid. Which makes more sense: having two different tables for valid and invalid documents, or just one with a 'Valid' column? Some of the columns (~5-7) are irrelevant for invalid documents, so storing both valid and invalid documents together would leave the ProcessedDocument table filled with NULL columns (if a document is invalid, information like the document number or receiver can be unknown). What else should we consider and weigh when making this decision?
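    For the single-table variant, a minimal sketch (SQL Server-flavoured, with hypothetical column names), using a CHECK constraint so the valid-only columns are enforced exactly when the flag is set:

        CREATE TABLE ProcessedDocument (
            Id             INT IDENTITY PRIMARY KEY,
            IsValid        BIT NOT NULL,
            Content        XML NOT NULL,
            DocumentNumber VARCHAR(50)  NULL,  -- unknown for invalid documents
            Receiver       VARCHAR(100) NULL,  -- unknown for invalid documents
            CHECK (IsValid = 0 OR (DocumentNumber IS NOT NULL AND Receiver IS NOT NULL))
        );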

    Read the article

  • How do I improve the efficiency of the queries executed by this generic Linq-to-SQL data access class?

    - by Lee D
    Hi all, I have a class which provides generic access to LINQ to SQL entities, for example:

        class LinqProvider<T> // where T is a L2S entity class
        {
            DataContext context;

            public virtual IEnumerable<T> GetAll()
            {
                return context.GetTable<T>();
            }

            public virtual T Single(Func<T, bool> condition)
            {
                return context.GetTable<T>().SingleOrDefault(condition);
            }
        }

    From the front end, both of these methods appear to work as you would expect. However, when I run a trace in SQL profiler, the Single method is executing what amounts to a SELECT * FROM [Table], and then returning the single entity that meets the given condition. Obviously this is inefficient, and is being caused by GetTable() returning all rows. My question is, how do I get the query executed by the Single() method to take the form SELECT * FROM [Table] WHERE [condition], rather than selecting all rows then filtering out all but one? Is it possible in this context? Any help appreciated, Lee

    Read the article
