Search Results

Search found 38640 results on 1546 pages for 'full table scan'.


  • Need Associated ID Added to a While Loop (php)

    - by user319361
    I've been trying to get my head around while loops for the last few days, but the code seems very inefficient for what I'm trying to achieve. I'm assuming I'm overcomplicating this, though nothing I've tried seems to work. Each topic in my forum can have related topic IDs stored in a separate table. A post ID is also stored in this table, as that specific post explains why the topics are considered related. The DB table contains only: topic_id, related_id, post_id.

        // Get related IDs and post IDs for the current topic being viewed
        $result = $db->query('SELECT related_id, post_id FROM related_topics WHERE topic_id='.$id.'');

        // If related topics are found, put both IDs into arrays
        if ($db->num_rows($result)) {
            while ($cur_related = mysql_fetch_array($result)) {
                $reltopicarray[] = $cur_related['related_id'];
                $relpost[] = $cur_related['post_id'];
            }

            // If the first array isn't empty, get some additional info about each related ID from another table
            if (!empty($reltopicarray)) {
                $pieces = $reltopicarray;
                $glued = "\"".implode('", "', $pieces)."\"";
                $fetchtopics = $db->query('SELECT id, subject, author, image, etc FROM topics WHERE id IN('.$glued.')');
            }

            // Print each related topic
            while ($related = mysql_fetch_array($fetchtopics)) {
        ?>
        <a href="view.php?id=<?php echo $related['id']; ?>"><?php echo $related['subject']; ?></a> by <?php echo $related['author']; ?>
        // I'd like to show the Post ID below (from the array in the first while loop).
        // The link below doesn't work, as I'm outside the first while loop by this point.
        <br /><a href="view.php?post_id=<?php echo $cur_related['post_id']; ?>">View Relationship</a>
        <?php } ?>

    The above currently works, however I'm trying to also display the post_id link below each related topic link, as shown above. Would be grateful if someone can lend a hand. Thanks :)
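
    One way to make the post_id available in the display loop is to pull it back in the same query that fetches the related topic details. This is only a sketch against the table and column names quoted above, with 123 standing in for the current topic's id:

        -- Join the relationship table to the topics table so each row carries
        -- the related topic's details together with the post that explains the link.
        SELECT t.id, t.subject, t.author, r.post_id
        FROM related_topics AS r
        JOIN topics AS t ON t.id = r.related_id
        WHERE r.topic_id = 123;

    With that, a single while loop can print both links per row instead of juggling two separate result sets.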

    Read the article

  • Getting the most recent post based on date

    - by camcim
    Hi guys, how do I go about displaying the most recent post when I have two tables, both containing a creation-date column (created_on)? This would be simple if all I had to do was get the most recent post based on the post's created_on value; however, if a post contains replies I need to factor this into the equation. If a post has a more recent reply I want to get the reply's created_on value, but also get the post's post_id and subject.

    The posts table structure:

        CREATE TABLE `posts` (
          `post_id` bigint(20) unsigned NOT NULL auto_increment,
          `cat_id` bigint(20) NOT NULL,
          `user_id` bigint(20) NOT NULL,
          `subject` tinytext NOT NULL,
          `comments` text NOT NULL,
          `created_on` datetime NOT NULL,
          `status` varchar(10) NOT NULL default 'INACTIVE',
          `private_post` varchar(10) NOT NULL default 'PUBLIC',
          `db_location` varchar(10) NOT NULL,
          PRIMARY KEY (`post_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=7 ;

    The replies table structure:

        CREATE TABLE `replies` (
          `reply_id` bigint(20) unsigned NOT NULL auto_increment,
          `post_id` bigint(20) NOT NULL,
          `user_id` bigint(20) NOT NULL,
          `comments` text NOT NULL,
          `created_on` datetime NOT NULL,
          `notify` varchar(5) NOT NULL default 'YES',
          `status` varchar(10) NOT NULL default 'INACTIVE',
          `db_location` varchar(10) NOT NULL,
          PRIMARY KEY (`reply_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;

    Here is my query so far. I've removed my attempt at extracting the dates.

        $strQuery = "SELECT posts.post_id, posts.created_on, replies.created_on, posts.subject ";
        $strQuery = $strQuery."FROM posts ,replies ";
        $strQuery = $strQuery."WHERE posts.post_id = replies.post_id ";
        $strQuery = $strQuery."AND posts.cat_id = '".$row->cat_id."'";
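
    If the goal is "latest activity per post", one common approach is an outer join to the replies and a comparison of the post's own date with its newest reply date. A sketch under the schema above, with cat_id = 3 standing in for the category being listed:

        -- Latest activity = the newer of the post's own date and its most recent reply.
        SELECT p.post_id,
               p.subject,
               GREATEST(p.created_on, COALESCE(MAX(r.created_on), p.created_on)) AS last_activity
        FROM posts AS p
        LEFT JOIN replies AS r ON r.post_id = p.post_id
        WHERE p.cat_id = 3
        GROUP BY p.post_id, p.subject, p.created_on
        ORDER BY last_activity DESC
        LIMIT 1;

    The LEFT JOIN keeps posts with no replies in play, and COALESCE falls back to the post's own date for them.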

    Read the article

  • How to store unlimited characters in Oracle 11g?

    - by vicky21
    We have a table in Oracle 11g with a varchar2 column. We use a proprietary programming language where this column is defined as a string. At most we can store 2000 characters (4000 bytes) in this column. Now the requirement is that the column needs to store more than 2000 characters (in fact, unlimited characters). The DBAs don't like BLOB or LONG datatypes for maintenance reasons.

    The solution I can think of is to remove this column from the original table and have a separate table for it, and then store each character in a row in order to get unlimited characters. This table will be joined with the original table for queries. Is there any better solution to this problem?

    UPDATE: The proprietary programming language allows defining variables of type string and blob; there is no option of CLOB. I understand the responses given, but I cannot take on the DBAs. I understand that deviating from BLOB or LONG will be a developers' nightmare, but I still cannot help it.
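
    If the overflow has to live in a side table, storing one chunk per row (rather than one character per row) keeps the row count manageable while still allowing arbitrary length. A sketch with hypothetical table and column names, each chunk capped at the existing 2000-character limit:

        -- Hypothetical overflow table: one chunk of up to 2000 characters per row.
        CREATE TABLE long_text_chunks (
          parent_id  NUMBER NOT NULL,   -- key of the row in the original table
          chunk_no   NUMBER NOT NULL,   -- 1, 2, 3, ... in order
          chunk_text VARCHAR2(2000 CHAR),
          CONSTRAINT pk_long_text_chunks PRIMARY KEY (parent_id, chunk_no)
        );

        -- Reassembly is a simple ordered read per parent row.
        SELECT chunk_text FROM long_text_chunks WHERE parent_id = :id ORDER BY chunk_no;

    The application would split the string into 2000-character pieces on write and concatenate them on read.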

    Read the article

  • Why is PHP discriminating between .php and .abc extensions for caching?

    - by Sam
    There seems to be a problem with how the PHP engine handles identical files that differ only in their file extension.

    Problem: "An If-Modified-Since conditional request returned the full content unchanged." I also measured that the .php extension loads much faster than its identical twin with the .xxx extension, even though the file contents are identical and they differ only in their file extension.

    "HTTP allows clients to make conditional requests to see if a copy that they hold is still valid. Since this response has a Last-Modified header, clients should be able to use an If-Modified-Since request header for validation. RED has done this and found that the resource sends a full response even though it hadn't changed, indicating that it doesn't support Last-Modified validation."

    (Screenshots: the homepage ending with .php, and the exact same file but ending with .ast.)

    Given: a home.php file is copied as home.xxx, and this extension is added to .htaccess so it is recognised as a PHP file. The .php file listens to php.ini, where freshness is set to 3 hrs; the non-.php files have to listen to .htaccess, where freshness is set to 2 hrs, according to:

        AddType application/x-httpd-php .php .ast .abc .xxx .etc
        <IfModule mod_headers.c>
            ExpiresActive On
            ExpiresDefault M2419200
            Header unset ETag
            FileETag None
            Header unset Pragma
            Header set Cache-Control "max-age=2419200"
            ##### DYNAMIC PAGES
            <FilesMatch "\\.(ast|php|abc|xxx)$">
                ExpiresDefault M7200
                Header set Cache-Control "public, max-age=7200"
            </FilesMatch>
        </IfModule>

    So far so good and everything loads, except that the non-.php file doesn't cache properly; or, to be more specific, it caches well but doesn't validate well. See the images enclosed. Only the non-.php file extension causes the error and loads slower. The entire page.php loads faster, as all the elements in it then load properly from cache, while page.abc has the full request returned when it ought to be cached, meaning the entire page is slower.

    Bottom line: what should be changed in order to eliminate the If-Modified-Since conditional request returning the full content unchanged?

    Read the article

  • Converting ntext to nvarchar(max) - Getting around size limitation

    - by Overflew
    Hi all, I'm trying to change an existing SQL ntext column to nvarchar(max) and encountering an error on the size limit. There's a large amount of existing data, some of which is more than the 8k limit, I believe. We're looking to convert this so that the field is searchable in LINQ.

    The two SQL statements I've tried are:

        update Table set dataNVarChar = convert(nvarchar(max), dataNtext) where dataNtext is not null

        update Table set dataNVarChar = cast(dataNtext as nvarchar(max)) where dataNtext is not null

    And the error I get is:

        Cannot create a row of size 8086 which is greater than the allowable maximum row size of 8060.

    This is using SQL Server 2008. Any help appreciated, thanks.

    Update / Solution: The marked answer below is correct, and SQL 2008 can change the column to the correct data type in my situation; there are no dramas with the LINQ-utilising application we use on top of it:

        alter table [TBL] alter column [COL] nvarchar(max)

    I've also been advised to follow it up with:

        update [TBL] set [COL] = [COL]

    which completes the conversion by moving the data from the LOB structure into the table (if the length is less than 8k), which improves performance / keeps things proper.

    Read the article

  • gridview image column problem

    - by jame
    Working in VS2005 with C# and ASP.NET. My SQL syntax is:

        if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[Images]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
            drop table [dbo].[Images]
        GO

        CREATE TABLE [dbo].[Images] (
            [ID] [numeric](18, 0) IDENTITY (1, 1) NOT NULL ,
            [ImageName] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
            [Image] [image] NULL
        ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
        GO

    I want to show this Images table's values in a GridView. I do that, but the image value cannot be shown. The ASP.NET syntax for the GridView is:

        <asp:GridView ID="GridView2" runat="server" AutoGenerateColumns="False">
            <Columns>
                <asp:BoundField DataField="ImageName" HeaderText="ImageName" />
                <asp:TemplateField HeaderText="Image">
                    <EditItemTemplate>
                        <asp:TextBox ID="TextBox1" runat="server" Text='<%# Eval("Image") %>'></asp:TextBox>
                    </EditItemTemplate>
                    <ItemTemplate>
                        <asp:Image ID="Image1" runat="server" ImageUrl='<%# Eval("Image") %>' />
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    I write the code below in the page load event; I want the Images table values to be shown when the page loads:

        string strSQL = "Select * From Images";
        DataTable dt = clsDB.getDataTable(strSQL);
        this.GridView2.DataSource = dt;
        this.GridView2.DataBind();

    Why don't I get the image in the image column of the GridView? What is the problem, and how do I solve it?

    Read the article

  • How are you supposed to lay out a page in VS2010 without using tables?

    - by CrustyApple
    I have been using .NET since beta and HTML since the days of HotDog Pro and Notepad, using table layout of course. I am FINALLY ready to use only div, li and CSS for the layout, but my question is: what is the proper way to lay out pages in VS2010? When I use table layout it's simple and I can visually see what I'm creating and where the elements are, such as the sample below. How should I do this using divs, etc. in VS2010?

        <table width="300" border="0" cellpadding="5">
          <tr>
            <td><img src="http://assets.devx.com/MS_Azure/azuremcau.jpg" alt="blah" width="70" height="70" /></td>
            <td><h2>This is some text to the right of the picture...</h2></td>
          </tr>
          <tr>
            <td colspan="2">Here some text underneath</td>
          </tr>
        </table>

    Read the article

  • GROUP BY ID range?

    - by d0ugal
    Given a data set like this:

        +-----+---------------------+--------+
        | id  | date                | result |
        +-----+---------------------+--------+
        | 121 | 2009-07-11 13:23:24 |     -1 |
        | 122 | 2009-07-11 13:23:24 |     -1 |
        | 123 | 2009-07-11 13:23:24 |     -1 |
        | 124 | 2009-07-11 13:23:24 |     -1 |
        | 125 | 2009-07-11 13:23:24 |     -1 |
        | 126 | 2009-07-11 13:23:24 |     -1 |
        | 127 | 2009-07-11 13:23:24 |     -1 |
        | 128 | 2009-07-11 13:23:24 |     -1 |
        | 129 | 2009-07-11 13:23:24 |     -1 |
        | 130 | 2009-07-11 13:23:24 |     -1 |
        | 131 | 2009-07-11 13:23:24 |     -1 |
        | 132 | 2009-07-11 13:23:24 |     -1 |
        | 133 | 2009-07-11 13:23:24 |     -1 |
        | 134 | 2009-07-11 13:23:24 |     -1 |
        | 135 | 2009-07-11 13:23:24 |     -1 |
        | 136 | 2009-07-11 13:23:24 |     -1 |
        | 137 | 2009-07-11 13:23:24 |     -1 |
        | 138 | 2009-07-11 13:23:24 |      1 |
        | 139 | 2009-07-11 13:23:24 |      0 |
        | 140 | 2009-07-11 13:23:24 |     -1 |
        +-----+---------------------+--------+

    How would I go about grouping the results 5 records at a time? The above is part of the live data; there are over 100,000 result rows in the table and it's growing. Basically I want to measure the change over time, so I want to take a SUM of the result every X records. In the real data I'll be doing it every 100 or 1000, but for the data above perhaps every 5. If I could sort it by date I would do something like this:

        SELECT DATE_FORMAT(date, '%h%i') ym,
               COUNT(result) 'Total Games',
               SUM(result) as 'Score'
        FROM nn_log
        GROUP BY ym;

    I can't figure out a way of doing something similar with numbers. The order is sorted by the date, but I hope to split the data up every X results. It's safe to assume there are no blank rows. With the data above you could do multiple selects like:

        SELECT SUM(result) FROM table LIMIT 0,5;
        SELECT SUM(result) FROM table LIMIT 5,5;
        SELECT SUM(result) FROM table LIMIT 10,5;

    That's obviously not a very good way to scale up to a bigger problem. I could just write a loop, but I'd like to reduce the number of queries.
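
    Since the ids are contiguous (no gaps), one way is to derive a bucket number from the id and group on that. A sketch assuming the nn_log table above, with buckets of 5 starting at id 121:

        -- Integer-divide the offset from the first id by the bucket size.
        SELECT FLOOR((id - 121) / 5) AS bucket,
               COUNT(result) AS total_games,
               SUM(result)   AS score
        FROM nn_log
        GROUP BY bucket
        ORDER BY bucket;

    For the live data the 5 would become 100 or 1000; if the ids ever have gaps, a session-variable row counter (or a numbered subquery) would have to replace the id arithmetic.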

    Read the article

  • Index an array expression directly in PostgreSQL

    - by wich
    I'm trying to insert data into a table from a template table. I need to rewrite one of the columns, for which I wanted to use a directly indexed array expression, but I can't seem to find how to do this, if it is even possible. The scenario:

        create table template (
          id integer,
          index integer,
          foo integer);

        insert into template values
          (0, 1, 23), (0, 2, 18), (0, 3, 16), (0, 4, 7),
          (1, 1, 17), (1, 2, 26), (1, 3, 11), (1, 4, 3);

        create table data (
          data_id integer,
          foo integer);

    Now what I'd like to do is the following:

        insert into data
        select (array[3,7,5,2])[index], foo
        from template
        where id = 1;

    But this doesn't work; the (array[3,7,5,2])[index] syntax isn't valid. I tried a few variants, but was unable to get anything working, and wasn't able to find the correct syntax in the docs, nor even whether this is at all possible or not. As a current workaround I've devised the following, but it is less than ideal, from an elegance perspective at least; it may also be a performance hit, I haven't looked into that yet.

        insert into data
        select arr[index], foo
        from template, (select array[3,7,5,2] as arr) as q
        where id = 1;

    If anyone could suggest a (better) alternative to accomplish this, I'd like to hear that as well.
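
    Another alternative that avoids the array constructor entirely is to join against an inline VALUES list that maps each index to its replacement value. A sketch against the tables above:

        -- Map index -> new value with a VALUES list instead of an array subscript.
        insert into data (data_id, foo)
        select v.val, t.foo
        from template as t
        join (values (1, 3), (2, 7), (3, 5), (4, 2)) as v(idx, val)
          on v.idx = t.index
        where t.id = 1;

    Whether this beats the scalar-subquery workaround is a question for the planner, but it keeps the index-to-value mapping readable in one place.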

    Read the article

  • How to make changes to a database in WPF?

    - by user353573
    I dragged an Access database into the designer (XSD); a toolkit:DataGrid gets its DataContext from resources and correctly shows the table content through this binding. Pressing a button in the button column shows each row correctly, and deleting and adding rows also works in the DataGrid. However, when I press the button column to add a new row or delete a row, there is no change in the underlying database. How do I commit the changes from the DataGrid to the database in WPF if not using ADO.NET? The following code is what I tried; it does not work:

        private void datagrid2_delete(object sender, RoutedEventArgs e)
        {
            Button showButton = (Button)sender;
            //person p = (person)showButton.DataContext;
            DataRowView ds = (DataRowView)showButton.DataContext;
            //DataSet1.customertableRow p = (DataSet1.customertableRow)ds;
            //ds.BeginEdit();
            //ds.Delete();
            //ds.EndEdit();
            /*
            DataSourceProvider provider = (DataSourceProvider)this.FindResource("Family");
            WpfApplication1.DataSet1.customertableDataTable table = (WpfApplication1.DataSet1.customertableDataTable)provider.Data;
            table.AddcustomertableRow("hello3", 5, System.DateTime.Today);
            table.AcceptChanges();
            */
        }

    Read the article

  • Delete only reference to child object

    - by Al
    I'm running into a bit of a problem where any attempt to delete just the reference to a child also deletes the child record. My schema looks like this:

        Person
        Organisation
        OrganisationContacts : Person
            OrgId
            PersonId
            Role

    When removing an Organisation I want to only delete the record in OrganisationContacts, but not touch the Person record. My mapping looks like this:

        public OrganisationMap()
        {
            Table("Organsations");
            ....
            HasMany<OrganisationContact>(x => x.Contacts)
                .Table("OrganisationContacts ")
                .KeyColumn("OrgId")
                .Not.Inverse()
                .Cascade.AllDeleteOrphan();
        }

        public class OrganisationContactMap : SubclassMap<OrganisationContact>
        {
            public OrganisationContactMap()
            {
                Table("OrganisationContacts");
                Map(x => x.Role, "PersonRole");
                Map(x => x.IsPrimary);
            }
        }

    At the moment, just removing a contact from the IList either doesn't reflect in the database at all, or it issues two delete statements (DELETE FROM OrganisationContact and DELETE FROM Person), or tries to set PersonId to null in the OrganisationContacts table. All of which are not desirable. Any help would be greatly appreciated.

    Read the article

  • Efficient way to store order in MySQL for list of items

    - by ninumedia
    I want to code cleaner and more efficiently, and I wanted to know any other suggestions for the following problem. I have a MySQL database that holds data about a set of photograph names: oh, say 100 photograph names.

    Table 1 (photos) has the following fields: photo_id, photo_name. Example data:

        1 | sunshine.jpg
        2 | cloudy.jpg
        3 | rainy.jpg
        4 | hazy.jpg
        ...

    Table 2 (categories) has the following fields: category_id, category_name, category_order. Example data:

        1 | Summer Shots | 1,2,4
        2 | Winter Shots | 2,3
        3 | All Seasons  | 1,2,3,4
        ...

    Is it efficient to store the order of the photos in this manner, per entry, via comma-delimited values? It's one approach I have seen used before, but I wanted to know if something else is faster at run time. Using this approach I don't think it is possible to do a direct INNER JOIN on the category table and photo table to get a single matched list of all the photographs per category, e.g. Summer Shots - sunshine.jpg, cloudy.jpg, hazy.jpg, because it was matched against 1,2,4. Iterating through all the categories and then the photos will be O(n^2), and there has to be a better/faster way. Please educate me :)
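
    The usual alternative is a junction table with an explicit position column, so the ordering lives in rows rather than in a comma-separated string and the join becomes direct. A sketch with hypothetical names:

        -- One row per (category, photo) pair; position carries the display order.
        CREATE TABLE category_photos (
          category_id INT NOT NULL,
          photo_id    INT NOT NULL,
          position    INT NOT NULL,
          PRIMARY KEY (category_id, photo_id)
        );

        -- All photographs for one category, in order.
        SELECT p.photo_name
        FROM category_photos AS cp
        JOIN photos AS p ON p.photo_id = cp.photo_id
        WHERE cp.category_id = 1
        ORDER BY cp.position;

    Reordering then means updating position values for one category instead of rewriting a delimited string.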

    Read the article

  • SQL n:m Inheritance join

    - by Nightmares
    I want to join a table which contains n:m relationships between groups. (Groups are defined in a separate table.) This table only has entries listing a member_group_id and a parent_group_id, with this structure:

        id(int) | member_group_id(int) | parent_group_id(int)

    The "base" query looks like this:

        select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id
        from group_member_group as p1
        join group_member_group as p2 on p2.member_group_id = p1.member_group_id

    The "base" query correctly shows all relationships (I checked by doing it manually). The problem is that when I try to apply a WHERE clause to this query to filter for a specific group as the "point of origin" (the first group for which I want all parent groups), it returns only the closest parents. For example like this:

        select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id
        from group_member_group as p1
        join group_member_group as p2 on p2.member_group_id = p1.member_group_id
        where p1.group_id = 1

    Can anyone give a clue how I can fix this? Or a different approach to realize this? (I suppose I could always do this in my C++ source code on the server side, but I would have to transfer an entire table, which has high growth potential, to the application server.)

    UPDATE:

        select p1.group_id, p2.group_id, p1.member_group_id, p2.member_group_id
        from group_member_group as p1
        join group_member_group as p2 on p2.group_id = p1.member_group_id

    Typing mistake confirmed. Now I don't get past the first level of inheritance, period. Thanks to denied for pointing that out.
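
    A single self-join can only ever walk one level up the hierarchy; following the chain to arbitrary depth needs either repeated joins (one per level) or a recursive query. A sketch of the recursive form, assuming an engine with recursive CTE support (MySQL 8+, PostgreSQL, SQL Server) and group 1 as the point of origin:

        -- Walk upward from group 1, collecting every ancestor group.
        WITH RECURSIVE ancestors AS (
            SELECT member_group_id, parent_group_id
            FROM group_member_group
            WHERE member_group_id = 1
          UNION ALL
            SELECT g.member_group_id, g.parent_group_id
            FROM group_member_group AS g
            JOIN ancestors AS a ON g.member_group_id = a.parent_group_id
        )
        SELECT DISTINCT parent_group_id FROM ancestors;

    On an engine without recursive CTEs, the alternatives are a fixed number of chained self-joins or an adjacency-list walk in application code.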

    Read the article

  • Regarding some Update Stored procedure

    - by Serenity
    I have two tables as follows.

    Table1:

        ---------------------------------------------
        PageID | Content | TitleID(FK) | LanguageID
        ---------------------------------------------
        1      | abc     | 101         | 1
        2      | xyz     | 102         | 1
        ---------------------------------------------

    Table2:

        -----------------------------
        TitleID | Title  | LanguageID
        -----------------------------
        101     | Title1 | 1
        102     | Title2 | 1
        -----------------------------

    I don't want to add duplicates in my Table1 (the content table); for example, there can be no two pages with the same title. What check do I need to add in my insert/update stored procedure? How do I make sure duplicates are never added? I have tried as follows:

        CREATE PROC InsertUpdatePageContent
        (
            @PageID int,
            @Content nvarchar(2000),
            @TitleID int
        )
        AS
        BEGIN
            IF(@PageID=-1)
            BEGIN
                IF(NOT EXISTS(SELECT TitleID FROM Table1 WHERE LANGUAGEID=@LANGUAGEID))
                BEGIN
                    INSERT INTO Table1(Content,TitleID)
                    VALUES(@Content,@TitleID)
                END
            END
            ELSE
            BEGIN
                IF(NOT EXISTS(SELECT TitleID FROM Table1 WHERE LANGUAGEID=@LANGUAGEID))
                BEGIN
                    UPDATE Table1
                    SET Content=@Content,TitleID=@TitleID
                    WHERE PAGEID=@PAGEID
                END
            END
        END

    Now what is happening is that it inserts new records alright and won't allow duplicates to be added, but when I update it gives me a problem. On my .aspx page I have a drop-down list control that is bound to a data source returning Table2 (the title table), and a text box in which the user types the page's content to be stored. When I update, say the row in my Table1 shown above with PageID=1, and I didn't change the title from the drop-down but only changed the content in the text box, it's not updating the record; and when the stored procedure's UPDATE query does not execute, it displays a label that says "Page with this title exists already." So whenever I am updating an existing record, that label is displayed on screen. How do I change that IF condition in my update stored procedure?

    EDIT: @gbn: Will that IF condition work in the case of an update? I mean, let's say I am updating the page with TitleID=1 and I changed its content; when I update, it's going to execute that IF condition and it still won't update, because TitleID=1 already exists! It will only update if TitleID=1 is not there in Table1. Isn't it? Guess I am getting confused. Please answer. Thanks.
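
    One way to keep the duplicate check while still allowing an unchanged title on update is to exclude the row being edited from the existence test and to key the test on the title rather than the language. This is only a sketch of that idea, not a drop-in replacement for the procedure above:

        -- Update branch: reject only if some OTHER page already uses this title.
        IF NOT EXISTS (SELECT 1 FROM Table1 WHERE TitleID = @TitleID AND PageID <> @PageID)
        BEGIN
            UPDATE Table1
            SET Content = @Content, TitleID = @TitleID
            WHERE PageID = @PageID;
        END

    The insert branch would keep a similar check but test only TitleID = @TitleID, since a new page has no existing row to exclude.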

    Read the article

  • Users Hierarchy Logic

    - by user342944
    Hi guys, I am writing a user security module using SQL Server 2008 and therefore need to design a database accordingly. Formerly I had a UserInfo table with UserID, Username and ParentID to build a recursion, and populated a tree to represent the hierarchy, but now I have the following criteria to develop. I now have USERS, ADMINISTRATORS and GROUPS. Each node in the user hierarchy is either a user, administrator or group.

    User: someone who has login access to my application.

    Administrator: a user who may also manage all their child user accounts (and their children, etc.). This may include creating new users and assigning permissions to those users. There is no limit to the number of administrators in the user structure. The higher up in the hierarchy I go, the more child accounts administrators have to manage, including other child administrators.

    Group: a user account can be designated as a group. This will be an account which is used to group one or more users together so that they can be managed as a unit. But no one can log in to my application using a group account.

    This is how I want to create the structure:

        Super Administrator (administrator)
            Manager A (administrator)
                Employee A (User)
                Employee B (User)
                Sales Employees (Group)
                    Emp C (User)
                    Emp D (User)
                    Emp E (User)
            Manager B (administrator)
            Manager C (administrator)

    Now, how do I build the table structure to achieve this? Do I need to create a Users table along with a Group table, or what? Please guide; I would really appreciate it.
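
    Since users, administrators and groups all sit in the same tree, one table per node type isn't strictly necessary; a single self-referencing table with a type column can represent the whole structure. A sketch with hypothetical names for SQL Server 2008:

        -- One row per node; ParentId points at the node above it in the hierarchy.
        CREATE TABLE UserNode (
            NodeId   INT IDENTITY(1,1) PRIMARY KEY,
            ParentId INT NULL REFERENCES UserNode(NodeId),  -- NULL for the super administrator
            NodeType VARCHAR(20) NOT NULL,                   -- 'USER', 'ADMINISTRATOR' or 'GROUP'
            Username NVARCHAR(50) NULL,                      -- NULL for groups, which cannot log in
            CONSTRAINT CK_UserNode_Type
                CHECK (NodeType IN ('USER', 'ADMINISTRATOR', 'GROUP'))
        );

    Walking an administrator's subtree (everything they may manage) is then a recursive CTE over ParentId, and group membership falls out of rows whose parent is a group node.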

    Read the article

  • Logic for rate approximation

    - by Rohan
    I am looking for some logic to solve the problem below. There are n transaction amounts: T1, T2, T3, ..., Tn. Commission for these transactions is calculated using a rate table provided as below:

        if amount between 0 and A1  - rate is r1
        if amount between A1 and A2 - rate is r2
        if amount between A2 and A3 - rate is r3
        ...
        if amount greater than An   - rate is r4

    So if T1 < A1 the rate table returns r1, else if A1 < T1 < A2 it returns r2, and so on. Let's say the rate table results for T1, T2 and T3 are r1, r2 and r3 respectively. The commission is then

        C = T1 * r1 + T2 * r2 + T3 * r3

    For example, if the rate table is defined as (rates are in %):

        0     - 2500    : 1
        2501  - 5000    : 2
        5001  - 10000   : 4
        10000 or more   : 6

    and T1 = 6000, T2 = 3000, T3 = 2000, then

        C = 6000 * 0.04 + 3000 * 0.02 + 2000 * 0.01 = 320

    Now my problem is whether we can approximate the commission amount if, instead of the individual values of T1, T2 and T3, we are provided only with their sum T = T1 + T2 + T3. In the above example, if T (11000) is applied to the rate table we would get 6%, which would result in a commission of 660. Is there a way to approximate the commission given T instead of the individual values of T1, T2, T3?

    Read the article

  • Can I force the auto-generated Linq-to-SQL classes to use an OUTER JOIN?

    - by Gary McGill
    Let's say I have an Order table which has a FirstSalesPersonId field and a SecondSalesPersonId field. Both of these are foreign keys that reference the SalesPerson table. For any given order, either one or two salespersons may be credited with the order. In other words, FirstSalesPersonId can never be NULL, but SecondSalesPersonId can be NULL.

    When I drop my Order and SalesPerson tables onto the "LINQ to SQL Classes" design surface, the class builder spots the two FK relationships from the Order table to the SalesPerson table, and so the generated Order class has a SalesPerson field and a SalesPerson1 field (which I can rename to SalesPerson1 and SalesPerson2 to avoid confusion). Because I always want to have the salesperson data available whenever I process an order, I am using DataLoadOptions.LoadWith to specify that the two salesperson fields are populated when the order instance is populated, as follows:

        dataLoadOptions.LoadWith<Order>(o => o.SalesPerson1);
        dataLoadOptions.LoadWith<Order>(o => o.SalesPerson2);

    The problem I'm having is that LINQ to SQL is using something like the following SQL to load an order:

        SELECT ...
        FROM Order O
        INNER JOIN SalesPerson SP1 ON SP1.salesPersonId = O.firstSalesPersonId
        INNER JOIN SalesPerson SP2 ON SP2.salesPersonId = O.secondSalesPersonId

    This would make sense if there were always two salesperson records, but because there is sometimes no second salesperson (secondSalesPersonId is NULL), the INNER JOIN causes the query to return no records in that case. What I effectively want here is to change the second INNER JOIN into a LEFT OUTER JOIN. Is there a way to do that through the UI for the class generator? If not, how else can I achieve this? (Note that because I'm using the generated classes almost exclusively, I'd rather not have something tacked on the side for this one case if I can avoid it.)

    Read the article

  • Populating and Using Dynamic Classes in C#/.NET 4.0

    - by Bob
    In our application we're considering using dynamically generated classes to hold a lot of our data. The reason for doing this is that we have customers with tables that have different structures. So you could have a customer table called "DOG" (just making this up) that contains the columns "DOGID", "DOGNAME", "DOGTYPE", etc. Customer #2 could have the same table "DOG" with the columns "DOGID", "DOG_FIRST_NAME", "DOG_LAST_NAME", "DOG_BREED", and so on. We can't create classes for these at compile time, as the customer can change the table schema at any time.

    At the moment I have code that can generate a "DOG" class at run time using reflection. What I'm trying to figure out is how to populate this class from a DataTable (or some other .NET mechanism) without extreme performance penalties. We have one table that contains ~20 columns and ~50k rows. Doing a foreach over all of the rows and columns to create the collection takes about 1 minute, which is a little too long.

    Am I trying to come up with a solution that's too complex, or am I on the right track? Has anyone else experienced a problem like this? Creating dynamic classes was the solution that a developer at Microsoft proposed. If we can just populate this collection and use it efficiently, I think it could work.

    Read the article

  • What should I do for accommodating large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in a table inside a MySQL database. The first column contains the fingerprint, while the second one contains the list of documents which have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below:

        34 "doc1, doc2, doc45"

    The number of fingerprints is very large (it can range up to trillions). There are basically the following operations on the database: inserting/updating a record, and retrieving a record according to the match on fingerprint. The table definition Python snippet is:

        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)")

    And the snippet for the insert/update operation is:

        if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",
                               (","+newDocId, thisFP)) == 0L:
            self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP, newDocId))

    The only bottleneck I have observed so far is the query time in MySQL. My whole application is web based, so time is a critical factor. I have also thought of using Cassandra but have little knowledge of it. Please suggest a better way to tackle this problem.
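
    If the lookups stay in MySQL, one option worth measuring is normalising the document list into one row per (fingerprint, document) pair, so both the UPDATE-then-INSERT dance and the string concatenation go away. A sketch with hypothetical names:

        -- One row per (fingerprint, document); the primary key doubles as the lookup index.
        CREATE TABLE IF NOT EXISTS fingerprint_doc (
          fp     BIGINT      NOT NULL,
          doc_id VARCHAR(64) NOT NULL,
          PRIMARY KEY (fp, doc_id)
        );

        -- Adding a document is a single indexed insert (duplicates are ignored):
        INSERT IGNORE INTO fingerprint_doc (fp, doc_id) VALUES (34, 'doc46');

        -- All documents carrying a fingerprint:
        SELECT doc_id FROM fingerprint_doc WHERE fp = 34;

    Whether a single MySQL instance copes with trillions of fingerprints is a separate question, but this makes each operation an index hit instead of a text rewrite.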

    Read the article

  • Child divs not taking parent height

    - by Hiral
    I want the height of the children (.cell divs) to take up 100% of the height of the parent, but it is not happening.

    HTML:

        <div class="header">
          header
        </div>
        <div class="wrapper">
          <div class="padding">
            <div class="table bg">
              <div class="cell">
                hello world
              </div>
              <div class="cell">
                dummy text
              </div>
            </div>
          </div>
        </div>
        <div class="footer">
          footer
        </div>

    CSS:

        html,body{height:100%;}
        body{margin:0;padding:0;}
        .footer,.header{background:black;height:30px;color:white;}
        .wrapper{min-height:calc(100% - 60px);height:auto !important;}
        .padding{padding:20px;}
        .table{display:table;width:100%;}
        .cell{display:table-cell;}
        .bg{background:#ccc;}

    I think it is not happening because I have:

        .wrapper{min-height:calc(100% - 60px);height:auto !important;}

    It works if I change .wrapper to:

        .wrapper{height:calc(100% - 60px);}

    Here is the fiddle.

    Read the article

  • [MySQL] Efficiently store last X records per item

    - by Saif Bechan
    I want to store the last X records in a MySQL database in an efficient way, so when the 4th record is stored the first should be deleted. The way I do this now is to first run a query getting the items, then check what I should do, then insert/delete. There has to be a better way to do this. Any suggestions?

    Edit: I think I should add that the records stored do not have a unique number; they have a mixed pair, for example article_id and user_id. Then I want to make a table with the last X items for user_x. Just selecting the article from the table, grouped by user and sorted by time, is not an option for me. The table where I do the sort and group has millions of records and gets hit a lot for no reason, so making a table in between with the last X records is far more efficient.

    PS: I am not using this for articles and users.
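
    One way to keep such a side table trimmed without a read-then-decide round trip is to insert first and then delete everything older than the newest X rows for that key in a single statement. A sketch with hypothetical table and column names, keeping the last 3 rows for user 42:

        -- Keep only the 3 newest rows for this user; the extra derived table works around
        -- MySQL's restriction on selecting from the table being deleted from.
        DELETE FROM last_items
        WHERE user_id = 42
          AND article_id NOT IN (
            SELECT article_id FROM (
              SELECT article_id
              FROM last_items
              WHERE user_id = 42
              ORDER BY created_at DESC
              LIMIT 3
            ) AS keepers
          );

    Run right after the INSERT, this keeps the table bounded with two cheap statements instead of a select-inspect-write cycle.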

    Read the article

  • SQL SERVER FOR XML SYNTAX

    - by Raj73
    How can I get output as follows using FOR XML in a SQL query? I am not sure how I can get the column values as element names instead of the table's column names. I am using SQL Server 2005. I have a table schema as follows:

        CREATE TABLE PARENT (
            PID INT,
            PNAME VARCHAR(20)
        )

        CREATE TABLE CHILD (
            PID INT,
            CID INT,
            CNAME VARCHAR(20)
        )

        CREATE TABLE CHILDVALUE (
            CID INT,
            CVALUE VARCHAR(20)
        )

        INSERT INTO PARENT VALUES (1, 'SALES1')
        INSERT INTO PARENT VALUES (2, 'SALES2')
        INSERT INTO CHILD VALUES (1, 1, 'FOR01')
        INSERT INTO CHILD VALUES (1, 2, 'FOR02')
        INSERT INTO CHILD VALUES (2, 3, 'FOR03')
        INSERT INTO CHILD VALUES (2, 4, 'FOR04')
        INSERT INTO CHILDVALUE VALUES (1, '250000')
        INSERT INTO CHILDVALUE VALUES (2, '400000')
        INSERT INTO CHILDVALUE VALUES (3, '500000')
        INSERT INTO CHILDVALUE VALUES (4, '800000')

    The output I am looking for is as follows:

        <SALE1>
          <FOR01>250000</FOR01>
          <FOR02>400000</FOR02>
        </SALE1>
        <SALE2>
          <FOR03>500000</FOR03>
          <FOR04>800000</FOR04>
        </SALE2>

    Read the article

  • VB.NET Update Access Database with DataTable

    - by sinDizzy
    I've been perusing some help forums and some help books but can't seem to get my head wrapped around this. My task is to read data from two text files and then load that data into an existing MS Access 2007 database. So here is what I'm trying to do: read data from the first text file and, for every line of data, add data to a DataTable using CarID as my unique field; read data from the second text file and look for an existing CarID in the DataTable; if it exists, update that row, and if it doesn't exist, add a new row; once I'm done, push the contents of the DataTable to the database.

    What I have so far:

        Dim sSQL As String = "SELECT * FROM tblCars"
        Dim da As New OleDb.OleDbDataAdapter(sSQL, conn)
        Dim ds As New DataSet
        da.Fill(ds, "CarData")
        Dim cb As New OleDb.OleDbCommandBuilder(da)

        'loop: read a line of text and parse it out; gets dd, dc, and carID

        'create a new empty row
        Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow()

        'update the new row with fresh data
        dsNewRow.Item("DriveDate") = dd
        dsNewRow.Item("DCode") = dc
        dsNewRow.Item("CarNum") = carID
        'about 15 more fields

        'add the filled row to the DataSet table
        ds.Tables("CarData").Rows.Add(dsNewRow)

        'end loop

        'update the database with the new rows
        da.Update(ds, "CarData")

    Questions:

    In constructing my table I use "SELECT * FROM tblCars", but what if that table has millions of records already? Is that not a waste of resources? Should I be trying something different if I want to update with new records?

    Once I'm done with the first text file I then go to my next text file. What's the best approach here: to first look for an existing record based on CarNum, or to create a second table and then merge the two at the end?

    Finally, when the DataTable is done being populated and I'm pushing it to the database, I want to make sure that if records already exist with the three primary fields (DriveDate, DCode, and CarNum) they get updated with the new fields, and if they don't exist then those records get appended. Is that possible with my process?

    tia, AGP

    Read the article

  • Modify SQL result set before returning from stored procedure

    - by m0sa
    I have a simple table in my SQL Server 2008 DB:

        Tasks_Table
        - id
        - task_complete
        - task_active
        - column_1
        - ...
        - column_N

    The table stores instructions for uncompleted tasks that have to be executed by a service. I want to be able to scale my system in the future. Until now, only one service on one computer read from the table. I have a stored procedure that selects all uncompleted and inactive tasks. As the service begins to process tasks, it updates the task_active flag in all the returned rows.

    To enable scaling of the system I want to allow deployment of the service on more machines. Because I want to prevent a task being returned to more than one service, I have to update the stored procedure that returns uncompleted and inactive tasks. I figured that I have to lock the table (only one reader at a time; I know I have to use an appropriate ISOLATION LEVEL) and update the task_active flag in each row of the result set before returning the result set.

    So my question is: how do I modify the SELECT result set in the stored procedure before returning it?
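
    One pattern for this on SQL Server is to perform the flag update and the "select" in a single statement with an OUTPUT clause, so claiming the rows and returning them is atomic and two services can never pick up the same task. A sketch against the table above, assuming the flags are 0/1 values; the batch size and hints are illustrative:

        -- Claim up to 10 inactive, uncompleted tasks and return them in one statement.
        -- READPAST skips rows another service has already locked instead of blocking on them.
        UPDATE TOP (10) Tasks_Table WITH (ROWLOCK, READPAST, UPDLOCK)
        SET task_active = 1
        OUTPUT inserted.id, inserted.column_1   -- plus whatever other columns the service needs
        WHERE task_complete = 0 AND task_active = 0;

    The stored procedure then just wraps this statement; no separate SELECT is needed, and no table-level lock is required.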

    Read the article
