Search Results

Search found 27337 results on 1094 pages for 't sql'.

Page 619 of 1094

  • Scalable Database Tagging Schema

    - by Longpoke
    EDIT: To people building tagging systems: don't read this. It is not what you are looking for. I asked this before I was aware that every RDBMS has its own optimization methods; just use a simple many-to-many schema.

    I have a posting system that has millions of posts. Each post can have an unlimited number of tags associated with it. Users can create tags, which have notes, a creation date, an owner, and so on. A tag is almost like a post itself, because people can post notes about the tag. Each tag association has an owner and a date, so we can see who added the tag and when.

    My question is how I can implement this. Searching posts by tag, or tags by post, has to be fast. Also, users can add tags to posts by typing the name into a field, a bit like the Google search bar: it has to fill in the rest of the tag name for you. I have three solutions at the moment, but I'm not sure which is best, or whether there is a better way. Note that I'm not showing the layout of notes, since it will be trivial once I get a proper solution for tags.

    Method 1. Linked list: tagId in post points to a linked list in tag_assoc; the application must traverse the list until flink = 0.

        post:      id, content, ownerId, date, tagId, notesId
        tag_assoc: id, tagId, ownerId, flink
        tag:       id, name, notesId

    Method 2. Denormalization: tags is simply a VARCHAR or TEXT field containing a tab-delimited array of tagId:ownerId pairs. It cannot be a fixed size.

        post: id, content, ownerId, date, tags, notesId
        tag:  id, name, notesId

    Method 3. Toxi (from http://www.pui.ch/phred/archives/2005/04/tags-database-schemas.html; the same scheme is discussed at http://stackoverflow.com/questions/20856/how-do-you-recommend-implementing-tags-or-tagging).

        post:      id, content, ownerId, date, notesId
        tag_assoc: ownerId, tagId, postId
        tag:       id, name, notesId

    Method 3 raises the question: how fast will it be to iterate through every single row in tag_assoc? Methods 1 and 2 should be fast for returning tags by post, but for posts by tag another lookup table must be made. The last thing I have to worry about is optimizing searching tags by name; I have not worked that out yet. I made an ASCII diagram here: http://pastebin.com/f1c4e0e53
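
    For reference, a minimal sketch of the Method 3 (plain many-to-many) layout in MySQL, with composite indexes covering both lookup directions. The table names follow the question; the column types, the created column, and the index names are my own assumptions:

        CREATE TABLE tag (
            id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name    VARCHAR(64)  NOT NULL,
            notesId INT UNSIGNED NULL,
            UNIQUE KEY ux_tag_name (name)  -- also serves prefix searches (LIKE 'foo%') for auto-complete
        );

        CREATE TABLE tag_assoc (
            postId  INT UNSIGNED NOT NULL,
            tagId   INT UNSIGNED NOT NULL,
            ownerId INT UNSIGNED NOT NULL,
            created DATETIME     NOT NULL,
            PRIMARY KEY (postId, tagId),            -- tags by post
            KEY ix_assoc_tag_post (tagId, postId)   -- posts by tag
        );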

    Read the article

  • Ant database rebuild script, avoiding interactive prompting

    - by fras85
    Hi guys. I'm writing an Ant script to rebuild our database, i.e. dropping everything and rebuilding from scratch. The problem is that our DBA adds a Y/N prompt before executing the rest of the script, so we can't call it from an automated build process. Does anyone have any suggestions for circumventing the Y/N prompt? Obviously we could create separate scripts, one for the DBAs and one for the automated build, but that requires maintaining both. We're running on Windows, so it's not as easy as using sed to strip out the prompt... but I'm thinking of something along those lines. Not sure if that's clear enough, but I hope you can help. Cheers.

    Read the article

  • Ways to update a dependent table in the same MySQL transaction?

    - by codie
    I need to update two tables inside a single transaction. The individual queries look something like this:

        1. INSERT INTO t1 (col1, col2) VALUES (val1, val2)
           ON DUPLICATE KEY UPDATE col2 = val2;

    If the above query causes an insert, then I need to run the following statement on the second table:

        2. INSERT INTO t2 (col1, col2) VALUES (val1, val2)
           ON DUPLICATE KEY UPDATE col2 = col2 + val2;

    otherwise,

        3. UPDATE t2 SET col2 = col2 - old_val2 + val2 WHERE col1 = val1;
           -- old_val2 is the value of t1.col2 before it was updated

    Right now I run a SELECT on t1 first to determine whether statement 1 will cause an insert or an update on t1. Then I run statement 1 and either statement 2 or 3 inside a transaction. How can I do all of this inside one transaction itself? The approach I was thinking of is the following:

        UPDATE t2, t1
           SET t2.col2 = t2.col2 - t1.col2
         WHERE t1.col1 = t2.col2 AND t1.col1 = val1;

        INSERT INTO t1 (col1, col2) VALUES (val1, val2)
        ON DUPLICATE KEY UPDATE col2 = val2;

        INSERT INTO t2, t1 (t2.col1, t2.col2) VALUES (t1.col1, t1.col2)
        ON DUPLICATE KEY UPDATE t2.col2 = t2.col2 + t1.col2
        WHERE t1.col1 = t2.col2 AND t1.col1 = val1;

    Unfortunately, there is no multi-table INSERT ... ON DUPLICATE KEY UPDATE in MySQL 5.0. What else could I do?
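
    One possible shape for this, sketched as a stored procedure and untested against 5.0: MySQL's ROW_COUNT() reports 1 after INSERT ... ON DUPLICATE KEY UPDATE inserts a new row and 2 when the UPDATE branch fires, so the branch can be decided inside the same transaction without the preliminary SELECT. The procedure name, parameter names/types, and the SELECT ... FOR UPDATE are my additions, and it assumes InnoDB tables:

        DELIMITER //
        CREATE PROCEDURE upsert_both(IN p_val1 INT, IN p_val2 INT)
        BEGIN
            DECLARE v_old INT DEFAULT NULL;

            START TRANSACTION;

            -- lock the existing t1 row (if any) and remember its old col2
            SELECT col2 INTO v_old FROM t1 WHERE col1 = p_val1 FOR UPDATE;

            INSERT INTO t1 (col1, col2) VALUES (p_val1, p_val2)
            ON DUPLICATE KEY UPDATE col2 = p_val2;

            IF ROW_COUNT() = 1 THEN
                -- a new t1 row was inserted
                INSERT INTO t2 (col1, col2) VALUES (p_val1, p_val2)
                ON DUPLICATE KEY UPDATE col2 = col2 + p_val2;
            ELSE
                -- the t1 row already existed and was updated
                UPDATE t2 SET col2 = col2 - v_old + p_val2 WHERE col1 = p_val1;
            END IF;

            COMMIT;
        END //
        DELIMITER ;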

    Read the article

  • Date and time Query - problem

    - by Gold
    Hi, I try to run this query:

        SELECT *
        FROM WorkTbl
        WHERE ((Tdate = '20100414' AND Ttime = '06:00')
           AND (Tdate <= '20100415' AND Ttime <= '06:00'))

    I have a row with this date: 14/04/2010 and time: 14:00, but I can't see it. How do I fix the query? Thanks in advance.
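
    Two things stand out: the first pair of comparisons uses plain equality (possibly a lost >=), and comparing the date and time columns independently drops rows like 14/04/2010 14:00, because 14:00 fails the Ttime <= '06:00' test even though the timestamp is inside the range. A sketch of one way to express "from 14/04 06:00 up to 15/04 06:00" while keeping the two separate columns (column names as in the question):

        SELECT *
        FROM WorkTbl
        WHERE (Tdate > '20100414' OR (Tdate = '20100414' AND Ttime >= '06:00'))
          AND (Tdate < '20100415' OR (Tdate = '20100415' AND Ttime <= '06:00'))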

    Read the article

  • Error when restoring SSAS cube: MemberKeyesUnique element at line xxx cannot appear under (...)/Hier

    - by Phil
    Hi, I'm getting this error when trying to restore an SSAS database from a backup: The ddl2:MemberKeysUnique element at line 63, column 4862 (namespace http://schemas.microsoft.com/analysisservices/2003/engine/2) cannot appear under Load/ObjectDefinition/Dimension/Hierarchies/Hierarchy. Google hasn't turned up any helpful solutions. (a lot of people found that installing SP2 made the error go away but this has always previously worked in our environment) I don't really understand what the error means. Can somebody interpret or suggest a fix? Thanks, Phil

    Read the article

  • Should I use integer primary IDs?

    - by arthurprs
    For example, I always generate an auto-increment ID field for the users table, but I also specify a UNIQUE index on their usernames. There are situations where I first need to get the userId for a given username and then execute the desired query, or else use a JOIN in the desired query. So it's either two trips to the database, or a JOIN against a varchar index. Should I use integer primary IDs? Is there a real performance benefit of an INT index over a small VARCHAR index?
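
    For concreteness, a sketch of the two-index layout being described; the posts table, column names, and sizes are made-up assumptions, but it shows how the JOIN resolves the username in a single round trip:

        CREATE TABLE users (
            user_id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            username VARCHAR(30)  NOT NULL,
            UNIQUE KEY ux_users_username (username)
        );

        -- hypothetical table referencing users by the surrogate INT key
        CREATE TABLE posts (
            post_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            user_id INT UNSIGNED NOT NULL,
            body    TEXT NOT NULL,
            KEY ix_posts_user (user_id)
        );

        -- one round trip: the username is resolved inside the same query
        SELECT p.*
        FROM posts p
        JOIN users u ON u.user_id = p.user_id
        WHERE u.username = 'arthurprs';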

    Read the article

  • Removing "Using temporary; Using filesort" from this MySQL select+join+group by query

    - by claytontstanley
    I have the following query:

        select t.Chunk as LeftChunk,
               t.ChunkHash as LeftChunkHash,
               q.Chunk as RightChunk,
               q.ChunkHash as RightChunkHash,
               count(t.ChunkHash) as ChunkCount
        from chunksubset as t
        join chunksubset as q on t.ID = q.ID
        group by LeftChunkHash, RightChunkHash

    And the following explain table:

        id  select_type  table    type    possible_keys                key          key_len  ref                      rows    Extra
        1   SIMPLE       subsets  ref     PRIMARY,IDIndex,SubsetIndex  SubsetIndex  767      const                    522014  Using where; Using temporary; Using filesort
        1   SIMPLE       subsets  eq_ref  PRIMARY,IDIndex,SubsetIndex  PRIMARY      771      sotero.subsets.Id,const  1       Using where; Using index
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12      Using where
        1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12

    Note the "Using temporary; Using filesort". When this query is run, I quickly run out of RAM (presumably because of the temp table), then the HDD kicks in and the query slows to a halt. I thought it might be an index issue, so I started adding a few that sort of made sense:

        Table   Non_unique  Key_name                   Seq_in_index  Column_name  Collation  Cardinality  Index_type
        chunks  0           PRIMARY                    1             ChunkId      A          17796190     BTREE
        chunks  1           ChunkHashIndex             1             ChunkHash    A          243783       BTREE
        chunks  1           IDIndex                    1             Id           A          1483015      BTREE
        chunks  1           ChunkIndex                 1             Chunk        A          243783       BTREE
        chunks  1           ChunkTypeIndex             1             ChunkType    A          2            BTREE
        chunks  1           chunkHashByChunkIDIndex    1             ChunkHash    A          243783       BTREE
        chunks  1           chunkHashByChunkIDIndex    2             ChunkId      A          17796190     BTREE
        chunks  1           chunkHashByChunkTypeIndex  1             ChunkHash    A          243783       BTREE
        chunks  1           chunkHashByChunkTypeIndex  2             ChunkType    A          261708       BTREE
        chunks  1           chunkHashByIDIndex         1             ChunkHash    A          243783       BTREE
        chunks  1           chunkHashByIDIndex         2             Id           A          17796190     BTREE

    But it is still using the temporary table. The db engine is MyISAM. How can I get rid of the "Using temporary; Using filesort" in this query? Just changing to InnoDB without explaining the underlying cause is not a particularly satisfying answer. Besides, if the solution is just to add the proper index, then that is much easier than migrating to another db engine.
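
    A sketch of the kind of composite index that often removes the filesort half of this pattern; the index name is mine, and whether it helps here depends on what the chunksubset view actually selects from the underlying chunks table. Note that because the GROUP BY mixes columns from both sides of a self-join, MySQL generally still needs a temporary table for the grouping itself:

        -- covering index: the join probes by Id, and both the grouped hash and the
        -- selected Chunk text can be read straight from the index
        ALTER TABLE chunks
            ADD INDEX idx_id_hash_chunk (Id, ChunkHash, Chunk);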

    Read the article

  • How Can I Create This Complicated Query?

    - by mTuran
    Hi, I have 3 tables: projects, skills, and project_skills. The projects table holds each project's general data, the skills table holds the skill id and skill name, and the project_skills table holds the project-skill relationships. Here is the schema of the tables:

        CREATE TABLE IF NOT EXISTS `project_skills` (
          `project_id` int(11) NOT NULL,
          `skill_id` int(11) NOT NULL,
          KEY `project_id` (`project_id`),
          KEY `skill_id` (`skill_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci;

        CREATE TABLE IF NOT EXISTS `projects` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `employer_id` int(11) NOT NULL,
          `project_title` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `project_description` text COLLATE utf8_turkish_ci NOT NULL,
          `project_budget` int(11) NOT NULL,
          `project_allowedtime` int(11) NOT NULL,
          `project_deadline` datetime NOT NULL,
          `total_bids` int(11) NOT NULL,
          `average_bid` int(11) NOT NULL,
          `created` datetime NOT NULL,
          `active` tinyint(1) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `created` (`created`),
          KEY `employer_id` (`employer_id`),
          KEY `active` (`active`),
          FULLTEXT KEY `project_title` (`project_title`,`project_description`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=3 ;

        CREATE TABLE IF NOT EXISTS `skills` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `category` int(11) NOT NULL,
          `name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `seo_name` varchar(100) COLLATE utf8_turkish_ci NOT NULL,
          `total_projects` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          KEY `seo_name` (`seo_name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_turkish_ci AUTO_INCREMENT=224 ;

    I want to select projects together with the related skill names. I think I have to use a JOIN, but I don't know how. Thanks.
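
    One common shape for this, sketched against the schema above (untested; the aliases are mine): a pair of joins with GROUP_CONCAT to collapse each project's skill names into one column. MySQL of this vintage accepts selecting the other projects columns because the rows are grouped by the primary key:

        SELECT p.*,
               GROUP_CONCAT(s.name ORDER BY s.name SEPARATOR ', ') AS skill_names
        FROM projects p
        LEFT JOIN project_skills ps ON ps.project_id = p.id
        LEFT JOIN skills s         ON s.id = ps.skill_id
        GROUP BY p.id;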

    Read the article

  • Insert Stored Procedure, Using Asp.Net WebForm to Insert new [Customer]

    - by user2953815
    Can someone please help me create a stored procedure to insert a new customer from a web form? I am having difficulty making the state a drop-down list, so the customer can pick a state from the list and have it inserted into the database.

        INSERT INTO Customer (
            Cust_First, Cust_Middle, Cust_Last, Cust_Phone, Cust_Alt_Phone,
            Cust_Email, Add_Line1, Add_Line2, Add_Bill_Line1, Add_Bill_Line2,
            City, State_Prov_Name, Postal_Zip_Code, Country_ID
        ) VALUES (
            @Cust_First, @Cust_Middle, @Cust_Last, @Cust_Phone, @Cust_Alt_Phone,
            @Cust_Email, @Add_Line1, @Add_Line2, @Add_Bill_Line1, @Add_Bill_Line2,
            @City, @State_Prov_Name, @Postal_Zip_Code, @Country_ID
        )">
        <InsertParameters>
            <asp:Parameter Name="Cust_First" Type="String"></asp:Parameter>
            <asp:Parameter Name="Cust_Middle" Type="String"></asp:Parameter>
            <asp:Parameter Name="Cust_Last" Type="String"></asp:Parameter>
            <asp:Parameter Name="Cust_Phone" Type="String"></asp:Parameter>
            <asp:Parameter Name="Cust_Alt_Phone" Type="String"></asp:Parameter>
            <asp:Parameter Name="Cust_Email" Type="String"></asp:Parameter>
            <asp:Parameter Name="Add_Line1" Type="String"></asp:Parameter>
            <asp:Parameter Name="Add_Line2" Type="String"></asp:Parameter>
            <asp:Parameter Name="Add_Bill_Line1" Type="String"></asp:Parameter>
            <asp:Parameter Name="Add_Bill_Line2" Type="String"></asp:Parameter>
            <asp:Parameter Name="City" Type="String"></asp:Parameter>
            <asp:Parameter Name="Postal_Zip_Code" Type="String"></asp:Parameter>
            <asp:Parameter Name="Cust_ID" Type="Int32"></asp:Parameter>
            <asp:Parameter Name="State_Prov_Name" Type="String"></asp:Parameter>
            <asp:Parameter Name="Country_Name" Type="String"></asp:Parameter>
        </InsertParameters>
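
    A sketch of the stored-procedure side in T-SQL (inferred from the column list; the procedure name, the parameter lengths, and the Country_ID type are my guesses):

        CREATE PROCEDURE dbo.usp_Customer_Insert
            @Cust_First       VARCHAR(50),
            @Cust_Middle      VARCHAR(50),
            @Cust_Last        VARCHAR(50),
            @Cust_Phone       VARCHAR(25),
            @Cust_Alt_Phone   VARCHAR(25),
            @Cust_Email       VARCHAR(100),
            @Add_Line1        VARCHAR(100),
            @Add_Line2        VARCHAR(100),
            @Add_Bill_Line1   VARCHAR(100),
            @Add_Bill_Line2   VARCHAR(100),
            @City             VARCHAR(50),
            @State_Prov_Name  VARCHAR(50),   -- bound to the SelectedValue of the state drop-down
            @Postal_Zip_Code  VARCHAR(20),
            @Country_ID       INT
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO Customer (
                Cust_First, Cust_Middle, Cust_Last, Cust_Phone, Cust_Alt_Phone,
                Cust_Email, Add_Line1, Add_Line2, Add_Bill_Line1, Add_Bill_Line2,
                City, State_Prov_Name, Postal_Zip_Code, Country_ID
            ) VALUES (
                @Cust_First, @Cust_Middle, @Cust_Last, @Cust_Phone, @Cust_Alt_Phone,
                @Cust_Email, @Add_Line1, @Add_Line2, @Add_Bill_Line1, @Add_Bill_Line2,
                @City, @State_Prov_Name, @Postal_Zip_Code, @Country_ID
            );
        END

    On the markup side, a ControlParameter whose ControlID points at the state DropDownList (using its SelectedValue) would replace the plain State_Prov_Name parameter.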

    Read the article

  • Does clustered index on foreign key column increase join performance vs non-clustered ?

    - by alpav
    In many places it is recommended that clustered indexes are best utilized when selecting a range of rows with a BETWEEN clause. When I join on a foreign key field in such a way that this clustered index is used, I would guess that clustering should help there too, because a range of rows is being selected even though they all share the same clustered key value and BETWEEN is not used. Considering that I care only about that one select with the join and nothing else, is my guess wrong?
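
    A sketch of the layout in question (the table, column, and index names are made up): the child table is clustered on the foreign key, so all rows for one parent sit in a contiguous range and the join becomes a single range seek rather than scattered lookups.

        CREATE TABLE Orders (
            OrderId INT NOT NULL PRIMARY KEY
        );

        CREATE TABLE OrderLines (
            OrderLineId INT IDENTITY(1,1) NOT NULL,
            OrderId     INT NOT NULL REFERENCES Orders (OrderId),
            Quantity    INT NOT NULL,
            CONSTRAINT PK_OrderLines PRIMARY KEY NONCLUSTERED (OrderLineId)
        );

        -- cluster on the foreign key: all lines of one order are stored contiguously,
        -- so the join below reads a single range instead of scattered pages
        CREATE CLUSTERED INDEX IX_OrderLines_OrderId ON OrderLines (OrderId);

        SELECT o.OrderId, l.OrderLineId, l.Quantity
        FROM Orders o
        JOIN OrderLines l ON l.OrderId = o.OrderId;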

    Read the article

  • Will this SQL screw up

    - by Joshua
    I'm sure everyone knows the joys of concurrency when it comes to threading. Imagine the following scenario on every page load against a naively set up MySQL db:

        UPDATE stats SET visits = (visits + 1)

    If a thousand users load the page at the same time, will the count screw up? Is this that table locking/row locking stuff? Which one does MySQL use?

    Read the article

  • SSRS Dynamic Returning Dataset Collection Field in Expression

    - by Ray Clark
    I wrote a custom assembly that takes a parameter value from the report and returns a field from the dataset collection. My assembly returns the correct fields!name.value, but the report shows me the string representation of it. How can I get it to resolve as the actual fields!name.value, so that it displays the actual data from the dataset? If I enter fields!name.value manually, it works fine and shows me the value. If I resolve it with my custom code, it displays "fields!name.value" as text in the cell.

    Read the article

  • GridView will not update underlying data source

    - by John Christensen
    So I've been pounding on this problem all day. I've got a LinqDataSource that points to my model and a GridView that consumes it. When I attempt to do an update on the GridView, it does not update the underlying data source. I thought it might have to do with the LinqDataSource, so I added a SqlDataSource and the same thing happens. The aspx is as follows (the code-behind page is empty):

        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="Data Source=devsql32;Initial Catalog=Steam;Persist Security Info=True;"
            ProviderName="System.Data.SqlClient"
            SelectCommand="SELECT [LangID], [Code], [Name] FROM [Languages]"
            UpdateCommand="UPDATE [Languages] SET [Code]=@Code WHERE [LangID]=@LangId">
        </asp:SqlDataSource>

        <asp:GridView ID="_languageGridView" runat="server" AllowPaging="True"
            AllowSorting="True" AutoGenerateColumns="False" DataKeyNames="LangId"
            DataSourceID="SqlDataSource1">
            <Columns>
                <asp:CommandField ShowDeleteButton="True" ShowEditButton="True" />
                <asp:BoundField DataField="LangId" HeaderText="Id" ReadOnly="True" />
                <asp:BoundField DataField="Code" HeaderText="Code" />
                <asp:BoundField DataField="Name" HeaderText="Name" />
            </Columns>
        </asp:GridView>

        <asp:LinqDataSource ID="_languageDataSource"
            ContextTypeName="GeneseeSurvey.SteamDatabaseDataContext" runat="server"
            TableName="Languages" EnableInsert="True" EnableUpdate="true"
            EnableDelete="true">
        </asp:LinqDataSource>

    What in the world am I missing here? This problem is driving me insane.

    Read the article

  • Observing social web behavior: to log or populate databases?

    - by jlafay
    When considering social web app architecture, is it a better approach to document user social patterns in a database or in logs? I thought for sure that behavior, actions, events would be strictly database stored but I noticed that some of the larger social sites out there also track a lot by logging what happens. Is it good practice to store prominent data about users in a database and since thousands of user actions can be spawned easily, should they be simply logged?

    Read the article

  • How do I Put Several Select Statements into Different Columns

    - by Russ Bradberry
    I basically have 7 SELECT statements whose results I need output into separate columns. Normally I would use a crosstab for this, but I need a fast, efficient way to go about it, as there are over 7 billion rows in the table. I am using the Vertica database system. Below is an example of my statements:

        SELECT COUNT(user_id) AS '1' FROM event_log_facts WHERE date_dim_id=20100101
        SELECT COUNT(user_id) AS '2' FROM event_log_facts WHERE date_dim_id=20100102
        SELECT COUNT(user_id) AS '3' FROM event_log_facts WHERE date_dim_id=20100103
        SELECT COUNT(user_id) AS '4' FROM event_log_facts WHERE date_dim_id=20100104
        SELECT COUNT(user_id) AS '5' FROM event_log_facts WHERE date_dim_id=20100105
        SELECT COUNT(user_id) AS '6' FROM event_log_facts WHERE date_dim_id=20100106
        SELECT COUNT(user_id) AS '7' FROM event_log_facts WHERE date_dim_id=20100107
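
    A sketch of the usual single-pass rewrite (conditional aggregation), so the table is scanned once instead of seven times; the column aliases follow the question, and COUNT ignores the NULLs produced when the CASE does not match:

        SELECT
            COUNT(CASE WHEN date_dim_id = 20100101 THEN user_id END) AS "1",
            COUNT(CASE WHEN date_dim_id = 20100102 THEN user_id END) AS "2",
            COUNT(CASE WHEN date_dim_id = 20100103 THEN user_id END) AS "3",
            COUNT(CASE WHEN date_dim_id = 20100104 THEN user_id END) AS "4",
            COUNT(CASE WHEN date_dim_id = 20100105 THEN user_id END) AS "5",
            COUNT(CASE WHEN date_dim_id = 20100106 THEN user_id END) AS "6",
            COUNT(CASE WHEN date_dim_id = 20100107 THEN user_id END) AS "7"
        FROM event_log_facts
        WHERE date_dim_id BETWEEN 20100101 AND 20100107;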

    Read the article

  • LinQ XML mapping to a generic type

    - by Manuel Navarro
    I'm trying to use an external XML file to map the output of a stored procedure onto an instance of a class. The problem is that my class is of a generic type:

        public class MyValue<T>
        {
            public T Value { get; set; }
        }

    Searching through a lot of blogs and articles I've managed to get this:

        <?xml version="1.0" encoding="utf-8" ?>
        <Database Name="" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007">
          <Table Name="MyValue" Member="MyNamespace.MyValue`1" >
            <Type Name="MyNamespace.MyValue`1">
              <Column Name="Category" Member="Value" DbType="VarChar(100)" />
            </Type>
          </Table>
          <Function Method="GetResourceCategories" Name="myprefix_GetResourceCategories" >
            <ElementType Name="MyNamespace.MyValue`1"/>
          </Function>
        </Database>

    The MyNamespace.MyValue`1 trick works fine, and the class is recognized. I expect four rows from the stored procedure, and I get four MyValue<string> instances, but the big problem is that the Value property of all four instances is null. The property is not getting mapped and I don't really get why. It may be worth noting that the Value property is generic, and that when the mapping is done using attributes it works perfectly. Anyone have a clue? BTW, the GetResourceCategories method:

        public ISingleResult<MyValue<string>> GetResourceCategories()
        {
            IExecuteResult result = this.ExecuteMethodCall(
                this, (MethodInfo)MethodInfo.GetCurrentMethod());
            return (ISingleResult<MyValue<string>>)result.ReturnValue;
        }

    Read the article

  • How do I replace NOT EXISTS with JOIN?

    - by YelizavetaYR
    I've got the following query:

        select distinct a.id, a.name
        from Employee a
        join Dependencies b on a.id = b.eid
        where not exists (
            select *
            from Dependencies d
            where b.id = d.id
            and d.name = 'Apple'
        )
        and exists (
            select *
            from Dependencies c
            where b.id = c.id
            and c.name = 'Orange'
        );

    I have two tables, relatively simple. The first, Employee, has an id column and a name column. The second table, Dependencies, has 3 columns: an id, an eid (the employee id to link on), and a name (Apple, Orange, etc.). The data looks like this:

        Employee
        id | name
        -----------
        1  | Pat
        2  | Tom
        3  | Rob
        4  | Sam

        Dependencies
        id | eid | Name
        --------------------
        1  | 1   | Orange
        2  | 1   | Apple
        3  | 2   | Strawberry
        4  | 2   | Apple
        5  | 3   | Orange
        6  | 3   | Banana

    As you can see, Pat has both Orange and Apple, so he needs to be excluded, and it has to be done via joins, but I can't seem to get it to work. Ultimately the query should return only Rob.
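
    A sketch of the anti-join form (untested; note it links the dependency rows through eid rather than the original b.id = d.id comparison): the inner join requires an Orange row, the LEFT JOIN looks for an Apple row, and the WHERE keeps only employees where that second join found nothing.

        SELECT DISTINCT a.id, a.name
        FROM Employee a
        JOIN Dependencies o
          ON o.eid = a.id AND o.name = 'Orange'      -- must have an Orange row
        LEFT JOIN Dependencies x
          ON x.eid = a.id AND x.name = 'Apple'       -- look for an Apple row
        WHERE x.id IS NULL;                          -- keep rows where none was found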

    Read the article

  • Single Query returning me 4 tables: how do I get all of them back into a dataset?

    - by Shantanu Gupta
    How do I fill multiple tables in a dataset? I am using a query that returns four tables. At the front end I am trying to fill all four resulting tables into the dataset. Here is my query; it is not complete, but it serves as a reference for my question:

        Select * from tblxyz compute sum(col1)

    Suppose this query returns more than one table. I want to fill all of the tables into my dataset. I am filling the result like this:

        con.open();
        adp.fill(dset);
        con.close();

    Now when I check this dataset, it shows me that it has four tables, but only the first table's data is displayed in it; the other three don't even have a schema. What do I need to do to get the desired output?

    Read the article

  • Algorithm for finding similar users through a join table

    - by Gdeglin
    I have an application where users can select a variety of interests from around 300 possible interests. Each selected interest is stored in a join table containing the columns user_id and interest_id. Typical users select around 50 interests out of the 300. I would like to build a system where users can find the top 20 users that have the most interests in common with them. Right now I am able to accomplish this using the following query:

        SELECT i2.user_id, count(i2.interest_id) AS count
        FROM interests_users as i1, interests_users as i2
        WHERE i1.interest_id = i2.interest_id AND i1.user_id = 35
        GROUP BY i2.user_id
        ORDER BY count DESC
        LIMIT 20;

    However, this query takes approximately 500 milliseconds to execute with 10,000 users and 500,000 rows in the join table. All indexes and database configuration settings have been tuned to the best of my ability. I have also tried avoiding the use of joins altogether using the following query:

        select user_id, count(interest_id) count
        from interests_users
        where interest_id in (13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,
            31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,
            55,56,57,58,59,60,61,62,63,64,65,66,68,69,70,71,72,73,74,75,76,77,78,79,
            80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,508)
        group by user_id
        order by count desc
        limit 20;

    But this one is even slower (~800 milliseconds). How could I best lower the time needed to gather this kind of data to below 100 milliseconds? I have considered putting this data into a graph database like Neo4j, but I am not sure whether that is the easiest solution or whether it would even be faster than what I am currently doing.
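
    One thing worth checking (a sketch, assuming a MySQL-style engine; the index name is mine): a composite index in (interest_id, user_id) order covers the self-join completely, so both the probe by interest_id and the grouped user_id come straight from the index without touching the table rows.

        ALTER TABLE interests_users
            ADD INDEX idx_interest_user (interest_id, user_id);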

    Read the article

  • data type mismatch when comparing dates in MS-Access

    - by jonos
    I have dates stored in an MS Access table in the 'General Date' format. I'm trying to create a query that returns records in a specific date range (all records from March 2010); however, I encounter a 'data type mismatch in criteria expression' message. Here is my statement:

        SELECT Loan.loan_datetimeLeant, product_name, [product_artist/director],
               product_category, loanItem_cost
        FROM Loan
        INNER JOIN ((Product
            INNER JOIN Item ON Product.[product_id] = Item.[product_id])
            INNER JOIN Loan_Items ON Item.[item_id] = Loan_Items.[item_id])
            ON (Loan.[cust_id] = Loan_Items.[cust_id])
            AND (Loan.[loan_datetimeLeant] = Loan_Items.[loan_datetimeLeant])
        WHERE Loan.loan_datetimeLeant = '01/03/2010'
            AND Loan.loan_datetimeLeant <= '31/03/2010'
        ORDER BY Loan.loan_datetimeLeant;

    I have tried variations on the date format (mm/dd/yyyy, dd/mm/yyyy 00:00:00).
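
    The mismatch is most likely the quoted string literals: Access compares Date/Time columns against #...#-delimited date literals, not strings (and the first comparison presumably wants >=). A sketch of just the WHERE clause, using an unambiguous year-month-day form so dd/mm vs mm/dd ordering does not matter, and a half-open range so times after midnight on 31 March are still included:

        WHERE Loan.loan_datetimeLeant >= #2010-03-01#
          AND Loan.loan_datetimeLeant <  #2010-04-01#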

    Read the article

  • Generate dynamic UPDATE command from Expression<Func<T, T>>

    - by Rui Jarimba
    I'm trying to generate an UPDATE command based on expression trees (for a batch update). Assume the following UPDATE command:

        UPDATE Product
        SET ProductTypeId = 123,
            ProcessAttempts = ProcessAttempts + 1

    For an expression like this:

        Expression<Func<Product, Product>> updateExpression = entity => new Product()
        {
            ProductTypeId = 123,
            ProcessAttempts = entity.ProcessAttempts + 1
        };

    How can I generate the SET part of the command?

        SET ProductTypeId = 123,
            ProcessAttempts = ProcessAttempts + 1

    Read the article
