Search Results

Search found 40744 results on 1630 pages for 'sql interview questions a'.


  • How Can I truncate Multiple Tables in MySql?

    - by Luiscencio
    I need to clear all my inventory tables. I've tried SELECT 'TRUNCATE TABLE ' + TABLE_NAME FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME LIKE 'inventory%' but I get this error: "Truncated incorrect DOUBLE value: 'TRUNCATE TABLE '" (error code 1292). If this is the correct way, then what am I doing wrong?
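    A likely explanation, sketched below: in MySQL the + operator is numeric addition, not string concatenation, which is why the string literal gets coerced (and truncated) to a DOUBLE. CONCAT() builds the statements instead; the generated TRUNCATE statements still have to be executed one by one.

        -- A minimal sketch, assuming MySQL: build the statements with CONCAT().
        SELECT CONCAT('TRUNCATE TABLE `', TABLE_NAME, '`;') AS stmt
        FROM   INFORMATION_SCHEMA.TABLES
        WHERE  TABLE_SCHEMA = DATABASE()
          AND  TABLE_NAME LIKE 'inventory%';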

    Read the article

  • Why would I do an inner join on a non-distinct field?

    - by froadie
    I just came across a query that does an inner join on a non-distinct field. I've never seen this before and I'm a little confused about this usage. Something like: SELECT distinct all, my, stuff FROM myTable INNER JOIN myOtherTable ON myTable.nonDistinctField = myOtherTable.nonDistinctField (WHERE some filters here...) I'm not quite sure what my question is or how to phrase it, or why exactly this confuses me, but I was wondering if anyone could explain why someone would need to do an inner join on a non-distinct field and then select only distinct values...? Is there ever a legitimate use of an inner join on a non-distinct field? What would be the purpose? And if there is a legitimate reason for such a query, can you give examples of where it would be used?
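    For illustration only (the table and column names below are hypothetical, not from the question): joining on a non-unique column is normal whenever the relationship is one-to-many or many-to-many, and DISTINCT then collapses the duplicate combinations the join produces.

        -- Hypothetical example: region is not unique in either table.
        SELECT DISTINCT c.region
        FROM   customers c
        INNER  JOIN orders o ON o.customer_region = c.region;
        -- Each region appears once in the result even though the join
        -- matches many customer/order pairs per region.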

    Read the article

  • Re-indexing table; update with from

    - by David Thorisson
    The query says it all; I can't find the right syntax without using a for..next loop:

        UPDATE Webtree
        SET Webtree.Sorting = w2.Sorting
        FROM (
            SELECT BranchID,
                   CASE WHEN @Index >= ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        THEN ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                        ELSE ROW_NUMBER() OVER (ORDER BY Sorting ASC) + 1
                   END AS Sorting
            FROM Webtree w2
            WHERE w2.ParentID = @ParentID
        )
        WHERE Webtree.BranchID = w2.BranchID
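    A possible fix, sketched on the assumption that this is SQL Server T-SQL: the derived table needs an alias in the FROM clause, and the outer reference w2 must point at that alias rather than at the table inside the subquery.

        -- A minimal sketch: alias the derived table and join the update target against it.
        UPDATE w
        SET    w.Sorting = w2.Sorting
        FROM   Webtree AS w
        JOIN  (
                SELECT BranchID,
                       CASE WHEN @Index >= ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                            THEN ROW_NUMBER() OVER (ORDER BY Sorting ASC)
                            ELSE ROW_NUMBER() OVER (ORDER BY Sorting ASC) + 1
                       END AS Sorting
                FROM   Webtree
                WHERE  ParentID = @ParentID
              ) AS w2
          ON   w.BranchID = w2.BranchID;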

    Read the article

  • Are database triggers evil?

    - by WW
    Are database triggers a bad idea? In my experience they are evil, because they can result in surprising side effects and are difficult to debug (especially when one trigger fires another). Often developers do not even think of checking whether a trigger exists. On the other hand, it seems like if you have logic that must occur every time a new FOO is created in the database, then the most foolproof place to put it is an insert trigger on the FOO table. The only time we use triggers is for really simple things, like setting a ModifiedDate column.
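    A sketch of that "really simple" case, for concreteness (SQL Server syntax; the table and column names are hypothetical):

        -- Hypothetical AFTER UPDATE trigger that stamps ModifiedDate on changed rows.
        CREATE TRIGGER trg_Foo_SetModifiedDate
        ON dbo.Foo
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE f
            SET    f.ModifiedDate = GETDATE()
            FROM   dbo.Foo AS f
            JOIN   inserted AS i ON i.FooID = f.FooID;
        END;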

    Read the article

  • How to move from untyped DataSets to POCO/LINQ2SQL in a legacy application

    - by artvolk
    Good day! I have a legacy application where the data access layer consists of classes in which queries are built and executed with SqlConnection/SqlCommand, and the results are passed to the upper layers wrapped in untyped DataSets/DataTables. Now I'm working on integrating this application into a newer one, written in ASP.NET MVC 2, where LINQ2SQL is used for data access. I don't want to rewrite the fancy logic that generates the complex queries passed to SqlConnection/SqlCommand in LINQ2SQL (and don't have permission to do so), but I'd like the results of these queries to come back as strongly-typed object collections instead of untyped DataSets/DataTables. The basic idea is to wrap the old data access code in something that looks like a clean "Model" from the ASP.NET MVC side. What is the fastest/easiest way of doing this?

    Read the article

  • Group keywords by site

    - by Skudd
    I am finding a lot of useful help here today, and I really appreciate it. This should be the last one for the day: I have a list of the top 10 keywords per site, sorted by visits, by date. The records need to be sorted as follows (excuse the formatting):

                                2010-05      2010-04
        site1.com
          keyword1              apples       wine
          keyword1 visits       100          12
          keyword2              oranges      water
          keyword2 visits       99           10
        site2.com
          keyword1              blueberry    cornbread
          keyword1 visits       90           100
          keyword2              squares      biscuits
          keyword2 visits       80           99

    Basically what I need to accomplish involves grouping, but I can't seem to figure it out. Am I heading down the right path, or is there another way to achieve this, or is it just impossible?
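    One possible direction, sketched under the assumption that the raw data lives in something like keyword_stats(site, keyword, stat_month, visits) — none of which is given in the question: conditional aggregation groups by site and keyword and spreads the months across columns.

        -- Hypothetical schema; adjust names to the real tables.
        SELECT site,
               keyword,
               SUM(CASE WHEN stat_month = '2010-05' THEN visits ELSE 0 END) AS visits_2010_05,
               SUM(CASE WHEN stat_month = '2010-04' THEN visits ELSE 0 END) AS visits_2010_04
        FROM   keyword_stats
        GROUP  BY site, keyword
        ORDER  BY site, visits_2010_05 DESC;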

    Read the article

  • Select count() max() Date

    - by DAVID
    I have a table with shift history along with employee IDs. I'm using this query to retrieve a list of employees and their total shifts, specifying the range to count over:

        SELECT ope_id, count(ope_id)
        FROM operator_shift
        WHERE ope_shift_date >= to_date('01-MAR-10','dd-mon-yy')
        AND ope_shift_date <= to_date('31-MAR-10','dd-mon-yy')
        GROUP BY OPE_ID

    which gives:

        OPE_ID  COUNT(OPE_ID)
             1             14
             2              7
             3              6
             4              6
             5              2
             6              5
             7              2
             8              1
             9              2
            10              4

        10 rows selected.

    How do I choose the employee with the highest number of shifts within the specified date range?
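    One way to do it, sketched on the assumption that this is Oracle (the TO_DATE calls suggest so): order the grouped counts descending and keep only the first row. Ties would need extra handling.

        -- A minimal sketch: the inner query ranks, the outer keeps the top row.
        SELECT ope_id, shift_count
        FROM (
            SELECT ope_id, COUNT(*) AS shift_count
            FROM   operator_shift
            WHERE  ope_shift_date >= TO_DATE('01-MAR-10', 'dd-mon-yy')
            AND    ope_shift_date <= TO_DATE('31-MAR-10', 'dd-mon-yy')
            GROUP  BY ope_id
            ORDER  BY COUNT(*) DESC
        )
        WHERE ROWNUM = 1;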

    Read the article

  • Help on understanding multiple columns on an index?

    - by Xaisoft
    Assume I have a table called "table" with 3 columns: a, b, and c. What does it mean to have a non-clustered index on columns (a, b)? Is a nonclustered index on (a, b) the same as a nonclustered index on (b, a)? (Note the order.) Also, is a nonclustered index on column a the same as a nonclustered index on (a, c)? I was looking at the SQL Server Performance website and they had DMV scripts that tell you if you have overlapping indexes; I believe they were saying that an index on a is the same as one on (a, b), so it is redundant. Is this true of indexes? One last question: why is the clustered index put on the primary key? Most of the time the primary key is not queried against, so shouldn't the clustered index be on the most queried column? I am probably missing something here, like having it on the primary key speeding up joins? Great explanations so far. Should I turn this into a wiki and change the title to "index explanation"?
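    A small illustration of the column-order point (SQL Server syntax; the table is hypothetical):

        CREATE TABLE dbo.Demo (a INT, b INT, c INT);

        CREATE NONCLUSTERED INDEX IX_Demo_a_b ON dbo.Demo (a, b);
        -- IX_Demo_a_b can seek on "WHERE a = ..." and on "WHERE a = ... AND b = ...",
        -- but not efficiently on "WHERE b = ..." alone; an index on (b, a) covers that case.
        -- A separate index on just (a) is largely redundant next to (a, b);
        -- the reverse does not hold, so an index on (a) is not the same as one on (a, c).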

    Read the article

  • Strange: Planner takes decision with lower cost, but (very) long query runtime

    - by S38
    Facts:

        PGSQL 8.4.2, Linux
        I make use of table inheritance
        Each table contains 3 million rows
        Indexes on joining columns are set
        Table statistics (analyze, vacuum analyze) are up-to-date
        The only table used is "node", with various partitioned sub-tables
        Recursive query (pg >= 8.4)

    Now here is the explained query:

        WITH RECURSIVE rows AS (
            SELECT *
            FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT *
            FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

        QUERY PLAN
        --------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms
        (16 rows)

    So far so bad: the planner decides to use hash joins (good) but no indexes (bad).
    Now, after doing the following:

        SET enable_hashjoin TO false;

    the explained query looks like this:

        QUERY PLAN
        --------------------------------------------------------------------------------
        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms
        (21 rows)

    Incredibly faster, because indexes were used. Notice: the cost of the second query is somewhat higher than that of the first query. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans. Then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

    Read the article

  • Compiled Queries and "Parameters cannot be sequences"

    - by David B
    I thought that compiled queries would perform the same query translation as DataContext. Yet I'm getting a run-time error when I try to use a query with a .Contains method call. Where have I gone wrong?

        // Private member which holds a compiled query.
        Func<DataAccess.DataClasses1DataContext, List<int>, List<DataAccess.TestRecord>> compiledFiftyRecordQuery =
            System.Data.Linq.CompiledQuery.Compile
                <DataAccess.DataClasses1DataContext, List<int>, List<DataAccess.TestRecord>>
                ((dc, ids) => dc.TestRecords.Where(tr => ids.Contains(tr.ID)).ToList());

        // This method calls the compiled query.
        public void FiftyRecordCompiledQueryByID()
        {
            List<int> IDs = GetRandomInts(50);

            // Throws System.NotSupportedException: "Parameters cannot be sequences."
            List<DataAccess.TestRecord> results = compiledFiftyRecordQuery(myContext, IDs);
        }

    Read the article

  • Tables with no Primary Key

    - by Matt Hamilton
    I have several tables whose only unique data is a uniqueidentifier (a GUID) column. Because GUIDs are non-sequential (and they're client-side generated, so I can't use newsequentialid()), I have made a non-primary, non-clustered index on this ID field rather than giving the tables a clustered primary key. I'm wondering what the performance implications of this approach are. I've seen some people suggest that tables should have an auto-incrementing ("identity") int as a clustered primary key even if it doesn't have any meaning, as it means the database engine itself can use that value to quickly look up a row instead of having to use a bookmark. My database is merge-replicated across a bunch of servers, so I've shied away from identity int columns as they're a bit hairy to get right in replication. What are your thoughts? Should tables have primary keys? Or is it OK not to have any clustered index if there are no sensible columns to index that way?
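    For concreteness, a hedged sketch of the two layouts being weighed (SQL Server syntax; the table is hypothetical, and the identity option may not suit merge replication, as noted above):

        -- Option A: clustered identity surrogate key, GUID kept unique with a nonclustered constraint.
        CREATE TABLE dbo.Orders
        (
            OrderKey  INT IDENTITY(1,1) NOT NULL,
            OrderGuid UNIQUEIDENTIFIER  NOT NULL,
            CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderKey),
            CONSTRAINT UQ_Orders_Guid UNIQUE NONCLUSTERED (OrderGuid)
        );

        -- Option B (the current approach): a heap with only a nonclustered index on the GUID.
        CREATE TABLE dbo.OrdersHeap
        (
            OrderGuid UNIQUEIDENTIFIER NOT NULL
        );
        CREATE NONCLUSTERED INDEX IX_OrdersHeap_Guid ON dbo.OrdersHeap (OrderGuid);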

    Read the article

  • Unique identifiers for users

    - by Christopher McCann
    If I have a table of a hundred users, normally I would just set up an auto-increment userID column as the primary key. But if suddenly we have a million or 5 million users, that becomes really difficult, because I would want to start becoming more distributed, in which case an auto-increment primary key would be useless, as each node would be creating the same primary keys. Is the solution to this to use natural primary keys? I am having a really hard time thinking of a natural primary key for this bunch of users. The problem is they are all young people, so they do not have national insurance numbers or any other unique identifier I can think of. I could create a multi-column primary key, but there is still a chance, however minuscule, of duplicates occurring. Does anyone know of a solution? Thanks
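    One widely used pattern, sketched below assuming MySQL (the question does not name the engine): stagger the auto-increment sequences per node so that no two nodes can ever generate the same ID. UUID keys are the other common option.

        -- On node 1:
        SET GLOBAL auto_increment_increment = 2;  -- total number of nodes
        SET GLOBAL auto_increment_offset    = 1;  -- this node's position
        -- On node 2:
        SET GLOBAL auto_increment_increment = 2;
        SET GLOBAL auto_increment_offset    = 2;
        -- Node 1 now generates 1, 3, 5, ... and node 2 generates 2, 4, 6, ...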

    Read the article

  • why this left join query failed to load all the data in left table ?

    - by lzyy
    users table

        +-----+-----------+
        | id  | username  |
        +-----+-----------+
        |  1  | tom       |
        |  2  | jelly     |
        |  3  | foo       |
        |  4  | bar       |
        +-----+-----------+

    groups table

        +----+---------+---------+
        | id | user_id | title   |
        +----+---------+---------+
        |  2 |       1 | title 1 |
        |  4 |       1 | title 2 |
        +----+---------+---------+

    the query

        SELECT users.username, users.id, count(groups.title) as group_count
        FROM users
        LEFT JOIN groups ON users.id = groups.user_id

    result

        +----------+----+-------------+
        | username | id | group_count |
        +----------+----+-------------+
        | tom      |  1 |           2 |
        +----------+----+-------------+

    Where is the rest of the users' info? The result is the same as with an inner join; shouldn't a left join return all of the left table's data? PS: I'm using MySQL.
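    A likely explanation, sketched below: with an aggregate like COUNT() and no GROUP BY, MySQL collapses the whole result into a single row, so the LEFT JOIN is not the culprit. Grouping per user brings the other rows back, with group_count = 0 for users who have no groups.

        SELECT users.username,
               users.id,
               COUNT(groups.title) AS group_count
        FROM   users
        LEFT   JOIN groups ON users.id = groups.user_id
        GROUP  BY users.id, users.username;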

    Read the article

  • How to calculate the sum of a column in an MS Access table for a given date (a single day, month or year)

    - by cMinor
    I have a table in Access with dates stored in a custom format, dd/MM/yyyy hh:mm:ss tt, and a form in VB.NET 2010. I can get a specific day, month and year with no problem, but the problem comes when I want to query the sum of a column named value for a specific month, day or year. The table looks like:

        +-----+-----------+--------------------------+
        | id  | value     | date                     |
        +-----+-----------+--------------------------+
        | id1 | 1499      | 01/01/2012 07:30:11 p.m. |
        | id2 | 1509      | 11/02/2012 07:30:11 p.m. |
        | id3 | 1611      | 21/10/2012 07:30:11 p.m. |
        | id1 | 1115      | 11/10/2012 07:30:11 p.m. |
        | id1 | 1499      | 17/05/2012 07:30:11 p.m. |
        | id2 | 1709      | 11/06/2012 07:30:11 p.m. |
        | id3 | 1911      | 30/07/2012 07:30:11 p.m. |
        | id1 | 1015      | 01/08/2012 07:30:11 p.m. |
        | id1 | 1000      | 11/05/2012 07:30:11 p.m. |
        +-----+-----------+--------------------------+

    I know the query SELECT SUM(value) FROM mytable WHERE date in='01/05/2012 00:00:00' ... but how do I tell the query that I want the month of May, so I would get 1499 + 1000 = 2499? Or how do I ask for the year 2012, so I would get the sum of the whole table? What would be the correct syntax?
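    A possible direction, sketched with Access/Jet SQL: the Month() and Year() functions extract the parts of a Date/Time column, and the brackets are there because date and value can clash with reserved words. Sum for May 2012:

        SELECT SUM([value]) AS MayTotal
        FROM mytable
        WHERE Year([date]) = 2012 AND Month([date]) = 5;

    Sum for the whole of 2012:

        SELECT SUM([value]) AS YearTotal
        FROM mytable
        WHERE Year([date]) = 2012;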

    Read the article

  • How to efficiently SELECT rows from database table based on selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named "Code" that keeps the customer's ID. There are about 10,000 different customer codes. I have a GUI interface that allows the user to render a report from the transaction table, and the user may select an arbitrary number of customers. I used the IN operator first, and it works for a few customers: SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...') I quickly ran into problems when I selected a few thousand customers; there is a limit on the IN operator. An alternative is to create a temporary table with a single CODE field and inject the selected customer codes into it with INSERT statements. I can then use SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE=B.CODE) This works nicely for huge selections. However, there is performance overhead for the temporary table creation, the INSERT injection and the dropping of the temporary table. Are you aware of a better solution to handle this situation?
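    For reference, a hedged sketch of the temp-table variant (SQL Server syntax is assumed; the question doesn't name the engine). Declaring the code column as the primary key gives the join an index for free, which takes some of the sting out of the overhead mentioned above:

        CREATE TABLE #SelectedCodes (CODE VARCHAR(20) NOT NULL PRIMARY KEY);

        -- ... bulk-insert the selected customer codes here (batched INSERTs or a bulk-copy API) ...

        SELECT A.*
        FROM   TRANS_TABLE A
        INNER  JOIN #SelectedCodes B ON A.CODE = B.CODE;

        DROP TABLE #SelectedCodes;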

    Read the article

  • Sqlserver 2005 Replication issue

    - by Francesco
    Hi, I have peer-to-peer merge replication between two SQL Server 2005 instances. The first server is both the publisher and the distributor. Everything works fine, but if the VPN goes down for a couple of hours the replication goes down too, and I need to manually restart the SQL Server Agent. In the SQL Server Agent properties I have set the two options about agent failover, but nothing changed. How can I set up an automatic restart of the SQL Server Agent when the VPN goes down? Thanks

    Read the article

  • Two different tables or just one with bool column?

    - by Aidas
    We have two tables: OriginalDocument and ProcessedDocument. In the first one we put the original, unprocessed document. After it's validated and processed (converted to our XML format and parsed), it's put into the Document table. A processed document can be valid or invalid. Which makes more sense: having two different tables for valid and invalid documents, or just one with a 'Valid' column? Some of the columns (~5-7) are irrelevant for invalid documents. Storing both invalid and valid documents would also leave the Document table full of NULL columns (if a document is invalid, information like the document number or the receiver can be unknown). What else should we consider and weigh when making this decision?

    Read the article

  • privmsg system db schema

    - by Bartek
    I'm making a PM system on my site, and I want to know the best DB schema. I have always used just one table, but my users have started complaining that the messages in their outbox suddenly disappear. That's because if the other user deletes a message, the one who sent it won't see it either. So I'm thinking of making another table with the same fields, something like this:

        privmsgs
        id | to | from | subject | message | date
        ---+----+------+---------+---------+---------
         1 | 76 | 893  | blabla. | blabla. | 20100404

        sent_msgs
        id | to | from | subject | message | date
        ---+----+------+---------+---------+---------
         1 | 76 | 893  | blabla. | blabla. | 20100404

    What do you think? Sorry for my bad English.

    Read the article

  • Calling sp and Performance strategy.

    - by Costa
    Hi, I find myself in a situation where I have to choose between two options. Either I create a new stored procedure in the database and write the middle-layer code for it, losing some precious development time (the procedure is also likely to contain some joins). Or I use two existing stored procedures; the problem with that approach is that I make two round trips to the database, which can mean poor performance, especially if the database is on another server. Which approach would you go with, and why? Thanks

    Read the article

  • Displaying a single rank in MySQL table

    - by MichaelInno
    I have a table called 'highscores' that looks like this:

        id  udid  name   score
         1  1111  Mike     200
         2  3333  Joe      300
         3  4444  Billy     50
         4  0000  Loser     10
         5  DDDD  Face     400

    Given a specific udid, I want to return the rank of that row based on its score value, i.e. if the given udid = 0000, I should get back 5. Any idea how to write this query for a MySQL database?
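    One way to do it, sketched below for MySQL: count how many rows score strictly higher than the given row and add 1. Ties share the same rank with this approach.

        SELECT COUNT(*) + 1 AS ranking
        FROM   highscores
        WHERE  score > (SELECT score FROM highscores WHERE udid = '0000');
        -- For udid '0000' (score 10) four rows score higher, so the result is 5.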

    Read the article

  • select rows with column that is not null?

    - by fayer
    By default, one column in my MySQL table is NULL. I want to select some rows, but only if the field value in that column is not NULL. What is the correct way of writing it? $query = "SELECT * FROM names WHERE id = '$id' AND name != NULL"; Is this correct?
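    A hedged note, sketched below: in SQL a comparison with NULL never evaluates to true, so name != NULL matches no rows; the dedicated IS NOT NULL test is the usual way to express this.

        SELECT *
        FROM   names
        WHERE  id = '$id'
          AND  name IS NOT NULL;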

    Read the article
