Search Results

Search found 63598 results on 2544 pages for 'sql add on'.


  • MySQL, return only rows where there are duplicates among two columns.

    - by Richard Waite
    I have a table in MySQL of contact information: first name, last name, address, etc. I would like to run a query on this table that returns only rows whose first and last name combination appears in the table more than once. I do not want to group the "duplicates" (which may only be duplicates of the first and last name, but not of other information like address or birthdate); I want to return all the "duplicate" rows so I can look over the results and determine whether they really are dupes. This seemed like it would be a simple thing to do, but it has not been. Every solution I can find either groups the dupes and gives me only a count (which is not useful for what I need to do with the results) or doesn't work at all. Is this kind of logic even possible in a query? Should I try to do this in Python or something?
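
    One way to express this (a sketch, assuming the table is named contacts and the columns are first_name and last_name; adjust the names to the real schema) is to join the table back to a grouped copy of itself, so every row belonging to a repeated name pair comes back intact:

        SELECT c.*
        FROM contacts c
        JOIN (
            SELECT first_name, last_name
            FROM contacts
            GROUP BY first_name, last_name
            HAVING COUNT(*) > 1
        ) dup
          ON dup.first_name = c.first_name
         AND dup.last_name  = c.last_name
        ORDER BY c.last_name, c.first_name;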

    Read the article

  • How to: Display multiple related classes in an ASP.NET GridView?

    - by kversch
    I would like to display students and their grades with a GridView and LINQ to SQL, like this:

                     assignment1   assignment2
        Student 1    55            89
        Student 2    87            56
        Student 3    92            34

    I found this topic, but it doesn't answer my question: http://forums.asp.net/t/1557987.aspx I have a many-to-many relationship between students and assignments called "grades". The grade for an assignment is stored in that table in a "gradeNumber" column. I would also like to specify which assignments should be displayed in the grid. By the way, my LINQ entities are extended so that I can write/get studentx.Assignments or assignmentx.Students.
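
    Leaving the GridView binding aside, the underlying data shaping is a pivot of the grades table into one column per assignment. A rough SQL sketch of that shape (the students and assignments table names, their Name and ID columns, and the join columns are assumptions; only gradeNumber comes from the question):

        SELECT s.Name AS Student,
               MAX(CASE WHEN a.Name = 'assignment1' THEN g.gradeNumber END) AS assignment1,
               MAX(CASE WHEN a.Name = 'assignment2' THEN g.gradeNumber END) AS assignment2
        FROM   students s
        JOIN   grades g      ON g.StudentID = s.StudentID
        JOIN   assignments a ON a.AssignmentID = g.AssignmentID
        WHERE  a.Name IN ('assignment1', 'assignment2')   -- pick which assignments to show
        GROUP  BY s.Name;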

    Read the article

  • Copy new records from datatable and identify changes in old records

    - by Betite
    Assume there are two tables: Remote_table and My_table. Remote_table has 6 columns, the first four of which form the key:

        PROJECT   JOB_TYPE   MONTH   YEAR   HOURS   IS_DELETED
        134393    70         1       2013   30      0
        134393    70         2       2013   50      0
        134393    70         3       2013   80      0
        134393    70         10      2012   10      0
        134393    70         11      2012   0       0
        134393    70         12      2012   15      0

    My_table is a copy of Remote_table. I tried to copy only the new records from Remote_table with this query:

        SELECT * FROM [remote_DB].[LudanProjectManager].[dbo].Remote_table
        EXCEPT
        SELECT * FROM My_table

    It works OK, but I get a duplicate primary key exception when changes have been made to the HOURS column on Remote_table. Can anyone think of a way to copy only the new records from Remote_table and, if changes have been made to old records, to identify them and update My_table to match?
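
    One way to handle both cases in a single statement on SQL Server is MERGE. A sketch, assuming (PROJECT, JOB_TYPE, MONTH, YEAR) is the primary key as the question's emphasis suggests:

        MERGE My_table AS tgt
        USING [remote_DB].[LudanProjectManager].[dbo].Remote_table AS src
            ON  tgt.PROJECT  = src.PROJECT
            AND tgt.JOB_TYPE = src.JOB_TYPE
            AND tgt.[MONTH]  = src.[MONTH]
            AND tgt.[YEAR]   = src.[YEAR]
        -- existing rows whose values changed on the remote side get updated
        WHEN MATCHED AND (tgt.HOURS <> src.HOURS OR tgt.IS_DELETED <> src.IS_DELETED) THEN
            UPDATE SET tgt.HOURS = src.HOURS, tgt.IS_DELETED = src.IS_DELETED
        -- genuinely new rows get inserted
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (PROJECT, JOB_TYPE, [MONTH], [YEAR], HOURS, IS_DELETED)
            VALUES (src.PROJECT, src.JOB_TYPE, src.[MONTH], src.[YEAR], src.HOURS, src.IS_DELETED);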

    Read the article

  • get records from sqlite group by month

    - by peacmaker
    Hi. I have an SQLite database that contains transactions; each transaction has a price and a transDate. I want to retrieve the sum of the transactions grouped by month, so the retrieved records should look like the following:

        Price   Month
        230     2
        500     3
        400     4

    Please, any help?
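
    A sketch of one way to do this with SQLite's strftime (assuming the table is named transactions and transDate is stored as an ISO-8601 text date such as '2013-02-17'; months from different years are merged, matching the example output):

        SELECT SUM(price)                                 AS Price,
               CAST(strftime('%m', transDate) AS INTEGER) AS Month
        FROM   transactions
        GROUP  BY strftime('%m', transDate)
        ORDER  BY Month;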

    Read the article

  • VB.NET: SQLite to MSSQL

    - by user1736785
    I have a VB.NET project that uses a SQLite database. I do this by using dataset/table adapters. The client is happy and all works well. However, I have just heard that they plan on providing this product to another customer that wishes to use their MSSQL database. I am writing this post so I can mentally prepare before I begin. I am not a database pro and have really enjoyed the simplicity of setting up and managing an SQLite database. So, any ideas on the easiest way to support MSSQL as well? I am happy to run them in parallel with each other. Can I just make a separate service / middleware that syncs the SQLite database to MSSQL on a timer and does not care about what the main app is up to? Any pointers are appreciated.

    Read the article

  • Migrating Data to MSSQL 2008

    - by Fred Clown
    I am trying to migrate data from an Informix database to MSSQL 2008. I've got quite a lot of data to move. I've been trying multiple methods to get the data over, and so far SqlBulkCopy in multiple chunks seems to be the fastest I can find. Does anyone know of a faster means of getting the data over? I'm trying to cut down on the transfer time so that on my cut-over date I don't run out of time to do the full cut-over. Thanks.
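
    If SqlBulkCopy still isn't fast enough, one commonly used alternative (a sketch; the file path, delimiters, and table name are illustrative) is to unload each Informix table to a flat file and load it on the SQL Server side with a minimally logged BULK INSERT:

        BULK INSERT dbo.TargetTable
        FROM 'C:\migration\TargetTable.dat'
        WITH (
            FIELDTERMINATOR = '|',
            ROWTERMINATOR   = '\n',
            TABLOCK,              -- table lock enables minimal logging under simple/bulk-logged recovery
            BATCHSIZE       = 100000
        );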

    Read the article

  • Migrating from MSSQL to Firebird: pros and cons

    - by user193655
    I am considering the migration for these reasons:

    1) SQL Server installation is a nightmare, especially for single-user software. The software installs in 10 seconds; SQL Server takes an hour. Firebird installation is much easier.
    2) SQL Server runs on Windows Server only.
    3) My customers all have the Express edition.
    4) I am not using any advanced features. I am now starting to use FILESTREAM, but the main reason for that is that the Express edition has a 4/10 GB database size limit.

    So these are all pros of moving to Firebird. What are the cons? I could also plan to support both platforms, but I fear that would backfire.

    Read the article

  • MySQL: Limit output according to associated ID

    - by Jess
    So here's my situation. I have a books table and an authors table. An author can have many books... In my authors page view, the user (logged in) can click an author in a table row and be directed to a page displaying that author's books (reached via a URI like viewauthorbooks.php?author_id=23), very straightforward... However, my query needs to display the books for that author only, not all the books stored in the books table (as it currently does!). As I am a complete novice, I used the simplest query possible:

        SELECT * FROM tasks_tb

    This returns books for me, but it returns every single row in the table, not the ones associated with the selected author, and when I click a different author the same books are displayed... I think everyone gets what I'm trying to achieve; I just don't know how to write the query. I'm guessing I need to start using more advanced clauses like INNER JOIN etc. Anyone care to help me out? :)
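
    A sketch of the missing filter (the table and column names are assumptions; the id comes from the author_id query-string parameter and should be bound as a parameter rather than concatenated into the SQL):

        -- if the books table carries the author's id directly:
        SELECT b.*
        FROM   books b
        WHERE  b.author_id = ?;

        -- or, pulling the author's name along with each book:
        SELECT b.*, a.name
        FROM   books b
        INNER JOIN authors a ON a.author_id = b.author_id
        WHERE  a.author_id = ?;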

    Read the article

  • DB Interface Design Optimization: Is it better to optimise for fewer requests or smaller data size?

    - by Overflow
    The prevailing wisdom for web services, and web requests in general, is to design your API so that you use as few requests as possible, with each request therefore returning as much data as is needed. In database design, the accepted wisdom is to design your queries to minimise the amount of data sent over the network, rather than to minimise the number of queries. Both are remote calls, so what gives?

    Read the article

  • Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32)

    - by Nick
    The stored procedure is failing at the location below. Thanks for all your help.

        --Insert MSOrg Information
        DECLARE @PersonnelNumber int, @MSOrg varchar(255)

        DECLARE csr CURSOR FAST_FORWARD FOR
            SELECT PersonnelNumber FROM Person

        OPEN csr
        FETCH NEXT FROM csr INTO @PersonnelNumber
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC GetMSOrg @PersonnelNumber, @MSOrg out

            INSERT INTO PersonSubject
            (
                PersonnelNumber,
                SubjectID,
                SubjectValue,
                Created,
                Updated
            )
            SELECT @PersonnelNumber, SubjectID, @MSOrg, getDate(), getDate()
            FROM Subject
            WHERE DisplayName = 'MS Org'

            FETCH NEXT FROM csr INTO @PersonnelNumber
        END
        CLOSE csr
        DEALLOCATE csr

    Below is the definition of the stored procedure GetMSOrg; it fails at the third condition.

        CREATE PROCEDURE [dbo].[GetMSOrg]
        (
            @PersonnelNumber int,
            @OrgTerm varchar(200) out
        )
        AS
        DECLARE @MDRTermID int, @ReportsToPersonnelNbr int

        --Check to see if we have reached the top of the chart
        SELECT @ReportsToPersonnelNbr = ReportsToPersonnelNbr
        FROM ReportsTo
        WHERE PersonnelNumber = @PersonnelNumber

        IF (@ReportsToPersonnelNbr IS NULL) --Reached the Top of the Org Ladder
        BEGIN
            SET @OrgTerm = 'Non-standard rollup'
        END
        ELSE IF (@PersonnelNumber IN (SELECT PersonnelNumber FROM OrgTermMap))
        BEGIN
            SELECT @OrgTerm = s.Term
            FROM OrgTermMap tm
            JOIN Taxonomy..StaticHierarchy s ON tm.OrgTermID = s.TermID
            WHERE tm.PersonnelNumber = @PersonnelNumber
        END
        ELSE
        BEGIN
            SELECT @MDRTermID = tm.OrgTermID
            FROM ReportsTo r
            JOIN OrgTermMap tm ON r.ReportsToPersonnelNbr = tm.PersonnelNumber
            WHERE r.PersonnelNumber = @PersonnelNumber

            IF (@MDRTermID IS NULL)
            BEGIN
                EXEC GetMSOrg @ReportsToPersonnelNbr, @OrgTerm out
            END
            ELSE
            BEGIN
                SELECT @OrgTerm = Term
                FROM Taxonomy..StaticHierarchy
                WHERE VocabID = 118 AND TermID = @MDRTermID
            END
        END
        GO

    Read the article

  • Voting Script, Possibility of Simplifying Database Queries

    - by Sev
    I have a voting script which stores the post_id and the user_id in a table, to determine whether a particular user has already voted on a post and disallow them in the future. To do that, I am currently running the following 3 queries:

        SELECT user_id, post_id FROM votes_table WHERE postid=? AND user_id=?

    If that returns no rows, then:

        UPDATE post_table SET votecount = votecount-1 WHERE post_id = ?

    Then:

        SELECT votecount FROM post WHERE post_id=?

    to display the new votecount on the web page. Is there any better way to do this? Three queries are seriously slowing down the user's voting experience. Edit: In the votes table, vote_id is a primary key; in the post table, post_id is a primary key. Any other suggestions to speed things up?
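
    One way to cut this down (a MySQL-flavored sketch, assuming a unique key on votes_table(post_id, user_id); the application checks the affected-row count of the INSERT instead of running the first SELECT):

        -- 1) Try to record the vote; a duplicate (post_id, user_id) inserts nothing.
        INSERT IGNORE INTO votes_table (post_id, user_id) VALUES (?, ?);

        -- 2) Only if step 1 reported one affected row, adjust the counter and read it
        --    back in the same round trip.
        UPDATE post_table SET votecount = votecount - 1 WHERE post_id = ?;
        SELECT votecount FROM post_table WHERE post_id = ?;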

    Read the article

  • SQLException: incorrect syntax near '2'.

    - by Tobechukwu Ezenachukwu
    whenever I call the "ExecuteNonQuery" command on the following CommandText, I get the above SQLException myCommand.CommandText = "INSERT INTO fixtures (round_id, matchcode, date_utc, time_utc, date_london, time_london, team_A_id, team_A, team_A_country, team_B_id, team_B, team_B_country, status, gameweek, winner, fs_A, fs_B, hts_A, hts_B, ets_A, ets_B, ps_A, ps_B, last_updated) VALUES (" _ & round_id & "," & match_id & "," & date_utc & ",'" & time_utc & "'," & date_london & ",'" & time_london & "'," & team_A_id & ",'" & team_A_name & "','" & team_A_country & "'," & team_B_id & ",'" & team_B_name & "','" & _ team_B_country & "','" & status & "'," & gameweek & ",'" & winner & "'," & fs_A & "," & fs_B & "," & hts_A & "," & hts_B & "," & ets_A & "," & ets_B & "," & ps_A & "," & ps_B & "," & last_updated & ")" But whenever, i remove the last table item - "last_updated", the error disappears. Please help me resolve this issue. Is there any special treatment to be given to datetime fields??? Thanks for your help

    Read the article

  • (NOT) NULL for NVARCHAR columns

    - by Anders Abel
    Allowing NULL values on a column is normally done to represent the absence of a value. When using NVARCHAR there is already a way to have an empty string without setting the column to NULL. In most cases I cannot see a semantic difference between an NVARCHAR with an empty string and a NULL value for such a column. Declaring the column NOT NULL saves me from having to deal with the possibility of NULL values in the code, and it feels better not to have two different representations of "no value" (NULL or an empty string). Will I run into any other problems by setting my NVARCHAR columns to NOT NULL? Performance? Storage size? Anything I've overlooked about how the values are used in client code?
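
    For illustration, a minimal sketch of the pattern being described (the table and column names are made up), with an empty-string default so inserts that omit the column still succeed:

        CREATE TABLE Customers
        (
            CustomerID int           NOT NULL PRIMARY KEY,
            Notes      nvarchar(200) NOT NULL DEFAULT N''
        );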

    Read the article

  • Exporting many tables on Oracle

    - by Adomas
    Hi. I would like to know how to export many tables from an Oracle DB. I use exp.exe, it creates the file expdat.dmp, and so on. I choose to export only tables, and there I must write which ones. Is there any way of getting all of them? Thanks.
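
    For reference, exp can take a whole schema (or the whole database) instead of an explicit table list. Illustrative invocations only; the credentials, TNS alias, and schema name are placeholders:

        REM all tables owned by one schema:
        exp scott/tiger@ORCL FILE=expdat.dmp OWNER=scott

        REM full database export:
        exp system/manager@ORCL FILE=expdat.dmp FULL=Y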

    Read the article

  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query is doing the following:

        SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date,
        INTO [#Gadget]
        FROM task t

        SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client
        FROM [#Gadget]
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

        DROP TABLE [#Gadget]

    (I have removed the complex subquery, because I don't think it's relevant other than to explain why this query has been done as a two-stage process.) Now I would have thought it would be far more efficient to merge this down into a single query using subqueries, as:

        SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID)
        FROM
        (
            SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date,
            FROM task t
        ) as sub
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

    This would give the optimiser better information about what is going on and avoid any temporary tables. It should be faster. But it turns out it is a lot slower: 8 seconds vs. under 5 seconds. I can't work out why this would be the case, as all my knowledge of databases implies that subqueries would always be faster than using temporary tables. Can anyone explain what could be going on?

    Read the article

  • Highlight row in report?

    - by sanjeev40084
    I have an SSRS report which displays hundreds of rows. I was wondering if there is any way I can highlight a row, so that I can easily tell which row I am on while reading the report. Any thoughts?

    Read the article

  • Strange behavior with large Object Types

    - by Peter Lang
    I noticed that calling a method on an Oracle object type takes longer when the instance gets bigger. The code below just adds rows to a collection stored in the object type and calls the empty dummy procedure in the loop. Calls take longer when more rows are in the collection. When I remove the call to dummy, performance is much better (the collection still contains the same number of records):

        Calling dummy:    Not calling dummy:
        11                0
        81                0
        158               0

    Code to reproduce:

        Create Type t_tab Is Table Of VARCHAR2(10000);

        Create Type test_type As Object(
            tab t_tab,
            Member Procedure dummy
        );

        Create Type Body test_type As
            Member Procedure dummy As
            Begin
                Null; --# Do nothing
            End dummy;
        End;

        Declare
            v_test_type test_type := New test_type( New t_tab() );

            Procedure run_test As
                start_time NUMBER := dbms_utility.get_time;
            Begin
                For i In 1 .. 200 Loop
                    v_test_Type.tab.Extend;
                    v_test_Type.tab(v_test_Type.tab.Last) := Lpad(' ', 10000);
                    v_test_Type.dummy(); --# Removed this line in second test
                End Loop;
                dbms_output.put_line( dbms_utility.get_time - start_time );
            End run_test;
        Begin
            run_test;
            run_test;
            run_test;
        End;

    I tried with both 10g and 11g. Can anyone explain/reproduce this behavior?

    Read the article

  • How to query range of data in DB2 with highest performance?

    - by Fuangwith S.
    Usually I need to retrieve data from a table in some range; for example, a separate page for each search result. In MySQL I would use the LIMIT keyword, but I don't know the DB2 equivalent. Right now I use this query to retrieve a range of rows:

        SELECT *
        FROM (
            SELECT SMALLINT(RANK() OVER(ORDER BY NAME DESC)) AS RUNNING_NO,
                   DATA_KEY_VALUE,
                   SHOW_PRIORITY
            FROM   EMPLOYEE
            WHERE  NAME LIKE 'DEL%'
            ORDER  BY NAME DESC
            FETCH FIRST 20 ROWS ONLY
        ) AS TMP
        ORDER BY TMP.RUNNING_NO ASC
        FETCH FIRST 10 ROWS ONLY

    but I know it's bad style. So, how should I write this query for the best performance?
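
    For comparison, a common DB2 pagination pattern numbers the rows once and filters on the range directly. A sketch reusing the names from the question (here fetching rows 11 to 20):

        SELECT DATA_KEY_VALUE, SHOW_PRIORITY
        FROM (
            SELECT DATA_KEY_VALUE,
                   SHOW_PRIORITY,
                   ROW_NUMBER() OVER (ORDER BY NAME DESC) AS RN
            FROM   EMPLOYEE
            WHERE  NAME LIKE 'DEL%'
        ) AS T
        WHERE RN BETWEEN 11 AND 20
        ORDER BY RN;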

    Read the article

  • How can I find table names in a database, and a table's column names?

    - by Phsika
    How can I find the table names in a database, and how can I find any table's column names? I have this query:

        SELECT Col.COLUMN_NAME, Col.DATA_TYPE
        FROM INFORMATION_SCHEMA.COLUMNS AS Col
        LEFT OUTER JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE AS Usg
            ON  Col.TABLE_NAME  = Usg.TABLE_NAME
            AND Col.COLUMN_NAME = Usg.COLUMN_NAME
        LEFT OUTER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS Con
            ON Usg.CONSTRAINT_NAME = Con.CONSTRAINT_NAME
        WHERE Col.TABLE_NAME = 'Addresses_Temp'
          AND Con.Constraint_TYPE = 'PRIMARY KEY'

    But it returns empty data to me :(
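
    For the two questions as asked, the standard INFORMATION_SCHEMA views may be enough on their own; a sketch (the table name is the one from the question):

        -- list the tables in the current database
        SELECT TABLE_SCHEMA, TABLE_NAME
        FROM   INFORMATION_SCHEMA.TABLES
        WHERE  TABLE_TYPE = 'BASE TABLE';

        -- list one table's columns and data types
        SELECT COLUMN_NAME, DATA_TYPE
        FROM   INFORMATION_SCHEMA.COLUMNS
        WHERE  TABLE_NAME = 'Addresses_Temp';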

    Read the article

  • Why can't you return a List from a Compiled Query?

    - by Andrew
    I was speeding up my app by using compiled queries for queries which were getting hit over and over. I tried to implement it like this:

        Function Select(ByVal fk_id As Integer) As List(SomeEntity)
            Using db As New DataContext()
                db.ObjectTrackingEnabled = False
                Return CompiledSelect(db, fk_id)
            End Using
        End Function

        Shared CompiledSelect As Func(Of DataContext, Integer, List(Of SomeEntity)) = _
            CompiledQuery.Compile(Function(db As DataContext, fk_id As Integer) _
                (From u In db.SomeEntities _
                 Where u.SomeLinkedEntity.ID = fk_id _
                 Select u).ToList())

    This did not work and I got this error message:

        Type    : System.ArgumentNullException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Message : Value cannot be null. Parameter name: value

    However, when I changed my compiled query to return IQueryable instead of List, like so:

        Function Select(ByVal fk_id As Integer) As List(SomeEntity)
            Using db As New DataContext()
                db.ObjectTrackingEnabled = False
                Return CompiledSelect(db, fk_id).ToList()
            End Using
        End Function

        Shared CompiledSelect As Func(Of DataContext, Integer, IQueryable(Of SomeEntity)) = _
            CompiledQuery.Compile(Function(db As DataContext, fk_id As Integer) _
                From u In db.SomeEntities _
                Where u.SomeLinkedEntity.ID = fk_id _
                Select u)

    it worked fine. Can anyone shed any light as to why this is? BTW, compiled queries rock! They sped up my app by a factor of 2.

    Read the article

  • How would I implement separate databases for reading and writing operations?

    - by Matt
    I am interested in implementing an architecture that has two databases, one for read operations and the other for writes. I have never implemented something like this, and have always built single-database, highly normalised systems, so I am not quite sure where to begin. I have a few parts to this question:

    1. What would be a good resource to find out more about this architecture?
    2. Is it just a question of replicating between two identical schemas, or would the schemas differ depending on the operations? Would normalisation vary too?
    3. How do you ensure that data written to one database is immediately available for reading from the second?

    Any further help, tips, or resources would be appreciated. Thanks.

    Read the article
