Search Results

Search found 40744 results on 1630 pages for 'sql interview questions a'.


  • how to select distinct rows for a column

    - by Satoru.Logic
    Hi, all. I have a table x like the one below:

        id | name | observed_value
        ---+------+---------------
         1 | a    | 100
         2 | b    | 200
         3 | b    | 300
         4 | a    | 150
         5 | c    | 300

    I want to make a query so that the result set has exactly one record per name:

        (1, a, 100)
        (2, b, 200)
        (5, c, 300)

    If there are multiple records for a name, say 'a' in the table above, I just pick one of them. In my current implementation, I make a query like this:

        select x.*
        from x,
             (select distinct name, min(observed_value) as minimum_val
              from x
              group by name) x1
        where x.name = x1.name
          and x.observed_value = x1.minimum_val;

    But I think there may be a better way; please tell me if you know one. Thanks in advance.
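
    One simpler alternative (a sketch, assuming standard SQL and that keeping the row with the lowest id per name is acceptable, which matches the sample output above):

        select t.*
        from x t
        where t.id = (select min(t2.id)
                      from x t2
                      where t2.name = t.name);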

    Read the article

  • Difference between "and" and "where" in joins

    - by Midhat
    What's the difference between

        SELECT DISTINCT field1
        FROM table1 cd
        JOIN table2 ON cd.Company = table2.Name
            AND table2.Id IN (2728)

    and

        SELECT DISTINCT field1
        FROM table1 cd
        JOIN table2 ON cd.Company = table2.Name
        WHERE table2.Id IN (2728)

    Both return the same result and both have the same EXPLAIN output.
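
    For an inner join the two forms are equivalent, which is why both results and EXPLAIN plans match. The placement does matter with an outer join; a sketch using the same tables:

        -- Condition in ON: keeps every table1 row; it only limits which
        -- table2 rows are eligible to match.
        SELECT DISTINCT field1
        FROM table1 cd
        LEFT JOIN table2 ON cd.Company = table2.Name
            AND table2.Id IN (2728);

        -- Condition in WHERE: filters after the join, discarding table1 rows
        -- whose matched table2.Id is NULL or not 2728, so it behaves like an inner join.
        SELECT DISTINCT field1
        FROM table1 cd
        LEFT JOIN table2 ON cd.Company = table2.Name
        WHERE table2.Id IN (2728);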

    Read the article

  • joining two tables and getting aggregate data

    - by alex
    How do I write a query that returns aggregate sales data for California in the past x months? The tables are:

        order:      orderId int, customerId int, deposit decimal, orderDate date
        customer:   customerId int, state varchar
        orderItem:  orderId int, itemId int, qty int, lineTotal decimal, itemPrice decimal
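
    One possible query (a sketch, assuming MySQL, that "sales" means the sum of line totals, and using 3 as the value of x):

        SELECT SUM(oi.lineTotal)         AS totalSales,
               COUNT(DISTINCT o.orderId) AS orderCount
        FROM `order` o
        JOIN customer  c  ON c.customerId = o.customerId
        JOIN orderItem oi ON oi.orderId   = o.orderId
        WHERE c.state = 'California'
          AND o.orderDate >= DATE_SUB(CURDATE(), INTERVAL 3 MONTH);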

    Read the article

  • Client to server data upload

    - by RickBowden
    I'm trying to design a system similar to traditional server monitoring systems like MOM, Tivoli, and OpenView, where an agent records data and uploads it to a central database once a day, but can then also send immediate alerts back to the server. I'm not sure what the best methodology might be for this. I've started looking at Microsoft Sync Services, but I'm not sure if it will fit my needs. I'm using VS2008 and C#. Does anyone have any experience or ideas about how I should go about this task?

    Read the article

  • Selecting distinct values from mysql with largest timestamp

    - by user987048
    I am building a mail system. The inbox is only supposed to grab the last message (the one with the highest time value) for each user/sender combination, where the user or sender is the user ID. Here is the table structure:

        CREATE TABLE IF NOT EXISTS `mail` (
          `user`   int(11) NOT NULL,
          `sender` int(11) NOT NULL,
          `body`   text NOT NULL,
          `new`    enum('0','1') NOT NULL default '1',
          `time`   int(11) NOT NULL,
          KEY `user` (`user`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    So, with a table containing the following data:

        user   sender   new   time
        ---------------------------
        1      0        0     5
        1      0        0     6
        2      1        0     7
        1      0        1     8
        1      2        0     9
        1      0        1     11
        1      2        1     12

    I want to select the following, WHERE user OR sender = X (in this case, 1):

        user   sender   new   time
        ---------------------------
        2      1        0     7
        1      2        0     9
        1      0        1     11

    How would I go about doing something like this?
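
    A common way to get the newest row per conversation partner (a sketch, assuming MySQL; the "other party" of a row is the sender when the user column is X, otherwise the user column):

        SELECT m.*
        FROM mail m
        JOIN (
            SELECT IF(`user` = 1, sender, `user`) AS other_party,
                   MAX(`time`)                    AS max_time
            FROM mail
            WHERE `user` = 1 OR sender = 1
            GROUP BY other_party
        ) latest
          ON IF(m.`user` = 1, m.sender, m.`user`) = latest.other_party
         AND m.`time` = latest.max_time
        WHERE m.`user` = 1 OR m.sender = 1;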

    Read the article

  • sports league database design

    - by John
    Hello, I'm developing a database to store statistics for a sports league. I'd like to show several tables:

        - a league table that shows each team's position in the current and previous fixture
        - a table that shows the position of a team in every fixture of the championship

    I have a matches table:

        Matches (IdMatch, IdTeam1, IdTeam2, GoalsTeam1, GoalsTeam2)

    With this table I can calculate the total points of every team based on the matches the team has played, but every time I want to show the league table I have to recalculate the points. I also have a problem calculating the position a team finished in over the last 10 fixtures, because I have to run 10 queries. Storing the league table for every fixture in a database table is another approach, but every time I change a match that has already been played I have to recalculate every fixture from there on... Is there a better approach to this problem? Thanks
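
    For reference, a sketch of computing the current points directly from the Matches table (assuming 3 points for a win, 1 for a draw, 0 for a loss):

        SELECT team, SUM(points) AS total_points
        FROM (
            SELECT IdTeam1 AS team,
                   CASE WHEN GoalsTeam1 > GoalsTeam2 THEN 3
                        WHEN GoalsTeam1 = GoalsTeam2 THEN 1
                        ELSE 0 END AS points
            FROM Matches
            UNION ALL
            SELECT IdTeam2,
                   CASE WHEN GoalsTeam2 > GoalsTeam1 THEN 3
                        WHEN GoalsTeam2 = GoalsTeam1 THEN 1
                        ELSE 0 END
            FROM Matches
        ) per_match
        GROUP BY team
        ORDER BY total_points DESC;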

    Read the article

  • what does 'extra' mean in this django code?

    - by zjm1126
        TOPIC_COUNT_SQL = """
        SELECT COUNT(*)
        FROM topics_topic
        WHERE topics_topic.object_id = maps_map.id
          AND topics_topic.content_type_id = %s
        """

        MEMBER_COUNT_SQL = """
        SELECT COUNT(*)
        FROM maps_map_members
        WHERE maps_map_members.map_id = maps_map.id
        """

        maps = maps.extra(select=SortedDict([
            ('member_count', MEMBER_COUNT_SQL),
            ('topic_count', TOPIC_COUNT_SQL),
        ]), select_params=(content_type.id,))

    I don't understand what this means. Thanks.
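
    The extra() queryset method attaches raw SQL fragments to the query Django builds: each entry in select becomes an extra column computed by a correlated subquery, and select_params fills the %s placeholder with content_type.id. The combined SQL is roughly the following (a sketch; the exact columns selected from maps_map depend on the Map model):

        SELECT maps_map.*,
               (SELECT COUNT(*)
                FROM maps_map_members
                WHERE maps_map_members.map_id = maps_map.id)  AS member_count,
               (SELECT COUNT(*)
                FROM topics_topic
                WHERE topics_topic.object_id = maps_map.id
                  AND topics_topic.content_type_id = %s)      AS topic_count
        FROM maps_map;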

    Read the article

  • Multisite Enabling a Table

    - by Joe Fitzgibbons
    I am creating a table (table A) that will have a number of columns (of course), and there will be another table (table B) that holds metadata associated with rows in table A. I am working with a multi-site implementation that has one database for the whole shebang. Rows in table A could belong to any number of sites, but must belong to at least one. The problem is that I am not sure what the best practice is for defining which sites each row in table A belongs to. I want performance and scalability: there is no fixed number of sites going forward, and rows in table A could belong to any number of sites in the future (right now there are only 3). My initial thought is to have a primary site ID in table A, with rows in table B defining additional sites as needed. Another thought is to have a boolean column in table A for each site, indicating whether the row belongs to that site. Lastly, I have thought about having another table that maps rows in table A to sites. What is the best way to associate rows in a table with any number of sites, with performance and scalability in mind?
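
    A sketch of the last option (a separate mapping table), which scales to any number of sites without adding columns; the table and column names are illustrative and assume table A has an integer primary key:

        CREATE TABLE TableA_Site (
            tableA_id INT NOT NULL,
            siteId    INT NOT NULL,
            PRIMARY KEY (tableA_id, siteId),
            FOREIGN KEY (tableA_id) REFERENCES TableA (id),
            FOREIGN KEY (siteId)    REFERENCES Site (id)
        );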

    Read the article

  • Consolidating separate Loan, Purchase & Sales tables into one transaction table.

    - by Frank Computer
    INFORMIX-SE with ISQL 7.3: I have separate tables for Loan, Purchase & Sales transactions. Each table's rows are joined to their respective customer rows by:

        customer.id [serial] = loan.foreign_id     [integer]
                             = purchase.foreign_id [integer]
                             = sale.foreign_id     [integer]

    I would like to consolidate the three tables into one table called "transaction", where a column transaction.trx_type [char(1)] {L=Loan, P=Purchase, S=Sale} identifies the transaction type. Is this a good idea, or is it better to keep them in separate tables? Storage space is not a concern. I think it would be easier, programming- and user-wise, to have all types of transactions under one table.
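
    A minimal sketch of what the consolidated table might look like (the columns beyond foreign_id and trx_type are illustrative, and Informix-SE syntax may differ slightly):

        CREATE TABLE transaction (
            id         SERIAL,
            foreign_id INTEGER NOT NULL,   -- joins to customer.id
            trx_type   CHAR(1) NOT NULL,   -- 'L' = Loan, 'P' = Purchase, 'S' = Sale
            trx_date   DATE,
            amount     DECIMAL(12,2)
        );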

    Read the article

  • How to structure data... Sequential or Hierarchical?

    - by Ryan
    I'm going through the exercise of building a CMS that will organize a lot of the common documents my employer generates each time we get a new sales order. Each new sales order gets a 5-digit number (12222, 12223, 12224, etc.), but internally we apply a hierarchy to these numbers:

        + 121XX
          |-- 01
          |-- 02
        + 122XX
          |-- 22
          |-- 23
          |-- 24

    In my table for sales orders, is it better to use the 5-digit number as an ID and populate up, or would it be better to use the hierarchical structure we use when referring to jobs in regular conversation? The only benefit to not populating sequentially seems to be formatting the data later on in my view, but that doesn't sound like a good enough reason to go through the extra work. Thanks

    Read the article

  • Database table schema design - varchar(n). Suitable choice of N

    - by morpheous
    Coming from a C background, I may be getting too anal about this and worrying unnecessarily about bits and bytes here. Still, I can't help thinking about how the data is actually stored, and that if I choose an N that is easily factorizable into a power of 2, the database will be more efficient in how it packs data, etc. Using this "logic", I have a string field in a table that is variable length, up to 21 chars. I am tempted to use 32 instead of 21, for the reason given above. However, now I am thinking that I am wasting disk space, because space will be allocated for 11 extra chars that are guaranteed never to be used. Since I envisage storing several tens of thousands of rows a day, it all adds up. Question: mindful of all of the above, should I declare varchar(21) or varchar(32), and why?

    Read the article

  • Automatically Persisting a Complex Java Object

    - by VeeArr
    For a project I am working on, I need to persist a number of POJOs to a database. The POJO class definitions are sometimes highly nested, but they should flatten okay, as the nesting is tree-like and contains no cycles (and the base elements are eventually primitives/Strings). It is preferred that the solution create one table per data type and that the tables have one field per primitive member in the POJO. Subclassing and similar problems are not issues for this particular project. Does anybody know of any existing solutions that can:

        - Automatically generate a CREATE TABLE definition from the class definition
        - Automatically generate a query to persist an object to the database, given an instance of the object
        - Automatically generate a query to retrieve an object from the database and return it as a POJO, given a key

    Solutions that can do this with minimum modifications/annotations to the class files and minimum external configuration are preferred. Example Java classes:

        // Class to be persisted
        class TypeA {
            String guid;
            long timestamp;
            TypeB data1;
            TypeC data2;
        }

        class TypeB {
            int id;
            int someData;
        }

        class TypeC {
            int id;
            int otherData;
        }

    could map to

        CREATE TABLE TypeA (
            guid            CHAR(255),
            timestamp       BIGINT,
            data1_id        INT,
            data1_someData  INT,
            data2_id        INT,
            data2_otherData INT
        );

    or something similar.

    Read the article

  • Database design

    - by Hadad
    Hello, I have a system that has two types of users (companies and individuals). Both types share a set of properties but differ in others. What is the best design: merge them all into one table that allows NULL for the unmatched properties, or separate them into two tables related to a base table with a one-to-one relationship? Thanks.
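
    A sketch of the second option (a base table with one-to-one extension tables); all names are illustrative and the syntax is generic SQL:

        CREATE TABLE app_user (
            userId   INT PRIMARY KEY,
            email    VARCHAR(255) NOT NULL,   -- shared properties live here
            userType CHAR(1) NOT NULL         -- 'C' = company, 'I' = individual
        );

        CREATE TABLE company_profile (
            userId      INT PRIMARY KEY REFERENCES app_user (userId),
            companyName VARCHAR(255) NOT NULL
        );

        CREATE TABLE individual_profile (
            userId    INT PRIMARY KEY REFERENCES app_user (userId),
            firstName VARCHAR(100),
            lastName  VARCHAR(100)
        );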

    Read the article

  • MySQL: How to copy rows, but change a few fields?

    - by Andrew
    I have a large number of rows that I would like to copy, but I need to change one field. I can select the rows that I want to copy:

        select * from Table where Event_ID = "120"

    Now I want to copy all those rows and create new rows while setting the Event_ID to 155. How can I accomplish this?
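
    One way to do this is INSERT ... SELECT, listing every column except the auto-increment primary key and substituting the new Event_ID (a sketch; "Table", "col1", and "col2" are placeholders for the real table and column names):

        INSERT INTO `Table` (Event_ID, col1, col2)
        SELECT 155, col1, col2
        FROM `Table`
        WHERE Event_ID = 120;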

    Read the article

  • How do I get the earliest DateTime of a set, where there are a few conditions

    - by radbyx
    Create script for Product:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        SET ANSI_PADDING ON
        GO
        CREATE TABLE [dbo].[Product](
            [ProductID] [int] IDENTITY(1,1) NOT NULL,
            [ProductName] [varchar](50) NOT NULL,
            CONSTRAINT [PK_Products] PRIMARY KEY CLUSTERED
            (
                [ProductID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        SET ANSI_PADDING OFF
        GO

    Create script for StateLog:

        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        CREATE TABLE [dbo].[StateLog](
            [StateLogID] [int] IDENTITY(1,1) NOT NULL,
            [ProductID] [int] NOT NULL,
            [Status] [bit] NOT NULL,
            [TimeStamp] [datetime] NOT NULL,
            CONSTRAINT [PK_Uptime] PRIMARY KEY CLUSTERED
            (
                [StateLogID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO
        ALTER TABLE [dbo].[StateLog] WITH CHECK
            ADD CONSTRAINT [FK_Uptime_Products] FOREIGN KEY([ProductID])
            REFERENCES [dbo].[Product] ([ProductID])
        GO
        ALTER TABLE [dbo].[StateLog] CHECK CONSTRAINT [FK_Uptime_Products]
        GO

    I have this query, and it's not enough:

        select top 5 [ProductName], [TimeStamp]
        from [Product]
        inner join StateLog on [Product].ProductID = [StateLog].ProductID
        where [Status] = 0
        order by TimeStamp desc;

    (My query gives the 5 latest TimeStamps where Status is 0 (false).) But I need one thing more: where there is a set of latest TimeStamps for a product where Status is 0, I only want the earliest of them (not the latest). Example: let's say for Product X I have:

        TimeStamp1 (status = 0)
        TimeStamp2 (status = 1)
        TimeStamp3 (status = 0)
        TimeStamp4 (status = 0)
        TimeStamp5 (status = 1)
        TimeStamp6 (status = 0)
        TimeStamp7 (status = 0)
        TimeStamp8 (status = 0)

    The correct answer would then be TimeStamp6, because it is the first of the latest run of timestamps.
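
    One possible approach (a sketch, assuming that "the latest set" means every Status = 0 row recorded after the product's last Status = 1 row; the MIN of those rows is the start of the current run):

        SELECT p.ProductName, MIN(s.[TimeStamp]) AS DownSince
        FROM Product p
        INNER JOIN StateLog s ON s.ProductID = p.ProductID
        WHERE s.[Status] = 0
          AND NOT EXISTS (SELECT 1
                          FROM StateLog s2
                          WHERE s2.ProductID = s.ProductID
                            AND s2.[Status] = 1
                            AND s2.[TimeStamp] > s.[TimeStamp])
        GROUP BY p.ProductName;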

    Read the article

  • Database design 1 to 1 relationship

    - by Khou
    I designed my database incorrectly; should I fix this while it's still in development? The "user" table is supposed to have a 1:1 relationship with the "userprofile" table, but in the actual design the "user" table has a 1:* relationship with "userprofile". Everything works, but should it be fixed anyway?
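
    For what it's worth, a 1:* foreign key can usually be tightened to 1:1 by putting a unique constraint on the foreign key column (a sketch; the column name userId is assumed):

        ALTER TABLE userprofile
            ADD CONSTRAINT UQ_userprofile_userId UNIQUE (userId);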

    Read the article

  • Improve SQL query performance

    - by Anax
    I have three tables where I store actual person data (person), teams (team) and entries (athlete). The schema of the three tables is shown in a diagram (not reproduced here). In each team there might be two or more athletes. I'm trying to write a query that produces the most frequent pairs, meaning people who play together in teams of two. I came up with the following query:

        SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
        FROM person p1, athlete a1, person p2, athlete a2
        WHERE p1.id = a1.person_id
          AND p2.id = a2.person_id
          AND a1.team_id = a2.team_id
          AND a1.team_id IN (
              SELECT id
              FROM team, athlete
              WHERE team.id = athlete.team_id
              GROUP BY team.id
              HAVING COUNT(*) = 2
          )
        GROUP BY p1.id
        ORDER BY freq DESC

    Obviously this is a resource-consuming query. Is there a way to improve it?
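
    A possible rewrite (a sketch, assuming MySQL; it counts each pair only once by ordering the two person ids, which also stops a person from being paired with themselves):

        SELECT p1.surname, p1.name, p2.surname, p2.name, COUNT(*) AS freq
        FROM athlete a1
        JOIN athlete a2 ON a2.team_id = a1.team_id AND a1.person_id < a2.person_id
        JOIN person p1  ON p1.id = a1.person_id
        JOIN person p2  ON p2.id = a2.person_id
        JOIN (SELECT team_id
              FROM athlete
              GROUP BY team_id
              HAVING COUNT(*) = 2) pairs ON pairs.team_id = a1.team_id
        GROUP BY p1.id, p2.id
        ORDER BY freq DESC;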

    Read the article

  • Adding Related Entities without using navigation properties

    - by Barisa Puter
    I have the following classes, set up for testing:

        public class Company
        {
            [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class Employee
        {
            [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
            public int Id { get; set; }
            public string Name { get; set; }
            public int CompanyId { get; set; }
            public virtual Company Company { get; set; }
        }

        public class EFTestDbContext : DbContext
        {
            public DbSet<Employee> Employees { get; set; }
            public DbSet<Company> Companies { get; set; }
        }

    For the sake of testing, I wanted to insert one company and one employee for that company with a single SaveChanges call, like this:

        Company company = new Company
        {
            Name = "Sample company"
        };
        context.Companies.Add(company);

        // ** UNCOMMENTED FOR TEST 2
        //Company company2 = new Company
        //{
        //    Name = "Some other company"
        //};
        //context.Companies.Add(company2);

        Employee employee = new Employee
        {
            Name = "Hans",
            CompanyId = company.Id
        };
        context.Employees.Add(employee);

        context.SaveChanges();

    Even though I am not using navigation properties, and instead made the relation over the Id, this somehow mysteriously worked: the employee was saved with the proper foreign key to the company, which got updated from 0 to the real value, which made me go ?!?! Some hidden C# feature? Then I decided to add more code, which is commented out in the snippet above, making it insert 2 x Company and 1 x Employee, and then I got this exception:

        Unable to determine the principal end of the 'CodeLab.EFTest.Employee_Company' relationship.
        Multiple added entities may have the same primary key.

    Does this mean that in cases where the foreign key is 0, and there is a single matching entity being inserted in the same SaveChanges transaction, Entity Framework will assume that the foreign key refers to that matching entity? In the second test, when there are two entities matching the relation type, Entity Framework throws an exception because it cannot figure out which of the Companies the Employee should be related to.

    Read the article

  • SQLite: Simple DELETE statement did not work

    - by user186446
    I have a table MRU that has 3 columns: (VALUE varchar(255), TYPE varchar(20), DT_ADD datetime). It simply stores entries and records the date/time each was added. What I want to do is delete the oldest entry whenever I add a new entry that exceeds a certain number. Here is my query:

        delete from MRU
        where type = 'FILENAME'
        ORDER BY DT_ADD
        limit 1;

    The query returns an error. Thanks
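
    DELETE ... ORDER BY ... LIMIT only works when SQLite is compiled with SQLITE_ENABLE_UPDATE_DELETE_LIMIT, which most builds are not. A sketch of an equivalent delete for a standard build (assuming the table is an ordinary rowid table):

        DELETE FROM MRU
        WHERE rowid = (SELECT rowid
                       FROM MRU
                       WHERE type = 'FILENAME'
                       ORDER BY DT_ADD
                       LIMIT 1);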

    Read the article

  • Do partitions allow multiple bulk loads?

    - by ck
    I have a database that contains data for many "clients". Currently, we insert tens of thousands of rows into multiple tables every so often using .Net SqlBulkCopy which causes the entire tables to be locked and inaccessible for the duration of the transaction. As most of our business processes rely upon accessing data for only one client at a time, we would like to be able to load data for one client, while updating data for another client. To make things more fun, all PKs, FKs and clustered indexes are on GUID columns (I am looking at changing this). I'm looking at adding the ClientID into all tables, then partitioning on this. Would this give me the functionality I require?
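
    A sketch of what partitioning on a ClientID column could look like (assuming SQL Server; the boundary values, table, and columns are illustrative, and the clustered key is changed to lead with ClientID so each client's rows land in its own partition):

        CREATE PARTITION FUNCTION pfClient (INT)
            AS RANGE RIGHT FOR VALUES (1, 2, 3);   -- one boundary per client, extended as clients are added

        CREATE PARTITION SCHEME psClient
            AS PARTITION pfClient ALL TO ([PRIMARY]);

        CREATE TABLE dbo.Orders (
            ClientID INT NOT NULL,
            OrderID  UNIQUEIDENTIFIER NOT NULL,
            Amount   DECIMAL(12, 2) NULL,
            CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (ClientID, OrderID)
        ) ON psClient (ClientID);

    Note that lock escalation still goes to the table level by default; partition-level escalation needs ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO), and a bulk load taken with a table lock (TABLOCK) will still block the whole table.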

    Read the article

  • How to handle Foreign Keys with Entity Framework

    - by Jack Marchetti
    I have two entities: Group and Pool. A Group can create many Pools, so I set up my Pool table to have a GroupID foreign key. My code:

        using (entity _db = new entity())
        {
            Pool p = new Pool();
            p.Name = "test";
            p.Group.ID = "5";
            _db.AddToPool(p);
        }

    This doesn't work. I get a NullReferenceException on p.Group. How do I go about creating a new Pool and associating a GroupID?

    Read the article
