Search Results

Search found 8603 results on 345 pages for 'altering tables'.

  • LINQ to SQL table naming

    - by Ivo
    I am using VS2010 and C#. When I map my database tables with LINQ to SQL, I have the option to change the "Member" property, but when I delete the table from the designer (because I changed something in the schema, for example) and add it again, the Member value gets reset. Is it possible to set/override this member programmatically, so that I don't have to change it by hand every time? I mean the Member attribute of

        <Table Name="dbo.table1" Member="table1">

  • MySQL multiple dependent subqueries, painfully slow

    - by matt80
    I have a working query that retrieves the data I need, but unfortunately it is painfully slow (it runs for over 3 minutes). I have indexes in place, but I think the problem is the multiple dependent subqueries. I've been trying to rewrite the query using joins, but I can't seem to get it to work. Any help would be greatly appreciated.

    The tables: basically, I have 2 tables. The first (prices) holds the prices of items in a store. Each row is the price of an item that day, and new rows are added every day with an updated price. The second table (watches_US) holds the item information (name, description, etc.).

        CREATE TABLE `prices` (
          `prices_id` int(11) NOT NULL auto_increment,
          `prices_locale` enum('CA','DE','FR','JP','UK','US') NOT NULL default 'US',
          `prices_watches_ID` char(10) NOT NULL,
          `prices_date` datetime NOT NULL,
          `prices_am` varchar(10) default NULL,
          `prices_new` varchar(10) default NULL,
          `prices_used` varchar(10) default NULL,
          PRIMARY KEY (`prices_id`),
          KEY `prices_am` (`prices_am`),
          KEY `prices_locale` (`prices_locale`),
          KEY `prices_watches_ID` (`prices_watches_ID`),
          KEY `prices_date` (`prices_date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=61764;

        CREATE TABLE `watches_US` (
          `watches_ID` char(10) NOT NULL,
          `watches_date_added` datetime NOT NULL,
          `watches_last_update` datetime default NULL,
          `watches_title` varchar(255) default NULL,
          `watches_small_image_height` int(11) default NULL,
          `watches_small_image_width` int(11) default NULL,
          `watches_description` text,
          PRIMARY KEY (`watches_ID`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    The query retrieves the last 10 price changes over a period of 30 hours, ordered by the size of the price change. So I have subqueries to get the newest price, the oldest price within 30 hours, and then to calculate the price change. Here's the query:

        SELECT
          watches_US.*, prices.*, watches_US.watches_ID AS current_ID,
          ( SELECT prices_am FROM prices
            WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
            ORDER BY prices_date DESC LIMIT 1 ) AS new_price,
          ( SELECT prices_date FROM prices
            WHERE prices_watches_ID = current_ID AND prices_locale = 'US'
            ORDER BY prices_date DESC LIMIT 1 ) AS new_price_date,
          ( SELECT prices_am FROM prices
            WHERE ( prices_watches_ID = current_ID AND prices_locale = 'US' )
              AND ( prices_date >= DATE_SUB(new_price_date, INTERVAL 30 HOUR) )
            ORDER BY prices_date ASC LIMIT 1 ) AS old_price,
          ( SELECT ROUND(((new_price - old_price)/old_price)*100, 2) ) AS percent_change,
          ( SELECT (new_price - old_price) ) AS absolute_change
        FROM watches_US
        LEFT OUTER JOIN prices ON prices.prices_watches_ID = watches_US.watches_ID
        WHERE ( prices_locale = 'US' )
          AND ( prices_am IS NOT NULL ) AND ( prices_am != '' )
        HAVING ( old_price IS NOT NULL ) AND ( old_price != 0 ) AND ( old_price != '' )
          AND ( absolute_change < 0 ) AND ( prices.prices_date = new_price_date )
        ORDER BY absolute_change ASC
        LIMIT 10

    How would I rewrite this to use joins instead, or otherwise optimize it so it doesn't take over 3 minutes to get a result? Any help would be greatly appreciated! Thank you kindly.
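
    One way to attack this, sketched below against the question's schema: collapse the dependent subqueries into a derived table that finds each watch's newest US price date, then join back to prices for the new and old price rows. This is an untested sketch, not a drop-in replacement; note that prices_am is a varchar, so the arithmetic relies on MySQL's implicit string-to-number casts.

        SELECT
          w.*,
          new_p.prices_am   AS new_price,
          new_p.prices_date AS new_price_date,
          old_p.prices_am   AS old_price,
          (new_p.prices_am - old_p.prices_am) AS absolute_change,
          ROUND(((new_p.prices_am - old_p.prices_am) / old_p.prices_am) * 100, 2) AS percent_change
        FROM (
          -- newest price date per watch, computed once instead of per output row
          SELECT prices_watches_ID, MAX(prices_date) AS max_date
          FROM prices
          WHERE prices_locale = 'US'
          GROUP BY prices_watches_ID
        ) latest
        JOIN prices new_p
          ON new_p.prices_watches_ID = latest.prices_watches_ID
         AND new_p.prices_locale = 'US'
         AND new_p.prices_date = latest.max_date
        JOIN prices old_p
          ON old_p.prices_watches_ID = latest.prices_watches_ID
         AND old_p.prices_locale = 'US'
         AND old_p.prices_date = (
               -- oldest price inside the 30-hour window; the one remaining
               -- correlated lookup, run once per watch rather than several times
               SELECT MIN(p2.prices_date) FROM prices p2
               WHERE p2.prices_watches_ID = latest.prices_watches_ID
                 AND p2.prices_locale = 'US'
                 AND p2.prices_date >= latest.max_date - INTERVAL 30 HOUR )
        JOIN watches_US w ON w.watches_ID = latest.prices_watches_ID
        WHERE new_p.prices_am IS NOT NULL AND new_p.prices_am != ''
          AND old_p.prices_am != 0
          AND new_p.prices_am - old_p.prices_am < 0
        ORDER BY new_p.prices_am - old_p.prices_am ASC
        LIMIT 10;

    A composite index on (prices_locale, prices_watches_ID, prices_date) would let both the grouping and the window lookup be resolved from the index alone.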

  • fastest way to upload an xls file into a database

    - by shmichael
    I have an xls file with ~60 sheets of data. I would like to move them into a database (postgres) such that each sheet's data is stored in a different table. What is the fastest way of creating these tables? I don't care about naming or proper typing of columns. The columns could all be strings for that matter. I don't want to run 60 different csv uploads.

  • sql server query/subquery question

    - by parminder
    Hi Experts,

    I have a table with data like this:

        Id    BookId    TagId
        34    113421    9
        35    113421    10
        36    113421    11
        37    113421    1
        38    113422    9
        39    113422    1
        40    113422    12

    I need to write a query (SQL Server) which gives me data according to the tags. Say I want the BookIds where TagId = 9: it should return BookIds 113421 and 113422, as the tag exists in both books. But if I ask for tags 9 and 10, it should return only book 113421, as that is the only book where both tags are present.

    Thanks,
    Parminder
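
    This is the classic "relational division" shape. A hedged sketch (the question doesn't name the table, so book_tags below is a placeholder): restrict to the wanted tags, group by book, and keep only the books that matched every tag in the list.

        -- Books that carry ALL of the requested tags.
        SELECT BookId
        FROM book_tags                        -- placeholder table name
        WHERE TagId IN (9, 10)
        GROUP BY BookId
        HAVING COUNT(DISTINCT TagId) = 2;     -- 2 = number of tags requested

    With a single tag the HAVING count is 1, so the same statement covers both of the examples in the question.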

  • Why isn't this simple MySQL statement working?

    - by Clark
    I am trying to match a user-inputted search term against two tables: posts and galleries. The problem is the UNION ALL clause isn't working. Is there something wrong with my code?

        $query = mysql_query("
            SELECT * FROM posts
            WHERE title LIKE '%$searchTerm%'
               OR author LIKE '%$searchTerm%'
               OR location LIKE '%$searchTerm%'
               OR excerpt LIKE '%$searchTerm%'
               OR content LIKE '%$searchTerm%'
            UNION ALL
            SELECT * FROM galleries
            WHERE title LIKE '%$searchTerm%'
        ");
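
    The likely culprit: UNION (ALL) requires every branch to return the same number of columns with compatible types, and SELECT * from two differently shaped tables almost never does. A sketch of one fix, listing explicit columns and padding the gallery branch with NULLs (column names beyond those in the question are assumptions); interpolating $searchTerm directly also leaves the query open to SQL injection, so escape it first.

        SELECT title, author, excerpt, 'post' AS source
        FROM posts
        WHERE title LIKE '%term%' OR author LIKE '%term%'
           OR location LIKE '%term%' OR excerpt LIKE '%term%'
           OR content LIKE '%term%'
        UNION ALL
        SELECT title, NULL AS author, NULL AS excerpt, 'gallery' AS source
        FROM galleries
        WHERE title LIKE '%term%';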

  • how to bind an association in RoR

    - by ashok
    I have two tables, AppTemplate and AppTemplateMeta. The AppTemplate table has columns id, MetaID, name, etc. I have associated the two models like this:

        class AppTemplate < ActiveRecord::Base
          set_table_name 'AppTemplate'
          belongs_to :app_template_meta, :class_name => "AppTemplateMeta", :foreign_key => 'MetaID'
        end

    When I fetch data using AppTemplate.all, I want the associated meta details as well. But currently it's not returning the associated meta details; it just returns the AppTemplate details. Can anyone help me with this?

  • Queries stuck in "copying to tmp table"

    - by Parik
    I am running some sample tests against MySQL and finding that a bunch of queries get stuck in the "Copying to tmp table" state and never leave it. They are usually aggregate queries, and I can kill them, but how can I find out what is causing them to get stuck? I am using MySQL 5.1.42 with the InnoDB plugin.
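
    A hedged starting point for diagnosis rather than a fix: confirm which statements are stuck, run EXPLAIN on one of them to see whether it forces a temporary table (GROUP BY and ORDER BY on different columns, DISTINCT, and similar all can), and check whether those temporary tables are spilling to disk because they exceed the in-memory limits.

        SHOW FULL PROCESSLIST;                      -- which queries sit in "Copying to tmp table"
        SHOW VARIABLES LIKE 'tmp_table_size';       -- in-memory tmp table ceiling...
        SHOW VARIABLES LIKE 'max_heap_table_size';  -- ...the smaller of the two applies
        SHOW GLOBAL STATUS LIKE 'Created_tmp%';     -- disk vs in-memory tmp table counts

    If Created_tmp_disk_tables climbs with each run, the aggregates are being materialized on disk, which is the usual reason this state crawls.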

  • MySQL: selecting a default value if there are no results?

    - by Kenan
    I have 2 tables: members and comments. I select all members and then join comments. But from comments I'm selecting some sums of points, and if a user has never commented, I can't get that user in the listing at all. How do I select a default value for the SUM, or is there some other solution?

        SELECT c.comment_id AS item_id,
               m.member_id AS member_id,
               m.avatar,
               SUM(c.vote_value) AS vote_value,
               SUM(c.best) AS best,
               SUM(c.vote_value) + SUM(c.najbolji)*10 AS total
        FROM members m
        LEFT JOIN comments c ON m.member_id = c.author_id
        GROUP BY c.author_id
        ORDER BY m.member_id DESC
        LIMIT {$sql_start}, {$sql_pokazi}
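
    Two things conspire here, so a hedged sketch of both fixes: COALESCE turns the NULL sums into 0, and grouping by m.member_id instead of c.author_id keeps never-commented members as separate rows (with a LEFT JOIN, c.author_id is NULL for all of them, so grouping on it lumps them into one group). c.comment_id is dropped, since under GROUP BY it would be an arbitrary comment's id anyway.

        SELECT m.member_id,
               m.avatar,
               COALESCE(SUM(c.vote_value), 0) AS vote_value,
               COALESCE(SUM(c.best), 0)       AS best,
               COALESCE(SUM(c.vote_value), 0) + COALESCE(SUM(c.najbolji), 0) * 10 AS total
        FROM members m
        LEFT JOIN comments c ON m.member_id = c.author_id
        GROUP BY m.member_id       -- group on the members side so every member keeps a row
        ORDER BY m.member_id DESC;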

  • Best database (mysql) structure for this case:

    - by robert
    We have three types of data (tables):

        Book (id, name, author, ...)    -- about 3 million rows
        Category (id, name)             -- about 2,000 rows
        Location (id, name)             -- about 10,000 rows

    A Book must have at least 1 Category (up to 3), and a Book must have exactly one Location. I need to relate this data so that these queries are fast:

        Select Books where Category = 'cat_id' AND Location = 'loc_id'
        Select Books where match(name) against ('name of book') AND Location = 'loc_id'

    Please, I need some help. Thanks.
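
    One conventional layout, sketched with illustrative names and under the assumption that Book carries a location_id column (it's single-valued from the book's side): a junction table for the 1-to-3 categories, a composite index to drive the category+location lookup, and a FULLTEXT index for the name search. On MySQL 5.0/5.1 FULLTEXT requires MyISAM; InnoDB only gained it in 5.6.

        CREATE TABLE book_category (
          book_id     INT NOT NULL,
          category_id INT NOT NULL,
          PRIMARY KEY (book_id, category_id),
          KEY idx_category_book (category_id, book_id)   -- drives "all books in category X"
        );

        ALTER TABLE Book ADD KEY idx_location (location_id);
        ALTER TABLE Book ADD FULLTEXT INDEX ft_name (name);

        -- query 1: category + location
        SELECT b.* FROM Book b
        JOIN book_category bc ON bc.book_id = b.id
        WHERE bc.category_id = 42 AND b.location_id = 7;

        -- query 2: name search + location
        SELECT b.* FROM Book b
        WHERE MATCH(b.name) AGAINST ('name of book') AND b.location_id = 7;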

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio buttons and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable.

    Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
          [id] [bigint] IDENTITY(1,1) NOT NULL,
          [userid] [int] NULL,
          [variable] [varchar](8) NULL,
          [value] [tinyint] NULL,
          [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page (something like this):

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
                       GROUP BY variable)

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides.

    So the question is: how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance.

    NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) for using the NOLOCK directive in this case.
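
    A hedged alternative worth benchmarking against the same data: join the per-variable MAX(id) back to the table as a derived table instead of the IN (...) form (whose inner query here also references the outer alias t.variable, making it correlated). With a covering index on (userid, variable, id), the derived table can be resolved from the index alone; the index name below is illustrative.

        SELECT r.id, r.variable, r.value
        FROM results r WITH (NOLOCK)
        JOIN (SELECT variable, MAX(id) AS max_id
              FROM results WITH (NOLOCK)
              WHERE userid = 2111846
                AND variable IN ('internat', 'veteran', 'athlete')
              GROUP BY variable) latest
          ON latest.max_id = r.id;

        -- the covering index assumed above
        CREATE INDEX ix_results_user_var_id ON results (userid, variable, id);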

  • Database related Questions

    - by alokpatil
    I am planning to build a railway reservation project. I am maintaining the following tables:

        trainTable  (trainId, trainName, trainFrom, trainTo, trainDate, trainNoOfBoogies) ... PK(trainId)
        Boogie      (trainId, boogieId, boogieName, boogieNoOfseats) ... composite key (trainId, boogieId)
        Seats       (trainId, boogieId, seatId, seatStatus, seatType) ... composite key (trainId, boogieId, seatId)
        user        (userId, name, ...personal details)
        userBooking (userId, trainId, boogieId, seatId)

    Is this a good design? Please reply.

  • Create web application with ajax from the beginning or add ajax later?

    - by klew
    I'm working on my first Ruby on Rails application and it's quite big (at least for me; the database has about 25 tables). I'm still learning Ruby and Rails, and I have never written anything in JavaScript or Ajax. Should I add Ajax to my application from the beginning? Or maybe it will be better to add it later? Or, in other words: is it (relatively) easy to add Ajax to an existing web application?

  • Where does Drupal store NODE data?

    - by RD
    This is a follow-up to my previous question: http://stackoverflow.com/questions/1284476/where-does-drupal-store-node-body-content

    I tried adding values into node and node-revision, but the node data is still not showing, so obviously more data is stored somewhere else. Basically, I want to know: which tables are affected when you create a new node?
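
    For reference, a hedged sketch of the Drupal 6-era layout (an assumption; the table names changed in later versions): node holds the metadata and points at the current revision, while node_revisions holds the body and teaser per revision, which is why inserting into node alone isn't enough.

        -- read a node the way Drupal 6 assembles it
        SELECT n.nid, n.title, n.type, r.body, r.teaser
        FROM node n
        JOIN node_revisions r ON r.vid = n.vid   -- n.vid = current revision
        WHERE n.nid = 1;

    Creating a node through the API rather than raw inserts also touches node_comment_statistics, plus the url_alias and taxonomy tables when those features are in use, and keeps the two vid columns in sync.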

  • Is there a fast way to change all the collations to utf8_unicode?

    - by Mark
    I have realised after making about 20 tables that I need to use utf8_unicode_ci as opposed to utf8_general_ci. What is the fastest way to change it using phpMyAdmin? I had one idea: export the database as SQL, run a find-and-replace on it in Notepad, and then reimport it... but that sounds like a bit of a headache. Is there a better way?
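
    A hedged alternative that skips the dump/reimport round trip (mydb and mytable are placeholders): ALTER each table in place, and change the database default so new tables inherit the right collation. CONVERT TO also rewrites the per-column collations, which a plain find-and-replace on the dump can miss.

        -- default for tables created from now on
        ALTER DATABASE mydb CHARACTER SET utf8 COLLATE utf8_unicode_ci;

        -- one statement per existing table
        ALTER TABLE mytable CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;

        -- or generate the 20 statements instead of typing them
        SELECT CONCAT('ALTER TABLE `', table_name,
                      '` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;')
        FROM information_schema.tables
        WHERE table_schema = 'mydb';

    phpMyAdmin can run the generated statements from its SQL tab in one batch.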

  • DataContext to DB

    - by JD
    Hi all,

    I have designed my DB using the ORM designer in VS 2008. What is the best way to export this design to SQL Server so that it creates the tables and relations there?

    Thanks,
    JD

  • Employee Clocking in & Out System database

    - by user164577
    What would be the best database design for employee clock-ins and clock-outs? Right now I have two tables:

        Employee base table:       employee id, relevant information like name and address, a "clocked in" column
        Employee clocked-in table: employee id, clock-in date, clock-in time, clocked-out time

    Is this a good way to track clock-ins and clock-outs? I appreciate any help.
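
    A hedged sketch of a common alternative (names and types are illustrative): one row per punch pair, with NULL standing in for "still clocked in", which removes the need to keep a clocked-in flag on the employee table in sync.

        CREATE TABLE time_punches (
          punch_id    INT AUTO_INCREMENT PRIMARY KEY,
          employee_id INT NOT NULL,
          clock_in    DATETIME NOT NULL,
          clock_out   DATETIME NULL,                 -- NULL = currently clocked in
          KEY idx_emp_open (employee_id, clock_out)  -- fast "is X clocked in?" lookups
        );

        -- "is employee 42 clocked in right now?"
        SELECT punch_id FROM time_punches
        WHERE employee_id = 42 AND clock_out IS NULL;

    Storing each event as one DATETIME also sidesteps the midnight-crossing shifts that split date/time columns make awkward.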

  • How to use LINQ for CRUD with a simple SQL table?

    - by Rob Ferno
    Every LINQ blog post I found seemed to be around 2 years old. I understand the syntax, but I need more direction on creating the SQL mapping and context classes. I just need to use LINQ for 2 SQL tables I have, nothing complicated. Do folks write the SQL mapping classes by hand for such cases, or is there a decent tool for this? Can someone point me in the right direction?

  • MySQL search for user and their roles

    - by Jenkz
    I am rewriting the SQL which lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor", "Publisher". Each role links a user to a publication, and users can take multiple roles within multiple publications.

    Example table setup:

        "users"        : user_id, firstname, lastname
        "publications" : publication_id, name
        "link_writers" : user_id, publication_id
        "link_editors" : user_id, publication_id

    Current pseudo-SQL:

        SELECT * FROM (
          (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%')
          UNION
          (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%')
        ) AS dt
        JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id

    At the moment my roles statement is:

        SELECT dt2.user_id, dt2.publication_id, dt.role FROM (
          (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id
           FROM link_writers)
          UNION
          (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id
           FROM link_editors)
        ) AS dt2

    The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example, "publishers" might be linked across two tables:

        "link_publishers"       : user_id, publisher_group_id
        "link_publisher_groups" : publisher_group_id, publication_id

    So in that instance, the query forming part of my UNION would be:

        SELECT 'publisher' AS role, link_publishers.user_id, link_publisher_groups.publication_id
        FROM link_publishers
        JOIN link_publisher_groups ON lpg.group_id = lp.group_id

    I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. The initial lookup in the users table is fast, but the joining really slows things down. How can I join on only the relevant roles?

    EDIT: from the EXPLAIN output (screenshot omitted): the bottom section is the "WHERE firstname LIKE '%Jenkz%'" lookup; its third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%', hence the large row count, but I think this is unavoidable, unless there is a way to put an index across concatenated fields? The top section shows the total rows scanned from the ROLES STATEMENT, followed by each individual UNION clause, all showing a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use dt.user_id as a comparison for the internals of the UNION statements. Is there any way to force this behaviour?

    Please note that my real setup is not publications and writers but "webmasters", "players", "teams", etc.
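
    One rewrite that often helps on MySQL 5.x-era optimizers, which materialize a UNION'd derived table in full without pushing the outer user_id predicate inside it: repeat the (fast) user filter in each role branch, so every branch joins users first and reaches the link tables through their user_id indexes. A hedged sketch against the question's tables; the publisher branch uses the stated column publisher_group_id, since the lpg.group_id = lp.group_id join in the question doesn't match the listed columns.

        SELECT u.user_id, u.firstname, u.lastname, 'writer' AS role, lw.publication_id
        FROM users u
        JOIN link_writers lw ON lw.user_id = u.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%'
        UNION ALL
        SELECT u.user_id, u.firstname, u.lastname, 'editor', le.publication_id
        FROM users u
        JOIN link_editors le ON le.user_id = u.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%'
        UNION ALL
        SELECT u.user_id, u.firstname, u.lastname, 'publisher', lpg.publication_id
        FROM users u
        JOIN link_publishers lp ON lp.user_id = u.user_id
        JOIN link_publisher_groups lpg ON lpg.publisher_group_id = lp.publisher_group_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%';

    Separately, a leading-wildcard LIKE '%Jenkz%' can never use a B-tree index, so past a point the name lookup needs FULLTEXT or an external search index rather than more join tuning.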

  • import a text file into a temporary table using 'LOAD DATA INFILE' in a stored procedure - MySQL

    - by Pankaj
    I need to import a text file into a temporary table and from that select portions of it to insert into different tables. I wanted to use LOAD DATA INFILE. Is there any way I can use LOAD DATA INFILE in a stored procedure? I am using MySQL.

        LOAD DATA LOCAL INFILE 'C:\\MyData.txt'
        INTO TABLE tempprod
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\r\n';

        SELECT * FROM product p;
