Search Results

Search found 31483 results on 1260 pages for 'database migration'.


  • Optimize a MySQL query that counts duplicates

    - by Onema
    I have the following query that gets the city name, city ID, the region name, and a count of duplicate names for that record:

        SELECT Country_CA.City AS currentCity, Country_CA.CityID, globe_region.region_name,
               (SELECT COUNT(Country_CA.City)
                FROM Country_CA
                WHERE City LIKE currentCity) AS counter
        FROM Country_CA
        LEFT JOIN globe_region ON globe_region.region_id = Country_CA.RegionID
                              AND globe_region.country_code = Country_CA.CountryCode
        ORDER BY City

    This example is for Canada, and the cities will be displayed in a dropdown list. There are a few towns in Canada, and in other countries, that share the same name, so when there is more than one town with the same name the region name will be appended to the town name. Region names are found in the globe_region table. Country_CA and globe_region look similar to this (I have changed a few things for visualization purposes):

        CREATE TABLE IF NOT EXISTS `Country_CA` (
          `City` varchar(75) NOT NULL DEFAULT '',
          `RegionID` varchar(10) NOT NULL DEFAULT '',
          `CountryCode` varchar(10) NOT NULL DEFAULT '',
          `CityID` int(11) NOT NULL DEFAULT '0',
          PRIMARY KEY (`City`,`RegionID`),
          KEY `CityID` (`CityID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

    and

        CREATE TABLE IF NOT EXISTS `globe_region` (
          `country_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
          `region_name` varchar(50) COLLATE utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`country_code`,`region_code`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    The query at the top does exactly what I want it to do, but it takes way too long to generate a list for 5000 records. I would like to know if there is a way to optimize the subquery in order to obtain the same results faster. The results should look like this:

        City        CityID    region_name       counter
        sheraton    2349269   British Columbia  1
        sherbrooke  2349270   Quebec            2
        sherbrooke  2349271   Nova Scotia       2
        shere       2349273   British Columbia  1
        sherridon   2349274   Manitoba          1
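    One common way to speed this kind of query up is to compute the duplicate counts once in a derived table and join against it, instead of re-running the correlated subquery for every row. A sketch along those lines, reusing the column names from the question (untested against the real schema):

        SELECT c.City AS currentCity,
               c.CityID,
               gr.region_name,
               dup.counter
        FROM Country_CA AS c
        LEFT JOIN globe_region AS gr
               ON gr.region_id = c.RegionID
              AND gr.country_code = c.CountryCode
        JOIN (SELECT City, COUNT(*) AS counter
              FROM Country_CA
              GROUP BY City) AS dup
          ON dup.City = c.City
        ORDER BY c.City;

    The existing PRIMARY KEY (City, RegionID) already has City as its leading column, so both the GROUP BY and the join can use that index rather than scanning the table once per row.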


  • Tool to compare the tables in two different databases

    - by user191124
    I am using Toad. Frequently I need to compare tables in two different test environments. The tables present in them are the same, but the data differs. I just need to know what the differences are between the same tables in two different databases. Are there any tools that can be installed on Windows and used to do the comparison? I'd much appreciate your help. :)


  • Rails: search for an item in a different model?

    - by Danny McClelland
    Hi everyone, I have a Kase model that I am using a simple search form with. The problem I am having is that some kases are linked to companies through a Company model, and to people through a People model. At the moment my search (in the Kase model) looks like this:

        # SEARCH FACILITY
        def self.search(search)
          search_condition = "%" + search + "%"
          find(:all, :conditions => ['jobno LIKE ? OR casesubject LIKE ? OR transport LIKE ? OR goods LIKE ? OR comments LIKE ? OR invoicenumber LIKE ? OR netamount LIKE ? OR clientref LIKE ? OR kase_status LIKE ? OR lyingatlocationaddresscity LIKE ?',
                                     search_condition, search_condition, search_condition, search_condition,
                                     search_condition, search_condition, search_condition, search_condition,
                                     search_condition, search_condition])
        end

    What I am trying to work out is what condition I can add to allow a search by company or person to show the kases they are linked to. @kase.company.companyname and company.companyname don't work. :( Is this possible? Thanks, Danny
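    Searching across the associated models means the finder has to join them in, so the SQL it ultimately produces needs to look roughly like the sketch below. The joined table and column names (companies.companyname, people.personname, the foreign keys) are guesses here, not taken from the real schema:

        SELECT kases.*
        FROM kases
        LEFT JOIN companies ON companies.id = kases.company_id
        LEFT JOIN people    ON people.id    = kases.person_id
        WHERE kases.jobno LIKE :search
           OR kases.casesubject LIKE :search
           OR companies.companyname LIKE :search
           OR people.personname LIKE :search;

    In Rails terms that is a :joins (or :include) on the company and person associations, plus conditions that reference the joined tables' columns.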


  • MySQL subquery strangely slow

    - by aviv
    I have a query that selects from another sub-select. While the two queries look almost the same, the second query (in this sample) runs much slower:

        SELECT user.id,
               user.first_name
               -- user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    This query takes 1.2s. EXPLAIN on this query gives:

        id  select_type         table      type            possible_keys                                            key         key_len  ref   rows    Extra
        1   PRIMARY             user       index                                                                    first_name  152            141192  Using where; Using index
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id      4        func  1       Using where

    The second query:

        SELECT
               -- user.id
               -- user.first_name
               user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type='user'
                            AND education.institute_id='58'
                            AND education.institute_type='1');

    takes 45s to run, with EXPLAIN:

        id  select_type         table      type            possible_keys                                            key         key_len  ref   rows    Extra
        1   PRIMARY             user       ALL                                                                                                  141192  Using where
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id      4        func  1       Using where

    Why is it slower if I query only by indexed fields? Why do both queries scan the full length of the user table? Any ideas how to improve this? Thanks.
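    For what it's worth, MySQL versions before 5.6 tend to execute IN (subquery) as a dependent subquery re-checked per row of the outer table, so the user table gets walked either way; the 1.2s version is only fast because the first_name index covers the two selected columns (the first EXPLAIN shows "Using index"), and asking for user.* forfeits that shortcut. Rewriting it as a join usually helps. A sketch reusing the question's tables and columns:

        SELECT user.*
        FROM user
        JOIN education ON education.ref_id = user.id
        WHERE education.ref_type = 'user'
          AND education.institute_id = '58'
          AND education.institute_type = '1';

    If education can hold several matching rows per user, add DISTINCT (or group by user.id) to avoid duplicated users in the result.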


  • Automatically create a table on a MySQL server based on the date?

    - by Anthony
    Is there an equivalent to cron for MySQL? I have a PHP script that queries a table based on the month and year, like:

        SELECT * FROM data_2010_1

    What I have been doing until now is: every time the script executes, it does a query for the table; if it exists, it does the work, and if it doesn't, it creates the table. I was wondering if I can just set something up on the MySQL server itself that will create the table (based on a default table) at the stroke of midnight on the first of the month.

    Update: Based on the comments I've gotten, I'm thinking this isn't the best way to achieve my goal. So here are two more questions:

    1. If I have a table with thousands of rows added monthly, is this potentially a drag on resources? If so, what is the best way to partition this table, since the above is verboten?
    2. What are the potential problems with the home-grown method I originally thought up?
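    MySQL 5.1+ does have a cron-like Event Scheduler (CREATE EVENT), but given the update above, a sketch of the partitioning route may be more useful: one table, RANGE-partitioned by month, instead of one table per month. Table and column names here are made up for illustration:

        CREATE TABLE data (
          id      INT NOT NULL AUTO_INCREMENT,
          created DATE NOT NULL,
          payload VARCHAR(255),
          PRIMARY KEY (id, created)      -- the partitioning column must be part of every unique key
        ) ENGINE=InnoDB
        PARTITION BY RANGE (TO_DAYS(created)) (
          PARTITION p201001 VALUES LESS THAN (TO_DAYS('2010-02-01')),
          PARTITION p201002 VALUES LESS THAN (TO_DAYS('2010-03-01')),
          PARTITION pmax    VALUES LESS THAN MAXVALUE
        );

    New month partitions can then be split out of the MAXVALUE catch-all with ALTER TABLE ... REORGANIZE PARTITION, and that statement can itself be wrapped in a scheduled event.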


  • Rails advanced queries with join and sum calculation

    - by Dustin Brewer
    I have two models: companies and expenses. Companies have many expenses, and expenses belong to companies. My Expense model has an 'amount' column. I was wondering if there is a way to perform a find based on a date range and the amount column of the expenses; something like the top 3 companies by total expense amount over a 7-day period. I've tried for the better part of the day to get this to work: I've attempted joins, chaining named scopes, raw SQL, etc., and I'm not having any luck. Thanks for the help.
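    The SQL such a find needs to generate is a grouped join, roughly as sketched below; the table and column names (companies.name, expenses.amount, expenses.created_at) follow Rails conventions and are assumptions about the schema, and the BETWEEN dates are placeholders:

        SELECT companies.id,
               companies.name,
               SUM(expenses.amount) AS total_amount
        FROM companies
        JOIN expenses ON expenses.company_id = companies.id
        WHERE expenses.created_at BETWEEN '2010-05-01' AND '2010-05-08'
        GROUP BY companies.id, companies.name
        ORDER BY total_amount DESC
        LIMIT 3;

    In Rails-2-era ActiveRecord that maps onto a find with :joins => :expenses, a :select containing the SUM, plus :group, :order and :limit options (or an equivalent named scope chain).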


  • How to display SUM fields from a detail table in a master table

    - by max
    What is the best approach to displaying a summary of DETAIL fields in the master table? E.g. I have a master table called BILL with all the bill-related data and a detail table (BILL_DETAIL) with the bill-detail data, like NAME, PRICE, TAX, ... Now I want to list all BILLs, without the details, but with the sum of the PRICE and TAX stored in the detail table. Here is a simplified schema of those tables:

        TABLE BILL
        ----------
        - ID
        - NAME
        - ADDRESS
        - ...

        TABLE BILL_DETAIL
        -----------------
        - ID
        - BILLID
        - PRODUCT_NAME
        - PRICE
        - TAX
        - ...

    The retrieved table row should look like this:

        BILL.CUSTOMER_NAME, BILL.CUSTOMER_ADDRESS, SUM(BILL_DETAIL.PRICE), SUM(BILL_DETAIL.TAX), ...

    Any suggestions?
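    A straightforward way to get this in one round trip is a LEFT JOIN plus GROUP BY, sketched here against the simplified schema above (the LEFT JOIN keeps bills that have no detail rows, with NULL sums):

        SELECT b.ID,
               b.NAME,
               b.ADDRESS,
               SUM(d.PRICE) AS total_price,
               SUM(d.TAX)   AS total_tax
        FROM BILL b
        LEFT JOIN BILL_DETAIL d ON d.BILLID = b.ID
        GROUP BY b.ID, b.NAME, b.ADDRESS;

    The same totals could also come from correlated subqueries in the SELECT list, which reads more simply but is usually slower on large detail tables.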


  • Static DB Provider in ASP.NET MVC Causing Memory Leak

    - by user364685
    Hi, I have an app I'm going to write in ASP.NET MVC, and I want to create a DatabaseFactory object something like this:

        public class DatabaseFactory
        {
            private string dbConn
            {
                get { return <gets from config file>; }
            }

            public IDatabaseTableObject GetDatabaseTable()
            {
                IDatabaseTableObject databaseTableObject = new SQLDatabaseObject(dbConn);
                return databaseTableObject;
            }
        }

    This works fine, but I obviously have to instantiate the DatabaseFactory in every controller that needs it. If I made it static, so that I could in theory just call DatabaseFactory.GetDatabaseTable(), would that cause a memory leak?


  • Syntax for "RETURNING" clause in Mysql PDO

    - by dmontain
    I'm trying to add a record and, at the same time, return the id of the record added. I read that it's possible to do this with a RETURNING clause:

        $stmt->prepare("INSERT INTO tablename (field1, field2) VALUES (:value1, :value2) RETURNING id");

    but the insertion fails when I add RETURNING. There is an auto-incremented field called id in the table being inserted into. Can anyone see anything wrong with my syntax? Or maybe PDO does not support RETURNING?
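    As a point of reference: RETURNING is PostgreSQL/Oracle/Firebird syntax, and MySQL itself does not accept it, which is why the INSERT fails regardless of PDO. The usual MySQL route is LAST_INSERT_ID(), exposed in PDO as lastInsertId(). A minimal sketch, using the table and columns from the question:

        INSERT INTO tablename (field1, field2) VALUES (:value1, :value2);
        SELECT LAST_INSERT_ID();   -- id generated by the last INSERT on this connection

    LAST_INSERT_ID() is per-connection, so concurrent inserts from other clients do not affect the value read back.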


  • Using a dummy row with NOT NULL to avoid DEFAULT NULL

    - by Tony38
    I know having DEFAULT NULL is not considered good practice, but I have many optional lookup values which are FKs in the system, so to work around this here is what I am doing: I use NOT NULL for every FK / lookup column, and the first row in every lookup table (PK id = 1) is a dummy row with just "none" in all the columns. This way I can use NOT NULL in my schema and, if needed, reference the "none" row (PK = 1) for FKs which do not have any lookup value. Is this a good design, or are there other workarounds?

    EDIT: I have a Neighborhood table and a Postal table. Every neighborhood has a city, so that FK can be NOT NULL. But not every postal code belongs to a neighborhood; some do and some don't, depending on the country. So if I use NOT NULL for the FK between Postal and Neighborhood, I'm stuck, because some value has to be entered. What I am doing, in essence, is keeping a row in every lookup table as a dummy row just to link the FKs. Row one in the Neighborhood table will be:

        n_id = 1, name = none, etc...

    In the Postal table I can then have:

        postal_code = 3456A3, FK (city) = Moscow, FK (neighborhood_id) = 1 as NOT NULL.

    If I don't have a dummy row in the Neighborhood lookup table, then I have to declare FK (neighborhood_id) as a DEFAULT NULL column and store blanks in the table. This is one example, but a huge number of values would end up blank in many tables.
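    For contrast, the alternative being avoided here is simply a nullable FK, where NULL means "no neighborhood"; referential integrity is still enforced for the rows that do have one. A hypothetical sketch (table and column names invented for illustration, not taken from the real schema):

        CREATE TABLE postal (
          postal_code     VARCHAR(10) NOT NULL PRIMARY KEY,
          city_id         INT NOT NULL,
          neighborhood_id INT NULL,    -- optional lookup: NULL = no neighborhood
          FOREIGN KEY (city_id)         REFERENCES city (id),
          FOREIGN KEY (neighborhood_id) REFERENCES neighborhood (id)
        );

    With the dummy-row approach, every query that joins or counts neighborhoods has to remember to exclude id = 1; with the nullable FK, an inner join excludes the "missing" rows automatically.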


  • MSSQL 2005: Rename DB Server Instance Name?

    - by Code Sherpa
    Hi, can somebody tell me how to rename the DB server instance name and a DB name in MSSQL 2005? Right now I have SERVER/OLDNAME -- oldnameDB. I want to change the server instance and also change the DB name. I have tried:

        EXEC sp_renamedb 'oldName', 'newName'

    and that has changed the DB name as it appears in the tree directory. But when I do SELECT @@SERVERNAME it still returns the old name. Also, the MDF and LDF files still have the old name. How do I change the instance and DB names as a clean sweep across the server? Thanks.
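    A few notes that may help frame this, offered as a sketch rather than a full procedure: sp_renamedb only changes the logical database name; the value reported by @@SERVERNAME is updated with sp_dropserver/sp_addserver followed by a SQL Server service restart; and the physical MDF/LDF file names are a separate step again (detach/attach or ALTER DATABASE ... MODIFY FILE plus renaming the files on disk). The named-instance part after the backslash cannot be renamed in place at all; that requires reinstalling the instance.

        -- update what @@SERVERNAME reports (names here are placeholders)
        EXEC sp_dropserver 'SERVER\OLDNAME';
        EXEC sp_addserver  'SERVER\NEWNAME', 'local';
        -- restart the SQL Server service, then verify:
        SELECT @@SERVERNAME;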


  • PHP MySQL - replace part of a string in a column

    - by apis17
    I want to replace every comma (",") with a comma followed by a space (", ") in the address column of a table in my MySQL database. For example, turn

        +----------------+----------------+
        | Name           | Address        |
        +----------------+----------------+
        | Someone name   | A1,Street Name |
        +----------------+----------------+

    into

        +----------------+-----------------+
        | Name           | Address         |
        +----------------+-----------------+
        | Someone name   | A1, Street Name |
        +----------------+-----------------+

    Thanks in advance.
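    One MySQL-only way to do this in place is a double REPLACE: first collapse any existing ", " back to "," so those rows don't end up with two spaces, then expand every "," to ", ". Sketched with an assumed table name (customers); the Address column matches the example above:

        UPDATE customers
        SET Address = REPLACE(REPLACE(Address, ', ', ','), ',', ', ')
        WHERE Address LIKE '%,%';

    Running it a second time is harmless, since already-normalized values pass through unchanged.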


  • Detecting changes between rows with the same ID

    - by Noah
    I have a table containing some names and their associated ID, along with a snapshot:

        snapshot, id, name

    I need to identify when a name has changed for an id between snapshots. For example, in the following data:

        1, 0, 'MOUSE_SPEED'
        1, 1, 'MOUSE_POS'
        1, 2, 'KEYBOARD_STATE'
        2, 0, 'MOUSE_BUTTONS'
        2, 1, 'MOUSE_POS'
        2, 2, 'KEYBOARD_STATE'

    ...the meaning of id 0 changed with snapshot 2, but the others remained the same. I'd like to construct a query that (ideally) returns:

        1, 0, 'MOUSE_SPEED'
        2, 0, 'MOUSE_BUTTONS'

    I am using PostgreSQL v8.4.2.
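    Since the desired output is every row belonging to an id whose name differs between snapshots, one compact approach is a GROUP BY/HAVING filter on distinct names; a sketch assuming the table is called history:

        SELECT h.snapshot, h.id, h.name
        FROM history h
        WHERE h.id IN (SELECT id
                       FROM history
                       GROUP BY id
                       HAVING COUNT(DISTINCT name) > 1)
        ORDER BY h.id, h.snapshot;

    PostgreSQL 8.4 also has window functions, so lag(name) OVER (PARTITION BY id ORDER BY snapshot) can flag the exact snapshot where the change happened if only the "after" rows are wanted.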


  • How to structure the tables of a very simple blog in MySQL?

    - by Programmer
    I want to add a very simple blog feature on one of my existing LAMP sites. It would be tied to a user's existing profile, and they would be able to simply input a title and a body for each post in their blog, and the date would be automatically set upon submission. They would be allowed to edit and delete any blog post and title at any time. The blog would be displayed from most recent to oldest, perhaps 20 posts to a page, with proper pagination above that. Other users would be able to leave comments on each post, which the blog owner would be allowed to delete, but not pre-moderate. That's basically it. Like I said, very simple. How should I structure the MySQL tables for this? I'm assuming that since there will be blog posts and comments, I would need a separate table for each. Is that correct? But then what columns would I need in each table, what data types should I use, and how should I link the two tables together (e.g. any foreign keys)? I could not find any tutorials for something like this, and what I'm looking to do is really offer my users the simplest version of a blog possible. No tags, no moderation, no images, no fancy formatting, etc. Just a simple diary-type, pure-text blog with commenting by other users.
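    A minimal two-table layout consistent with the description might look like the sketch below (table names, column names and sizes are illustrative, not prescriptive): posts belong to an existing user, comments belong to a post and to the commenting user.

        CREATE TABLE blog_posts (
          id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          user_id    INT UNSIGNED NOT NULL,            -- FK to the existing users/profile table
          title      VARCHAR(255) NOT NULL,
          body       TEXT NOT NULL,
          created_at DATETIME NOT NULL,
          KEY idx_user_created (user_id, created_at)   -- newest-first listing per user
        ) ENGINE=InnoDB;

        CREATE TABLE blog_comments (
          id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          post_id    INT UNSIGNED NOT NULL,            -- the post being commented on
          user_id    INT UNSIGNED NOT NULL,            -- FK to the commenting user
          body       TEXT NOT NULL,
          created_at DATETIME NOT NULL,
          KEY idx_post_created (post_id, created_at),
          FOREIGN KEY (post_id) REFERENCES blog_posts (id) ON DELETE CASCADE
        ) ENGINE=InnoDB;

    Pagination is then ORDER BY created_at DESC LIMIT 20 OFFSET n on blog_posts, and deleting a post removes its comments via the cascade.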


  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community, we are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

        interface JDBC {
            public long getId();
            public void setId(long id);
            public void retrieve();
            public void setDataSource(DataSource ds);
        }

    When retrieve() is called, the object populates itself by issuing hand-written SQL queries against the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of the retrieve() method, like retrieveByName(); in this case a different SQL statement is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually. The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs was as fast as hand-written SQL. The problems we are facing now that make us think about an ORM are:

    1. Most of the paths in this code are well-tested and stable. However, some rarely-used code is prone to result set and connection leaks that are very hard to detect.
    2. We are currently squeezing out some additional performance by adding caching to our persistence layer, and it is a huge pain to maintain the cached objects manually in this setup.
    3. Supporting this code when the DB schema changes is a big problem.

    I am looking for advice on what could be the best alternative for us. As far as I know, ORMs have advanced in the last 5 years, so it might be that there is now one that offers acceptable performance. As I see this issue, we need to address these points:

    1. Find some way to reuse at least some of the written SQL to express mappings.
    2. Have the possibility to issue native SQL queries without having to manually decompose their results (i.e. avoid manual rs.getInt(42), as such code is very sensitive to schema changes).
    3. Add a non-intrusive caching layer.
    4. Keep the performance figures.

    Is there any ORM framework you could recommend with regard to that?


  • More efficient method for grabbing all child units

    - by Hazior
    I have a table in SQL that links to itself through a parentID column. I want to find the children, and their children, and so forth until I have found all the child objects. I have a recursive function that does this, but it seems very inefficient. Is there a way to get SQL to find all the child objects? If so, how?
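    If the server supports recursive common table expressions (SQL Server 2005+, PostgreSQL 8.4+, and others), the whole subtree can be fetched in one statement instead of one query per level. A sketch with assumed table and column names (units, id, parentID), shown in SQL Server form; PostgreSQL spells the first keyword WITH RECURSIVE:

        WITH descendants AS (
            SELECT id, parentID
            FROM units
            WHERE id = 42                -- hypothetical root unit to start from
            UNION ALL
            SELECT u.id, u.parentID
            FROM units u
            JOIN descendants d ON u.parentID = d.id
        )
        SELECT id FROM descendants;

    In SQL Server the recursion depth is capped at 100 by default; OPTION (MAXRECURSION 0) lifts that if the hierarchy is deeper.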

