Search Results

Search found 9926 results on 398 pages for 'lookup tables'.

  • SQL Server 2000 tables

    - by klork
    We currently have a SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting it into a one-table-per-user model, which would leave us with a very large number of tables, potentially up to 2,147,483,647 (considering just positive values). My questions: Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture for maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

  • update query on multiple tables

    - by jon
    I have a schema like: employees (eno, ename, zip, hdate), customers (cno, cname, street, zip, phone), zipcodes (zip, city), where zip is the primary key in zipcodes and a foreign key in the other tables. I have to write an update query that updates every occurrence of zipcode 4994 to 1234 throughout the database. I tried: update zipcodes,customers,employees set zip = 0 where customers.zip = zipcodes.zip and employees.zip = zipcodes.zip; but I know I am not doing it right. Is there a way to update the zip in all the tables in a single update query?
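    One common approach (a sketch only, assuming MySQL and that the foreign keys were not declared with ON UPDATE CASCADE) is a multi-table UPDATE that joins the child tables to zipcodes so all three tables are changed in one statement:

        -- If the foreign keys cascade, updating the parent row alone is enough:
        -- UPDATE zipcodes SET zip = 1234 WHERE zip = 4994;

        -- Otherwise, a multi-table UPDATE touches all three tables at once.
        -- Depending on how the constraints are defined, this may need to run
        -- between SET foreign_key_checks = 0; and SET foreign_key_checks = 1;
        UPDATE zipcodes z
          LEFT JOIN customers c ON c.zip = z.zip
          LEFT JOIN employees e ON e.zip = z.zip
        SET z.zip = 1234, c.zip = 1234, e.zip = 1234
        WHERE z.zip = 4994;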

  • CakePHP Accessing Dynamically Created Tables?

    - by Dave
    As part of a web application, users can upload files of data, which generates a new table in a dedicated MySQL database to store the data. They can then manipulate this data in various ways. The next version of this app is being written in CakePHP, and at the moment I can't figure out how to dynamically assign these tables at runtime. I have the different database configs set up and can create the tables on data upload just fine, but once this is done I cannot access the new table from the controller as part of the record CRUD actions for the data manipulation. I hoped that it would be along the lines of: function controllerAction(){ $this->uses[] = 'newTable'; $data = $this->newTable->find('all'); //use data } But it returns the error: Undefined property: ReportsController::$newTable Fatal error: Call to a member function find() on a non-object in /app/controllers/reports_controller.php on line 60 Can anyone help?

  • SubSonic isn't generating MySql foreign key tables

    - by keith
    I have two tables within a MySQL 5.1.34 database. When using SubSonic to generate the DAL, the foreign-key relationship doesn't get scripted, i.e. I have no Parent.ChildCollection object. Looking inside the generated DAL Parent class shows the following:

        //no foreign key tables defined (0)

    I have tried SubSonic 2.1 and 2.2, and various MySQL 5 versions. I must be doing something wrong procedurally; any help would be greatly appreciated. This has always just worked 'out of the box' when using MS-SQL.

        CREATE TABLE `parent` (
          `ParentId` INT(11) NOT NULL AUTO_INCREMENT,
          `SomeData` VARCHAR(25) DEFAULT NULL,
          PRIMARY KEY (`ParentId`)
        ) ENGINE=INNODB DEFAULT CHARSET=latin1;

        CREATE TABLE `child` (
          `ChildId` INT(11) NOT NULL AUTO_INCREMENT,
          `ParentId` INT(11) NOT NULL,
          `SomeData` VARCHAR(25) DEFAULT NULL,
          PRIMARY KEY (`ChildId`),
          KEY `FK_child` (`ParentId`),
          CONSTRAINT `FK_child` FOREIGN KEY (`ParentId`) REFERENCES `parent` (`ParentId`)
        ) ENGINE=INNODB DEFAULT CHARSET=latin1;

  • CakePHP: Using two tables for a single model

    - by mwaterous
    I'm just picking up development in CakePHP right now, so forgive me if this seems obvious; it did to me when I first read about has, belongsTo, hasMany, etc. The problem is that I would like to associate two tables with a single model, and I was wondering if there is a way to configure this so that when CakePHP does its queries it automatically performs a join on the two tables. I don't want to create a separate model for the second table, as it is merely a meta-information table: the master table contains the primary, required information, while the meta table is populated with secondary information that is not required and therefore may or may not be set for every row of the master table.

  • Combining database tables

    - by zSysop
    Hi all, I have two tables which look kind of similar, and I was thinking about combining them; I thought I would get some input from everyone. Here's what they currently look like:

        Issues
        Id  | IssueCategory | IssueType | Status | etc..
        123 | Copier        | Broken    | Open   |
        124 | Hardware      | Missing   | Open   |

        CopierIssueDetails
        Id | IssueId | SerialNumber | Make | Model | TonerNumber | LastCount
        1  | 123     | W12134       | Dell | X1234 | 12344555    | 500120

        HardwareTicketDetails
        Id | IssueId | EquipmentNumber | Make | Model | Location  | Toner | Monitor | Mouse
        1  | 124     | X1123113        | Dell | XXXX  | 1st floor | 0     | 1       | 0

    What do you guys think about combining these two tables into one? Would it be a good idea, or is it better to keep them separated like this? Thanks in advance for any suggestions.
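    One possible shape for a combined table (a sketch only, assuming SQL Server; it merges the two detail tables and leaves the type-specific columns nullable):

        CREATE TABLE IssueDetails (
            Id              INT IDENTITY(1,1) PRIMARY KEY,
            IssueId         INT NOT NULL,          -- references Issues.Id
            SerialNumber    VARCHAR(50) NULL,      -- copier-specific
            TonerNumber     VARCHAR(50) NULL,      -- copier-specific
            LastCount       INT         NULL,      -- copier-specific
            EquipmentNumber VARCHAR(50) NULL,      -- hardware-specific
            Location        VARCHAR(50) NULL,      -- hardware-specific
            Toner           BIT         NULL,      -- hardware-specific
            Monitor         BIT         NULL,      -- hardware-specific
            Mouse           BIT         NULL,      -- hardware-specific
            Make            VARCHAR(50) NULL,      -- shared
            Model           VARCHAR(50) NULL       -- shared
        );

    Whether this beats two tables mostly depends on how different the two detail sets stay over time; a growing pile of nullable, type-specific columns is the usual argument for keeping them separate.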

  • SQL: joining multiple tables into one.

    - by Graveen
    I have 4 tables: r1, r2, r3 and r4. The table columns are the following: rId | rName. I want to end up with a single table - let's call it R. Obviously, R will have the following structure: rTableName | rId | rName. I'm looking for a solution, and the most natural one for me is to: add a single column to every rX table; insert into this column the name of the table I'm processing; generate the SQL statements and concatenate them all. Although I see exactly how to perform steps 1 and 3 with batching, editing, etc. (I only have to perform it once and for all), I don't see how to do step 2: getting the table name itself to insert into the SQL. Do you have an idea, or a different approach that could solve my problem? Note: in fact, there are 250+ rX tables; that's why I can't do this manually. Note 2: precisely, this is with MySQL.
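    One way around step 2 (a sketch, assuming MySQL; the schema name mydb is a placeholder) is to let information_schema supply the table names and generate the per-table INSERT ... SELECT statements instead of writing 250+ of them by hand:

        -- Target table that also records where each row came from
        CREATE TABLE R (
            rTableName VARCHAR(64),
            rId        INT,
            rName      VARCHAR(255)
        );

        -- Generate one INSERT per rX table; run the statements this query prints
        SELECT CONCAT(
            'INSERT INTO R (rTableName, rId, rName) ',
            'SELECT ''', table_name, ''', rId, rName FROM ', table_name, ';'
        ) AS stmt
        FROM information_schema.tables
        WHERE table_schema = 'mydb'
          AND table_name REGEXP '^r[0-9]+$';

    The generated statements can then be executed as a batch, or via prepared statements inside a stored procedure.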

  • replicating master tables mapping in transaction tables

    - by NoDisplay
    I have three master tables for location information: Country {ID, Name}, State {ID, Name, CountryID}, City {ID, Name, StateID}. Now I have one transaction table called Person which holds the person's name and his location information. My question is: shall I have only CityID in the Person table, like Person {ID, Name, CityID}, with a view over a join query that gives me detail like Person {ID, Name, City, State, Country}? Or shall I replicate the mapping: Person {ID, Name, CityID, StateID, CountryID}? Please suggest which you feel should be selected and why. If there is any other option available, please suggest it. Thanks in advance.
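    A sketch of the first option (only CityID stored on Person, with a view reconstructing the full hierarchy); the table and column names follow the question:

        CREATE VIEW PersonLocation AS
        SELECT p.ID,
               p.Name,
               ci.Name AS City,
               st.Name AS State,
               co.Name AS Country
        FROM Person  p
        JOIN City    ci ON ci.ID = p.CityID
        JOIN State   st ON st.ID = ci.StateID
        JOIN Country co ON co.ID = st.CountryID;

    Storing StateID and CountryID on Person as well duplicates information already derivable from CityID, so it only pays off if those extra joins prove to be a measured bottleneck.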

  • Backup Azure Tables with the Enzo Backup API

    - by Herve Roggero
    In case you missed it, you can now back up (and restore) Azure Tables and SQL Databases directly through an API. The features available through the API can be found here: http://www.bluesyntax.net/backup20api.aspx and the online help for the API is here: http://www.bluesyntax.net/EnzoCloudBackup20/APIIntro.aspx. Backing up Azure Tables can’t be any easier than with the Enzo Backup API. Here is a sample code that does the trick:

        // Create the backup helper class. The constructor automatically sets the SourceStorageAccount property
        StorageBackupHelper backup = new StorageBackupHelper("storageaccountname", "storageaccountkey", "sourceStorageaccountname", "sourceStorageaccountkey", true, "apilicensekey");

        // Now set some properties…
        backup.UseCloudAgent = false;                        // backup locally
        backup.DeviceURI = @"c:\TMP\azuretablebackup.bkp";   // to this file
        backup.Override = true;
        backup.Location = DeviceLocation.LocalFile;

        // Set optional performance options
        backup.PKTableStrategy.Mode = BSC.Backup.API.TableStrategyMode.GUID; // Set GUID strategy by default
        backup.MaxRESTPerSec = 200; // Attempt to stay below 200 REST calls per second

        // Start the backup now…
        string taskId = backup.Backup();

        // Use the Environment class to get the final status of the operation
        EnvironmentHelper env = new EnvironmentHelper("storageaccountname", "storageaccountkey", "apilicensekey");
        string status = env.GetOperationStatus(taskId);

    As you can see above, the code is straightforward. You provide connection settings in the constructor, set a few options indicating where the backup device will be located, set optional performance parameters and start the backup. The performance options are designed to help you back up your Azure Tables quickly, while attempting to stay under a specific threshold to prevent Storage Account throttling. For example, the MaxRESTPerSec property will attempt to keep the overall backup operation under 200 REST calls per second. Another performance option is the Backup Strategy for Azure Tables. By default, all tables are simply scanned. While this works best for smaller Azure Tables, larger tables can use the GUID strategy, which will issue requests against an Azure Table in parallel, assuming the PartitionKey stores GUID values. Your PartitionKey does not have to contain GUIDs for this strategy to work, but the backup algorithm is tuned for that condition. Other options are available as well, such as filtering which columns, entities or tables are being backed up. Check out more on the Blue Syntax website at http://www.bluesyntax.net.

  • Look-up Tables in SQL

    Lookup tables can be a force for good in a relational database. Whereas the 'One True Lookup Table' remains a classic of bad database design, an auxiliary table that holds static data and is used to look up values still has powerful magic. Joe Celko explains.

  • Storing a looong lookup table

    - by inquisitive
    Background: The product I am working on has a very long lookup table. The table contains static data and cannot be auto-generated. There are about 500 rows and 10 columns; the columns hold mostly integers and strings. To complicate matters, there are actually two such tables, and every row in table-1 maps to zero-or-more rows in table-2. We use an SQLite database with two tables. The product installer places the SQLite file in the installation directory. The application is written in .NET and we use ADO to load the data once on startup. Now the lookup table grows: in each monthly release we add about 10 new entries and adjust existing ones, and every release we fine-tune existing entries. The problem: a team of (10) developers works on the lookup table. Code goes into SVN, but the little devil the SQLite file does not, which prevents multiple developers from working on it. We do take regular backups of the file, but proper versioning is not possible. We never know who made a breaking change; worse, we don't even know whether there is any change at all, since diff'ing databases is tedious if not impossible. The tables are expected to grow quite large in the years to come and we will need developers to work on them in parallel. The data is business critical, and we need to be able to audit changes made to it. Question: What would be a solution for the problems outlined above? One idea was to transform the whole thing to XML and treat it like just another source file. That way SVN can do the versioning and we can work in parallel, but the data shows relational behavior: with XML we lose the unique and foreign-key constraints, and we can't query it with SQL-like ease. Any help here will be appreciated.
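    One common compromise (a sketch only; the table and column names below are hypothetical, not from the product) is to keep the lookup data as a plain-text SQL script under version control and rebuild the SQLite file at build time. The script diffs and merges line by line in SVN, while the constraints stay declared and enforced whenever the database is built:

        -- lookup_data.sql, checked into SVN next to the code
        CREATE TABLE table1 (
            id    INTEGER PRIMARY KEY,
            code  TEXT NOT NULL UNIQUE,
            label TEXT NOT NULL
        );
        CREATE TABLE table2 (
            id        INTEGER PRIMARY KEY,
            table1_id INTEGER NOT NULL REFERENCES table1(id),
            value     TEXT NOT NULL
        );
        -- one INSERT per row keeps diffs and blame line-oriented
        INSERT INTO table1 (id, code, label) VALUES (1, 'A', 'Alpha');
        INSERT INTO table2 (id, table1_id, value) VALUES (1, 1, 'first');

    The installer then ships the .sqlite file generated from this script (for example with the sqlite3 command-line shell), so the runtime code does not change.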

  • Mysql - Help me change this single complex query to use temporary tables

    - by sandeepan-nath
    About the system: - There are tutors who create classes and packs - A tags based search approach is being followed.Tag relations are created when new tutors register and when tutors create packs (this makes tutors and packs searcheable). For details please check the section How tags work in this system? below. Following is the concerned query Can anybody help me suggest an approach using temporary tables. We have indexed all the relevant fields and it looks like this is the least time possible with this approach:- SELECT SUM(DISTINCT( t.tag LIKE "%Dictatorship%" OR tt.tag LIKE "%Dictatorship%" OR ttt.tag LIKE "%Dictatorship%" )) AS key_1_total_matches , SUM(DISTINCT( t.tag LIKE "%democracy%" OR tt.tag LIKE "%democracy%" OR ttt.tag LIKE "%democracy%" )) AS key_2_total_matches , COUNT(DISTINCT( od.id_od )) AS tutor_popularity, CASE WHEN ( IF(( wc.id_wc > 0 ), ( wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'classes_published' , CASE WHEN ( IF(( lp.id_lp > 0 ), ( lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ), 0) ) THEN 1 ELSE 0 END AS 'packs_published', td . *, u . * FROM tutor_details AS td JOIN users AS u ON u.id_user = td.id_user LEFT JOIN learning_packs_tag_relations AS lptagrels ON td.id_tutor = lptagrels.id_tutor LEFT JOIN learning_packs AS lp ON lptagrels.id_lp = lp.id_lp LEFT JOIN learning_packs_categories AS lpc ON lpc.id_lp_cat = lp.id_lp_cat LEFT JOIN learning_packs_categories AS lpcp ON lpcp.id_lp_cat = lpc.id_parent LEFT JOIN learning_pack_content AS lpct ON ( lp.id_lp = lpct.id_lp ) LEFT JOIN webclasses_tag_relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor LEFT JOIN webclasses AS wc ON wtagrels.id_wc = wc.id_wc LEFT JOIN learning_packs_categories AS wcc ON wcc.id_lp_cat = wc.id_wp_cat LEFT JOIN learning_packs_categories AS wccp ON wccp.id_lp_cat = wcc.id_parent LEFT JOIN order_details AS od ON td.id_tutor = od.id_author LEFT JOIN orders AS o ON od.id_order = o.id_order LEFT JOIN tutors_tag_relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor LEFT JOIN tags AS t ON t.id_tag = ttagrels.id_tag LEFT JOIN tags AS tt ON tt.id_tag = lptagrels.id_tag LEFT JOIN tags AS ttt ON ttt.id_tag = wtagrels.id_tag WHERE ( u.country = 'IE' OR u.country IN ( 'INT' ) ) AND CASE WHEN ( ( tt.id_tag = lptagrels.id_tag ) AND ( lp.id_lp > 0 ) ) THEN lp.id_status = 1 AND lp.published = 1 AND lpcp.status = 1 AND ( lpcp.country_code = 'IE' OR lpcp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( ( ttt.id_tag = wtagrels.id_tag ) AND ( wc.id_wc > 0 ) ) THEN wc.wc_api_status = 1 AND wc.wc_type = 0 AND wc.class_date > '2010-06-01 22:00:56' AND wccp.status = 1 AND ( wccp.country_code = 'IE' OR wccp.country_code IN ( 'INT' ) ) ELSE 1 END AND CASE WHEN ( od.id_od > 0 ) THEN od.id_author = td.id_tutor AND o.order_status = 'paid' AND CASE WHEN ( od.id_wc > 0 ) THEN od.can_attend_class = 1 ELSE 1 END ELSE 1 END AND ( t.tag LIKE "%Dictatorship%" OR t.tag LIKE "%democracy%" OR tt.tag LIKE "%Dictatorship%" OR tt.tag LIKE "%democracy%" OR ttt.tag LIKE "%Dictatorship%" OR ttt.tag LIKE "%democracy%" ) GROUP BY td.id_tutor HAVING key_1_total_matches = 1 AND key_2_total_matches = 1 ORDER BY tutor_popularity DESC, u.surname ASC, u.name ASC LIMIT 0, 20 The problem The results returned by the above query are correct (AND logic working as per expectation), but the time taken by 
the query rises alarmingly for heavier data; with the current data it is about 10 seconds, as against normal query timings on the order of 0.005 - 0.0002 seconds, which makes it totally unusable. Somebody suggested in my previous question to do the following: create a temporary table and insert into it all relevant data that might end up in the final result set; run several updates on this table, joining the required tables one at a time instead of all of them at the same time; finally perform a query on this temporary table to extract the end result. All this was done in a stored procedure, the end result has passed unit tests, and is blazing fast. I have never worked with temporary tables until now. If only I could get some hints, some kind of schematic representation, so that I have something to start with... Is there something faulty with the query? What can be the reason behind 10+ seconds of execution time? How tags work in this system: When a tutor registers, tags are entered and tag relations are created with respect to the tutor's details (name, surname, etc.). When tutors create packs, again tags are entered and tag relations are created with respect to the pack's details (pack name, description, etc.). Tag relations for tutors are stored in tutors_tag_relations and those for packs in learning_packs_tag_relations. All individual tags are stored in the tags table. The EXPLAIN output: please see this screenshot - http://www.test.examvillage.com/Explain_query_improved.jpg
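    A schematic of that staging approach (a sketch only; it reuses table and column names from the query above and omits most of the filters):

        -- 1. Stage the candidate tutors once
        CREATE TEMPORARY TABLE tmp_tutors AS
        SELECT td.id_tutor, u.name, u.surname
        FROM tutor_details td
        JOIN users u ON u.id_user = td.id_user
        WHERE u.country = 'IE' OR u.country IN ('INT');

        -- 2. Add the columns the big query computed inline
        ALTER TABLE tmp_tutors
          ADD COLUMN tutor_popularity INT NOT NULL DEFAULT 0,
          ADD COLUMN key_1_total_matches INT NOT NULL DEFAULT 0,
          ADD COLUMN key_2_total_matches INT NOT NULL DEFAULT 0;

        -- 3. Fill them one join at a time, e.g. popularity from paid orders
        UPDATE tmp_tutors t
        JOIN (
            SELECT od.id_author, COUNT(DISTINCT od.id_od) AS cnt
            FROM order_details od
            JOIN orders o ON o.id_order = od.id_order
            WHERE o.order_status = 'paid'
            GROUP BY od.id_author
        ) x ON x.id_author = t.id_tutor
        SET t.tutor_popularity = x.cnt;

        -- ...similar UPDATEs fill key_1_total_matches / key_2_total_matches
        --    from the three tag-relation joins, one at a time...

        -- 4. Final, cheap query against the staged table
        SELECT *
        FROM tmp_tutors
        WHERE key_1_total_matches = 1 AND key_2_total_matches = 1
        ORDER BY tutor_popularity DESC, surname ASC, name ASC
        LIMIT 0, 20;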

  • Is it wise to use temporary tables?

    - by Industrial
    Hi guys, We have a MySQL database table for products. We are utilizing a cache layer to reduce database load, but we think it would be a good idea to also minimize the actual data stored in the cache layer, to speed up the application further. All products in the database that are visible to visitors have a price attached to them. The prices are stored in a different table, called prices, and there are multiple price categories depending on which discount level each visitor (customer) applies to. From time to time there are campaigns, which means that a special price is available for each product. The special prices are stored in a table called specials. Is it bad to make a temp table that binds the tables together? It would only hold the necessary information and would of course be cached:

        productId | hasPrice | hasSpecial
        ----------|----------|-----------
        1         | 1        | 0
        2         | 1        | 1

    By doing so, it would be super easy to know whether a specific product really has a price, without having to iterate through the complete prices or specials table each time a product should be listed or presented. Are temp tables a common thing for web applications, or is it just bad design?
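    A minimal sketch of such a summary table (assuming MySQL; the products/prices/specials column names are placeholders for whatever the real schema uses):

        CREATE TABLE product_price_flags AS
        SELECT p.id AS productId,
               EXISTS (SELECT 1 FROM prices   pr WHERE pr.product_id = p.id) AS hasPrice,
               EXISTS (SELECT 1 FROM specials sp WHERE sp.product_id = p.id) AS hasSpecial
        FROM products p;

    Whether this is built as a true TEMPORARY table, a regular table refreshed by a scheduled job, or just a cached query result is mostly a question of how often prices and specials change.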

  • Azure Tables or SQL Azure?

    - by Phil Wright
    I am at the planning stage of a web application that will be hosted in Azure with ASP.NET for the web site and Silverlight within the site for a rich user experience. Should I use Azure Tables or SQL Azure for storing my application data?

  • LaTeX: Listing all figures (tables, algorithms) once again at the end of the document

    - by Zlatko
    Hi all, I have been writing a rather large document with LaTeX. Now I would like to list all the figures / tables / algorithms once again at the end of the file so that I can check whether they all look the same; for example, whether every algorithm uses the same notation. How can I do this? I know about \listofalgorithms and \listoffigures, but they only list the names of the algorithms or figures and the pages where they appear. Thanks.

  • How to create a lookup column that targets a Doc Lib and uses the 'Name' of the document?

    - by stlawrence
    How do you create a lookup column to a Document Library that uses the 'Name' of the document as the lookup value? I found a blog post that recommends adding another custom field like "FileName" and then using an item receiver to populate the custom field with the value from the Name field, but that seems cheesy. Link to the blog in case people are interested: http://blogs.msdn.com/pranab/archive/2008/01/08/sharepoint-2007-moss-wss-issue-with-lookup-column-to-doc-lib-name-field.aspx I've got a bunch of custom document content types that I don't want to clutter with a workaround for something that should really work anyway.

  • Mysql insert into 2 tables

    - by Spidfire
    I want to make an insert into 2 tables:

        visits: visit_id int | card_id int
        registration: registration_id int | type enum('in','out') | timestamp int | visit_id int

    I want something like:

        INSERT INTO `visits` as v, `registration` as r
        (v.`visit_id`, v.`card_id`, r.`registration_id`, r.`type`, r.`timestamp`, r.`visit_id`)
        VALUES (NULL, 12131141, NULL, UNIX_TIMESTAMP(), v.`visit_id`);

    I wonder if it's possible.
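    A single INSERT cannot target two tables in MySQL; the usual pattern (a sketch, assuming visits.visit_id is AUTO_INCREMENT) is two INSERTs in one transaction, linked with LAST_INSERT_ID():

        START TRANSACTION;

        INSERT INTO visits (card_id)
        VALUES (12131141);

        -- LAST_INSERT_ID() is the visit_id generated by the INSERT above
        INSERT INTO registration (type, timestamp, visit_id)
        VALUES ('in', UNIX_TIMESTAMP(), LAST_INSERT_ID());

        COMMIT;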

  • Microsoft SQL Server 2005 Inserting into tables from child procedure which returned multiple tables

    - by Kevin
    I've got a child procedure which returns more than one table.

    child:

        PROCEDURE KevinGetTwoTables AS
        BEGIN
            SELECT 'ABC' Alpha, '123' Numeric
            SELECT 'BBB' Alpha, '123' Numeric1, '555' Numeric2
        END

    example:

        PROCEDURE KevinTesting AS
        BEGIN
            DECLARE @Table1 TABLE (Alpha varchar(50), Numeric1 int)
            DECLARE @Table2 TABLE (Alpha varchar(50), Numeric1 int, Numeric2 int)
            --INSERT INTO @Table1 EXEC KevinGetTwoTables
        END

  • SQL Server, temporary tables with truncate vs table variable with delete

    - by Richard
    I have a stored procedure inside which I create a temporary table that typically contains between 1 and 10 rows. This table is truncated and filled many times during the stored procedure; it is truncated because that is faster than delete. Do I get any performance increase by replacing this temporary table with a table variable, given that I suffer a penalty for using delete (truncate does not work on table variables)? While table variables are mainly in memory and are generally faster than temp tables, do I lose any benefit by having to delete rather than truncate?
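    A sketch of the two patterns being compared (T-SQL; the column list and the filling logic are placeholders):

        -- Pattern 1: temp table, cleared with TRUNCATE
        CREATE TABLE #work (id INT, val VARCHAR(50));
        -- ... fill and use the rows ...
        TRUNCATE TABLE #work;      -- fast reset, repeated many times

        -- Pattern 2: table variable, cleared with DELETE
        DECLARE @work TABLE (id INT, val VARCHAR(50));
        -- ... fill and use the rows ...
        DELETE FROM @work;         -- TRUNCATE is not allowed on table variables

    With only 1-10 rows per pass, the cost difference between DELETE and TRUNCATE is usually dwarfed by the rest of the procedure, so timing both variants on realistic data is the safest way to decide.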

  • Join performance on MyISAM and InnoDB tables

    - by j0nes
    I am thinking about converting some tables from MyISAM to InnoDB on my MySQL server. The tables will certainly benefit from the change because a lot of write requests come to these tables, while there are also quite a lot of read requests at the same time. However, they are often joined with some tables that get almost no writes. Is there a performance penalty when joining MyISAM and InnoDB tables together, or should everything work fine? Second question: during backups at night, I am copying data from the InnoDB tables to MyISAM tables for archiving purposes. During these backups a lot of write requests happen, but there are almost no reads from these archive tables. Would these tables also benefit from using InnoDB, or is this just a waste of space and RAM?

  • Concatenating rows from different tables into one field

    - by Markus
    Hi! In a project using an MSSQL 2005 database we are required to log all data-manipulating actions in a logging table. One field in that table is supposed to contain the row as it was before it was changed. We have a lot of tables, so I was trying to write a stored procedure that would gather up all the fields in one row of the table that was given to it, concatenate them somehow, and then write a new log entry with that information. I already tried using FOR XML PATH and it worked, but the client doesn't like the XML notation; they want a CSV field. Here's what I had with FOR XML PATH: DECLARE @foo varchar(max); SET @foo = (SELECT * FROM table WHERE id = 5775 FOR XML PATH('')); The values for "table", "id" and the actual id (here: 5775) would later be passed in via the call to the stored procedure. Is there any way to do this without getting XML notation and without knowing in advance which fields are going to be returned by the SELECT statement?
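    One workaround (a sketch only, assuming SQL Server 2005; the table name MyTable and the id column are placeholders for the procedure's parameters) is to build the column list dynamically from sys.columns and let dynamic SQL produce the delimited string:

        DECLARE @table SYSNAME, @id INT;
        SET @table = 'MyTable';   -- would be a procedure parameter
        SET @id = 5775;           -- would be a procedure parameter

        DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX), @csv NVARCHAR(MAX);

        -- Build "ISNULL(CAST(col AS NVARCHAR(MAX)), '') + ';' + ..." for every column
        SELECT @cols = COALESCE(@cols + ' + '';'' + ', '')
                     + 'ISNULL(CAST(' + QUOTENAME(name) + ' AS NVARCHAR(MAX)), '''')'
        FROM sys.columns
        WHERE object_id = OBJECT_ID(@table)
        ORDER BY column_id;

        SET @sql = N'SELECT @csv = ' + @cols
                 + N' FROM ' + QUOTENAME(@table)
                 + N' WHERE id = @id';

        EXEC sp_executesql @sql,
                           N'@csv NVARCHAR(MAX) OUTPUT, @id INT',
                           @csv OUTPUT, @id;
        -- @csv now holds the row's values separated by semicolons

    Columns of types that do not convert cleanly to NVARCHAR (image, for instance) would need special handling in the generated expression.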

  • PHP/MySQL - updating 2 tables in one request

    - by Phil Jackson
    Morning, I want to learn more about SQL and I'm wanting to update two tables:

        $query3 = "INSERT INTO `$table1`, `$table2` ($table1.DISPLAY_NAME, $table1.EMAIL_ACCOUNT, $table2.DISPLAY_NAME, $table2.EMAIL_ACCOUNT) values ('" . DISPLAY_NAME . "', '" . EMAIL_ADDRESS . "', '" . $get['rn'] . "', '" . $email . "')";

    Could someone point me in the right direction on how I would go about this? The current error is:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' contacts_ACT_Web_Designs (contacts_E_Jackson.DISPLAY_NAME, contacts_E_Jackson' at line 1

    Regards, Phil

  • MySQL – Beginning Temporary Tables in MySQL

    - by Pinal Dave
    MySQL supports temporary tables to store result sets temporarily for a given connection. Temporary tables are created with the keyword TEMPORARY along with the CREATE TABLE statement. Let us create a temporary table named temp:

        CREATE TEMPORARY TABLE TEMP (id INT);

    Now you can find out the column names using the DESC command:

        DESC TEMP;

    This table can be accessed only from the current connection; it can be used like a permanent table and is automatically dropped when the connection is closed. However, you cannot find temporary tables using the INFORMATION_SCHEMA.TABLES system view, which only lists permanent tables. MySQL usually stores the data of temporary tables in memory, processed by the Memory storage engine, but if the data size is too large MySQL automatically converts it to an on-disk table and uses the MyISAM engine. You can also create a permanent table with the same name as a temporary table in the same connection; however, the structure of the permanent table is visible only once the temporary table with the same name is dropped. Let us create a permanent table with the same name temp as below:

        CREATE TABLE TEMP (id INT, names VARCHAR(100));

    Running the following command still gives you the structure of the temporary table temp created earlier:

        DESC TEMP;

    You can drop the temporary table using the DROP TEMPORARY TABLE command:

        DROP TEMPORARY TABLE TEMP;

    After you have dropped the temporary table, run the following command:

        DESC TEMP;

    Now you will see the structure of the permanent table named temp. In summary: if there is a temporary table in MySQL, it gets priority over the permanent table of the same name in the session. Reference: Pinal Dave (http://blog.sqlauthority.com)
