Search Results

Search found 1650 results on 66 pages for 'indexes'.


  • Using CTAS & Exchange Partition Replace IAS for Copying Partition on Exadata

    - by Bandari Huang
    Usage Scenario: Copy data & indexes from one partition to another partition in a partitioned table. Solution:
    - Create the partition definition.
    - Copy data from one partition to another partition by 'Insert as select (IAS)'.
    - Create a nonpartitioned table by 'Create table as select (CTAS)'.
    - Convert the nonpartitioned table into a partition of the partitioned table by exchanging their data segments.
    - Rebuild unusable indexes.
    Exchange Partition Conversion:
    - Mutual conversion between a partition (or subpartition) and a nonpartitioned table.
    - Mutual conversion between a hash-partitioned table and a partition of a composite *-hash partitioned table.
    - Mutual conversion between a [range | list]-partitioned table and a partition of a composite *-[range | list] partitioned table.
    Exchange Partition Usage Scenarios:
    - High-speed loading of new, incremental data into an existing partitioned table in a DW environment.
    - Exchanging old data partitions out of a partitioned table: the data is purged from the partitioned table without actually being deleted and can be archived separately.
    Exchange Partition Syntax:
    ALTER TABLE schema.table EXCHANGE [PARTITION|SUBPARTITION] [partition|subpartition] WITH TABLE schema.table [INCLUDING|EXCLUDING] INDEXES [WITH|WITHOUT] VALIDATION UPDATE [INDEXES|GLOBAL INDEXES]
    INCLUDING | EXCLUDING INDEXES: Specify INCLUDING INDEXES if you want local index partitions or subpartitions to be exchanged with the corresponding table index (for a nonpartitioned table) or local indexes (for a hash-partitioned table). Specify EXCLUDING INDEXES if you want all index partitions or subpartitions corresponding to the partition, and all the regular indexes and index partitions on the exchanged table, to be marked UNUSABLE. If you omit this clause, the default is EXCLUDING INDEXES.
    WITH | WITHOUT VALIDATION: Specify WITH VALIDATION if you want Oracle Database to return an error if any rows in the exchanged table do not map into the partitions or subpartitions being exchanged. Specify WITHOUT VALIDATION if you do not want Oracle Database to check the proper mapping of rows in the exchanged table. If you omit this clause, the default is WITH VALIDATION.
    UPDATE INDEXES | GLOBAL INDEXES: Unless you specify UPDATE INDEXES, the database marks as UNUSABLE the global indexes, or all global index partitions, on the table whose partition is being exchanged. Global indexes or global index partitions on the table being exchanged remain invalidated. (You cannot use UPDATE INDEXES for index-organized tables; use UPDATE GLOBAL INDEXES instead.)
    Notes on Exchanging Partitions & Subpartitions:
    - Both tables involved in the exchange must have the same primary key, and no validated foreign keys can reference either of the tables unless the referenced table is empty.
    - When exchanging partitioned index-organized tables:
      - The source and target table or partition must have their primary key set on the same columns, in the same order.
      - If key compression is enabled, it must be enabled for both the source and the target, with the same prefix length.
      - Both the source and target must be index-organized.
      - Both the source and target must have overflow segments, or neither can have overflow segments. Likewise, both must have mapping tables, or neither can have one.
      - Both the source and target must have identical storage attributes for any LOB columns.
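
    A minimal sketch of the CTAS-plus-exchange workflow described above, against a hypothetical SALES table partitioned by month (all object names here are illustrative, not from the article):

    -- 1. Build a nonpartitioned copy of the source partition's data (CTAS)
    CREATE TABLE sales_tmp AS
      SELECT * FROM sales PARTITION (sales_2010_01);

    -- 2. Swap the copy in as the target partition; this exchanges data
    --    segments in the dictionary rather than moving rows
    ALTER TABLE sales
      EXCHANGE PARTITION sales_2010_02 WITH TABLE sales_tmp
      EXCLUDING INDEXES WITHOUT VALIDATION;

    -- 3. EXCLUDING INDEXES left the local index partition UNUSABLE; rebuild it
    ALTER TABLE sales MODIFY PARTITION sales_2010_02
      REBUILD UNUSABLE LOCAL INDEXES;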

  • Why is this postgresql query so slow?

    - by user315975
    I'm no database expert, but I have enough knowledge to get myself into trouble, as is the case here. This query SELECT DISTINCT p.* FROM points p, areas a, contacts c WHERE ( p.latitude > 43.6511659465 AND p.latitude < 43.6711659465 AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889) AND p.resource_type = 'Contact' AND c.user_id = 6 is extremely slow. The points table has fewer than 2000 records, but the query takes about 8 seconds to execute. There are indexes on the latitude and longitude columns. Removing the clauses concerning the resource_type and user_id makes no difference. The latitude and longitude fields are both formatted as number(15,10) -- I need the precision for some calculations. There are many, many other queries in this project where points are compared, but no execution time problems. What's going on?
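
    A guess worth checking, since the schema isn't shown: the FROM list names three tables (points, areas, contacts) with no condition linking them, so the planner builds a cross product of all three before filtering. A sketch of the same query with an explicit join and the unused areas table dropped; the contact_id linking column is an assumption, not from the question:

    SELECT DISTINCT p.*
    FROM points p
    JOIN contacts c ON c.id = p.contact_id  -- hypothetical linking column
    WHERE p.latitude  > 43.6511659465 AND p.latitude  < 43.6711659465
      AND p.longitude > -79.4677941889 AND p.longitude < -79.4477941889
      AND p.resource_type = 'Contact'
      AND c.user_id = 6;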

  • SQL Drop Index on different Database

    - by Jim
    While trying to optimize SQL scripts, I was recommended to add indexes. What is the easiest way to specify which database the index should be on? IF EXISTS (SELECT * FROM sysindexes WHERE NAME = 'idx_TableA') DROP INDEX TableA.idx_TableA IF EXISTS (SELECT * FROM sysindexes WHERE NAME = 'idx_TableB') DROP INDEX TableB.idx_TableB In the code above, TableA is in DB-A, and TableB is in DB-B. I get the following error when I change DROP INDEX TableA.idx_TableA to DROP INDEX DB-A.dbo.TableA.idx_TableA Msg 166, Level 15, State 1, Line 2 'DROP INDEX' does not allow specifying the database name as a prefix to the object name. Any thoughts are appreciated.
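
    Msg 166 is SQL Server refusing a database prefix on DROP INDEX itself; a common workaround is to switch database context before each statement, or to wrap the statement in EXEC against the other database. A sketch, writing the databases as [DB-A] and [DB-B] (brackets are required around hyphenated names):

    USE [DB-A];
    IF EXISTS (SELECT * FROM sysindexes WHERE name = 'idx_TableA')
        DROP INDEX TableA.idx_TableA;

    USE [DB-B];
    IF EXISTS (SELECT * FROM sysindexes WHERE name = 'idx_TableB')
        DROP INDEX TableB.idx_TableB;

    -- Alternative when the script must stay in one context:
    EXEC ('USE [DB-A]; DROP INDEX TableA.idx_TableA;');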

  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance... -- [Data] has a predictable structure and a simple clustered index of the primary key: ALTER TABLE [dbo].[Data] ADD PRIMARY KEY CLUSTERED ( [ID] ) -- My query joins on itself looking for a certain kind of "overlapping" records SELECT DISTINCT [Data].ID AS [ID] FROM dbo.[Data] AS [Data] JOIN dbo.[Data] AS [Compared] ON [Data].[A] = [Compared].[A] AND [Data].[B] = [Compared].[B] AND [Data].[C] = [Compared].[C] AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E]) AND [Data].[F] <> [Compared].[F] WHERE 1=1 AND [Data].[A] = @A AND @CS <= [Data].[C] AND [Data].[C] < @CE -- Between a range [Data] has about a quarter-million records so far; 10% to 50% of the data satisfies the where clause depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan indicates two parallel Clustered Index Scans, but the clustered index is only on the ID, which isn't part of the conditions of the query, only the output. ?? If I add this hand-crafted [IDX_A_B_C_D_E_F] index, which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% & 4 minutes for 50%). The estimated/actual execution plans show an Index Seek, which seems like the right thing to be doing, but why so slow?? CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F] ON [dbo].[Data] ([A], [B], [C], [D], [E], [F]) INCLUDE ([ID], [X], [Y], [Z]); The Database Engine Tuning Advisor suggests a similar index with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the where clause makes no difference in performance. I need these and other indexes for other queries. I'm sure I could hint that the query should use the clustered index, since that's currently winning - but we all know it is not as optimized as it could be, and without a proper index, I can expect the performance to get much worse with additional data. What gives?
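
    One experiment worth sketching (a guess, not a diagnosis): the WHERE clause is an equality on A plus a range on C, so an index keyed on exactly (A, C) gives a tight seek range, with the other join columns carried as INCLUDE payload instead of key columns sitting between the equality and the range:

    CREATE INDEX [IDX_A_C] ON [dbo].[Data] ([A], [C])
        INCLUDE ([B], [D], [E], [F]);
        -- [ID] rides along automatically as the clustering key

    Comparing actual execution plans and SET STATISTICS IO output between this and [IDX_A_B_C_D_E_F] would show whether the slowdown comes from B and the other key columns sitting ahead of C and widening the seek.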

  • Moving from a non-clustered PK to a clustered PK in SQL 2005

    - by adaptr
    Hi all, I recently asked this question in another thread and thought I would reproduce it here with my solution: what if I have an auto-increment INT as my non-clustered primary key, and there are about 15 foreign keys defined against it? (snide comment about the original designer being braindead in the original :) ) This is a 15M row table on a live database, SQL Standard, so dropping indexes is out of the question. Even temporarily dropping the foreign key constraints will be difficult. I'm curious if anybody has a solution that causes minimal downtime. I tested this in our testing environment and finally found that the downtime wasn't as severe as I had originally feared. I ended up writing a script that drops all FK constraints, then drops the non-clustered key, re-creates the PK as a clustered index, and finally re-creates all FKs WITH NOCHECK to avoid trawling through all FKs to check constraint compliance. Then I just enable the CHECK constraints to enable constraint checking from that point onwards, and all is dandy :) The most important thing to realize is that during the time the FKs are absent, there MUST NOT be any INSERTs or DELETEs on the parent table, as this may break the constraints and cause issues in the future. The total time taken for clustering a 15M row, 800MB index was ~4 minutes :)
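
    A minimal sketch of that script for one illustrative child table (every object name here is a placeholder; the real script would loop over all 15 FKs):

    -- Drop the FK so the PK index can be rebuilt
    ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;

    -- Swap the non-clustered PK for a clustered one
    ALTER TABLE dbo.Parent DROP CONSTRAINT PK_Parent;
    ALTER TABLE dbo.Parent ADD CONSTRAINT PK_Parent PRIMARY KEY CLUSTERED (ID);

    -- Re-create the FK without validating existing rows (fast, but untrusted)
    ALTER TABLE dbo.Child WITH NOCHECK
        ADD CONSTRAINT FK_Child_Parent FOREIGN KEY (ParentID)
        REFERENCES dbo.Parent (ID);

    -- Enforce the constraint for new modifications from this point on
    ALTER TABLE dbo.Child CHECK CONSTRAINT FK_Child_Parent;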

  • Binding collection indexes to a WPF DataGrid at runtime

    - by bearda
    After trying to develop our own control to display a table of data we stumbled on the WPF Toolkit DataGrid and thought we were saved. A couple of hours later I'm scratching my head trying to figure out if it can do what we really want it to do. The DataGrid seems to be based on displaying various properties of a single object, where I think I need something that can display a collection's items. I want to display a table of strings with a variable number of rows and a variable number of columns. Each row represents the state of a number of inputs at a given time. The number of samples may change, and the number of inputs may change at runtime. As a result, I can't create a custom object that represents a row with a property for each input. This makes the DataGrid binding more complicated, since I can't bind to a fixed property for each column. I thought I found a workaround for this by binding to ".[" + channelIndex + "]" but it hasn't worked out quite the way I wanted. Right now I have a collection of collections of strings that represents the two-dimensional array of strings. I'm using ObservableCollection so string change event notifications work, but I can use any collection type needed. It looked like everything was working great until I realized that every row was showing the same line of data. Binding in C# is not my strong suit, so I'm kind of lost on what's going wrong. Is what I'm trying to do overall even possible with the WPF Toolkit DataGrid?

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Navigation;
    using System.Windows.Shapes;
    using Microsoft.Windows.Controls;
    using System.ComponentModel;
    using System.Collections;
    using System.Collections.ObjectModel;

    namespace WRE.NGDAS.UILayer
    {
        /// <summary>
        /// Interaction logic for DataTableControl.xaml
        /// </summary>
        public partial class DataTableControl : UserControl
        {
            #region Constants
            private const int lineHeight = 25;
            #endregion

            #region Type Definitions
            private class TableChannelInfo
            {
                internal string ChannelName = string.Empty;
                internal int Column = 0;
                public override string ToString() { return ChannelName; }
            }

            // String wrapper that raises change notifications so bound cells refresh
            internal class NotifyString : INotifyPropertyChanged
            {
                private string text = String.Empty;
                public event PropertyChangedEventHandler PropertyChanged;

                public string Text
                {
                    get { return text; }
                    set { text = value; NotifyPropertyChanged( "Text" ); }
                }

                public NotifyString(string newText)
                {
                    text = newText;
                }

                protected void NotifyPropertyChanged( string propertyChanged )
                {
                    if ( PropertyChanged != null )
                    {
                        PropertyChanged(this, new PropertyChangedEventArgs(propertyChanged));
                    }
                }
            }
            #endregion

            #region Private Data Members
            string[] channels = null;
            int numberFixed = 0;
            int numberOfRows = 0;
            ObservableCollection<ObservableCollection<NotifyString>> tableText = null;
            #endregion

            public DataTableControl( List<string> channelNames, List<string> nameToolTips, int numberFixedColumns, int numberRows )
            {
                InitializeComponent();
                numberFixed = numberFixedColumns;
                numberOfRows = numberRows;
                tableText = new ObservableCollection<ObservableCollection<NotifyString>>();
                dataGrid.ItemsSource = tableText;
                channels = new string[channelNames.Count];
                for (int channelIndex = 0; channelIndex < channelNames.Count; channelIndex++)
                {
                    channels[channelIndex] = channelNames[channelIndex];
                    tableText.Add(new ObservableCollection<NotifyString>());
                    for (int i = 0; i < numberOfRows; i++)
                    {
                        tableText[channelIndex].Add( new NotifyString(String.Empty) );
                    }
                    // One column per channel, bound by indexer into that channel's collection
                    Binding textBinding = new Binding( ".[" + channelIndex + "]" );
                    textBinding.Mode = BindingMode.OneWay;
                    textBinding.UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged;
                    textBinding.Source = tableText[channelIndex];
                    DataGridTextColumn newColumn = new DataGridTextColumn();
                    newColumn.Header = channelNames[channelIndex];
                    newColumn.DisplayIndex = channelIndex;
                    newColumn.Binding = textBinding;
                    if ( channelIndex < numberFixed )
                    {
                        newColumn.CanUserReorder = false;
                    }
                    dataGrid.Columns.Add(newColumn);
                }
                //Height = lineHeight + numberOfRows * lineHeight;
            }

            internal void UpdateChannelRow( int rowNumber, List<string> channelValues, DisplayColors color )
            {
                if ( rowNumber < numberOfRows )
                {
                    for (int column = 0; column < channelValues.Count; column++)
                    {
                        if ( column < channels.GetLength(0) )
                        {
                            tableText[column][rowNumber].Text = channelValues[column];
                        }
                    }
                }
            }
        }
    }

  • How do you find the disk size of a Postgres table and its indexes

    - by mmrobins
    I'm coming to Postgres from Oracle and looking for a way to find the table and index size in terms of bytes/MB/GB/etc, or even better the size for all tables. In Oracle I had a nasty long query that looked at user_lobs and user_segments to give back an answer. I assume in Postgres there's something I can use in the information_schema tables, but I'm not seeing where. Thanks in advance.
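
    A sketch using PostgreSQL's built-in size functions, which exist in the 8.x line this question targets ('mytable' is a placeholder):

    -- Heap only
    SELECT pg_size_pretty(pg_relation_size('mytable'));
    -- Heap + indexes + TOAST
    SELECT pg_size_pretty(pg_total_relation_size('mytable'));
    -- Every table in the current database, largest first
    SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC;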

  • Entity Framework use of indexes and foreign keys

    - by David
    I want to be able to search a table quite quickly using the Entity Framework. Say I have a Contacts table, a JobsToDo table, and a matrix table linking the two, e.g. Contacts_JobsToDo_Mtx, and I specify two foreign keys in the Contacts_JobsToDo_Mtx table. If I wanted to search this Mtx table, do I need to specify an index on the two foreign keys? Or, by the fact that they are foreign keys, are they considered indexed anyway? Will the Entity Framework be able to search through the Mtx table quickly without having to specify an index on both keys? Thanks!
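
    For what it's worth, SQL Server does not index foreign key columns automatically (only the primary key gets an index), so a link table generally needs explicit indexes for fast lookups in either direction, regardless of whether the query comes from Entity Framework or raw SQL. A sketch with assumed column names:

    CREATE INDEX IX_Mtx_ContactId ON dbo.Contacts_JobsToDo_Mtx (ContactId);
    CREATE INDEX IX_Mtx_JobToDoId ON dbo.Contacts_JobsToDo_Mtx (JobToDoId);

    A composite primary key over (ContactId, JobToDoId) is another common shape for a pure link table; it covers searches by ContactId, though a separate index on JobToDoId is still needed for the reverse direction.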

  • SimpleRepository auto migrations with indexes

    - by scott
    I am using SubSonic SimpleRepository with migrations in dev and it makes things pretty easy, but I keep running into issues with my nvarchar columns that have an index. My Users table has an index defined on the Username column for obvious reasons, but each time I start the project SubSonic is doing this: ALTER TABLE [Users] ALTER COLUMN Username nvarchar(50); which causes this: The index 'IX_Username' is dependent on column 'Username'. ALTER TABLE ALTER COLUMN Username failed because one or more objects access this column. Is there any way around this issue?
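
    If SubSonic's migration can't be suppressed, the manual workaround (a sketch, using the names from the error message) is to drop the index around the column change and put it back:

    DROP INDEX IX_Username ON [Users];
    ALTER TABLE [Users] ALTER COLUMN Username nvarchar(50);
    CREATE INDEX IX_Username ON [Users] (Username);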

  • sql server 2005 indexes and low cardinality

    - by Peanut
    How does SQL Server determine whether a table column has low cardinality? The reason I ask is because the query optimizer would most probably not use an index on a gender column (values 'm' and 'f'). However, how would it determine the cardinality of the gender column to come to that decision? On top of this, if in the unlikely event I had a million entries in my table and only one entry in the gender column was 'm', would SQL Server be able to determine this and use the index to retrieve that single row? Or would it just know there are only 2 distinct values in the column and not use the index? I appreciate the above describes some poor DB design, but I'm just trying to understand how the query optimizer comes to its decisions. Many thanks.
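
    SQL Server doesn't guess cardinality from the column definition; it keeps statistics (a histogram plus density information) for each index and for auto-created column statistics, and the optimizer consults those per query. That is how an extremely skewed distribution, like one 'm' among a million 'f' rows, can still produce an index seek for the rare value. You can inspect the histogram yourself; a sketch with illustrative table and index names:

    DBCC SHOW_STATISTICS ('dbo.People', 'IX_People_Gender');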

  • SQL Server 2005, wide indexes, computed columns, and sargable queries

    - by luksan
    In my database, assume we have a table defined as follows: CREATE TABLE [Chemical]( [ChemicalId] int NOT NULL IDENTITY(1,1) PRIMARY KEY, [Name] nvarchar(max) NOT NULL, [Description] nvarchar(max) NULL ) The value for Name can be very large, so we must use nvarchar(max). Unfortunately, we want to create an index on this column, but nvarchar(max) is not supported inside an index. So we create the following computed column and associated index based upon it: ALTER TABLE [Chemical] ADD [Name_Indexable] AS LEFT([Name], 20) CREATE INDEX [IX_Name] ON [Chemical]([Name_Indexable]) INCLUDE([Name]) The index will not be unique but we can enforce uniqueness via a trigger. If we perform the following query, the execution plan results in an index scan, which is not what we want: SELECT [ChemicalId], [Name], [Description] FROM [Chemical] WHERE [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester' However, if we modify the query to make it "sargable," then the execution plan results in an index seek, which is what we want: SELECT [ChemicalId], [Name], [Description] FROM [Chemical] WHERE [Name_Indexable]='[1,1''-Bicyclohexyl]-' AND [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester' Is this a good solution if we control the format of all queries executed against the database via our middle tier? Is there a better way? Is this a major kludge? Should we be using full-text indexing?

  • Python sys.argv lists and indexes

    - by Fred Gerbig
    In the below code I understand that sys.argv is a list, however I am not clear on how the indexes are used here. def main(): if len(sys.argv) >= 2: name = sys.argv[1] else: name = 'World' print 'Hello', name if __name__ == '__main__': main() If I change name = sys.argv[1] to name = sys.argv[0] and type something for an argument it returns: Hello C:\Documents and Settings\fred\My Documents\Downloads\google-python-exercises \google-python-exercises\hello.py Which kind of makes sense. Can someone explain how the 2 is used here: if len(sys.argv) >= 2: And how the 1 is used here: name = sys.argv[1]

  • Oracle Unique Indexes

    - by Melvin
    I was creating a new table today in 10g when I noticed an interesting behavior. Here is an example of what I did: CREATE TABLE test_table ( field_1 INTEGER PRIMARY KEY ); Oracle will, by default, create a non-null unique index for the primary key. I double-checked this: after a quick look, I found a unique index named SYS_C0065645. Everything was working as expected so far. Now I did this: CREATE TABLE test_table ( field_1 INTEGER, CONSTRAINT pk_test_table PRIMARY KEY (field_1) USING INDEX (CREATE INDEX idx_test_table_00 ON test_table (field_1))); After describing my newly created index idx_test_table_00, I see that it is non-unique. I tried to insert duplicate data into the table and was stopped by the primary key constraint, proving that the functionality has not been affected. It seems strange to me that Oracle would allow a non-unique index to be used for a primary key constraint. Why is this allowed?
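
    One documented reason Oracle permits this: a deferrable primary key cannot be enforced through a unique index at all, since duplicates must be tolerated until commit, so constraint enforcement is kept separate from index uniqueness and any suitable index on the key columns can police the constraint. A sketch showing a deferrable PK, which forces Oracle itself to create a non-unique index:

    CREATE TABLE test_deferred (
      field_1 INTEGER,
      CONSTRAINT pk_test_deferred PRIMARY KEY (field_1)
        DEFERRABLE INITIALLY IMMEDIATE
    );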

  • Multi-variable indexes in postgres

    - by Jackson Davis
    I'm looking at an application where I will be doing quite a few SELECTs where I am trying to find column_a = x AND column_b = y. Is it correct to create that index with something like the following? CREATE INDEX index_name ON table (column_a, column_b)
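
    For what it's worth, a composite index like that also serves queries that filter on its leading column alone; a quick illustration (table_name and the values are placeholders):

    -- Served by index_name (column_a, column_b):
    SELECT * FROM table_name WHERE column_a = 1 AND column_b = 2;
    SELECT * FROM table_name WHERE column_a = 1;
    -- Generally not served, since column_b is not the leading column:
    SELECT * FROM table_name WHERE column_b = 2;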

  • A question about indexes regarding to the gain of inserts & updates in database

    - by Mestika
    Hi, I have a question about the fine line between the gain of an index on a table that is growing steadily in size every month and the cost that index adds to inserts and updates. The situation is that I have two tables, Table1 and Table2. Each table grows slowly but regularly each month (about 100 new rows for Table1 and a couple of rows for Table2). My concrete question is whether to keep an index or to drop it. I've measured that a covering index on Table2 improves some of my SELECT queries rather a lot, but again, I have to weigh the pros and cons and am having a really hard time deciding. For Table1 it might not be necessary to have an index because the SELECT queries there are not that common. I would appreciate any suggestions, tips or just good advice as to what is a good solution. By the way, I'm using IBM DB2 version 9.7 as my database system. Sincerely, Mestika
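
    A sketch of what a covering index can look like on DB2 9.7 (all names are placeholders; note that DB2 allows INCLUDE columns only on unique indexes):

    CREATE UNIQUE INDEX idx_table2_cover
        ON Table2 (key_col)
        INCLUDE (col_a, col_b);

    With roughly a hundred inserts a month and no bulk updates, the maintenance cost of one extra index is usually negligible next to a measured SELECT improvement.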

  • Iterating dictionary indexes in django templates

    - by unclaimedbaggage
    Hi folks...I have a dictionary with embedded objects, which looks something like this: notes = { 2009: [<Note: Test note>, <Note: Another test note>], 2010: [<Note: Third test note>, <Note: Fourth test note>], } I'm trying to access each of the note objects inside a django template, and having a helluva time navigating to them. In short, I'm not sure how to extract by index in django templating. Current template code is: <h3>Notes</h3> {% for year in notes %} {{ year }} # Works fine {% for note in notes.year %} {{ note }} # Returns blank {% endfor %} {% endfor %} If I replace {% for note in notes.year %} with {% for note in notes.2010 %} things work fine, but I need that '2010' to be dynamic. Any suggestions much appreciated.

  • Duplicate ID/indexes and looping

    - by Justin Alexander
    I realize having two elements in the same HTML doc with the same ID is wrong, bad, immoral, and will lead to global warming. But... I'm trying to write an XSS widget, so I really have no control over the quality of the parent web page. I loop through document.images to retrieve a list of images on the page. I perform an action on each one. for(img in document.images){ ... } I've also tried for(var i=0;i<document.images.length;i++){ ... } In both cases it allows me to loop through all of the elements, BUT when trying to reference an object with a duplicate ID, I always get the first (in order of the HTML). When using the debugger in IE8 I'm able to see that both elements ARE listed, but that they both have the same index (in IE the index of document.images is either sequential or matches the image ID). Does anyone have a better solution?

  • SQL indexes for "not equal" searches

    - by bortzmeyer
    An SQL index makes it quick to find a string that matches my query. Now, I have to search a big table for the strings which do not match. Of course, the normal index does not help and I have to do a slow sequential scan: essais=> \d phone_idx Index "public.phone_idx" Column | Type --------+------ phone | text btree, for table "public.phonespersons" essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567'; QUERY PLAN ------------------------------------------------------------------------------- Index Scan using phone_idx on phonespersons (cost=0.00..8.41 rows=1 width=4) Index Cond: (phone = '+33 1234567'::text) (2 rows) essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567'; QUERY PLAN ---------------------------------------------------------------------- Seq Scan on phonespersons (cost=0.00..18621.00 rows=999999 width=4) Filter: (phone <> '+33 1234567'::text) (2 rows) I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance if almost all the tuples match). But, here, "not equal" searches are really slower. Any way to make these "is not equal to" searches faster? Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query: essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr'; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------ Index Scan using tld_idx on emailspersons (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1) Index Cond: (tld(email) = 'fr'::text) Total runtime: 444.800 ms (3 rows) essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr'; QUERY PLAN -------------------------------------------------------------------------------------------------------------------- Seq Scan on emailspersons (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1) Filter: (tld(email) <> 'fr'::text) Total runtime: 1037.278 ms (3 rows) DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).
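
    One technique that can help when the excluded value is the overwhelming majority, as with tld(email) = 'fr' above: a partial index containing only the minority rows, which the planner can use because the query's WHERE clause implies the index predicate. A sketch (the <> query must repeat the predicate verbatim):

    CREATE INDEX tld_not_fr_idx ON emailspersons (tld(email))
        WHERE tld(email) <> 'fr';

    For the phone example, where the non-matching rows are 99.99% of the table, a sequential scan genuinely is the fastest plan and no index will beat it.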

  • boost multi_index partial indexes

    - by Gokul
    Hi, I want to implement inside boost multi-index two sets of keys with the same search criteria but different eviction criteria. Say I have two sets of data with the same search condition, but one set needs an MRU (Most Recently Used) list of 100 and the other set requires an MRU of 200. Say the entry is like this class Student { int student_no; char sex; std::string address; }; The search criterion is student_no, but for sex='m' we need an MRU of 200 and for sex='f' we need an MRU of 100. Now I have a solution wherein I introduce a new ordered index to maintain ordering. For example, the IndexSpecifierList will be something like this typedef multi_index_container< Student, indexed_by< ordered_unique< member<Student, int, &Student::student_no> >, ordered_unique< composite_key< member<Student, char, &Student::sex>, member<Student, int, &Student::sex_specific_student_counter> > > > > student_set Now every time I insert a new one, I have to take an equal_range for that using index 2 and remove the oldest one, and if something is getting re-used, I have to update it by incrementing the counter. Is there a better solution to this kind of problem? Thanks, Gokul.

  • mysql join on two indexes takes long time!!

    - by Alaa
    Hi all, I have a custom query in Drupal. The query is: select count(distinct B.src) from node A, url_alias B where concat('node/',A.nid)= B.src; Now, nid in node is the primary key and I have created an index on src in the url_alias table. After waiting for more than a minute I got this: +-----------------------+ | count(distinct B.src) | +-----------------------+ | 325715 | +-----------------------+ 1 row in set (1 min 24.37 sec) Now my question is: why did this query take this long, and how can I optimize it? Thanks for your help
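
    One cause worth checking (a guess from the query shape alone): computing CONCAT('node/', A.nid) for every candidate row gives MySQL nothing to seek on from the node side. A hedged rewrite that derives the nid back out of src so the join can use node's primary key; SUBSTRING(B.src, 6) assumes every src starts with the 5-character prefix 'node/':

    SELECT COUNT(DISTINCT B.src)
    FROM url_alias B
    JOIN node A
      ON A.nid = CAST(SUBSTRING(B.src, 6) AS UNSIGNED)
    WHERE B.src LIKE 'node/%';   -- prefix match can still use the src index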

  • Do I need to manually create indexes for a DBIx::Class belongs_to relationship

    - by Dancrumb
    I'm using the DBIx::Class modules for an ORM approach to an application I have. I'm having some problems with my relationships. I have the following package MySchema::Result::ClusterIP; use strict; use warnings; use base qw/DBIx::Class::Core/; our $VERSION = '1.0'; __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/); __PACKAGE__->table('cluster_ip'); __PACKAGE__->add_columns( # Columns here ); __PACKAGE__->set_primary_key('objkey'); __PACKAGE__->belongs_to( 'configuration' => 'MySchema::Result::Configuration', 'config_key'); __PACKAGE__->belongs_to( 'cluster' => 'MySchema::Result::Cluster', { 'foreign.config_key' => 'self.config_key', 'foreign.id' => 'self.cluster_id' } ); As well as package MySchema::Result::Cluster; use strict; use warnings; use base qw/DBIx::Class::Core/; our $VERSION = '1.0'; __PACKAGE__->load_components(qw/InflateColumn::Object::Enum Core/); __PACKAGE__->table('cluster'); __PACKAGE__->add_columns( # Columns here ); __PACKAGE__->set_primary_key('objkey'); __PACKAGE__->belongs_to( 'configuration' => 'MySchema::Result::Configuration', 'config_key'); __PACKAGE__->has_many('cluster_ip' => 'MySchema::Result::ClusterIP', { 'foreign.config_key' => 'self.config_key', 'foreign.cluster_id' => 'self.id' }); There are a couple of other modules, but I don't believe that they are relevant. When I attempt to deploy this schema, I get the following error: DBIx::Class::Schema::deploy(): DBI Exception: DBD::mysql::db do failed: Can't create table 'test.cluster_ip' (errno: 150) [ for Statement "CREATE TABLE `cluster_ip` ( `objkey` smallint(5) unsigned NOT NULL auto_increment, `config_key` smallint(5) unsigned NOT NULL, `cluster_id` char(16) NOT NULL, INDEX `cluster_ip_idx_config_key_cluster_id` (`config_key`, `cluster_id`), INDEX `cluster_ip_idx_config_key` (`config_key`), PRIMARY KEY (`objkey`), CONSTRAINT `cluster_ip_fk_config_key_cluster_id` FOREIGN KEY (`config_key`, `cluster_id`) REFERENCES `cluster` (`config_key`, `id`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `cluster_ip_fk_config_key` FOREIGN KEY (`config_key`) REFERENCES `configuration` (`config_key`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB"] at test_deploy.pl line 18 (running "CREATE TABLE `cluster_ip` ( `objkey` smallint(5) unsigned NOT NULL auto_increment, `config_key` smallint(5) unsigned NOT NULL, `cluster_id` char(16) NOT NULL, INDEX `cluster_ip_idx_config_key_cluster_id` (`config_key`, `cluster_id`), INDEX `cluster_ip_idx_config_key` (`config_key`), PRIMARY KEY (`objkey`), CONSTRAINT `cluster_ip_fk_config_key_cluster_id` FOREIGN KEY (`config_key`, `cluster_id`) REFERENC ES `cluster` (`config_key`, `id`) ON DELETE CASCADE ON UPDATE CASCADE, CONSTRAINT `cluster_ip_fk_config_key` FOREIGN KEY (`config_key`) REFERENCES `configuration` (`conf ig_key`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB") at test_deploy.pl line 18 From what I can tell, MySQL is complaining about the FOREIGN KEY constraint, in particular, the REFERENCE to (config_key, id) in the cluster table. From my reading of the MySQL documentation, this seems like a reasonable complaint, especially in regards to the third bullet point on this doc page. Here's my question. Am I missing something in the DBIx::Class module? I realize that I could explicitly create the necessary index to match up with this foreign key constraint, but that seems to be repetitive work. Is there something I should be doing to make this occur implicitly?
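
    MySQL's errno 150 here is usually about the referenced side of the constraint: InnoDB requires an index whose leading columns match the referenced columns, and cluster has no key starting with (config_key, id) -- its primary key is objkey. A sketch of the schema-level fix (column names taken from the relationship definitions above):

    ALTER TABLE cluster
        ADD UNIQUE KEY uk_cluster_config_key_id (config_key, id);

    In DBIx::Class terms, the equivalent would be declaring __PACKAGE__->add_unique_constraint(['config_key', 'id']) on MySchema::Result::Cluster, so that deploy() emits the key itself rather than you creating it by hand.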

  • How to add indexes to MySQL tables?

    - by Michael
    I've got a very large MySQL table with about 150,000 rows of data. Currently, when I try and run SELECT * FROM table WHERE id = '1'; the code runs fine, as the ID field is the primary index. However, recently for a development in the project, I have to search the database by another field. For example: SELECT * FROM table WHERE product_id = '1'; This field was not previously indexed; however, I've added it as an index, but when I try to run the above query the result is very slow. An EXPLAIN query reveals that there is no index for the product_id field when I've already added one, and as a result the query takes anywhere from 20 minutes to 30 minutes to return a single row. EDIT: My full EXPLAIN results are:
    +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
    | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
    +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
    |  1 | SIMPLE      | table | ALL  | NULL          | NULL | NULL    | NULL | 157211 | Using where |
    +----+-------------+-------+------+---------------+------+---------+------+--------+-------------+
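
    A quick way to confirm whether the optimizer can actually see the index (the table and index names below are placeholders for the real ones):

    SHOW INDEX FROM `table`;                                    -- is idx_product_id listed?
    ALTER TABLE `table` ADD INDEX idx_product_id (product_id);  -- (re)create it if missing
    EXPLAIN SELECT * FROM `table` WHERE product_id = '1';       -- should now show type=ref, key=idx_product_id

    One classic gotcha: if product_id is a string column, keep the literal quoted as above; comparing a string column against a bare number forces MySQL to cast every row and prevents index use.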
