Search Results

Search found 955 results on 39 pages for 'inserts'.

  • How to get the Output value of SP using C#

    - by karthik
    I am using the following code to execute a MySQL stored procedure and get its output value. How do I read the output value in my C# code after the SP has executed? Thanks.

    Code:

        public static string GetInsertStatement(string DBName, string TblName, string ColName, string ColValue)
        {
            string strData = "";
            MySqlConnection conn = new MySqlConnection(ConfigurationSettings.AppSettings["Con_Admin"]);
            MySqlCommand cmd = conn.CreateCommand();
            try
            {
                cmd.CommandType = System.Data.CommandType.StoredProcedure;
                cmd.CommandText = "InsGen";
                cmd.Parameters.Clear();
                cmd.Parameters.Add("in_db", MySqlDbType.VarChar, 20);
                cmd.Parameters["in_db"].Value = DBName;
                cmd.Parameters.Add("in_table", MySqlDbType.VarChar, 20);
                cmd.Parameters["in_table"].Value = TblName;
                cmd.Parameters.Add("in_ColumnName", MySqlDbType.VarChar, 20);
                cmd.Parameters["in_ColumnName"].Value = ColName;
                cmd.Parameters.Add("in_ColumnValue", MySqlDbType.VarChar, 20);
                cmd.Parameters["in_ColumnValue"].Value = ColValue;
                conn.Open();
                cmd.ExecuteNonQuery();
                conn.Close();
            }
            catch (System.Exception e)
            {
                Console.WriteLine(e.Message);
            }
            return strData;
        }

    SP:

        DELIMITER $$
        DROP PROCEDURE IF EXISTS `InsGen` $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen`
        (
            in_db varchar(20),
            in_table varchar(20),
            in_ColumnName varchar(20),
            in_ColumnValue varchar(20)
        )
        BEGIN
            declare Whrs varchar(500);
            declare Sels varchar(500);
            declare Inserts varchar(2000);
            declare tablename varchar(20);
            declare ColName varchar(20);
            set tablename=in_table;

            # Comma separated column names - used for Select
            select group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')')) INTO @Sels
            from information_schema.columns
            where table_schema=in_db and table_name=tablename;

            # Comma separated column names - used for Group By
            select group_concat('`',column_name,'`') INTO @Whrs
            from information_schema.columns
            where table_schema=in_db and table_name=tablename;

            # Main Select Statement for fetching comma separated table values
            set @Inserts=concat("select concat('insert into ", in_db,".",tablename,
                " values(',concat_ws(',',",@Sels,"),');') from ", in_db,".",tablename,
                " where ", in_ColumnName, " = " , in_ColumnValue,
                " group by ",@Whrs, ";");

            PREPARE Inserts FROM @Inserts;
            select Inserts;
            EXECUTE Inserts;
        END $$
        DELIMITER ;
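    A minimal sketch of one way to read the generated statement back in C#: replace the ExecuteNonQuery() call with ExecuteScalar() and handle the result, assuming the stray `select Inserts;` line (which selects a never-assigned local variable) is removed from the procedure so the EXECUTE's result set comes back first. Names here are illustrative:

        conn.Open();
        object result = cmd.ExecuteScalar();     // first column of the first row returned by the SP
        byte[] raw = result as byte[];           // MySQL may hand the concat result back as a blob
        strData = (raw != null)
            ? System.Text.Encoding.UTF8.GetString(raw)
            : Convert.ToString(result);
        conn.Close();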

  • Query to MySQL from c# returns System.Byte[]

    - by karthik
    I am using the SP below to return a generated INSERT statement, and it works fine when executed in Query Browser. When I try to get the value from C#, it gives me "System.Byte[]" as the return value. When I run it in MySQL Query Browser, it returns: 'insert into admindb.accounts values("54321","2","karthik2","karthik2","1");' I guess the problem is with the single quotes around the returned value. Is that so?

        DELIMITER $$
        DROP PROCEDURE IF EXISTS `admindb`.`InsGen` $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen`(
            in_db varchar(20),
            in_table varchar(20),
            in_ColumnName varchar(20),
            in_ColumnValue varchar(20)
        )
        BEGIN
            declare Whrs varchar(500);
            declare Sels varchar(500);
            declare Inserts varchar(2000);
            declare tablename varchar(20);
            declare ColName varchar(20);
            set tablename=in_table;

            # Comma separated column names - used for Select
            select group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')')) INTO @Sels
            from information_schema.columns
            where table_schema=in_db and table_name=tablename;

            # Comma separated column names - used for Group By
            select group_concat('`',column_name,'`') INTO @Whrs
            from information_schema.columns
            where table_schema=in_db and table_name=tablename;

            # Main Select Statement for fetching comma separated table values
            set @Inserts=concat("select concat('insert into ", in_db,".",tablename,
                " values(',concat_ws(',',",@Sels,"),');') as MyColumn from ", in_db,".",tablename,
                " where ", in_ColumnName, " = " , in_ColumnValue,
                " group by ",@Whrs, ";");

            PREPARE Inserts FROM @Inserts;
            EXECUTE Inserts;
        END $$
        DELIMITER ;
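    The "System.Byte[]" text is just the default ToString() of a byte array; group_concat/concat over mixed types is often returned by the driver as a blob rather than a string. A minimal C# sketch of decoding it (variable names illustrative); alternatively, wrapping the generated concat in CAST(... AS CHAR) inside the SP may avoid the blob altogether:

        object raw = cmd.ExecuteScalar();        // MyColumn from the SP's result set
        byte[] bytes = raw as byte[];
        string insertSql = (bytes != null)
            ? System.Text.Encoding.UTF8.GetString(bytes)   // decode the blob the driver returned
            : Convert.ToString(raw);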

  • How to use the AS keyword in MySQL?

    - by karthik
    In the SP below I get the result in one single column. How can I name the output column?

        DELIMITER $$
        DROP PROCEDURE IF EXISTS `InsGen` $$
        CREATE DEFINER=`root`@`localhost` PROCEDURE `InsGen`
        (
            in_db varchar(20),
            in_table varchar(20),
            in_ColumnName varchar(20),
            in_ColumnValue varchar(20)
        )
        BEGIN
            declare Whrs varchar(500);
            declare Sels varchar(500);
            declare Inserts varchar(2000);
            declare tablename varchar(20);
            declare ColName varchar(20);
            set tablename=in_table;

            # Comma separated column names - used for Select
            select group_concat(concat('concat(\'"\',','ifnull(',column_name,','''')',',\'"\')')) INTO @Sels
            from information_schema.columns
            where table_schema=in_db and table_name=tablename;

            # Comma separated column names - used for Group By
            select group_concat('`',column_name,'`') INTO @Whrs
            from information_schema.columns
            where table_schema=in_db and table_name=tablename;

            # Main Select Statement for fetching comma separated table values
            set @Inserts=concat("select concat('insert into ", in_db,".",tablename,
                " values(',concat_ws(',',",@Sels,"),');') from ", in_db,".",tablename,
                " where ", in_ColumnName, " = " , in_ColumnValue,
                " group by ",@Whrs, ";");

            PREPARE Inserts FROM @Inserts;
            EXECUTE Inserts;
        END $$
        DELIMITER ;
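    A sketch of one way to do it: alias the inner concat(...) inside the dynamic SQL, exactly as the `as MyColumn` variant in the previous question already does. The alias name below is arbitrary:

        set @Inserts=concat("select concat('insert into ", in_db,".",tablename,
            " values(',concat_ws(',',",@Sels,"),');') as InsertStatement from ", in_db,".",tablename,
            " where ", in_ColumnName, " = " , in_ColumnValue,
            " group by ",@Whrs, ";");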

  • “Function” calling inside stored procedure

    - by idimba
    Hi, I have a big stored procedure that contains a lot of INSERTs. Many of the INSERTs are almost identical - they differ only by some parameter(s), and all insert into the same table. Is there a way to create a function/method to which I can pass those parameter(s), so that it generates the concrete INSERTs? Thanks
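    A minimal sketch of the usual workaround, assuming MySQL-style syntax and hypothetical table/column names: wrap the repeated INSERT in its own procedure and CALL it with the varying values:

        DELIMITER $$
        CREATE PROCEDURE ins_row(IN p_col1 INT, IN p_col2 VARCHAR(50))
        BEGIN
            -- the one INSERT that used to be copy-pasted, now parameterised
            INSERT INTO my_table (col1, col2) VALUES (p_col1, p_col2);
        END $$
        DELIMITER ;

        -- inside the big procedure, each near-identical INSERT becomes:
        CALL ins_row(1, 'a');
        CALL ins_row(2, 'b');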

  • Designing small comparable objects

    - by Thomas Ahle
    Intro: Consider you have a list of key/value pairs: (0,a) (1,b) (2,c). You have a function that inserts a new value between two current pairs, and you need to give it a key that keeps the order: (0,a) (0.5,z) (1,b) (2,c). Here the new key was chosen as the average of the keys of the bounding pairs. The problem is that your list may see millions of inserts. If these inserts all land close to each other, you may end up with keys such as 2^(-1000000), which are not easily storable in any standard or special number class.

    The problem: How can you design a system for generating keys that (1) gives the correct result (larger/smaller than) when compared to all the rest of the keys, and (2) takes up only O(log n) memory (where n is the number of items in the list)?

    My tries: First I tried different number classes, like fractions and even polynomials, but I could always find examples where the key size would grow linearly with the number of inserts. Then I thought about saving pointers to a number of other keys and storing the lower/greater-than relationships, but that would always require at least O(sqrt(n)) memory and time for comparison.

    Extra info: Ideally the algorithm shouldn't break when pairs are deleted from the list.
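    A small Python sketch of the growth problem described above: repeatedly inserting at the same spot and keying by the midpoint makes the denominator double on every insert, so a single key ends up needing O(n) bits rather than O(log n):

        from fractions import Fraction

        lo, hi = Fraction(0), Fraction(1)
        for i in range(50):
            mid = (lo + hi) / 2   # key chosen for the newly inserted pair
            hi = mid              # keep inserting just below the previous key
        print(mid)                # denominator is 2**50 after only 50 inserts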

  • Can one connection get details of another? Or, how can I get the most detailed pending transaction

    - by bob-the-destroyer
    Is there a MySQL statement which provides full details of any other open connection or user? In this particular case, for MyISAM tables specifically. Looking at MySQL's SHOW TABLE STATUS documentation, it's missing some very important information for my purpose. For example: remote ODBC connection one is inserting several thousand records, which due to a slow connection speed can take up to an hour. TCP connection two, using PHP on the server's localhost, is running select queries with aggregate functions on that data. Before allowing connection two to run those queries, I'd like connection two to first check that there are no pending inserts on those specific tables from any other connection, so it can instead wait until all data is available. If the table is currently being written to, I'd like to report back to the user of connection two an approximation of how much longer to wait, based on the number of pending inserts. Ideally, per table, I'd like to get back via a query the timestamp when connection one began the write, the total inserts left to be done, and the total inserts already completed. Instead of insert counts, even knowing the number of bytes written and left to write would work just fine here.

    Obviously, since connection two is a TCP connection via a PHP script, all I can really use in that script is some sort of query. I suppose if I have to, since it is on localhost, I can exec() it if the only way is via a mysql command-line option that outputs this info, but I'd rather not. I suppose I could simply update a custom-made transaction log before and after this massive insert task, which the PHP script can check, but hopefully there's already a built-in MySQL feature I can take advantage of.
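    As a sketch of what is readily available: connection two can at least see what every other connection is currently running, and for how long, via the process list; on MySQL 5.1+ the same data can be queried from INFORMATION_SCHEMA (the table name in the filter below is hypothetical). This does not give per-table insert counts or progress, only the currently executing statement and its elapsed time:

        SHOW FULL PROCESSLIST;

        -- or, where INFORMATION_SCHEMA.PROCESSLIST is available:
        SELECT id, user, db, time, state, info
        FROM information_schema.processlist
        WHERE info LIKE 'INSERT INTO mytable%';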

  • How to remove empty tables from a MySQL backup file.

    - by user280708
    I have multiple large MySQL backup files, all from different DBs and having different schemas. I want to load the backups into our EDW, but I don't want to load the empty tables. Right now I'm cutting out the empty tables using AWK on the backup files, but I'm wondering if there's a better way to do this. If anyone is interested, this is my AWK script. EDIT: I noticed today that this script has some problems, please beware if you want to actually try to use it. Your output may be WRONG... I will post my changes as I make them.

        # File: remove_empty_tables.awk
        # Copyright (c) Northwestern University, 2010
        # http://edw.northwestern.edu
        /^--$/ {
            i = 0; line[++i] = $0; getline
            if ($0 ~ /-- Definition/) {
                inserts = 0;
                while ($0 !~ / ALTER TABLE .* ENABLE KEYS /) {
                    # If we already have an insert:
                    if (inserts > 0)
                        print
                    else {
                        # If we found an INSERT statement, the table is NOT empty:
                        if ($0 ~ /^INSERT /) {
                            ++inserts
                            # Dump the lines before the INSERT and then the INSERT:
                            for (j = 1; j <= i; ++j)
                                print line[j]
                            i = 0
                            print $0
                        }
                        # Otherwise we may yet find an insert, so save the line:
                        else
                            line[++i] = $0
                    }
                    getline   # go to the next line
                }
                line[++i] = $0; getline
                line[++i] = $0; getline
                if (inserts > 0) {
                    for (j = 1; j <= i; ++j)
                        print line[j]
                    print $0
                }
                next
            } else {
                print "--"
            }
        }
        { print }
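    For reference, a typical invocation of the script against a dump (file names are illustrative):

        awk -f remove_empty_tables.awk full_backup.sql > full_backup_nonempty.sql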

  • Powershell: Conditionally changing objects in the pipeline

    - by axk
    I'm converting a CSV to SQL inserts, and there's a nullable text column which I need to quote in case it is not NULL. I would write something like the following for the conversion:

        Import-Csv data.csv | foreach {
            "INSERT INTO TABLE_NAME (COL1,COL2) VALUES ($($_.COL1),$($_.COL2));" >> inserts.sql
        }

    But I can't figure out how to add an additional tier into the pipeline that checks whether COL2 is not equal to 'NULL' and quotes it in such cases. How do I achieve such behavior?
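    A minimal sketch of one way to do the conditional quoting inside the same foreach block, with no extra pipeline stage; column and file names as in the question:

        Import-Csv data.csv | foreach {
            $col2 = if ($_.COL2 -eq 'NULL') { 'NULL' } else { "'$($_.COL2)'" }
            "INSERT INTO TABLE_NAME (COL1,COL2) VALUES ($($_.COL1),$col2);" >> inserts.sql
        }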

  • SQL Server 2005: Improving performance for thousands of insert requests; logout-login time = 120ms

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is created by fetching it from the pool based on the ConnectionString argument. I can see that every 15ms I get 100-200 logins/logouts per second (reported at the same time by Profiler), and after another 15ms I again get a series of 100-200 logins/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment.

    I use Enterprise Library 2006; the code is compiled with VS 2005 and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and calls 2 stored procedures on a remote SQL Server 2005: the first inserts a parent record and retrieves the Identity value, which is then used to call the second stored procedure 1, 2 or more times (sometimes several thousand) to insert child records. The child table has close to 10 million records, with 5-10 indexes, some of them covering non-clustered. There is a pretty complex insert trigger that copies each inserted detail record to an archive table. All in all I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records. When I run Profiler on the test server (which is almost equivalent to the production server) I can see about 120ms between the Audit Logout and Audit Login trace entries, which works out to a chance to insert only about 8 records per second.

    So my question is whether there is some way to improve the inserting of records, since the company loads 100 thousand records, does daily planning, and has an SLA to fulfill client requests coming in as flat-file orders; some big files of 10 thousand rows have to be imported quickly, and 4 hours to import 60 thousand records should be reduced to 30 minutes. I was thinking of using the BatchSize of a DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe that has to take some time to finish. What is worse is that the company uses the biggest table for reporting and other online processing, so indexes cannot be dropped. I manage transactions manually by setting a field to a value and doing a transactional update changing that value to a new value that other applications use to get committed rows.

    Please advise how to approach this problem. For now I am trying to use staging tables with minimal logging in a separate database and no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB has the simple recovery model, but it could be full recovery. If the DB user used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many logins/logouts. Why?
        for (int i = 1; i <= 10000; i++)
        {
            using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;"))
            {
                conn.Open();
                using (SqlCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "use tempdb";
                    cmd.ExecuteNonQuery();
                }
            }
        }

    SQL Server Profiler trace:

        Audit Login                             master  2010-01-13 23:18:45.337  1 - Nonpooled
        SQL:BatchStarting   use tempdb          master  2010-01-13 23:18:45.337
        RPC:Starting        exec sp_reset_conn  tempdb  2010-01-13 23:18:45.337
        Audit Logout                            tempdb  2010-01-13 23:18:45.337  2 - Pooled
        Audit Login   -- network protocol       master  2010-01-13 23:18:45.383  2 - Pooled
        SQL:BatchStarting   use tempdb          master  2010-01-13 23:18:45.383
        RPC:Starting        exec sp_reset_conn  tempdb  2010-01-13 23:18:45.383
        Audit Logout                            tempdb  2010-01-13 23:18:45.383  2 - Pooled
        Audit Login   -- network protocol       master  2010-01-13 23:18:45.383  2 - Pooled
        SQL:BatchStarting   use tempdb          master  2010-01-13 23:18:45.383
        RPC:Starting        exec sp_reset_conn  tempdb  2010-01-13 23:18:45.383
        Audit Logout                            tempdb  2010-01-13 23:18:45.383  2 - Pooled
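    A rough sketch of the SqlBulkCopy route mentioned above, loading the child rows into a staging table in one call instead of thousands of stored-procedure round trips; table and column names are hypothetical:

        // childRows is a DataTable shaped like the staging table
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
            {
                bulk.DestinationTableName = "dbo.ChildStaging";
                bulk.BatchSize = 5000;        // rows per batch sent to the server
                bulk.BulkCopyTimeout = 0;     // no timeout for the big load
                bulk.WriteToServer(childRows);
            }
        }
        // a single set-based INSERT ... SELECT can then move the staged rows into the real child table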

  • LINQ to SQL SubmitChanges() progress

    - by Emir
    I'm using LINQ to SQL to import old DBF files into MSSQL. I'm reading all rows and adding them to the database using ctx.MyTable.InsertOnSubmit(row). After the reading phase is completed I have around 100,000 pending inserts, so ctx.SubmitChanges() naturally takes a long time. Is there any way to track the progress of ctx.SubmitChanges()? Can ctx.Log somehow be used for this purpose? Update: Is it possible to use ctx.GetChangeSet().Inserts.Count and track insert statements using the Log? Dividing ctx.SubmitChanges() into smaller chunks does not work for me, because I need a transaction - all or nothing. Update 2: I've found the nice ActionTextWriter class, which I will try to use to count inserts: http://damieng.com/blog/2008/07/30/linq-to-sql-log-to-debug-window-file-memory-or-multiple-writers
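    A sketch of the counting idea: point ctx.Log at a TextWriter that counts INSERT statements as SubmitChanges() writes them out, and compare that against ctx.GetChangeSet().Inserts.Count for a rough progress figure. The class name and the "starts with INSERT" heuristic are assumptions:

        class InsertCountingWriter : System.IO.TextWriter
        {
            public int Inserts;
            public override System.Text.Encoding Encoding
            {
                get { return System.Text.Encoding.UTF8; }
            }
            public override void Write(char[] buffer, int index, int count)
            {
                Write(new string(buffer, index, count));
            }
            public override void Write(string value)
            {
                if (value != null && value.TrimStart().StartsWith("INSERT"))
                    Inserts++;   // one more statement logged by SubmitChanges()
            }
        }

        // usage sketch:
        // var writer = new InsertCountingWriter();
        // ctx.Log = writer;
        // ctx.SubmitChanges();   // poll writer.Inserts from another thread for progress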

  • Do MyISAM holes get filled in automatically?

    - by NNN
    When you run a delete on a MyISAM table, it leaves a hole in the table until the table is optimized. This affects concurrent inserts. On the default setting, concurrent inserts only work for tables without holes. However, in the MyISAM documentation, under the concurrent_insert section, it says: Enables concurrent inserts for all MyISAM tables, even those that have holes. For a table with a hole, new rows are inserted at the end of the table if it is in use by another thread. Otherwise, MySQL acquires a normal write lock and inserts the row into the hole. http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_concurrent_insert Does that mean MyISAM automatically fills in holes whenever a new row is inserted into the table? Previously I thought the holes would not be filled until you OPTIMIZE the table.
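    For reference, a small sketch of inspecting and changing the setting the quoted passage is describing, and of explicitly compacting the holes (table name is hypothetical):

        SHOW VARIABLES LIKE 'concurrent_insert';
        SET GLOBAL concurrent_insert = 2;   -- the "even those that have holes" behaviour quoted above
        OPTIMIZE TABLE mytable;             -- explicitly reclaims the holes left by deletes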

  • Jet Database (ms access) ExecuteNonQuery - Can I make it faster?

    - by bluebill
    Hi all, I have this generic routine that I wrote that takes a list of SQL strings and executes them against the database. Is there any way I can make this work faster? Typically it'll see maybe 200 inserts, deletes or updates at a time, and sometimes a mixture of updates, inserts and deletes. Would it be a good idea to separate the queries by type (i.e. group inserts together, then updates, then deletes)? I am running this against an MS Access database using VB.NET 2005.

        Public Function ExecuteNonQuery(ByVal sql As List(Of String), ByVal dbConnection As String) As Integer
            If sql Is Nothing OrElse sql.Count = 0 Then Return 0
            Dim recordCount As Integer = 0
            Using connection As New OleDb.OleDbConnection(dbConnection)
                connection.Open()
                Dim transaction As OleDb.OleDbTransaction = connection.BeginTransaction()
                'Using cmd As New OleDb.OleDbCommand()
                Using cmd As OleDb.OleDbCommand = connection.CreateCommand
                    cmd.Connection = connection
                    cmd.Transaction = transaction
                    For Each s As String In sql
                        If Not String.IsNullOrEmpty(s) Then
                            cmd.CommandText = s
                            recordCount += cmd.ExecuteNonQuery()
                        End If
                    Next
                    transaction.Commit()
                End Using
            End Using
            Return recordCount
        End Function
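    One thing worth trying, sketched below in the same VB.NET style: when many of the statements are the same INSERT with different values, reuse a single prepared, parameterised command inside the transaction instead of re-parsing a fresh SQL string for every row. Table, column and value names here are hypothetical:

        Using cmd As OleDb.OleDbCommand = connection.CreateCommand()
            cmd.Transaction = transaction
            cmd.CommandText = "INSERT INTO MyTable (Col1, Col2) VALUES (?, ?)"
            cmd.Parameters.Add("p1", OleDb.OleDbType.Integer)
            cmd.Parameters.Add("p2", OleDb.OleDbType.VarWChar, 255)
            cmd.Prepare()
            For Each item As KeyValuePair(Of Integer, String) In values
                cmd.Parameters(0).Value = item.Key
                cmd.Parameters(1).Value = item.Value
                recordCount += cmd.ExecuteNonQuery()
            Next
        End Using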

  • Inserting Multiple Records in SQL2000

    - by Chris M
    I have a web app that currently inserts x (between 1 and 40) records into a table with about 5 fields, via a LINQ-to-SQL stored procedure call in a loop. Would it be better to manually write the SQL inserts to, say, a StringBuilder and run them against the database once the loop has completed, rather than issuing up to 40 separate calls? Or should I just accept that this is negligible for such a small number of inserts?
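    If the batching route is worth trying, a rough C# sketch of what the question describes - concatenating the calls and sending them in one round trip (procedure, property and variable names are illustrative, and parameterising is still preferable to string literals):

        var sb = new System.Text.StringBuilder();
        foreach (var r in records)   // records: whatever the loop currently iterates over
        {
            sb.AppendFormat("EXEC dbo.InsertMyRow {0}, '{1}';", r.Id, r.Name.Replace("'", "''"));
        }
        using (var cmd = new SqlCommand(sb.ToString(), connection))
        {
            cmd.ExecuteNonQuery();   // one round trip instead of one per record
        }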

  • Problem loading images from the DOM

    - by bappyiub
    I have a problem loading images using jQuery. My program both inserts and deletes images from the same form. When I delete an image and then insert another, after the jQuery function runs, the deleted image is still shown. I have found an inconsistency between the DOM and the actual location: the browser loads the images from the DOM (cached copy), not from the actual location. Is there any function that will force the browser to re-read from the actual location, or delete the stale images held in the DOM?
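    If the stale picture is really coming from the browser cache rather than the DOM itself, a common jQuery-side sketch is to cache-bust the src when re-inserting the image; the selector and path below are illustrative:

        // force a fresh request instead of the cached copy
        $('#photo').attr('src', 'uploads/photo.jpg?t=' + new Date().getTime());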

  • What sql server isolation level should I choose to prevent concurrent reads?

    - by Brian Bolton
    I have the following transaction: (1) SQL inserts 1 new record into a table called tbl_document; (2) SQL deletes all records matching a criterion in another table called tbl_attachment; (3) SQL inserts multiple records into tbl_attachment. Until this transaction finishes, I don't want other users to be aware of the new records in tbl_document, the deleted records in tbl_attachment, or the modified records in tbl_attachment. Would Read Committed isolation be the correct isolation level?
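    A sketch of the shape this usually takes: as long as the three statements run inside one transaction, readers at READ COMMITTED will not see the uncommitted changes. Column names and the criterion below are purely illustrative:

        SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- the default; shown for clarity
        BEGIN TRANSACTION;
            INSERT INTO tbl_document (title) VALUES ('new doc');
            DELETE FROM tbl_attachment WHERE document_id = 42;
            INSERT INTO tbl_attachment (document_id, name) VALUES (42, 'a.pdf');
        COMMIT TRANSACTION;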

  • How to distinguish between <expr> and non-<expr> mappings?

    - by ZyX
    I want to add the possibility of restoring mappings overwritten by my plugin, but the problem is that I cannot distinguish between the following mappings: inoremap <expr> @ test and inoremap @ test. The first mapping inserts the contents of the variable test, while the second inserts the text «test». Both mappings give maparg("@", 'i') == "test" and identical output from :inoremap.
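    In Vim versions where maparg() accepts the optional {dict} argument, the returned dictionary carries an 'expr' flag, which is enough to tell the two apart - a small sketch:

        " returns a dictionary describing the mapping instead of just its RHS
        let s:info = maparg('@', 'i', 0, 1)
        if get(s:info, 'expr', 0)
            echo 'this is an <expr> mapping'
        else
            echo 'ordinary mapping'
        endif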

  • primary key datatype in sql server database

    - by ooo
    I see that after installing the ASP.NET membership tables, they use the "uniqueidentifier" data type for all of the primary key fields. I have been using the "int" data type and incrementing by one on inserts. Are there any particular benefits to using the uniqueidentifier data type compared to my current model of using int and auto-increment on new inserts?

  • How to activate a trigger after all the bulk inserts in MySQL

    - by JPro
    I am using MySQL and there are bulk inserts going into my table. My concern is that if I create an AFTER INSERT trigger, it will fire for every inserted row, which I do not want to happen. Is there any way to activate a trigger only after all the bulk inserts are completed? Any advice? Thanks.
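    MySQL triggers are per-row only (there is no statement- or batch-level trigger), so one common workaround, sketched below with hypothetical names, is to guard the trigger body with a session variable during the bulk load and do the "after the load" work once, explicitly, at the end:

        DELIMITER $$
        CREATE TRIGGER my_table_ai AFTER INSERT ON my_table
        FOR EACH ROW
        BEGIN
            IF @bulk_loading IS NULL THEN
                -- normal per-row work here
                UPDATE my_summary SET total = total + NEW.amount;
            END IF;
        END $$
        DELIMITER ;

        -- around the bulk load:
        SET @bulk_loading = 1;
        -- ... bulk INSERTs ...
        SET @bulk_loading = NULL;
        CALL refresh_summary();   -- hypothetical procedure doing the work once, after the load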

  • Lock innoDB table temporarily

    - by Industrial
    Hi everyone, I do large inserts consisting of a couple of thousand rows in my current web app, and I would like to make sure that no one can do anything but read the table until the inserts have finished. What is the best way to do this while keeping read availability open for normal, non-admin users? Thanks!
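    If the underlying goal is that readers never see a half-finished batch, a sketch of the usual InnoDB answer is simply a transaction: readers keep a consistent snapshot and are never locked out, and the new rows only become visible at COMMIT. Note this does not stop other sessions from writing to unrelated rows; it only keeps the partial batch invisible. Table and column names are illustrative:

        START TRANSACTION;
        INSERT INTO my_table (col1, col2) VALUES (1, 'a'), (2, 'b');  -- thousands of rows in practice
        -- ... more batched INSERTs ...
        COMMIT;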

  • Mysql InnoDB performance optimization and indexing

    - by Davide C
    Hello everybody, I have 2 databases and I need to link information between two big tables (more than 3M entries each, continuously growing). The 1st database has a table 'pages' that stores various information about web pages, including the URL of each one. The column 'URL' is a varchar(512) and has no index. The 2nd database has a table 'urlHops' defined as:

        CREATE TABLE urlHops (
            dest varchar(512) NOT NULL,
            src varchar(512) DEFAULT NULL,
            timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY dest_key (dest),
            KEY src_key (src)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    Now, I basically need to issue (efficiently) queries like this:

        select p.id, p.URL from db1.pages p, db2.urlHops u where u.src = p.URL and u.dest = ?

    At first, I thought of adding an index on pages(URL). But it's a very long column, and I already issue a lot of INSERTs and UPDATEs on that table (way more than the number of SELECTs I would do using this index). Other possible solutions I thought of are:

    - adding a column to pages storing the md5 hash of the URL and indexing it; this way I could do queries using the md5 of the URL, with the advantage of an index on a smaller column.
    - adding another table that contains only page id and page URL, indexing both columns. But this is maybe a waste of space, having only the advantage of not slowing down the inserts and updates I execute on 'pages'.

    I don't want to slow down the inserts and updates, but at the same time I want to be able to run the queries on the URL efficiently. Any advice? My primary concern is performance; if needed, wasting some disk space is not a problem. Thank you, regards Davide
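    A sketch of the first idea listed above (hash column plus index), with hypothetical column and key names; both sides need the hash so the join can use the narrow index:

        ALTER TABLE db1.pages   ADD COLUMN url_md5 CHAR(32) NOT NULL DEFAULT '', ADD KEY url_md5_key (url_md5);
        ALTER TABLE db2.urlHops ADD COLUMN src_md5 CHAR(32) NOT NULL DEFAULT '', ADD KEY src_md5_key (src_md5);

        UPDATE db1.pages   SET url_md5 = MD5(URL);
        UPDATE db2.urlHops SET src_md5 = MD5(src);

        -- the lookup then becomes:
        SELECT p.id, p.URL
        FROM db1.pages p JOIN db2.urlHops u ON u.src_md5 = p.url_md5
        WHERE u.dest = ?;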

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table ([id] is an identity):

        [id]  [parent_id]  [path]
        1     NULL         1
        2     1            1-2
        3     1            1-3
        4     3            1-3-4

    My goal is to query quickly for multiple rows of this table and view the full path of each node from its root, through its superiors, down to itself. The ultimate question is: should I generate this path on insert and maintain it in its own column, or generate it at query time to save disk space? I guess it depends on whether this table is write-heavy or read-heavy. I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship, and I just can't seem to settle on one. This "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement this "path":

    1. AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted row's identity
    2. INSTEAD OF INSERT TRIGGER - does not require the insert to pass a NULL path, but does require the trigger to insert with a NULL path and update the path for the record at SCOPE_IDENTITY()
    3. STORED PROCEDURE - requires all inserts into this table to go through a stored procedure implementing the trigger logic
    4. VIEW - requires building the path in the view

    1 and 2 seem annoying if massive amounts of data are entered at once. 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. 1, 2, and 3 require maintaining a path column on the table. 4 removes all the limitations of the above, but requires the view to perform the path logic and requires use of the view whenever a path is to be displayed. I have successfully implemented all of the above approaches and I'm mainly looking for some advice. Am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.
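    For option 4, a sketch of a SQL Server 2005 recursive CTE that derives the path at query time, so no path column has to be maintained; the table is assumed to be named tree here:

        WITH tree_path (id, parent_id, path) AS
        (
            SELECT id, parent_id, CAST(id AS varchar(900))
            FROM tree
            WHERE parent_id IS NULL
            UNION ALL
            SELECT c.id, c.parent_id,
                   CAST(p.path + '-' + CAST(c.id AS varchar(12)) AS varchar(900))
            FROM tree c
            JOIN tree_path p ON c.parent_id = p.id
        )
        SELECT id, parent_id, path FROM tree_path;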

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9M on disk, 10M of indexes according to pgAdmin. The problem is that inserting them, by whatever method, literally takes ages - up to 3 minutes of 100% disk busy time. That's not something you want on a production site. It doesn't matter whether the inserts are in a transaction, or issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20M on disk. What is weird is that PostgreSQL manages to slow down the inserts by 180x just by using 3 foreign keys. Oh, and disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. There was no WAL for the 180s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success - same slowness - and I had even implemented multi-row INSERT and COPY FROM with psycopg2. That's another weird thing: how can 10M worth of inserts generate 10x 16M log segments?

    Table layout: id serial primary key, a bunch of int32s, and 3 foreign keys to:

    - a small table, 198 rows, 16k on disk
    - a large table, 1.2M rows, 59 data + 89 index MB on disk
    - a large table, 2.2M rows, 198 + 210MB

    So, am I doomed to either drop the foreign keys manually, or use the table in a very un-Django way by defining and saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
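    If dropping the constraints around the load turns out to be the pragmatic answer, it does not have to be permanent or manual - a sketch of doing it inside the hourly rebuild, with hypothetical constraint and column names, all in one transaction since PostgreSQL DDL is transactional:

        BEGIN;
        ALTER TABLE hourly_table DROP CONSTRAINT hourly_table_big1_fkey;
        ALTER TABLE hourly_table DROP CONSTRAINT hourly_table_big2_fkey;
        ALTER TABLE hourly_table DROP CONSTRAINT hourly_table_small_fkey;

        -- bulk load here, e.g. COPY or INSERT INTO hourly_table SELECT ...

        ALTER TABLE hourly_table ADD CONSTRAINT hourly_table_big1_fkey  FOREIGN KEY (big1_id)  REFERENCES big1 (id);
        ALTER TABLE hourly_table ADD CONSTRAINT hourly_table_big2_fkey  FOREIGN KEY (big2_id)  REFERENCES big2 (id);
        ALTER TABLE hourly_table ADD CONSTRAINT hourly_table_small_fkey FOREIGN KEY (small_id) REFERENCES small (id);
        COMMIT;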
