Search Results

Search found 36186 results on 1448 pages for 'sql 11'.


  • PLPGSQL : How to return a record from function executed by INSERT/UPDATE rule?

    - by seas
    I have the following schema for my database:

        create sequence data_sequence;

        create table data_table (
          id integer primary key,
          field varchar(100)
        );

        create view data_view as
          select id, field from data_table;

        create function data_insert(_new data_view) returns data_view as $$
        declare
          _id integer;
          _result data_view%rowtype;
        begin
          _id := nextval('data_sequence');
          insert into data_table(id, field) values(_id, _new.field);
          select * into _result from data_view where id = _id;
          return _result;
        end;
        $$ language plpgsql;

        create rule insert as on insert to data_view do instead select data_insert(new);

    Then I type in psql:

        insert into data_view(field) values('abc');

    I would like to see something like:

         id | field
        ----+-------
          1 | abc

    Instead I see:

         data_insert
        -------------
         (1,"abc")

    Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I can obtain the id of a just-inserted record without selecting for it from scratch. Something like:

        insert into data_view(field) values('abc') returning id into my_variable

    would be nice, but it fails with:

        ERROR: cannot perform INSERT RETURNING on relation "data_view"
        HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

    I don't really understand that HINT. I use PostgreSQL 8.4.
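
    The HINT points at the shape of the rule: an ON INSERT DO INSTEAD rule that ends in a SELECT cannot support INSERT ... RETURNING. A minimal sketch of what it seems to ask for, replacing the function-call rule with an unconditional INSERT rule that carries its own RETURNING clause (rule name is illustrative, untested against 8.4):

        create rule data_view_insert as on insert to data_view do instead
          insert into data_table(id, field)
          values (nextval('data_sequence'), new.field)
          returning id, field;

    With a rule of this shape, insert into data_view(field) values('abc') returning id should hand the generated id straight back to the caller.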

    Read the article

  • MySQL index building performance

    - by Christian
    I tried to build an index over two columns of a table with 30,000,000 rows. I cancelled the process after roughly 60 hours, as it didn't seem to be getting anywhere. For some reason MySQL used only 22 MB of RAM instead of making use of the available memory. Is index building an operation that needs no RAM, or is there some way to tell MySQL to use more RAM so it finishes faster?
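
    If the table is MyISAM, the index build goes through a sort whose memory is capped by server variables rather than by available RAM, so raising those variables before the build is the usual lever. A sketch under that assumption (table, column names and sizes are illustrative):

        -- Allow larger in-memory buffers for the index build (MyISAM).
        SET GLOBAL myisam_sort_buffer_size = 256 * 1024 * 1024;
        SET GLOBAL key_buffer_size         = 256 * 1024 * 1024;

        ALTER TABLE big_table ADD INDEX idx_two_cols (col_a, col_b);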

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data warehousing application that stores descriptive data (the User ID, Work ID, Machine ID, and Event Start/End Time columns in the first table below) associated with time and production quantity data (the Output and Time columns), upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 |   2120 |  930 |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation we would like to do would transform the table content to a granularity of minutes, rather than the current production-event ("Event Start Time" to "Event End Time") granularity. The reprocessed rows would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34  |    133 |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing would take an existing row of data created at the granularity of a production event and change the granularity to minutes, eliminating the now-redundant (Event End Time, Time) columns as it goes. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity: there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
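
    One way to do this entirely in MySQL is to join the event table to a helper table of minute offsets. A sketch, assuming a table minutes(n) pre-filled with integers 0..1439 and hypothetical table/column names:

        INSERT INTO production_minutes (user_id, work_id, machine_id, production_minute, output)
        SELECT e.user_id,
               e.work_id,
               e.machine_id,
               e.event_start_time + INTERVAL m.n MINUTE,
               ROUND(e.output / (TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time) + 1))
        FROM production_events e
        JOIN minutes m
          ON m.n <= TIMESTAMPDIFF(MINUTE, e.event_start_time, e.event_end_time);

    The join fans each event row out into one row per elapsed minute, which also covers the hundreds-of-machines case, since every event row expands independently.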

    Read the article

  • How to configure oracle instantclient for mono?

    - by funwithcoding
    Mono is really awesome. Some of our applications worked on Linux out of the box, even without recompiling the binary. However, I am having a tough time configuring Oracle Instant Client for use with Mono. I installed Instant Client on a CentOS VM (from the instantclient RPM), but I did not find TNSNAMES.ORA anywhere. I searched for oracle and found that the following path contains the Oracle libraries:

        [root@bagvapp rupert]# ll /usr/lib/oracle/11.2/client/lib/
        total 143280
        -rw-r--r-- 1 root root     7456 Aug 14  2009 cobsqlintf.o
        -rw-r--r-- 1 root root      342 Aug 14  2009 glogin.sql
        lrwxrwxrwx 1 root root       17 Mar  9 06:52 libclntsh.so -> libclntsh.so.11.1
        -rw-r--r-- 1 root root 40088477 Aug 14  2009 libclntsh.so.11.1
        -rw-r--r-- 1 root root  6986848 Aug 14  2009 libnnz11.so
        lrwxrwxrwx 1 root root       15 Mar  9 06:52 libocci.so -> libocci.so.11.1
        -rw-r--r-- 1 root root  1879549 Aug 14  2009 libocci.so.11.1
        -rw-r--r-- 1 root root 89377610 Aug 14  2009 libociei.so
        -rw-r--r-- 1 root root   152304 Aug 14  2009 libocijdbc11.so
        -rw-r--r-- 1 root root  1501651 Aug 14  2009 libsqlplusic.so
        -rw-r--r-- 1 root root  1218075 Aug 14  2009 libsqlplus.so
        -rw-r--r-- 1 root root   777979 Aug 14  2009 libsqora.so.11.1
        -rw-r--r-- 1 root root  1996228 Aug 14  2009 ojdbc5.jar
        -rw-r--r-- 1 root root  2111220 Aug 14  2009 ojdbc6.jar
        -rw-r--r-- 1 root root   298388 Aug 14  2009 ottclasses.zip
        drwxr-xr-x 3 root root     4096 Mar  9 06:52 precomp
        -rw-r--r-- 1 root root    37807 Aug 14  2009 xstreams.jar

    There is no TNSPING available and no TNSNAMES.ORA. Now, how do I configure Mono to use this as the Oracle client, and how do I specify the Oracle database in the app.config connection-string section? Though Mono is a powerful framework, it seems to share a problem Linux often has: the documentation is only available in mailing lists, and whatever is on the official site is either outdated or unclear for a normal user. I hope things change soon and Mono becomes THE programming framework for Linux.

    Read the article

  • libvirt + ESX (HTTP response code 400 for call to 'Login')

    - by Coops
    I'm trying to connect to a vSphere cluster using the information from the libvirt documentation:

        $ virsh -c "vpx://[email protected]/dc1/dc1-cluster-e01/dc1-vsphere-e04/?no_verify=1"
        Enter root's password for 10.51.4.11:
        error: internal error HTTP response code 400 for call to 'Login'
        error: failed to connect to the hypervisor

    I seem to be able to establish a connection, but it fails with an HTTP 400. If I provide an incorrect password it fails with a 'login credentials' error instead, so it looks like I am reaching the server and the connection is failing for some other reason. Wireshark is no help, as it's all done over SSL/TLS. Any thoughts, folks?

    UPDATE 15:21 28/02/11: FYI, I'm running libvirt 0.8.3 (the Ubuntu package recompiled with the ESX flag enabled). When I put virsh into debug mode it returns this:

        [snip]
        Enter root's password for 10.51.4.11:
        15:19:09.011: debug : do_open:1249 : driver 3 ESX returned ERROR
        15:19:09.011: debug : virUnrefConnect:294 : unref connection 0x98aa8f8 1
        15:19:09.011: debug : virReleaseConnect:249 : release connection 0x98aa8f8
        error: internal error HTTP response code 400 for call to 'Login'
        error: failed to connect to the hypervisor

    Read the article

  • Search select statement

    - by Nana
    I am creating a page that has several fields for the user to search by, e.g.:

        Grade:        -dropdownlist1-
        Student name: -dropdownlist2-
        Student ID:   -dropdownlist3-
        Lessons:      -dropdownlist4-
        Year:         -dropdownlist5-

    How do I write the select statement for this? Each dropdownlist would need its own select statement extracting different data from the database, but I want to write ONE select statement that dynamically honours whichever dropdownlist options are chosen, instead of writing many, many select statements. Say, for example:

        Grade:        -dropdownlist1-  default value (all)
        Student name: -dropdownlist2-  default value (all)
        Student ID:   -dropdownlist3-  0-100 is chosen
        Lessons:      -dropdownlist4-  A-C is chosen
        Year:         -dropdownlist5-  2009 is chosen
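
    A common pattern for this is a single "catch-all" query in which each parameter is either a value or NULL, and NULL means "don't filter". A sketch with hypothetical table, column and parameter names (SQL Server-style parameters):

        SELECT s.*
        FROM students s
        WHERE (@grade       IS NULL OR s.grade = @grade)
          AND (@name        IS NULL OR s.name  = @name)
          AND (@id_from     IS NULL OR s.student_id BETWEEN @id_from     AND @id_to)
          AND (@lesson_from IS NULL OR s.lesson     BETWEEN @lesson_from AND @lesson_to)
          AND (@year        IS NULL OR s.year  = @year);

    Each dropdownlist binds its selection to one parameter, and the "(all)" default simply binds NULL.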

    Read the article

  • Trouble with LINQ databind to GridView and RowDataBound

    - by Michael
    Greetings all, I am working on redesigning my personal Web site using VS 2008 and have chosen LINQ for my data-access layer. Part of the site will be a little app to help manage my budget better. My first LINQ query executes successfully and displays in a GridView, but when I try to use a RowDataBound event to work with the results and refine them a bit, I get the error:

        The type or namespace name 'var' could not be found (are you missing a using directive or an assembly reference?)

    The interesting part is, if I put a var s = "s"; anywhere else in the same file, I get the same error, yet in other files in the web project var s = "s"; compiles fine. Here is the LINQ query and the page code:

        public static IQueryable pubGetRecentTransactions(int param_accountid)
        {
            clsDataContext db = new clsDataContext();
            var query = from d in db.tblMoneyTransactions
                        join p in db.tblMoneyTransactions on d.iParentTransID equals p.iTransID into dp
                        from p in dp.DefaultIfEmpty()
                        where d.iAccountID == param_accountid
                        orderby d.dtTransDate descending, d.iTransID ascending
                        select new
                        {
                            d.iTransID,
                            d.dtTransDate,
                            sTransDesc = p != null ? p.sTransDesc : d.sTransDesc,
                            d.sTransMemo,
                            d.mTransAmt,
                            d.iCheckNum,
                            d.iParentTransID,
                            d.iReconciled,
                            d.bIsTransfer
                        };
            return query;
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!this.IsPostBack)
            {
                this.prvLoadData();
            }
        }

        internal void prvLoadData()
        {
            prvCtlGridTransactions.DataSource = clsMoneyTransactions.pubGetRecentTransactions(2);
            prvCtlGridTransactions.DataBind();
        }

        protected void prvCtlGridTransactions_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                var datarow = e.Row.DataItem;
                var s = "s";
                e.Row.Cells[0].Text = datarow.dtTransDate.ToShortDateString();
                e.Row.Cells[1].Text = datarow.sTransDesc;
                e.Row.Cells[2].Text = datarow.mTransAmt.ToString("c");
                e.Row.Cells[3].Text = datarow.iReconciled.ToString();
            } // end if
        } // end RowDataBound

    My googling to date hasn't found a good answer, so I turn it over to this trusted community. I appreciate your time in assisting me.

    Read the article

  • Why index_merge is not used here?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned, b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?
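
    With only three rows, a full table scan is cheaper than any index strategy, so the optimizer has no reason to pick index_merge even though both indexes appear in possible_keys. A sketch of how to re-test at a more realistic size (assuming a meaningful volume of rows with reasonably distinct values has been loaded first):

        ANALYZE TABLE t;                               -- refresh index statistics
        EXPLAIN SELECT * FROM t WHERE a = 1 OR b = 4;
        -- At scale, one would expect type = index_merge with
        -- Extra = Using union(i_t_a,i_t_b); Using where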

    Read the article

  • LINQ: Create persistable Associations in Code, Without Foreign Key

    - by Alex
    Hello, I know that I can create LINQ associations without a foreign key. The problem is, I've been doing this by adding the [Association] attribute in the DBML file (same as through the designer), which gets erased again whenever I refresh my database (and reload the entire table structure). I know there is the MyData.cs file (part of the DBML) in which I can place my partial extensions etc. to domain objects (and they persist even after I refresh the DBML), but I don't know how to create an association there.

    Read the article

  • Historical / auditable database

    - by Mark
    Hi all, this question is related to the schema that can be found in one of my other questions here. Basically in my database I store users, locations, and sensors, amongst other things. All of these things are editable in the system by users, and deletable. However, when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change. There are also non-editable items in the database, such as "readings". They are more of a log, really. Readings are logged against sensors, because each is the reading for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes of a location or sensor were at the time of the reading. Basically, I should be able to reconstruct the data for any point in time. Now, I've done this before and got it working well by adding the following columns to each editable table: valid_from, valid_to, edited_by. If valid_to = 9999-12-31 23:59:59, then that's the current record. If valid_to equals valid_from, the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency. I can possibly avoid triggers by using an extension to PostgreSQL that provides a column type called "period", which lets you store a period of time between two dates and then define CHECK constraints to prevent overlapping periods. That might be an answer. I am wondering, though, if there is another way. I've seen people mention using special historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current" - i.e. only bother to check constraints on records where the valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers). Does anyone have any thoughts about this? PS - the title also mentions auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked so we could always see who changed a record. Not sure how much difference that might make. Thanks.
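
    For reference, a sketch of the versioning pattern described above, with hypothetical names (PostgreSQL):

        CREATE TABLE sensor_history (
            sensor_id  integer   NOT NULL,
            name       text      NOT NULL,
            valid_from timestamp NOT NULL,
            valid_to   timestamp NOT NULL DEFAULT '9999-12-31 23:59:59',
            edited_by  integer   NOT NULL,
            PRIMARY KEY (sensor_id, valid_from),
            CHECK (valid_from <= valid_to)
        );

        -- "Current" rows are simply:
        SELECT * FROM sensor_history WHERE valid_to = '9999-12-31 23:59:59';

    The CHECK constraint stops valid_from drifting past valid_to, but as noted in the question, plain constraints cannot stop two rows for the same sensor from overlapping in time; that is where the period type's overlap constraints (or triggers) come in.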

    Read the article

  • I need some help optimizing my database schema

    - by Steffan
    Here's a layout of my data:

        Heading 1:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 2:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 3:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 4:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 5:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading

    These headings need to have a 'Completion Status' boolean value which gets linked to a user ID. Currently, this is how my table looks:

        id | userID | field_1 | field_2 | field_3 | field_4 | etc...
        ------------------------------------------------------------
         1 |      1 |       0 |       0 |       1 |       0 |
         2 |      2 |       1 |       0 |       1 |       1 |

    Each field represents one sub heading. Having this many columns in my table looks awfully inefficient... How can I go about optimizing this? I can't think of any way to neaten it up :/
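
    The usual fix is to turn the columns into rows: one row per (user, sub heading) pair. A sketch with hypothetical names (MySQL):

        CREATE TABLE sub_heading (
            sub_heading_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            heading_id     INT NOT NULL,
            title          VARCHAR(100) NOT NULL
        );

        CREATE TABLE completion (
            user_id        INT NOT NULL,
            sub_heading_id INT NOT NULL,
            completed      TINYINT(1) NOT NULL DEFAULT 0,
            PRIMARY KEY (user_id, sub_heading_id)
        );

    Adding a sixth heading then means inserting rows, not altering the table.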

    Read the article

  • Advice about insert into SQLCE

    - by Alexander
    I am inserting about 1943 records into SQL CE with the function below. The parameters come from a StringReader (the string comes from a web service). The function executes 1943 times and takes about 20 seconds. I have dropped the table's indexes; what else can I do to improve it? I create myComm and the SqlCeResultSet just once.

        Public Function Insert_Function(ByVal f_Line() As String, ByRef myComm As SqlCeCommand, ByRef rs As SqlCeResultSet) As String
            Dim strerr_col As String = ""
            Try
                Dim rec As SqlCeUpdatableRecord = rs.CreateRecord()
                rec.SetInt32(0, IIf(f_Line(1) = "", DBNull.Value, f_Line(1)))
                rec.SetInt32(1, IIf(f_Line(2) = "", DBNull.Value, f_Line(2)))
                rec.SetInt32(2, IIf(f_Line(3) = "", DBNull.Value, f_Line(3)))
                rec.SetInt32(3, IIf(f_Line(4) = "", DBNull.Value, f_Line(4)))
                rec.SetValue(4, IIf(f_Line(5) = "", DBNull.Value, f_Line(5)))
                rs.Insert(rec)
                rec = Nothing
            Catch ex As Exception
                strerr_col = ex.Message
            End Try
            Return strerr_col
        End Function

    Read the article

  • How to call a scalar function in a stored procedure

    - by Luke101
    I am whacking my head over the problem with this code:

        DECLARE @root hierarchyid
        DECLARE @lastchild hierarchyid

        SELECT @root = NodeHierarchyID FROM NodeHierarchy WHERE ID = 1
        SET @lastchild = getlastchild(@root)

    It says it does not recognize the getlastchild function. What am I doing wrong here?
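
    If getlastchild is a user-defined scalar function, this error is expected: T-SQL requires scalar UDFs to be called with at least a two-part, schema-qualified name. A sketch assuming the function lives in the dbo schema:

        SET @lastchild = dbo.getlastchild(@root)

    Without the schema prefix, SQL Server only looks for a built-in function of that name.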

    Read the article

  • Acquiring Table Lock in Database - Interview Question

    - by harigm
    One of my interview questions: if multiple users across the world are accessing an application that uses a table whose primary key is an auto-increment field, how can you prevent one user from getting the same primary key while another user is executing? My answer was that I would obtain a lock on the table and make the other user wait until the first user is finished with the primary key. But then the questions continued: how do you acquire the table lock programmatically and implement this? And if 1,000 users hit the application every minute and you explicitly hold a lock on the table, won't the application become slower? How do you manage that? Please suggest possible answers to the above questions.
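
    For what it's worth, an auto-increment key is already generated atomically by the database engine, so two concurrent inserts cannot receive the same value and no explicit lock is needed for that. If the interviewer still wants the explicit-lock mechanics, a sketch in MySQL syntax (table and column names are placeholders):

        LOCK TABLES orders WRITE;          -- blocks other writers to the table
        INSERT INTO orders (customer_id) VALUES (42);
        SELECT LAST_INSERT_ID();           -- the key just generated for this session
        UNLOCK TABLES;

    The slowdown concern is exactly why table-level locking is avoided here: the engine's own key generation plus row-level locking scales far better.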

    Read the article

  • Bus Timetable database design

    - by paddydub
    Hi, I'm trying to design a DB to store the timetable for 300 different bus routes. Each route has a different number of stops and different times for Monday-Friday, Saturday and Sunday. I've represented the bus departure times for each route as follows; I'm not sure if I should have null values in the table. Does this look OK?

        route, Num, Day,     t1,    t2,    t3,    t4,    t5,    t6,    t7,    t8,    t9,    t10
        117,   1,   Monday,  9:00,  9:30,  10:50, 12:00, 14:00, 18:00, 19:00, null,  null,  null
        117,   2,   Monday,  9:03,  9:33,  10:53, 12:03, 14:03, 18:03, 19:03, null,  null,  null
        117,   3,   Monday,  9:06,  9:36,  10:56, 12:06, 14:06, 18:06, 19:06, null,  null,  null
        117,   4,   Monday,  9:09,  9:39,  10:59, 12:09, 14:09, 18:09, 19:09, null,  null,  null
        ...
        117,   20,  Monday,  9:39,  10:09, 11:39, 12:39, 14:39, 18:39, 19:39, null,  null,  null
        119,   1,   Monday,  9:00,  9:30,  10:50, 12:00, 14:00, 18:00, 19:00, 20:00, 21:00, 22:00
        119,   2,   Monday,  9:03,  9:33,  10:53, 12:03, 14:03, 18:03, 19:03, 20:03, 21:03, 22:03
        119,   3,   Monday,  9:06,  9:36,  10:56, 12:06, 14:06, 18:06, 19:06, 20:06, 21:06, 22:06
        119,   4,   Monday,  9:09,  9:39,  10:59, 12:09, 14:09, 18:09, 19:09, 20:09, 21:09, 22:09
        ...
        119,   37,  Monday,  9:49,  9:59,  11:59, 12:59, 14:59, 18:59, 19:59, 20:59, 21:59, 22:59
        139,   1,   Sunday,  9:00,  9:30,  20:00, 21:00, 22:00, null,  null,  null,  null,  null
        139,   2,   Sunday,  9:03,  9:33,  20:03, 21:03, 22:03, null,  null,  null,  null,  null
        139,   3,   Sunday,  9:06,  9:36,  20:06, 21:06, 22:06, null,  null,  null,  null,  null
        139,   4,   Sunday,  9:09,  9:39,  20:09, 21:09, 22:09, null,  null,  null,  null,  null
        ...
        139,   20,  Sunday,  9:49,  9:59,  20:59, 21:59, 22:59, null,  null,  null,  null,  null
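
    One way to avoid the NULL padding entirely is to normalize to one row per departure, so a route with 7 departures simply has 7 rows. A sketch with hypothetical names:

        CREATE TABLE departure (
            route    INT         NOT NULL,
            stop_num INT         NOT NULL,   -- position of the stop on the route
            day_type VARCHAR(16) NOT NULL,   -- 'Mon-Fri', 'Saturday' or 'Sunday'
            dep_time TIME        NOT NULL,
            PRIMARY KEY (route, stop_num, day_type, dep_time)
        );

    Queries like "all departures from stop 3 of route 117 on a Monday" then become simple WHERE clauses instead of scans across t1..t10 columns.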

    Read the article

  • How to get the last element by date of each "type" in LINQ or TSQL

    - by Mauro
    Imagine a table defined as:

        CREATE TABLE [dbo].[Price](
            [ID]        [int]      NOT NULL,
            [StartDate] [datetime] NOT NULL,
            [Price]     [int]      NOT NULL
        )

    where ID is the identifier of an action having a certain Price. This price can be updated when necessary by adding a new line with the same ID, a different Price, and a more recent date. So with a set of data like:

        ID  StartDate   Price
        1   01/01/2009  10
        1   01/01/2010  20
        2   01/01/2009  10
        2   01/01/2010  20

    how do I obtain a set like the following?

        1   01/01/2010  20
        2   01/01/2010  20
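
    In T-SQL this is the classic greatest-per-group problem; one sketch keeps each ID's row with the latest StartDate:

        SELECT p.ID, p.StartDate, p.Price
        FROM dbo.Price AS p
        WHERE p.StartDate = (SELECT MAX(p2.StartDate)
                             FROM dbo.Price AS p2
                             WHERE p2.ID = p.ID);

    The LINQ equivalent is usually phrased as a group-by on ID followed by picking the element with the maximum StartDate in each group.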

    Read the article

  • Representing Sparse Data in PostgreSQL

    - by Chris S
    What's the best way to represent a sparse data matrix in PostgreSQL? The two obvious methods I see are: Store the data in a single table with a separate column for every conceivable feature (potentially millions), but with a default value of NULL for unused features. This is conceptually very simple, but I know that with most RDBMS implementations this is typically very inefficient, since the NULL values usually take up some space. However, I read an article (can't find its link, unfortunately) that claimed PG doesn't use storage for NULL values, making it better suited for storing sparse data. Or, create separate "row" and "column" tables, as well as an intermediate table to link them and store the value for the column at that row. I believe this is the more traditional RDBMS solution, but there's more complexity and overhead associated with it. I also found PostgreDynamic, which claims to better support sparse data, but I don't want to switch my entire database server to a PG fork just for this feature. Are there any other solutions? Which one should I use?
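
    For reference, a sketch of the second option's row/column/value layout with hypothetical names (PostgreSQL):

        CREATE TABLE feature (
            feature_id serial PRIMARY KEY,
            name       text NOT NULL UNIQUE
        );

        CREATE TABLE matrix_cell (
            row_id     integer NOT NULL,
            feature_id integer NOT NULL REFERENCES feature,
            value      double precision NOT NULL,
            PRIMARY KEY (row_id, feature_id)
        );

    Only non-default cells are stored, so sparsity costs nothing; the trade-off is that reading a whole logical row becomes a pivot (a join or aggregation) instead of a plain SELECT.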

    Read the article

  • MySQL optimized sentence

    - by Ivan
    I have a simple table from which I have to extract some records. The problem is that the evaluation function is a very time-consuming stored procedure, so I shouldn't call it twice as in this statement:

        SELECT *, slow_sp(row)
        FROM table
        WHERE slow_sp(row) > 0
        ORDER BY dist DESC
        LIMIT 10

    First I thought to optimize it like this:

        SELECT *, slow_sp(row) AS value
        FROM table
        WHERE value > 0
        ORDER BY dist DESC
        LIMIT 10

    But it doesn't work, because "value" has not yet been computed when the WHERE clause is evaluated. Any idea how to optimize this statement? Thanks.
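
    Two sketches of ways around this in MySQL: HAVING, which MySQL permits to reference select-list aliases even without GROUP BY, or a derived table that computes the alias first:

        SELECT *, slow_sp(row) AS value
        FROM table
        HAVING value > 0
        ORDER BY dist DESC
        LIMIT 10;

        -- or:
        SELECT *
        FROM (SELECT t.*, slow_sp(row) AS value FROM table t) AS x
        WHERE x.value > 0
        ORDER BY x.dist DESC
        LIMIT 10;

    Neither form avoids evaluating slow_sp once per row, but each avoids the double call in the original statement.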

    Read the article

  • MySQL Partitioning - Key vs Hash vs List vs Range

    - by Imran Omar Bukhsh
    I went through some of the MySQL documentation but cannot understand the difference between the following ways of partitioning: KEY vs HASH vs LIST vs RANGE. Can someone explain in plain English? Also, given the following table, how do we partition by forum_id?

        CREATE TABLE IF NOT EXISTS `posts_content` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `post_id` int(11) NOT NULL,
          `forum_id` int(11) NOT NULL,
          `content` longtext CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=79850 ;

    Thanking you
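
    In brief: RANGE and LIST place rows by the value itself (RANGE by intervals you define, LIST by explicit value lists), while HASH and KEY spread rows evenly by hashing (HASH hashes an expression you supply, KEY uses MySQL's internal hash of one or more columns). One caveat for this table: MySQL requires every unique key, including the primary key, to contain the partitioning column, so the primary key must become (id, forum_id). A sketch (the partition count is illustrative):

        CREATE TABLE IF NOT EXISTS `posts_content` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `post_id` int(11) NOT NULL,
          `forum_id` int(11) NOT NULL,
          `content` longtext CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`id`, `forum_id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1
        PARTITION BY HASH (`forum_id`)
        PARTITIONS 16;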

    Read the article

  • Ajax Content Loading(Processing) image or indicator

    - by Arny
    Hi there, in part of my web page I have a couple of asp:Image thumbnails. On click, I use the AJAX ModalPopupExtender to show the image at full size, which is working fine. What I need to add is a processing image or indicator, both on the thumbnail and in the modal popup extender. I also have an AJAX AutoComplete that works fine; I need to add an indicator or processing image to it as soon as the user starts typing a word. Any ideas? Thanks in advance.

    Read the article

  • Problem with interface implementation in partial classes.

    - by Bas
    I have a question regarding a problem with L2S, the autogenerated DataContext and the use of partial classes. I have abstracted my DataContext, and for every table I use I'm implementing a class with an interface. In the code below you can see the interface and two partial classes. The first class is just there to make sure the class in the auto-generated DataContext inherits Interface. The other, autogenerated class makes sure the property from Interface is implemented.

        namespace PartialProject.objects
        {
            public interface Interface
            {
                Interface Instance { get; }
            }

            // To make sure the autogenerated code inherits Interface
            public partial class Class : Interface { }

            // This is autogenerated
            public partial class Class
            {
                public Class Instance
                {
                    get { return this.Instance; }
                }
            }
        }

    Now my problem is that the property implemented in the autogenerated class gives the following error:

        Property 'Instance' cannot implement property from interface
        'PartialProject.objects.Interface'. Type should be 'PartialProject.objects.Interface'.

    Any idea how this error can be resolved? Keep in mind that I can't edit anything in the autogenerated code. Thanks in advance!

    Read the article

  • Date arithmetic using integer values

    - by Dave Jarvis
    Problem: string concatenation is slowing down a query:

        date(extract(YEAR FROM m.taken)||'-1-1') d1,
        date(extract(YEAR FROM m.taken)||'-1-31') d2

    This is realized in code as part of a string, which follows (where the p_ variables are integers):

        date(extract(YEAR FROM m.taken)||''-'||p_month1||'-'||p_day1||''') d1,
        date(extract(YEAR FROM m.taken)||''-'||p_month2||'-'||p_day2||''') d2

    This part of the query runs in 3.2 seconds with the dates and 1.5 seconds without, leading me to believe there is ample room for improvement.

    Question: what is a better way to create the dates (presumably without concatenation)? Many thanks!
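
    One concatenation-free approach in PostgreSQL: truncate the timestamp to the year, then add the month and day offsets as intervals. A sketch (the table name is a placeholder):

        SELECT (date_trunc('year', m.taken)
                  + (p_month1 - 1) * interval '1 month'
                  + (p_day1   - 1) * interval '1 day')::date AS d1,
               (date_trunc('year', m.taken)
                  + (p_month2 - 1) * interval '1 month'
                  + (p_day2   - 1) * interval '1 day')::date AS d2
        FROM measurement m;

    This keeps everything in date/interval arithmetic, so no strings are built or parsed per row.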

    Read the article

  • How do I perform a batch insert in Django?

    - by Thierry Lam
    In MySQL, you can insert multiple rows into a table in one query for n > 0:

        INSERT INTO tbl_name (a,b,c)
        VALUES (1,2,3), (4,5,6), (7,8,9), ..., (n-2, n-1, n);

    Is there a way to achieve the above with Django queryset methods? Here's an example:

        values = [(1, 2, 3), (4, 5, 6), ...]
        for value in values:
            SomeModel.objects.create(first=value[0], second=value[1], third=value[2])

    I believe the above calls an insert query on each iteration of the for loop. I'm looking for a single query; is that possible in Django?

    Read the article
