Search Results

Search found 10691 results on 428 pages for 'batch insert'.

Page 309/428

  • Swap unique indexed column values in database.

    - by Ramesh Soni
    I have a database table and one of the fields (not the primary key) has a unique index on it. Now I want to swap the values in this column for two rows. How can this be done? Two hacks I know of are: delete both rows and re-insert them, or update the rows with some placeholder values, swap, and then update back to the actual values. But I don't want to go with either of these, as they don't seem like appropriate solutions to the problem. Could anyone help me out?

  • Why is insertion into my tree faster on sorted input than random input?

    - by Juliet
    Now I've always heard binary search trees are faster to build from randomly selected data than ordered data, simply because ordered data requires explicit rebalancing to keep the tree height at a minimum. Recently I implemented an immutable treap, a special kind of binary search tree which uses randomization to keep itself relatively balanced. In contrast to what I expected, I found I can consistently build a treap about 2x faster and generally better balanced from ordered data than unordered data -- and I have no idea why. Here's my treap implementation: http://pastebin.com/VAfSJRwZ And here's a test program:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Diagnostics;

        namespace ConsoleApplication1
        {
            class Program
            {
                static Random rnd = new Random();
                const int ITERATION_COUNT = 20;

                static void Main(string[] args)
                {
                    List<double> rndTimes = new List<double>();
                    List<double> orderedTimes = new List<double>();

                    rndTimes.Add(TimeIt(50, RandomInsert));
                    rndTimes.Add(TimeIt(100, RandomInsert));
                    rndTimes.Add(TimeIt(200, RandomInsert));
                    rndTimes.Add(TimeIt(400, RandomInsert));
                    rndTimes.Add(TimeIt(800, RandomInsert));
                    rndTimes.Add(TimeIt(1000, RandomInsert));
                    rndTimes.Add(TimeIt(2000, RandomInsert));
                    rndTimes.Add(TimeIt(4000, RandomInsert));
                    rndTimes.Add(TimeIt(8000, RandomInsert));
                    rndTimes.Add(TimeIt(16000, RandomInsert));
                    rndTimes.Add(TimeIt(32000, RandomInsert));
                    rndTimes.Add(TimeIt(64000, RandomInsert));
                    rndTimes.Add(TimeIt(128000, RandomInsert));
                    string rndTimesAsString = string.Join("\n", rndTimes.Select(x => x.ToString()).ToArray());

                    orderedTimes.Add(TimeIt(50, OrderedInsert));
                    orderedTimes.Add(TimeIt(100, OrderedInsert));
                    orderedTimes.Add(TimeIt(200, OrderedInsert));
                    orderedTimes.Add(TimeIt(400, OrderedInsert));
                    orderedTimes.Add(TimeIt(800, OrderedInsert));
                    orderedTimes.Add(TimeIt(1000, OrderedInsert));
                    orderedTimes.Add(TimeIt(2000, OrderedInsert));
                    orderedTimes.Add(TimeIt(4000, OrderedInsert));
                    orderedTimes.Add(TimeIt(8000, OrderedInsert));
                    orderedTimes.Add(TimeIt(16000, OrderedInsert));
                    orderedTimes.Add(TimeIt(32000, OrderedInsert));
                    orderedTimes.Add(TimeIt(64000, OrderedInsert));
                    orderedTimes.Add(TimeIt(128000, OrderedInsert));
                    string orderedTimesAsString = string.Join("\n", orderedTimes.Select(x => x.ToString()).ToArray());

                    Console.WriteLine("Done");
                }

                static double TimeIt(int insertCount, Action<int> f)
                {
                    Console.WriteLine("TimeIt({0}, {1})", insertCount, f.Method.Name);
                    List<double> times = new List<double>();
                    for (int i = 0; i < ITERATION_COUNT; i++)
                    {
                        Stopwatch sw = Stopwatch.StartNew();
                        f(insertCount);
                        sw.Stop();
                        times.Add(sw.Elapsed.TotalMilliseconds);
                    }
                    return times.Average();
                }

                static void RandomInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(rnd.NextDouble());
                    }
                }

                static void OrderedInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(i + rnd.NextDouble());
                    }
                }
            }
        }

    And here's a chart comparing random and ordered insertion times in milliseconds:

        Insertions   Random        Ordered       RandomTime / OrderedTime
        50           1.031665      0.261585      3.94
        100          0.544345      1.377155      0.4
        200          1.268320      0.734570      1.73
        400          2.765555      1.639150      1.69
        800          6.089700      3.558350      1.71
        1000         7.855150      4.704190      1.67
        2000         17.852000     12.554065     1.42
        4000         40.157340     22.474445     1.79
        8000         88.375430     48.364265     1.83
        16000        197.524000    109.082200    1.81
        32000        459.277050    238.154405    1.93
        64000        1055.508875   512.020310    2.06
        128000       2481.694230   1107.980425   2.24

    I don't see anything in the code which makes ordered input asymptotically faster than unordered input, so I'm at a loss to explain the difference. Why is it so much faster to build a treap from ordered input than random input?
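
    One cheap way to check whether the "generally better balanced" observation accounts for the timing gap is to compare the average node depth of the two finished trees. A minimal sketch follows; the node shape here is hypothetical, since the pastebin Treap's internals aren't shown, so the field names would need adapting:

        using System;

        // Hypothetical node shape -- adapt the field names to the real Treap<T>.
        class Node
        {
            public Node Left, Right;
        }

        static class TreeStats
        {
            // Sum of the depths of all nodes; divide by Count for the average depth.
            public static long TotalDepth(Node root, int depth)
            {
                if (root == null) return 0;
                return depth + TotalDepth(root.Left, depth + 1) + TotalDepth(root.Right, depth + 1);
            }

            public static int Count(Node root)
            {
                return root == null ? 0 : 1 + Count(root.Left) + Count(root.Right);
            }
        }

        class Demo
        {
            static void Main()
            {
                // Tiny hand-built example: a root with two children, average depth = 2/3.
                var root = new Node { Left = new Node(), Right = new Node() };
                Console.WriteLine((double)TreeStats.TotalDepth(root, 0) / TreeStats.Count(root));
            }
        }

    Comparing TotalDepth / Count for the ordered-input tree against the random-input tree after each build would show whether the faster build really is the better-balanced one.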

  • Why is SQL Server giving a conversion error when submitting Date.Today to a datetime column?

    - by kpierce8
    I am getting a conversion error every time I try to submit a date value to SQL Server. The column in SQL Server is a datetime, and in VB I'm using Date.Today to pass to my parameterized query. I keep getting a SqlException: "Conversion failed when converting datetime from character string." Here's the code:

        Public Sub ResetOrder(ByVal connectionString As String)
            Dim strSQL As String
            Dim cn As New SqlConnection(connectionString)
            cn.Open()

            strSQL = "DELETE Tasks WHERE ProjID = @ProjectID"
            Dim cmd As New SqlCommand(strSQL, cn)
            cmd.Parameters.AddWithValue("ProjectID", 5)
            cmd.ExecuteNonQuery()

            strSQL = "INSERT INTO Tasks (ProjID, DueDate, TaskName) VALUES " & _
                     " (@ProjID, @TaskName, @DueDate)"
            Dim cmd2 As New SqlCommand(strSQL, cn)
            cmd2.CommandText = strSQL
            cmd2.Parameters.AddWithValue("ProjID", 5)
            cmd2.Parameters.AddWithValue("DueDate", Date.Today)
            cmd2.Parameters.AddWithValue("TaskName", "bob")
            cmd2.ExecuteNonQuery()
            cn.Close()

            DataGridView1.DataSource = ds.Projects
            DataGridView2.DataSource = ds.Tasks
        End Sub

    Any thoughts would be greatly appreciated.
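
    For comparison, here is a minimal C# sketch of the same insert with the column list and the placeholders in matching order (table and column names taken from the question; connection handling trimmed). Note that in the code above the columns read (ProjID, DueDate, TaskName) while the values read (@ProjID, @TaskName, @DueDate), so the two lists don't line up:

        using System;
        using System.Data.SqlClient;

        class Demo
        {
            static void InsertTask(string connectionString)
            {
                using (SqlConnection cn = new SqlConnection(connectionString))
                {
                    cn.Open();
                    // Columns and placeholders listed in the same order.
                    string sql = "INSERT INTO Tasks (ProjID, DueDate, TaskName) " +
                                 "VALUES (@ProjID, @DueDate, @TaskName)";
                    using (SqlCommand cmd = new SqlCommand(sql, cn))
                    {
                        cmd.Parameters.AddWithValue("@ProjID", 5);
                        cmd.Parameters.AddWithValue("@DueDate", DateTime.Today); // passed as a DateTime, not a string
                        cmd.Parameters.AddWithValue("@TaskName", "bob");
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }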

  • JavaScript onload/onreadystatechange not firing when dynamically adding a script tag to the page.

    - by spoon16
    I am developing a bookmarklet that requires a specific version of jQuery to be loaded on the page. When I have to dynamically insert a jQuery script tag to meet the requirements of the bookmarklet, I want to wait for the onload or onreadystatechange event on the script tag before executing any function that requires jQuery. For some reason the onload and/or onreadystatechange events do not fire. Any ideas on what I am doing wrong here?

        var tag = document.createElement("script");
        tag.type = "text/javascript";
        tag.src = "http://ajax.microsoft.com/ajax/jquery/jquery-" + version + ".min.js";
        tag.onload = tag.onreadystatechange = function () {
            __log("info", "test");
            __log("info", this.readyState);
        };
        document.getElementsByTagName('head')[0].appendChild(tag);

    The FULL code: http://gist.github.com/405215

  • Observing an NSMutableArray for insertion/removal

    - by Adam Ernst
    A class has a property (and instance var) of type NSMutableArray with synthesized accessors (via @property). If you observe this array using:

        [myObj addObserver:self forKeyPath:@"theArray" options:0 context:NULL];

    And then insert an object in the array like this:

        [[myObj theArray] addObject:[NSString string]];

    An observeValueForKeyPath... notification is not sent. However, the following does send the proper notification:

        [[myObj mutableArrayValueForKey:@"theArray"] addObject:[NSString string]];

    This is because mutableArrayValueForKey returns a proxy object that takes care of notifying observers. But shouldn't the synthesized accessors automatically return such a proxy object? What's the proper way to work around this -- should I write a custom accessor that just invokes [super mutableArrayValueForKey...]?

  • Algorithms to find longest common prefix in a sliding window.

    - by nn
    Hi, I have written a Lempel-Ziv compressor and decompressor. I am seeking to improve the time it takes to search the dictionary for a phrase. I have considered KMP and Boyer-Moore, but I think an algorithm that adapts to changes in the dictionary would be faster. I've been reading that binary search trees (AVL, or with splays) improve compression time considerably. What I fail to understand is how to bootstrap the binary search tree and insert/remove data. I'm not actually quite sure of the significance of each node in the binary search tree. I am searching for phrases, so will each character be considered a node? Also, how and what is inserted/removed from the search tree as new data enters the dictionary and old data is removed? The binary search tree sounds like a good payoff since it can adapt to the dictionary, but I'm just not quite sure how it's used.
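
    A minimal C# sketch of one common arrangement (the names here are hypothetical, not from any particular implementation): each tree entry is a position in the sliding window, and the ordering compares the bytes starting at that position, so the nodes effectively are the phrases. Inserting the current position indexes the newest phrase; positions that slide out of the window are removed; the longest-match search walks the tree comparing the lookahead against the bytes at each visited position.

        using System;
        using System.Collections.Generic;
        using System.Text;

        // Orders window positions by the bytes that start at each position.
        class WindowComparer : IComparer<int>
        {
            private readonly byte[] window;
            private readonly int maxMatch;

            public WindowComparer(byte[] window, int maxMatch)
            {
                this.window = window;
                this.maxMatch = maxMatch;
            }

            public int Compare(int a, int b)
            {
                for (int k = 0; k < maxMatch; k++)
                {
                    int ca = a + k < window.Length ? window[a + k] : -1;
                    int cb = b + k < window.Length ? window[b + k] : -1;
                    if (ca != cb) return ca - cb;
                }
                return a.CompareTo(b); // equal prefixes: keep distinct positions distinct
            }
        }

        class Demo
        {
            static void Main()
            {
                byte[] window = Encoding.ASCII.GetBytes("abracadabra");
                var dictionary = new SortedSet<int>(new WindowComparer(window, 8));

                for (int pos = 0; pos < window.Length; pos++)
                    dictionary.Add(pos);     // insert: index the phrase starting at pos

                dictionary.Remove(0);        // remove: position 0 has slid out of the window
                Console.WriteLine(dictionary.Count);
            }
        }

    SortedSet<T> is a self-balancing tree underneath, which is the adapt-as-you-go behaviour the AVL/splay articles describe.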

  • MySQL - getting SUM of MAX results from 2 tables

    - by SODA
    Hi, here's my problem: I have two identical tables (past month's data, current month's data) -- data_2010_03 and data_2010_04:

        Content_type (VARCHAR), content_id (INT), month_count (INT), pubDate (DATETIME)

    Data in month_count is updated hourly, so for each combination of content_type and content_id we insert a new row, where the value of month_count is incrementally updated. Now I try something like this:

        SELECT MAX(t1.month_count) AS max_1, MAX(t2.month_count) AS max_2,
               SUM(max_1 + max_2) AS result,
               t1.content_type, t1.content_id
        FROM data_2010_03 AS t1
        JOIN data_2010_04 AS t2
          ON t1.content_type = t2.content_type AND t1.content_id = t2.content_id
        WHERE t2.pubDate < '2010-04-08' AND t1.content_type = 'video'
        GROUP BY t1.content_id
        ORDER BY result desc, max_1 desc, max_2 desc
        LIMIT 0,10

    I get the error "Unknown column 'max_1' in 'field list'". Please help.

  • Programmatically execute vim commands?

    - by Ben Gartner
    I'm interested in setting up a TDD environment for developing Vim scripts and rc files. As a simple example, say I want Vim to insert 8 spaces when I press the tab key. I would set up a script that did the following:

        launch vim using a sandboxed .vimrc file
        press i
        press tab
        press esc
        press :w test_out
        assert that test_out contains '        ' (8 spaces)

    By the default config in Vim, this would fail. However, once I add set expandtab to my .vimrc file, the test will pass. So the question is, how do I programmatically issue these commands? 'vim -c' is close, but seems to only work for ex-mode commands. Any suggestions? This question seems to be thoroughly google-proof.

  • Designing a Tag table that tells how many times it's used

    - by Satoru.Logic
    Hi, all. I am trying to design a tagging system with a model like this:

        Tag:
            content = CharField
            creator = ForeignKey
            used = IntegerField

    It is a many-to-many relationship between tags and whatever is being tagged. Every time I insert a record into the association table, Tag.used is incremented by one, and decremented by one in case of deletion. Tag.used is maintained because I want to speed up answering the question 'How many times is this tag used?'. However, this obviously slows insertion down. Please tell me how to improve this design. Thanks in advance.

  • SQLAlchemy - how to map against a read-only (or calculated) property

    - by Jeff Peck
    I'm trying to figure out how to map against a simple read-only property and have that property fire when I save to the database. A contrived example should make this more clear. First, a simple table:

        meta = MetaData()
        foo_table = Table('foo', meta,
            Column('id', String(3), primary_key=True),
            Column('description', String(64), nullable=False),
            Column('calculated_value', Integer, nullable=False),
        )

    What I want to do is set up a class with a read-only property that will insert into the calculated_value column for me when I call session.commit()...

        import datetime

        class Foo(object):
            def __init__(self, id, description):
                self.id = id
                self.description = description

            @property
            def calculated_value(self):
                self._calculated_value = datetime.datetime.now().second + 10
                return self._calculated_value

    According to the SQLAlchemy docs, I think I am supposed to map this like so:

        mapper(Foo, foo_table, properties = {
            'calculated_value' : synonym('_calculated_value', map_column=True)
        })

    The problem with this is that _calculated_value is None until you access the calculated_value property. It appears that SQLAlchemy is not calling the property on insertion into the database, so I'm getting a None value instead. What is the correct way to map this so that the result of the calculated_value property is inserted into the foo table's calculated_value column?

  • SQL Server, Remote Stored Procedure, and DTC Transactions

    - by marc
    Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and from C# have queried/updated it successfully using ODBC/Natural "stored procedures". What we'd like to be able to do now is to query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage it, and join the result with native SQL data as a result set. The execution of the Natural proc from SQL works fine when we're just selecting it; however, when we insert the result into a table variable SQL seems to be starting a distributed transaction that in turn seems to be wreaking havoc with our connections. Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior? Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?

  • Linux Kernel - Slab Allocator Question

    - by Drex
    I am playing around with the kernel and am looking at the kmem_cache files_cachep belonging to fork.c. It detects the sizeof(files_struct). My question is this: I have altered files_struct and added an rb_root (red/black tree root) using the built-in functionality in linux/rbtree.h. I can properly insert values into this tree. However, at some point a segfault occurs, and GDB backtraces the following information:

        (gdb) backtrace
        0  0x08066ad7 in page_ok (page=) at arch/um/os-Linux/sys-i386/task_size.c:31
        1  0x08066bdf in os_get_top_address () at arch/um/os-Linux/sys-i386/task_size.c:100
        2  0x0804a216 in linux_main (argc=1, argv=0xbfb05f14) at arch/um/kernel/um_arch.c:277
        3  0x0804acdc in main (argc=1, argv=0xbfb05f14, envp=0xbfb05f1c) at arch/um/os-Linux/main.c:150

    I have spent many hours trying to figure out why there is a segfault given that the red/black tree inserts properly. I'm thinking it's a memory allocation issue with new processes made by fork() of a parent process. Could this be the case, and could it have something to do with kmem_cache files_cachep?

  • Can/should you throw exceptions in a C# switch statement?

    - by Kettenbach
    Hi all, I have an insert query that returns an int. Based on that int I may wish to throw an exception. Is it appropriate to do this within a switch statement?

        switch (result)
        {
            case D_USER_NOT_FOUND:
                throw new ClientException(string.Format("D User Name: {0} , was not found.", dTbx.Text));
            case C_USER_NOT_FOUND:
                throw new ClientException(string.Format("C User Name: {0} , was not found.", cTbx.Text));
            case D_USER_ALREADY_MAPPED:
                throw new ClientException(string.Format("D User Name: {0} , is already mapped.", dTbx.Text));
            case C_USER_ALREADY_MAPPED:
                throw new ClientException(string.Format("C User Name: {0} , is already mapped.", cTbx.Text));
            default:
                break;
        }

    I normally add break statements to my cases, but here they would never be hit. Is this bad design? Please share any opinions/suggestions with me. Thanks, ~ck in San Diego
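
    Throwing is a valid way to end a case (a break placed after a throw would just be flagged as unreachable code). If the repetition bothers you, one alternative sketch -- with hypothetical constants standing in for the question's result codes -- is to map each code to its message template and throw once:

        using System;
        using System.Collections.Generic;

        class ClientException : Exception
        {
            public ClientException(string message) : base(message) { }
        }

        class Demo
        {
            // Hypothetical stand-ins for the question's result codes.
            const int D_USER_NOT_FOUND = 1, C_USER_NOT_FOUND = 2,
                      D_USER_ALREADY_MAPPED = 3, C_USER_ALREADY_MAPPED = 4;

            static readonly Dictionary<int, string> Messages = new Dictionary<int, string>
            {
                { D_USER_NOT_FOUND,      "D User Name: {0} , was not found." },
                { C_USER_NOT_FOUND,      "C User Name: {0} , was not found." },
                { D_USER_ALREADY_MAPPED, "D User Name: {0} , is already mapped." },
                { C_USER_ALREADY_MAPPED, "C User Name: {0} , is already mapped." },
            };

            static void ThrowIfError(int result, string dName, string cName)
            {
                string template;
                if (!Messages.TryGetValue(result, out template))
                    return; // unknown code: treat as success, same as the default case

                bool isDCode = result == D_USER_NOT_FOUND || result == D_USER_ALREADY_MAPPED;
                throw new ClientException(string.Format(template, isDCode ? dName : cName));
            }
        }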

  • Function that copies into byte vector reverses values

    - by xeross
    Hey, I've written a function to copy any variable type into a byte vector; however, whenever I insert something it gets inserted in reverse. Here's the code:

        template <class Type>
        void Packet::copyToByte(Type input, vector<uint8_t> &output)
        {
            copy((uint8_t*) &input, ((uint8_t*) &input) + sizeof(Type), back_inserter(output));
        }

    Now whenever I add, for example, a uint16_t with the value 0x2f1f, it gets inserted as 1f 2f instead of the expected 2f 1f. What am I doing wrong here? Regards, Xeross
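
    For illustration (C#, not from the question): on a little-endian machine the low-order byte of a value sits at the lowest address, so a byte-for-byte copy of 0x2f1f naturally comes out as 1f 2f -- the copy is reproducing memory order rather than reversing anything:

        using System;

        class Demo
        {
            static void Main()
            {
                ushort value = 0x2f1f;
                byte[] bytes = BitConverter.GetBytes(value); // bytes in memory order

                Console.WriteLine(BitConverter.IsLittleEndian);         // True on x86/x64
                Console.WriteLine("{0:x2} {1:x2}", bytes[0], bytes[1]); // 1f 2f
            }
        }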

  • Best implementation for MySQL replication with Rails 3?

    - by vonconrad
    We're looking at potentially setting up replication for our primary MySQL database, and while setting up the replication seems pretty straightforward, the application implementation seems a bit murkier. My first idea would be to set up a master-slave configuration with RW-splitting: all write queries (CREATE, INSERT, UPDATE) go to the master, and all read queries (SELECT) go to the slave. Having read up on it, it seems there are essentially two options for how to implement this with our app:

        1. Using an independent middleware layer for all MySQL connections, such as MySQL Proxy or DBSlayer. However, the former is in alpha and the latter has limited documentation.
        2. Using a Ruby-based gem/plugin, such as Octopus, to achieve RW-splitting in the framework.

    If we wanted to go with a master-slave setup, what would you recommend moving forward? The other thought I've had was to use a master-master configuration, but I'm unsure about the implementation of such a setup. Thoughts?

  • Mixing Transaction Script pattern with DDD/CQRS

    - by Herman
    Hi all, here is the situation: in order to support our legacy system, we need to insert into a table whenever a user logs in. This is basically a CRUD operation, so it doesn't really make sense to create a repository/entity/command/event for this, since it doesn't tie to any business rules at all. The only benefit of creating a CQRS command is that the database write can happen asynchronously under that model. Which is the better route to take? Use CQRS, and then call a stored proc when handling that command? Or just call the database directly in the controller (I am using ASP.NET MVC)?

  • Carrierwave upload to a tmp dir before saving to database

    - by user827570
    I'm trying to build a visual editor where users can click an image and are presented with an image upload form; once the upload is done, I use Ajax to return the image and insert it back into the page. But this method inserts the image straight into the database, and I want users to be able to preview the image before it is inserted into the database. So I was wondering if, using Carrierwave, the image could be uploaded to a temp location, sent back to the user, and then, when the user saves the page, moved into the permanent location. Here's what I have so far:

        def edit_image
          @page = Page.find(1)
          @page.update_attributes(params[:page])
          @page.save
          return :text => @page.file
        end

    But this is what I want to achieve:

        def temp_image
          # uploads the received image to a temp location
          # returns the image to the user
        end

    And once the user clicks save:

        def save
          # moves the file in the temp folder to the permanent location
        end

    Cheers

  • Visual Studio 2008 Automatic line breaks in comments

    - by Pete Michaud
    When I write a comment, it's often a paragraph or a few lines that explain clearly what a bit of code is doing and why it's doing that. What I'd like is to start a comment and have the editor automatically insert a line break and continue the comment on the next line when it reaches, say, 80 characters. So I'd type:

        // Lorem ipsum dolor sit amet, consectetur adipiscing elit. <

    and here the editor breaks automatically and continues onto the next line:

        // Etiam congue quam eget leo dignissim tincidunt.

  • InvalidOperationException When XML Serializing Inherited Class

    - by Nick
    I am having an issue serializing a C# class to an XML file when the class has a base class. Here is a simple example:

        namespace Domain
        {
            [Serializable]
            public class ClassA
            {
                public virtual int MyProperty { get; set; }
            }
        }

        namespace Derived
        {
            public class ClassA : Domain.ClassA
            {
                public override int MyProperty
                {
                    get { return 1; }
                    set { /* Do Nothing */ }
                }
            }
        }

    When I attempt to serialize an instance of Derived.ClassA, I receive the following exception:

        InvalidOperationException: Types 'Domain.ClassA' and 'Derived.ClassA' both use the XML type name 'ClassA', from the namespace ''. Use XML attributes to specify a unique XML name and/or namespace for the type.

    The problem is that I want to create a single base class that simply defines the structure of the XML file, and then allow anyone else to derive from that class to insert business rules, while the formatting still comes from the base. Is this possible, and if so, how do I attribute the base class to allow this?
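
    A minimal sketch of the attribute route the exception message itself suggests (the type names here are just examples): give each class a distinct XML type name with XmlTypeAttribute, so the serializer can tell them apart while the derived class still inherits the base structure.

        using System;
        using System.Xml.Serialization;

        namespace Domain
        {
            [Serializable]
            [XmlType("DomainClassA")]          // unique XML type name for the base
            public class ClassA
            {
                public virtual int MyProperty { get; set; }
            }
        }

        namespace Derived
        {
            [XmlType("DerivedClassA")]         // unique XML type name for the derived class
            public class ClassA : Domain.ClassA
            {
                public override int MyProperty
                {
                    get { return 1; }
                    set { /* Do Nothing */ }
                }
            }
        }

    Serializing would then go through new XmlSerializer(typeof(Derived.ClassA)) as usual; whether the root element name also needs pinning down with XmlRoot depends on what the consuming side expects.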

  • Firefox: open local link to directory with Explorer

    - by raffael
    On a website for our internal use I show links to local files and folders. The links look like this:

        href="file://C:/example/"
        href="file://C:/example/test.odt"

    The problem is that the link to the directory opens in Firefox itself with a useless directory listing -- useless because you can only see the files or open them, but not copy, insert, delete, and so on. The link to the file works normally and the file is opened by OpenOffice. By changing the Firefox configuration and setting the following key to false, I can open the directory with explorer.exe, but then for the file I have to choose the right application:

        network.protocol-handler.expose.file

    Does someone know a way to get both to work like I want? That is, the directory is shown by explorer.exe and all files are opened by the right application. This could be by configuring Firefox or Windows, changing the links, or even by writing a small program which opens everything on the file protocol correctly and is used as the protocol handler for the file protocol in Firefox. Thanks, Raffael

  • Question About DateCreated and DateModified Columns - MS SQL Server

    - by user311509
        CREATE TABLE Customer
        (
            customerID int identity (500,20) CONSTRAINT .
            .
            dateCreated datetime DEFAULT GetDate() NOT NULL,
            dateModified datetime DEFAULT GetDate() NOT NULL
        );

    When I insert a record, dateCreated and dateModified get set to the default date/time. When I update/modify the record, dateModified and dateCreated remain as is. What should I do? Obviously, I need the dateCreated value to remain as it was when the row was first inserted, while dateModified changes whenever the record's fields are modified. In other words, can you please write a sample quick trigger? I don't know much yet... Any help will be appreciated.

  • Vim: Pasting from clipboard and automatically toggling :set paste

    - by Jonatan Littke
    Hey. When I paste things from the clipboard, they're normally (always) multilined, and in those cases (and those cases only), I'd like :set paste to be triggered, since otherwise the tabbing will increase with each line (you've all seen it!). Though the problem with :set paste is that it doesn't behave well with set smartindent, causing the cursor to jump to the beginning of a new line instead of at the correct indent. So I'd like to enable it for this instance only. I'm using Mac, sshing to a Debian machine with vim, and thus pasting in Insert mode using cmd-v. Cheers.

  • "An attempt has been made to Attach or Add an entity that is not new" LINQ to SQL error

    - by Collin Oconnor
    I have a save function for my order entity that looks like this, and it breaks on the SubmitChanges line:

        public void SaveOrder(Order order)
        {
            if (order.OrderId == 0)
                orderTable.InsertOnSubmit(order);
            else if (orderTable.GetOriginalEntityState(order) == null)
            {
                orderTable.Attach(order);
                orderTable.Context.Refresh(RefreshMode.KeepCurrentValues, order);
            }
            orderTable.Context.SubmitChanges();
        }

    The order entity contains two other entities: an Address entity and a credit card entity. Now, I want these two entities to be null sometimes. My guess for why this is throwing an error is that both of these entities inside the order are null. If this is the case, how can I insert a new order into the database with both entities (Address and CreditCard) being null?
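
    A small diagnostic sketch (assuming orderTable.Context is the usual LINQ to SQL DataContext; the helper name is made up): dumping the pending change set just before SubmitChanges shows exactly which entities LINQ to SQL thinks it is inserting or updating, which narrows down whether the null Address/CreditCard members are really the trigger.

        using System;
        using System.Data.Linq;

        static class ChangeSetDump
        {
            // Call immediately before SubmitChanges to see what LINQ to SQL plans to do.
            public static void Dump(DataContext context)
            {
                ChangeSet changes = context.GetChangeSet();
                Console.WriteLine("Inserts: {0}, Updates: {1}, Deletes: {2}",
                                  changes.Inserts.Count, changes.Updates.Count, changes.Deletes.Count);

                foreach (object entity in changes.Inserts)
                    Console.WriteLine("Will insert: {0}", entity.GetType().Name);
                foreach (object entity in changes.Updates)
                    Console.WriteLine("Will update: {0}", entity.GetType().Name);
            }
        }

    In SaveOrder that would be ChangeSetDump.Dump(orderTable.Context); placed right above the SubmitChanges call.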

  • PHP: Infinite loop and time limit!

    - by Jonathan
    Hi, I have a piece of code that fetches data given an ID. If I give it an ID of 1230, for example, the code fetches the article with ID 1230 from an external web site and inserts it into a DB. Now, the problem is that I need to fetch all the articles, let's say from ID 00001 to 99999. If I do a 'for' loop, after 60 seconds PHP's internal time limit stops the loop. If I use some kind of header("Location: code.php?id=00001") or header("Location: code.php?id=".$ID), increment $ID++, and then redirect to the same page, the browser stops me because of its redirect-loop protection. Please HELP!

  • How do I get the position of a result in the list after an order_by?

    - by Bob Bob
    I'm trying to find an efficient way to find the rank of an object in the database relative to its score. My naive solution looks like this:

        rank = 0
        for q in Model.objects.all().order_by('score'):
            if q.name == 'searching_for_this':
                return rank
            rank += 1

    It should be possible to get the database to do the filtering, using order_by:

        Model.objects.all().order_by('score').filter(name='searching_for_this')

    But there doesn't seem to be a way to retrieve the index for the order_by step after the filter. Is there a better way to do this (using Python/Django and/or raw SQL)? My next thought is to pre-compute ranks on insert, but that seems messy.
