Search Results

Search found 10481 results on 420 pages for 'identity insert'.


  • Programmatically execute vim commands?

    - by Ben Gartner
    I'm interested in setting up a TDD environment for developing Vim scripts and rc files. As a simple example, say I want Vim to insert 8 spaces when I press the tab key. I would set up a script that does the following:

        - launch vim using a sandboxed .vimrc file
        - press i, press tab, press esc
        - press :w test_out
        - assert that test_out contains '        ' (eight spaces)

    With Vim's default config this would fail. However, once I add set expandtab to my .vimrc file, the test will pass. So the question is: how do I programmatically issue these commands? 'vim -c' is close, but seems to only work for ex-mode commands. Any suggestions? This question seems to be thoroughly Google-proof.
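    One way to drive this end to end is a small test harness around vim's -s flag, which replays raw keystrokes from a file. The sketch below (Python, added here for illustration; the sandbox.vimrc name, the test_out path and the headless invocation are assumptions, not from the post) launches vim with only the sandboxed rc file, feeds the keystrokes from the question, and asserts on the written file:

        import os
        import subprocess
        import tempfile

        def run_vim_keys(keys: bytes, vimrc: str) -> None:
            # Replay raw normal-mode keystrokes with `vim -s`, loading only the sandboxed vimrc.
            with tempfile.NamedTemporaryFile(delete=False) as script:
                script.write(keys)
                path = script.name
            try:
                subprocess.run(["vim", "-N", "-n", "-u", vimrc, "-s", path], check=True)
            finally:
                os.remove(path)

        def test_tab_becomes_eight_spaces():
            # i<Tab><Esc>, then :w! test_out and :q! -- the exact steps from the question.
            run_vim_keys(b"i\t\x1b:w! test_out\r:q!\r", "sandbox.vimrc")
            with open("test_out") as f:
                assert f.read().rstrip("\n") == " " * 8

    Vim still expects a terminal, so in practice this may need to run under a pseudo-terminal, or with its "input is not from a terminal" warnings tolerated.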

    Read the article

  • Designing a Tag table that tells how many times it's used

    - by Satoru.Logic
    Hi, all. I am trying to design a tagging system with a model like this:

        Tag:
            content = CharField
            creator = ForeignKey
            used = IntegerField

    It is a many-to-many relationship between tags and whatever is being tagged. Every time I insert a record into the association table, Tag.used is incremented by one, and decremented by one on deletion. Tag.used is maintained because I want to speed up answering the question 'How many times is this tag used?'. However, this obviously slows insertion down. Please tell me how to improve this design. Thanks in advance.
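    One common way to keep such a denormalized counter cheap and safe under concurrency is to bump it with a single atomic UPDATE rather than a read-modify-write. A hedged Django sketch (added for illustration; the Tagging association model and its fields are made up, and a real site might hang this off m2m_changed signals instead):

        from django.conf import settings
        from django.db import models, transaction
        from django.db.models import F

        class Tag(models.Model):
            content = models.CharField(max_length=100)
            creator = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
            used = models.IntegerField(default=0)

        class Tagging(models.Model):          # hypothetical association table
            tag = models.ForeignKey(Tag, on_delete=models.CASCADE)
            object_id = models.IntegerField() # stand-in for whatever is being tagged

        def add_tagging(tag, object_id):
            # One INSERT plus one "UPDATE ... SET used = used + 1";
            # F() keeps the increment atomic, so no races and no extra SELECT.
            with transaction.atomic():
                Tagging.objects.create(tag=tag, object_id=object_id)
                Tag.objects.filter(pk=tag.pk).update(used=F("used") + 1)

        def remove_tagging(tagging):
            with transaction.atomic():
                tag_id = tagging.tag_id
                tagging.delete()
                Tag.objects.filter(pk=tag_id).update(used=F("used") - 1)

    If even that write overhead hurts, the usual alternative is to drop the counter entirely and answer the question with an indexed COUNT(*) on the association table, refreshing a cached value periodically.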

    Read the article

  • Why is SQL Server giving a conversion error when submitting Date.Today to a datetime column?

    - by kpierce8
    I am getting a conversion error every time I try to submit a date value to SQL Server. The column in SQL Server is a datetime, and in VB I'm using Date.Today to pass to my parameterized query. I keep getting a SqlException: "Conversion failed when converting datetime from character string." Here's the code:

        Public Sub ResetOrder(ByVal connectionString As String)
            Dim strSQL As String
            Dim cn As New SqlConnection(connectionString)
            cn.Open()

            strSQL = "DELETE Tasks WHERE ProjID = @ProjectID"
            Dim cmd As New SqlCommand(strSQL, cn)
            cmd.Parameters.AddWithValue("ProjectID", 5)
            cmd.ExecuteNonQuery()

            strSQL = "INSERT INTO Tasks (ProjID, DueDate, TaskName) VALUES " & _
                     " (@ProjID, @TaskName, @DueDate)"
            Dim cmd2 As New SqlCommand(strSQL, cn)
            cmd2.CommandText = strSQL
            cmd2.Parameters.AddWithValue("ProjID", 5)
            cmd2.Parameters.AddWithValue("DueDate", Date.Today)
            cmd2.Parameters.AddWithValue("TaskName", "bob")
            cmd2.ExecuteNonQuery()
            cn.Close()

            DataGridView1.DataSource = ds.Projects
            DataGridView2.DataSource = ds.Tasks
        End Sub

    Any thoughts would be greatly appreciated.

    Read the article

  • Linux Kernel - Slab Allocator Question

    - by Drex
    I am playing around with the kernel and am looking at the kmem_cache files_cachep in fork.c, which is created with sizeof(files_struct). My question is this: I have altered files_struct and added an rb_root (red/black tree root) using the built-in functionality in linux/rbtree.h. I can properly insert values into this tree. However, at some point a segfault occurs, and GDB gives the following backtrace:

        (gdb) backtrace
        #0  0x08066ad7 in page_ok (page=) at arch/um/os-Linux/sys-i386/task_size.c:31
        #1  0x08066bdf in os_get_top_address () at arch/um/os-Linux/sys-i386/task_size.c:100
        #2  0x0804a216 in linux_main (argc=1, argv=0xbfb05f14) at arch/um/kernel/um_arch.c:277
        #3  0x0804acdc in main (argc=1, argv=0xbfb05f14, envp=0xbfb05f1c) at arch/um/os-Linux/main.c:150

    I have spent many hours trying to figure out why there is a segfault given that the red/black tree inserts properly. I'm thinking it's a memory allocation issue with new processes made by fork() of a parent process. Could this be the case, and could it have something to do with kmem_cache files_cachep?

    Read the article

  • Observing an NSMutableArray for insertion/removal

    - by Adam Ernst
    A class has a property (and instance var) of type NSMutableArray with synthesized accessors (via @property). If you observe this array using:

        [myObj addObserver:self forKeyPath:@"theArray" options:0 context:NULL];

    and then insert an object in the array like this:

        [[myObj theArray] addObject:[NSString string]];

    an observeValueForKeyPath... notification is not sent. However, the following does send the proper notification:

        [[myObj mutableArrayValueForKey:@"theArray"] addObject:[NSString string]];

    This is because mutableArrayValueForKey returns a proxy object that takes care of notifying observers. But shouldn't the synthesized accessors automatically return such a proxy object? What's the proper way to work around this? Should I write a custom accessor that just invokes [super mutableArrayValueForKey...]?

    Read the article

  • Best implementation for MySQL replication with Rails 3?

    - by vonconrad
    We're looking at potentially setting up replication for our primary MySQL database, and while setting up the replication seems pretty straightforward, the application implementation seems a bit murkier. My first idea would be to set up a master-slave configuration with RW-splitting: all write queries (CREATE, INSERT, UPDATE) go to the master, and all read queries (SELECT) go to the slave. Having read up on it, there seem to be essentially two options for how to implement this with our app:

        - Use an independent middleware layer for all MySQL connections, such as MySQL Proxy or DBSlayer. However, the former is in alpha and the latter has limited documentation.
        - Use a Ruby-based gem/plugin, such as Octopus, to achieve RW-splitting in the framework.

    If we wanted to go with a master-slave setup, what would you recommend moving forward? The other thought I've had was to use a master-master configuration, but I'm unsure about the implementation of such a setup. Thoughts?

    Read the article

  • Why is insertion into my tree faster on sorted input than random input?

    - by Juliet
    Now I've always heard binary search trees are faster to build from randomly selected data than ordered data, simply because ordered data requires explicit rebalancing to keep the tree height at a minimum. Recently I implemented an immutable treap, a special kind of binary search tree which uses randomization to keep itself relatively balanced. In contrast to what I expected, I found I can consistently build a treap about 2x faster and generally better balanced from ordered data than unordered data -- and I have no idea why. Here's my treap implementation: http://pastebin.com/VAfSJRwZ And here's a test program:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Diagnostics;

        namespace ConsoleApplication1
        {
            class Program
            {
                static Random rnd = new Random();
                const int ITERATION_COUNT = 20;

                static void Main(string[] args)
                {
                    List<double> rndTimes = new List<double>();
                    List<double> orderedTimes = new List<double>();

                    rndTimes.Add(TimeIt(50, RandomInsert));
                    rndTimes.Add(TimeIt(100, RandomInsert));
                    rndTimes.Add(TimeIt(200, RandomInsert));
                    rndTimes.Add(TimeIt(400, RandomInsert));
                    rndTimes.Add(TimeIt(800, RandomInsert));
                    rndTimes.Add(TimeIt(1000, RandomInsert));
                    rndTimes.Add(TimeIt(2000, RandomInsert));
                    rndTimes.Add(TimeIt(4000, RandomInsert));
                    rndTimes.Add(TimeIt(8000, RandomInsert));
                    rndTimes.Add(TimeIt(16000, RandomInsert));
                    rndTimes.Add(TimeIt(32000, RandomInsert));
                    rndTimes.Add(TimeIt(64000, RandomInsert));
                    rndTimes.Add(TimeIt(128000, RandomInsert));
                    string rndTimesAsString = string.Join("\n", rndTimes.Select(x => x.ToString()).ToArray());

                    orderedTimes.Add(TimeIt(50, OrderedInsert));
                    orderedTimes.Add(TimeIt(100, OrderedInsert));
                    orderedTimes.Add(TimeIt(200, OrderedInsert));
                    orderedTimes.Add(TimeIt(400, OrderedInsert));
                    orderedTimes.Add(TimeIt(800, OrderedInsert));
                    orderedTimes.Add(TimeIt(1000, OrderedInsert));
                    orderedTimes.Add(TimeIt(2000, OrderedInsert));
                    orderedTimes.Add(TimeIt(4000, OrderedInsert));
                    orderedTimes.Add(TimeIt(8000, OrderedInsert));
                    orderedTimes.Add(TimeIt(16000, OrderedInsert));
                    orderedTimes.Add(TimeIt(32000, OrderedInsert));
                    orderedTimes.Add(TimeIt(64000, OrderedInsert));
                    orderedTimes.Add(TimeIt(128000, OrderedInsert));
                    string orderedTimesAsString = string.Join("\n", orderedTimes.Select(x => x.ToString()).ToArray());

                    Console.WriteLine("Done");
                }

                static double TimeIt(int insertCount, Action<int> f)
                {
                    Console.WriteLine("TimeIt({0}, {1})", insertCount, f.Method.Name);
                    List<double> times = new List<double>();
                    for (int i = 0; i < ITERATION_COUNT; i++)
                    {
                        Stopwatch sw = Stopwatch.StartNew();
                        f(insertCount);
                        sw.Stop();
                        times.Add(sw.Elapsed.TotalMilliseconds);
                    }
                    return times.Average();
                }

                static void RandomInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(rnd.NextDouble());
                    }
                }

                static void OrderedInsert(int insertCount)
                {
                    Treap<double> tree = new Treap<double>((x, y) => x.CompareTo(y));
                    for (int i = 0; i < insertCount; i++)
                    {
                        tree = tree.Insert(i + rnd.NextDouble());
                    }
                }
            }
        }

    And here's a chart comparing random and ordered insertion times in milliseconds:

        Insertions    Random        Ordered       RandomTime / OrderedTime
        50            1.031665      0.261585      3.94
        100           0.544345      1.377155      0.4
        200           1.268320      0.734570      1.73
        400           2.765555      1.639150      1.69
        800           6.089700      3.558350      1.71
        1000          7.855150      4.704190      1.67
        2000          17.852000     12.554065     1.42
        4000          40.157340     22.474445     1.79
        8000          88.375430     48.364265     1.83
        16000         197.524000    109.082200    1.81
        32000         459.277050    238.154405    1.93
        64000         1055.508875   512.020310    2.06
        128000        2481.694230   1107.980425   2.24

    I don't see anything in the code which makes ordered input asymptotically faster than unordered input, so I'm at a loss to explain the difference. Why is it so much faster to build a treap from ordered input than random input?

    Read the article

  • InvalidOperationException When XML Serializing Inherited Class

    - by Nick
    I am having an issue serializing a C# class to an XML file when it has a base class. Here is a simple example:

        namespace Domain
        {
            [Serializable]
            public class ClassA
            {
                public virtual int MyProperty { get; set; }
            }
        }

        namespace Derived
        {
            public class ClassA : Domain.ClassA
            {
                public override int MyProperty
                {
                    get { return 1; }
                    set { /* Do Nothing */ }
                }
            }
        }

    When I attempt to serialize an instance of Derived.ClassA, I receive the following exception:

        InvalidOperationException: Types 'Domain.ClassA' and 'Derived.ClassA' both use the XML type name 'ClassA', from namespace ''. Use XML attributes to specify a unique XML name and/or namespace for the type.

    The problem is that I want to create a single base class that simply defines the structure of the XML file, and then allow anyone else to derive from that class to insert business rules, while the formatting still comes through from the base. Is this possible, and if so, how do I attribute the base class to allow this?

    Read the article

  • Visual Studio 2008 Automatic line breaks in comments

    - by Pete Michaud
    When I write a comment, it's often a paragraph or a few lines that explains clearly what a bit of code is doing and why it's doing that. What I'd like is to start a comment and have the editor automatically insert a line break and continue the comment on the next line when I reach, say, 80 characters. So I'd type:

        // Lorem ipsum dolor sit amet, consectetur adipiscing elit. <- here the editor breaks automatically and continues onto the next line:
        // Etiam congue quam eget leo dignissim tincidunt.

    Read the article

  • MySQL - getting SUM of MAX results from 2 tables

    - by SODA
    Hi, here's my problem: I have two identical tables (past month data, current month data), data_2010_03 and data_2010_04, with the columns:

        Content_type (VARCHAR), content_id (INT), month_count (INT), pubDate (DATETIME)

    Data in month_count is updated hourly, so for each combination of content_type and content_id we insert a new row, where the value of month_count is incrementally updated. Now I try something like this:

        SELECT MAX(t1.month_count) AS max_1,
               MAX(t2.month_count) AS max_2,
               SUM(max_1 + max_2) AS result,
               t1.content_type, t1.content_id
        FROM data_2010_03 AS t1
        JOIN data_2010_04 AS t2
          ON t1.content_type = t2.content_type
         AND t1.content_id = t2.content_id
        WHERE t2.pubDate < '2010-04-08'
          AND t1.content_type = 'video'
        GROUP BY t1.content_id
        ORDER BY result DESC, max_1 DESC, max_2 DESC
        LIMIT 0,10

    I get the error "Unknown column 'max_1' in 'field list'". Please help.

    Read the article

  • Function that copies into byte vector reverses values

    - by xeross
    Hey, I've written a function to copy any variable type into a byte vector; however, whenever I insert something it gets inserted in reverse. Here's the code:

        template <class Type>
        void Packet::copyToByte(Type input, vector<uint8_t> &output)
        {
            copy((uint8_t*) &input, ((uint8_t*) &input) + sizeof(Type), back_inserter(output));
        }

    Now whenever I add, for example, a uint16_t with the value 0x2f1f, it gets inserted as 1f 2f instead of the expected 2f 1f. What am I doing wrong here? Regards, Xeross
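    A note on what's happening here: this is the little-endian byte order of x86 rather than a bug in the copy itself -- the low-order byte of a uint16_t lives at the lower address, so a byte-wise copy yields 1f 2f. A quick way to see the same effect (a Python illustration of the byte order only, not a fix for the C++ code):

        import struct

        value = 0x2f1f
        in_memory = struct.pack("<H", value)   # little-endian: how x86 lays the value out in memory
        as_written = struct.pack(">H", value)  # big-endian: the order the hex literal reads in

        print(in_memory.hex(" "))   # 1f 2f  -> what the byte-wise copy produces
        print(as_written.hex(" "))  # 2f 1f  -> the "expected" order

    If the destination format really needs big-endian bytes, the usual C++-side fix is to convert first (e.g. htons/htonl) or to write the bytes out explicitly in the order you want.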

    Read the article

  • Mixing Transaction Script pattern with DDD/CQRS

    - by Herman
    Hi all, here is the situation: in order to support our legacy system, we need to insert a row into a table whenever a user logs in. This is basically a CRUD operation, so it doesn't really make sense to create a repository/entity/command/event for it, since it doesn't tie to any business rules at all. The only benefit of creating a CQRS command is that the database write can happen asynchronously under that model. Which is the better route to take?

        - Use CQRS, and call a stored proc when handling that command?
        - Just call the database directly in the controller (I am using ASP.NET MVC)?

    Read the article

  • Can/Should you throw exceptions in a c# switch statement?

    - by Kettenbach
    Hi all, I have an insert query that returns an int. Based on that int I may wish to throw an exception. Is this appropriate to do within a switch statement?

        switch (result)
        {
            case D_USER_NOT_FOUND:
                throw new ClientException(string.Format("D User Name: {0} , was not found.", dTbx.Text));
            case C_USER_NOT_FOUND:
                throw new ClientException(string.Format("C User Name: {0} , was not found.", cTbx.Text));
            case D_USER_ALREADY_MAPPED:
                throw new ClientException(string.Format("D User Name: {0} , is already mapped.", dTbx.Text));
            case C_USER_ALREADY_MAPPED:
                throw new ClientException(string.Format("C User Name: {0} , is already mapped.", cTbx.Text));
            default:
                break;
        }

    I normally add break statements to switches, but here they would never be hit. Is this a bad design? Please share any opinions/suggestions with me. Thanks, ~ck in San Diego

    Read the article

  • SQL Server, Remote Stored Procedure, and DTC Transactions

    - by marc
    Our organization has a lot of its essential data in a mainframe Adabas database. We have ODBC access to this data and from C# have queried/updated it successfully using ODBC/Natural "stored procedures". What we'd like to be able to do now is to query a mainframe table from within SQL Server 2005 stored procs, dump the results into a table variable, massage it, and join the result with native SQL data as a result set. The execution of the Natural proc from SQL works fine when we're just selecting it; however, when we insert the result into a table variable SQL seems to be starting a distributed transaction that in turn seems to be wreaking havoc with our connections. Given that we're not performing updates, is it possible to turn off this DTC-escalation behavior? Any tips on getting DTC set up properly to talk to DataDirect's (formerly Neon Systems) Shadow ODBC driver?

    Read the article

  • LINQ to SQL: DataContext.SubmitChanges not updating immediately

    - by aximili
    I have a funny problem: DataContext.SubmitChanges() updates Count() in one way but not in the other; see my comment in the code below (DC is the DataContext).

        var compliances = c.DataCompliances.Where(x => x.ComplianceCriteria.FKElement == e.Id);

        if (compliances.Count() == 0) // Insert if not exists
        {
            DC.DataCompliances.InsertOnSubmit(new DataCompliance
            {
                FKCompany = c.Id,
                FKComplianceCriteria = criteria.Id
            });
            DC.SubmitChanges();

            compliances = c.DataCompliances.Where(x => x.ComplianceCriteria.FKElement == e.Id);

            // At this point DC.DataCompliances.Count() has increased,
            // but compliances.Count() is still 0.
            // When I refresh the page, however, it will be 1.
        }

    Why does that happen? I need to update compliances after inserting one. Does anyone have a solution?

    Read the article

  • firefox open local link to directory with explorer

    - by raffael
    On a website for our internal use I show links to local files and folders. The links look like this:

        href="file://C:/example/"
        href="file://C:/example/test.odt"

    The problem is that the link to the directory opens in Firefox itself as a useless directory listing: useless because you can only see the files or open them, but not copy, insert, delete... The link to the file works normally and the file is opened by OpenOffice. By changing the Firefox configuration and setting the following key to false, I can open the directory with explorer.exe, but then for files I have to choose the right application myself:

        network.protocol-handler.expose.file

    Does someone know a way to get both to work like I want, i.e. the directory is shown by explorer.exe and all files are opened by the right application? This could be done by configuring Firefox or Windows, changing the links, or even by writing a small program which handles the file protocol correctly and is registered as the protocol handler for it in Firefox. Thanks, Raffael

    Read the article

  • PHP: Infinite loop and time limit!

    - by Jonathan
    Hi, I have a piece of code that fetches data by giving it an ID. If I give it an ID of 1230, for example, the code fetches the article data with an ID of 1230 from an external web site and inserts it into a DB. Now, the problem is that I need to fetch all the articles, let's say from ID 00001 to 99999. If I do a 'for' loop, after 60 seconds PHP's internal time limit stops the loop. If I use some kind of header("Location: code.php?id=00001") or header("Location: code.php?id=".$ID), increment $ID and then redirect to the same page, the browser stops me because of its redirect-loop protection. Please HELP!

    Read the article

  • How do I get the position of a result in the list after an order_by?

    - by Bob Bob
    I'm trying to find an efficient way to find the rank of an object in the database relative to its score. My naive solution looks like this:

        rank = 0
        for q in Model.objects.all().order_by('score'):
            if q.name == 'searching_for_this':
                return rank
            rank += 1

    It should be possible to get the database to do the filtering, using order_by:

        Model.objects.all().order_by('score').filter(name='searching_for_this')

    But there doesn't seem to be a way to retrieve the index for the order_by step after the filter. Is there a better way to do this (using Python/Django and/or raw SQL)? My next thought is to pre-compute ranks on insert, but that seems messy.
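    One way to make the database do the work (a sketch added for illustration, not from the post; it assumes ascending score order and that ties are either acceptable or broken by some extra rule) is to count how many rows sort strictly before the row you care about:

        def rank_of(name):
            # Rank = number of objects with a smaller score (0-based, ascending order).
            obj = Model.objects.get(name=name)
            return Model.objects.filter(score__lt=obj.score).count()

    That is two small queries instead of iterating the whole ordered table, and with an index on score the COUNT stays cheap.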

    Read the article

  • RoR function help

    - by Aviatrix
    Can someone write a function for me in RoR? I simply don't have the time to study Ruby and RoR for just one-time use. The function should do the following things:

        1) have an array of variables
        2) for each variable in the array, execute 4-5 other functions, get the results, and insert them into another table in the same DB

    The target table looks like this:

        table name: refined
            CityName  varchar
            Subdomain varchar   (the variable from the array)
            Nearby    text
            State     varchar
            ZipCodes  text
            AreaCodes text

    Some of the functions return arrays. I will really appreciate the help! Thanks in advance.

    Read the article

  • Best data-structure to use for two ended sorted list

    - by fmark
    I need a collection data structure that can do the following:

        - be sorted
        - allow me to quickly pop values off the front and back of the list
        - remain sorted after I insert a new value
        - allow a user-specified comparison function, as I will be storing tuples and want to sort on a particular value
        - thread-safety is not required
        - optionally allow efficient haskey() lookups (I'm happy to maintain a separate hash table for this, though)

    My thoughts at this stage are that I need a priority queue and a hash table, although I don't know if I can quickly pop values off both ends of a priority queue. I'm interested in performance for a moderate number of items (I would estimate fewer than 200,000). Another possibility is simply maintaining an OrderedDictionary and doing an insertion sort every time I add more data to it. Furthermore, are there any particular implementations of this in Python? I would really like to avoid writing this code myself.
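    A minimal sketch of one approach (my own illustration, not from the post): keep a plain list sorted with bisect, ordered by a user-supplied key, and pop from either end. Inserts and front-pops are O(n) because of the list shifts, which is usually tolerable around the 200,000-item mark; for anything heavier, a ready-made package such as sortedcontainers' SortedList is the usual answer.

        import bisect
        import itertools

        class SortedDeque:
            """A list kept sorted by key(value), supporting pops from both ends."""

            def __init__(self, key=lambda v: v):
                self._key = key
                self._seq = itertools.count()   # tie-breaker so stored values never need to be comparable
                self._items = []                # sorted list of (key, seq, value) triples

            def insert(self, value):
                bisect.insort(self._items, (self._key(value), next(self._seq), value))

            def pop_front(self):
                return self._items.pop(0)[-1]

            def pop_back(self):
                return self._items.pop()[-1]

            def __len__(self):
                return len(self._items)

        # Usage: sort tuples on their second field.
        q = SortedDeque(key=lambda t: t[1])
        q.insert(("a", 5)); q.insert(("b", 1)); q.insert(("c", 9))
        assert q.pop_front() == ("b", 1) and q.pop_back() == ("c", 9)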

    Read the article

  • Vim: Pasting from clipboard and automatically toggling :set paste

    - by Jonatan Littke
    Hey. When I paste things from the clipboard, they're normally (always) multi-line, and in those cases (and those cases only) I'd like :set paste to be triggered, since otherwise the indentation increases with each line (you've all seen it!). The problem with :set paste, though, is that it doesn't behave well with set smartindent, causing the cursor to jump to the beginning of a new line instead of to the correct indent. So I'd like to enable it for this case only. I'm on a Mac, sshing to a Debian machine and using vim there, and thus pasting in insert mode with Cmd-V. Cheers.

    Read the article

  • jquery document ready with Google

    - by cf_PhillipSenn
    This is how I load jQuery:

        <script src="http://www.google.com/jsapi"></script>
        <script type="text/javascript">
            function OnLoad() {
                // insert jQuery goodness here
            };
            google.load("jquery", "1");
            google.setOnLoadCallback(OnLoad);
        </script>

    But instead of function OnLoad() { ... }, I'd like to use $(document).ready(function() { ... }); so that it looks like every example in every book and documentation snippet. How can I define $ = jQuery?

    Read the article

  • Python and Plone help

    - by Grenko
    I'm using the Plone CMS and am having trouble with a Python script. I get a NameError: "global name 'open' is not defined". When I put the code in a separate Python script it works fine, and the information is being passed to the script, because I can print the query. The code is below:

        # Import a standard function, and get the HTML request and response objects.
        from Products.PythonScripts.standard import html_quote
        request = container.REQUEST
        RESPONSE = request.RESPONSE

        # Insert data that was passed from the form
        query = request.query
        # print query
        f = open("blast_query.txt", "w")
        for i in query:
            f.write(i)
        return printed

    I also have a second question: can I tell Python to open a file in a certain directory? For example, if the script is in a certain location (e.g. the home folder) but I want it to open a file at home/some_directory/some_directory, can it be done?
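    Some context (a hedged note, not part of the original post): Script (Python) objects in Zope/Plone run under RestrictedPython, which does not expose builtins such as open -- that is why the identical code works as a stand-alone script. The usual workaround is to do the file I/O in trusted filesystem code, such as an External Method or a browser view, and call that from the site. A sketch, with every name and path made up for illustration:

        # e.g. Extensions/blast.py, registered in the site as an External Method
        import os

        BASE_DIR = os.path.expanduser("~/some_directory/some_directory")  # absolute base, answering question two

        def write_query(query):
            path = os.path.join(BASE_DIR, "blast_query.txt")
            with open(path, "w") as f:      # full builtins are available in filesystem code
                for chunk in query:
                    f.write(chunk)
            return path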

    Read the article

  • SQL SERVER 2008 Dynamic query problem

    - by priyanka.sarkar
    I have a dynamic query which reads like this:

        ALTER PROCEDURE dbo.mySP
            -- Add the parameters for the stored procedure here
            (
                @DBName varchar(50),
                @tblName varchar(50)
            )
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for procedure here
            DECLARE @string AS varchar(50)
            DECLARE @string1 AS varchar(50)
            SET @string1 = '[' + @DBName + ']' + '.[dbo].' + '[' + @tblName + ']'
            SET @string = 'select * from ' + @string1
            EXEC @string
        END

    I am calling it like this:

        dbo.mySP 'dbtest1','tblTest'

    And I am experiencing this error:

        Msg 203, Level 16, State 2, Procedure mySP, Line 27
        The name 'select * from [dbtest1].[dbo].[tblTest]' is not a valid identifier.

    What is wrong, and how do I overcome it? Thanks in advance.

    Read the article

  • Replication - synchronizing most of the data some of the time

    - by uncle brad
    I have some data that isn't properly "partitioned" (for lack of a better word). All inserts, processing and reporting happen on the same table. The bulk of the processing happens not long after the insert and not long after that it becomes immutable (we're talking days). I could do all inserts and processing on a new table that I replicate to the old table. When I detect that the data has become immutable I would delete the data from the new table, but I would edit the delete replication stored procedure so that the delete did not replicate. How bad an idea is this? It seems attractive at the moment (I haven't slept on it yet) because it might mitigate a performance problem with only very small changes to the application. It also seems like it might be a good way to shoot myself in the foot.

    Read the article
