Search Results

Search found 16643 results on 666 pages for 'stackoverflow answer'.

Page 498/666 | < Previous Page | 494 495 496 497 498 499 500 501 502 503 504 505  | Next Page >

  • Java Random Slowdowns on Mac OS cont'd

    - by javajustice
    I asked this question a few weeks ago, but I'm still having the problem and I have some new hints. The original question is here: http://stackoverflow.com/questions/1651887/java-random-slowdowns-on-mac-os

    Basically, I have a Java application that splits a job into independent pieces and runs them in separate threads. The threads have no synchronization or shared memory items. The only resources they do share are data files on the hard disk, with each thread having its own open file channel.

    Most of the time it runs very fast, but occasionally it will run very slowly for no apparent reason. If I attach a CPU profiler to it, then it will start running quickly again. If I take a CPU snapshot, it says it's spending most of its time in "self time" in a function that doesn't do anything except check a few (unshared, unsynchronized) booleans. I don't know how this could be accurate because (1) it makes no sense, and (2) attaching the profiler seems to knock the threads out of whatever mode they're in and fix the problem.

    Also, regardless of whether it runs fast or slow, it always finishes and gives the same output, and it never dips in total CPU usage (in this case ~1500%), implying that the threads aren't getting blocked. I have tried different garbage collectors, different sizes for the parts of the memory space, writing data output to non-RAID drives, and putting all data output in threads separate from the main worker threads.

    Does anyone have any idea what kind of problem this could be? Could it be the operating system (OS X 10.6.2)? I have not been able to duplicate it on a Windows machine, but I don't have one with a similar hardware configuration.
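    For context, a minimal sketch of the threading arrangement described above: independent workers, each owning its own file channel, with no locks or shared state. The class name, pool size, buffer size, and file names are illustrative assumptions, not the original code.

        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class IndependentWorkers {
            public static void main(String[] args) throws Exception {
                int pieces = 16; // one independent piece of the job per thread (assumed count)
                ExecutorService pool = Executors.newFixedThreadPool(pieces);
                for (int i = 0; i < pieces; i++) {
                    final int piece = i;
                    pool.submit(() -> {
                        // each worker owns its own channel; no shared mutable state
                        try (FileChannel ch = new RandomAccessFile("data-" + piece + ".bin", "r").getChannel()) {
                            ByteBuffer buf = ByteBuffer.allocate(8192);
                            while (ch.read(buf) != -1) {
                                buf.flip();
                                // ... process the buffer, updating thread-local state only ...
                                buf.clear();
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                }
                pool.shutdown();
            }
        }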

    Read the article

  • How to customize the process employed by WCF when serializing contract method arguments?

    - by mark
    Dear ladies and sirs. I would like to formulate a contrived scenario which nevertheless has a firm actual basis.

    Imagine a collection type COuter, which is a wrapper around an instance of another collection type CInner. Both implement IList (never mind the T). Furthermore, a COuter instance is buried inside some object graph, the root of which (let us refer to it as R) is returned from a WCF service method.

    My question is: how can I customize the WCF serialization process so that when R is returned, the request to serialize the COuter instance is routed through my code, which will extract CInner and pass it to the serializer instead? Thus the receiving end still gets R, only no COuter instance is found in the object graph.

    I hoped that http://stackoverflow.com/questions/2220516/how-does-wcf-serialize-the-method-call would contain the answer; unfortunately, the article mentioned there (http://msdn.microsoft.com/en-us/magazine/cc163569.aspx) only barely mentions that advanced serialization scenarios are possible using the IDataContractSurrogate interface, and no details are given. I, on the other hand, would really like to see a working example. Thank you very much in advance.

    Read the article

  • ASP.NET AsyncPostBackTrigger disables button's OnClick function???

    - by hahuang65
    I saw another post like this: http://stackoverflow.com/questions/1795621/asyncpostbacktrigger-disables-buttons But I don't really know what to make of it, and the accepted answer was poorly typed.

    Basically, I have a button with an OnClick function. I also have an UpdatePanel with its AsyncPostBackTrigger set to that same button. It seems that if I do this, my button no longer does ANYTHING. I guess it's because I have a Page_Load() event... Can anyone explain why this is? And how should I set up my webpage if I can't use the Page_Load() function? Oh, and if I put my button in an UpdatePanel, it also won't do anything, probably for the same reason.

    This is a section of my code:

        <asp:FileUpload id="FileUploadControl" runat="server" />
        <asp:Button runat="server" id="UploadButton" text="Upload" OnClick="uploadClicked" />
        <br /><br />
        <asp:Label runat="server" id="StatusLabel" text="Upload status: " />
        <br />
        <asp:UpdatePanel ID="UpdatePanel2" runat="server" updatemode="Conditional" >
            <ContentTemplate>
                <asp:DropDownList ID="songList" runat="server" />
            </ContentTemplate>
            <Triggers>
                <asp:AsyncPostBackTrigger ControlID="RefreshButton" />
            </Triggers>
        </asp:UpdatePanel>

    Thanks in advance.

    Read the article

  • When to choose which machine learning classifier?

    - by LM
    Suppose I'm working on some classification problem. (Fraud detection and comment spam are two problems I'm working on right now, but I'm curious about any classification task in general.) How do I know which classifier I should use? (Decision tree, SVM, Bayesian, logistic regression, etc.) In which cases is one of them the "natural" first choice, and what are the principles for choosing that one?

    Examples of the type of answers I'm looking for (from Manning et al.'s "Introduction to Information Retrieval" book: http://nlp.stanford.edu/IR-book/html/htmledition/choosing-what-kind-of-classifier-to-use-1.html):

    a. If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes). [I'm guessing this is because a higher-bias classifier will have lower variance, which is good because of the small amount of data.]

    b. If you have a ton of data, then the classifier doesn't really matter so much, so you should probably just choose a classifier with good scalability.

    What are other guidelines? Even answers like "if you'll have to explain your model to some upper-management person, then maybe you should use a decision tree, since the decision rules are fairly transparent" are good. I care less about implementation/library issues, though.

    Also, for a somewhat separate question: besides standard Bayesian classifiers, are there "standard state-of-the-art" methods for comment spam detection (as opposed to email spam)? [Not sure if stackoverflow is the best place to ask this question, since it's more machine learning than actual programming -- if not, any suggestions for where else?]

    Read the article

  • Why can't I pass a Marshaled interface as an integer (or pointer)?

    - by cemick
    I passed a ref to an interface from a Visio add-in to MyCOMServer (http://stackoverflow.com/questions/2455183/interface-marshalling-in-delphi). I have to pass the interface as a pointer in the internal methods of MyCOMServer. I try to pass the interface to an internal method as a pointer to the interface, but after casting back, when I try to call a method of the interface I get an exception.

    A simple example (the first block executes without error, but at the second block I get an exception after accessing a property of the IVApplication interface):

        procedure TMyCOMServer.test(const Interface_: IDispatch); stdcall;
        var
          IMy: _IMyInterface;
          V: Variant;
          Str: String;
          I: integer;
          Vis: IVApplication;
        begin
          ......
          Self.QuaryInterface(_IMyInterface, IMy);
          str := IA.ApplicationName;
          V := Integer(IMy);
          i := V;
          Pointer(IMy) := Pointer(i);
          str := IMy.SomeProperty;  // normal completion

          str := (Interface_ as IVApplication).Path;
          V := Interface_;
          I := V;
          Pointer(Vis) := Pointer(i);
          str := Vis.Path;  // 'access violation at 0x76358e29: read of address 0xfeeefeee'
        end;

    Why can't I do it like this?

    Read the article

  • Floating Point Arithmetic - Modulo Operator on Double Type

    - by CrimsonX
    So I'm trying to figure out why the modulo operator is returning such a large, unusual value. If I have the code:

        double result = 1.0d % 0.1d;

    it will give a result of 0.09999999999999995, whereas I would expect a value of 0. Note that this problem doesn't exist using the division operator:

        double result = 1.0d / 0.1d;

    gives a result of 10.0, meaning that the remainder should be 0.

    Let me be clear: I'm not surprised that an error exists; I'm surprised that the error is so darn large compared to the numbers at play. 0.0999 ~= 0.1, and 0.1 is on the same order of magnitude as 0.1d and only one order of magnitude away from 1.0d. It's not like you can compare it to a double.Epsilon, or say "it's equal if the difference is < 0.00001".

    I've read up on this topic on StackOverflow, in the following posts: one, two, three, amongst others. Can anyone explain why this error is so large? And are there any suggestions for avoiding this problem in the future? (I know I could use decimal instead, but I'm concerned about the performance of that.)
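    For illustration, here is where the 0.09999999999999995 comes from. The snippet below is Java rather than C# (an assumption of this sketch; Java defines % on doubles as the remainder of a truncated division in the same way, so the same numbers appear), and the class name is made up. The stored value of the literal 0.1 is slightly more than one tenth, so the truncated quotient of 1.0 / 0.1 is 9, not 10, and the remainder is nearly a whole divisor.

        import java.math.BigDecimal;

        public class RemainderDemo {
            public static void main(String[] args) {
                // The exact binary value stored for the literal 0.1 is slightly MORE than 1/10:
                System.out.println(new BigDecimal(0.1));  // 0.1000000000000000055511151231257827...

                System.out.println(1.0 % 0.1);  // 0.09999999999999995
                System.out.println(1.0 / 0.1);  // 10.0 (the true quotient 9.9999... rounds up)

                // The remainder is 1.0 minus 9 whole divisors, computed exactly:
                BigDecimal nineDivisors = new BigDecimal(0.1).multiply(BigDecimal.valueOf(9));
                System.out.println(BigDecimal.ONE.subtract(nineDivisors));  // 0.0999999999999999500399...
            }
        }

    The division operator hides the issue only because its true quotient (9.9999...) happens to round up to exactly 10.0, while the remainder operation exposes the truncated quotient of 9.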

    Read the article

  • Unit testing that an event is raised in C#, using reflection

    - by Thomas
    I want to test that setting a certain property (or more generally, executing some code) raises a certain event on my object. In that respect my problem is similar to http://stackoverflow.com/questions/248989/unit-testing-that-an-event-is-raised-in-c, but I need a lot of these tests and I hate boilerplate. So I'm looking for a more general solution, using reflection. Ideally, I would like to do something like this:

        [TestMethod]
        public void TestWidth()
        {
            MyClass myObject = new MyClass();
            AssertRaisesEvent(() => { myObject.Width = 42; }, myObject, "WidthChanged");
        }

    For the implementation of the AssertRaisesEvent, I've come this far:

        private void AssertRaisesEvent(Action action, object obj, string eventName)
        {
            EventInfo eventInfo = obj.GetType().GetEvent(eventName);
            int raisedCount = 0;
            Action incrementer = () => { ++raisedCount; };
            Delegate handler = /* what goes here? */;
            eventInfo.AddEventHandler(obj, handler);
            action.Invoke();
            eventInfo.RemoveEventHandler(obj, handler);
            Assert.AreEqual(1, raisedCount);
        }

    As you can see, my problem lies in creating a Delegate of the appropriate type for this event. The delegate should do nothing except invoke incrementer. Because of all the syntactic syrup in C#, my notion of how delegates and events really work is a bit hazy. How to do this?

    Read the article

  • How do I search a NTEXT column for XML attributes and update the values? MS SQL 2005

    - by Alan
    Duplicate: this exact question was asked by the same author in http://stackoverflow.com/questions/1221583/how-do-i-update-a-xml-string-in-an-ntext-column-in-sql-server. Please close this one and answer in the original question.

    I have a SQL table with 2 columns, ID (int) and Value (ntext). The value rows have all sorts of XML strings in them.

        ID  Value
        --  ------------------
        1   <ROOT><Type current="TypeA"/></ROOT>
        2   <XML><Name current="MyName"/><XML>
        3   <TYPE><Colour current="Yellow"/><TYPE>
        4   <TYPE><Colour current="Yellow" Size="Large"/><TYPE>
        5   <TYPE><Colour current="Blue" Size="Large"/><TYPE>
        6   <XML><Name current="Yellow"/><XML>

    How do I:

    A. List the rows where <TYPE><Colour current="Yellow", bearing in mind that there is an entry <XML><Name current="Yellow"/><XML>

    B. Modify the rows that contain <TYPE><Colour current="Yellow" to be <TYPE><Colour current="Purple"

    Thanks for your help!

    Read the article

  • Predicate crashing iPhone App!

    - by DVG
    To preface, this is a follow-up to an inquiry made a few days ago: http://stackoverflow.com/questions/2981803/iphone-app-crashes-when-merging-managed-object-contexts

    Short version: EXC_BAD_ACCESS is crashing my app, and zombie mode revealed the culprit to be my predicate embedded within the fetch request embedded in my Fetched Results Controller. How does an object within an object get released without an explicit command to do so?

    Long version: the application structure is Platforms View Controller - Games View Controller (predicated upon platform selection) - Add Game View Controller. When a row gets clicked on the Platforms view, it sets an instance variable in Games View for that platform, then the Games Fetched Results Controller builds a fetch request in the normal way:

        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController != nil) {
                return fetchedResultsController;
            }

            // build the fetch request for Games
            NSFetchRequest *request = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Game" inManagedObjectContext:context];
            [request setEntity:entity];

            // predicate
            NSPredicate *predicate = [NSPredicate predicateWithFormat:@"platform == %@", selectedPlatform];
            [request setPredicate:predicate];

            // sort based on name
            NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
            NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
            [request setSortDescriptors:sortDescriptors];

            // fetch and build fetched results controller
            NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:request managedObjectContext:context sectionNameKeyPath:nil cacheName:@"Root"];
            aFetchedResultsController.delegate = self;
            self.fetchedResultsController = aFetchedResultsController;

            [sortDescriptor release];
            [sortDescriptors release];
            [predicate release];
            [request release];
            [aFetchedResultsController release];

            return fetchedResultsController;
        }

    At the end of this method, the fetchedResultsController's _fetch_request - _predicate member is set to an NSComparisonPredicate object. All is well in the world. By the time - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section gets called, the _predicate is a zombie, which will eventually crash the application when the table attempts to update itself.

    I'm more or less flummoxed. I'm not releasing the fetched results controller or any of its parts, and the only part getting dealloc'd is the predicate. Any ideas?

    Read the article

  • Doctrine Default Primary Key Problem (Again)

    - by 01010011
    Hi,

    Should I change all of my uniquely-named MySQL database primary keys to 'id' to avoid getting errors related to Doctrine's default primary key set in the plugin 'doctrine_pi.php'?

    To further elaborate, I am getting the following recurring error, this time after trying to log in to my login page:

        SQLSTATE[42S22]: Column not found: 1054 Unknown column 'u.book_id' in 'field list'' in...

    I suspect the problem resides in a MySQL table used for my login, which has a primary key called id. Marc B originally solved an identical problem for me in this post http://stackoverflow.com/questions/2702229/doctrine-codeigniter-mysql-crud-errors when I had the same problem with a different table within the same database. Following his suggestion, I changed the default primary key located at system/application/plugins/doctrine_pi.php from 'id' to 'book_id':

        <?php
        // system/application/plugins/doctrine_pi.php
        ...
        // set the default primary key to be named 'id', integer, 4 bytes
        Doctrine_Manager::getInstance()->setAttribute(
            Doctrine::ATTR_DEFAULT_IDENTIFIER_OPTIONS,
            array('name' => 'book_id', 'type' => 'integer', 'length' => 4));

    and that solved my previous problem. However, my login page stopped working. So what is the safe thing to do? Change all of my primary keys to 'id' (and will that solve the problem without causing some other problem I am not aware of)? Or should I add some lines of code in doctrine_pi.php?

    Read the article

  • Java: Clearing up the confusion on what causes a connection reset

    - by Zombies
    There seems to be some confusion, as well as contradicting statements, in various SO answers: http://stackoverflow.com/questions/585599/whats-causing-my-java-net-socketexception-connection-reset. You can see here that the accepted answer states that the connection was closed by the other side. But this is not true: closing a connection doesn't cause a connection reset. It is caused by "an underlying TCP/IP error."

    What I want to know is what a SocketException: Connection reset really means, besides "underlying TCP/IP error", and what really causes it. I doubt it has anything to do with the connection being closed, since closing a connection isn't an exception-worthy event, while reading from a closed connection is, and that isn't an "underlying TCP/IP error."

    My hypothesis is this: Connection reset is caused by a server's failure to acknowledge an ACK packet (either wholly or just improperly, as per TCP/IP), and a SocketTimeoutException is generated only when no data is generated to be read (since this is thrown during a read after a certain duration, and read is waiting for data, but is not concerned with ACK packets). In other words, read() throws SocketTimeoutException if it didn't read any bytes of actual data (DATA LAYER) in its allotted time.
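    To illustrate the distinction being asked about, here is a sketch (not from the question; the class name and timings are assumptions, and exact behaviour depends on the OS TCP stack) of one well-known way a peer provokes "Connection reset": an abortive close. Setting SO_LINGER with a zero timeout makes close() send an RST instead of a normal FIN, so the other side's next read() typically throws SocketException instead of returning the orderly end-of-stream value -1 it would see after a clean close.

        import java.net.ServerSocket;
        import java.net.Socket;
        import java.net.SocketException;

        public class ConnectionResetSketch {
            public static void main(String[] args) throws Exception {
                ServerSocket server = new ServerSocket(0);   // any free port
                int port = server.getLocalPort();

                Thread peer = new Thread(() -> {
                    try (Socket accepted = server.accept()) {
                        // linger = 0 turns close() into an abortive close (RST, not FIN)
                        accepted.setSoLinger(true, 0);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
                peer.start();

                try (Socket client = new Socket("localhost", port)) {
                    client.getOutputStream().write(42);  // data the peer never reads
                    Thread.sleep(500);                   // give the abortive close time to arrive
                    int b = client.getInputStream().read();
                    System.out.println("Orderly close would have returned -1, got: " + b);
                } catch (SocketException e) {
                    System.out.println("Read failed with: " + e.getMessage());  // typically "Connection reset"
                }
                server.close();
            }
        }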

    Read the article

  • How to deploy a number of disparate project types?

    - by niteice
    This question is similar to http://stackoverflow.com/questions/1900269/whats-the-best-way-to-deploy-an-executable-process-on-a-web-server.

    The situation is this: I'm developing a product that needs to be deployed to a web server. It consists of 4 website projects, a background service, a couple of command-line tools, and two assemblies shared by all of these components. Now, I also happen to administer the server that this product will be deployed on. So I'm familiar with everything that may need to be done to perform an update:

    • Copy website files
    • Replace the service binary
    • Install updated components in the GAC
    • Configure IIS
    • Update database schema

    After some research it seems that, to reduce deployment time and to be able to let the other sysadmins handle deployment, I want to deploy all of these as an MSI, except that I don't know a thing about installers. I know VS can generate web deployment projects, but where do I go from there? Being able to simply click Next a few times on an installer is my goal for deploying updates.

    It would also be nice to modularize it, so for example, I could distribute the four websites among multiple servers and have everything appear as individual components in the installer, and as one entity in Add/Remove Programs. Is all of this too much to ask in a single package?

    Read the article

  • C#/.NET library for source code formatting, like the one used by Stack Overflow?

    - by Lasse V. Karlsen
    I am writing a command-line tool to convert Markdown text to HTML output, which seems easy enough. However, I am wondering how to get nice syntax coloring for embedded code blocks, like the one used by Stack Overflow. Does anyone know either what library StackOverflow is using, or if there's a library out there that I can easily reuse?

    Basically it would need to have some of the same "intelligence" found in the one that Stack Overflow uses, by doing a best attempt at figuring out the language in use to pick the right keywords. Basically, what I want is for my own program to handle a block like the following:

        if (a == 0) return true;
        if (a == 1) return false;
        // fall-back

    Markdown Sharp, the library I'm using, by default outputs the above as a simple pre/code HTML block, with no syntax coloring. I'd like the same type of handling as the formatting on Stack Overflow: the above contains blue "return" keywords, for example.

    Or, hmm, after checking the source of this Stack Overflow page after adding the code example, I notice that it too is formatted as a simple pre/code block. Is it pure javascript-magic at work here, so perhaps there's no such library?

    If there's no library that will automagically determine a possible language by the keywords used, is there one that would work if I explicitly told it the language? Since this is "my" markdown-commandline-tool, I can easily add syntax if I need to.

    Read the article

  • NSFetchedResultsController sections localized sorted

    - by Gerd
    How could I use the NSFetchedResultsController with a translated sort key and sectionKeyPath?

    Problem: I have IDs in the property "type" in the database, like typeA, typeB, typeC, ... and not the value directly, because it should be localized. In English typeA=Bird, typeB=Cat, typeC=Dog; in German it would be Vogel, Katze, Hund. With an NSFetchedResultsController with sort key and sectionKeyPath on "type" I receive the order and sections:

        - typeA
        - typeB
        - typeC

    Next I translate for display and everything is fine in English:

        - Bird
        - Cat
        - Dog

    Now I switch to German and receive a wrong sort order:

        - Vogel
        - Katze
        - Hund

    because it still sorts by typeA, typeB, typeC. So I'm looking for a way to localize the sort for the NSFetchedResultsController. I tried the transient property approach, but this doesn't work for the sort key because the sort key needs to be in the entity. I have no other idea, but I can't believe it's not possible to use NSFetchedResultsController on a derived attribute required for localization.

    There are related discussions like http://stackoverflow.com/questions/1384345/using-custom-sections-with-nsfetchedresultscontroller but the difference is that the custom section names and the sort key probably have the same order. Not in my case, and this is the main difference. At the end I would need a sort order for the necessary NSSortDescriptor on a derived attribute, I guess. This sort order also has to serve for the sectionKeyPath. Thanks for any hint.

    Read the article

  • finding ALL cycles in a huge sparse matrix

    - by Andy
    Hi there,

    First of all I'm quite a Java beginner, so I'm not sure if this is even possible! Basically I have a huge (3+ million) data source of relational data (i.e. A is friends with B+C+D, B is friends with D+G+Z (but not A - i.e. unmutual) etc.) and I want to find every cycle within this (not necessarily connected) directed graph.

    I've found this thread (http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402) which has pointed me to Donald Johnson's (elementary) cycle-finding algorithm which, superficially at least, looks like it'll do what I'm after (I'm going to try when I'm back at work on Tuesday - thought it wouldn't hurt to ask in the meanwhile!). I had a quick scan through the code of the Java implementation of Johnson's algorithm (in that thread) and it looks like a matrix of relations is the first step, so I guess my questions are:

    a) Is Java capable of handling a 3+million * 3+million matrix? (was planning on representing A-friends-with-B by a binary sparse matrix)

    b) Do I need to find every connected subgraph as my first problem, or will cycle-finding algorithms handle disjoint data?

    c) Is this actually an appropriate solution for the problem? My understanding of "elementary" cycles is that in the graph below, rather than picking out A-B-C-D-E-F it'll pick out A-B-F, B-C-D etc. but that's not the end of the world given the task.

              E
             / \
            D---F
           / \ / \
          C---B---A

    d) If necessary, I can simplify the problem by enforcing mutuality in relations - i.e. A-friends-with-B <== B-friends-with-A, and if really necessary I can maybe cut down the data size, but realistically it is always going to be around the 1mil mark.

    z) Is this a P or NP task?! Am I biting off more than I can chew?

    Thanks all, any help appreciated! Andy
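    On point a), a rough sketch (an assumption about the data layout, not the original code; the class name and ids are made up) of holding the relation as an adjacency list instead of an N x N matrix, so memory grows with the number of friendships rather than with 3,000,000 squared. Johnson's algorithm only needs this kind of successor lookup, so a dense matrix is never required.

        import java.util.HashMap;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        public class SparseDigraph {
            // adjacency list: node id -> the ids it points to ("is friends with")
            private final Map<Integer, Set<Integer>> out = new HashMap<>();

            public void addEdge(int from, int to) {
                out.computeIfAbsent(from, k -> new HashSet<>()).add(to);
            }

            public Set<Integer> successors(int node) {
                return out.getOrDefault(node, new HashSet<>());
            }

            public static void main(String[] args) {
                SparseDigraph g = new SparseDigraph();
                // illustrative ids only: A=1, B=2, C=3, D=4, G=7, Z=26
                g.addEdge(1, 2); g.addEdge(1, 3); g.addEdge(1, 4);   // A is friends with B, C, D
                g.addEdge(2, 4); g.addEdge(2, 7); g.addEdge(2, 26);  // B is friends with D, G, Z (not A)
                System.out.println(g.successors(2));                  // e.g. [4, 7, 26]
            }
        }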

    Read the article

  • Global Temporary Table "On commit delete rows" functionality discrepancy.

    - by TomatoSandwich
    I have a global temporary table. I called it GTT, for those were its initials. My GTT never hurt anyone, and did everything I bade of it.

    I asked my GTT to delete rows on commit. This is a valid option in the creation script of my GTT in Oracle. I wanted to be able to have different users see GTT with their own data, and not the data of other people's sessions. 'Delete rows on commit' worked perfectly in our test environment. GTT and I were happy.

    But then, I deployed GTT as part of an update to functionality on a client's database. That database doesn't like to play well with GTT. GTT called me up all upset and worried, because it wasn't holding any data any more, and didn't know why. GTT told me that if someone did:

        insert into my_GTT (description) values ('Happy happy joy joy')

    he would sing-song back: 1 row inserted. However, if the same person tried

        select * from my_GTT;

    GTT didn't know what to do, and he replied: 0 rows returned. GTT was upset that he didn't know what his playmate had inserted.

    Please, Stackoverflow, why would GTT forget what was placed into him? He can remember perfectly well at home, but out in the cold harsh world, he just gets so scared. :(

    Read the article

  • One big executable or many small DLL's?

    - by Patrick
    Over the years my application has grown from 1MB to 25MB and I expect it to grow further to 40, 50 MB. I don't use DLLs, but put everything in this one big executable.

    Having one big executable has certain advantages:

    • Installing my application at the customer is really: copy and run.
    • Upgrades can be easily zipped and sent to the customer.
    • There is no risk of having conflicting DLLs (where the customer has version X of the EXE, but version Y of the DLL).

    The big disadvantage of the big EXE is that linking times seem to grow exponentially.

    An additional problem is that a part of the code (let's say about 40%) is shared with another application. Again, the advantages are that:

    • There is no risk of having a mix of incorrect DLL versions.
    • Every developer can make changes to the common code, which speeds up development.

    But again, this has a serious impact on compilation times (everyone compiles the common code again on his PC) and on linking times.

    The question http://stackoverflow.com/questions/2387908/grouping-dlls-for-use-in-executable mentions the possibility of mixing DLLs in one executable, but it looks like this still requires you to link all functions manually in your application (using LoadLibrary, GetProcAddress, ...).

    What is your opinion on executable sizes, the use of DLLs, and the best 'balance' between easy deployment and easy/fast development?

    Read the article

  • handling large arrays with array_diff

    - by bigmac
    I have been trying to compare two arrays. Using array_intersect presents no problems. When using array_diff with arrays of ~5,000 values, it works. When I get to ~10,000 values, the script dies when I get to array_diff. Turning on error_reporting did not produce anything. I tried creating my own array_diff function:

        function manual_array_diff($arraya, $arrayb) {
            foreach ($arraya as $keya => $valuea) {
                if (in_array($valuea, $arrayb)) {
                    unset($arraya[$keya]);
                }
            }
            return $arraya;
        }

    source: http://stackoverflow.com/questions/2479963/how-does-array-diff-work

    I would expect it to be less efficient than the official array_diff, but it can handle arrays of ~10,000. Unfortunately, both array_diffs fail when I get to ~15,000. I tried the same code on a different machine and it runs fine, so it's not an issue with the code or PHP. There must be some limit set somewhere on that particular server. Any idea how I can get around that limit or alter it, or just find out what it is?

    Read the article

  • PHP: Profiling code and strict environment ~ Improving my coding

    - by DavidYell
    I would like to update my local working environment to be stricter in an effort to improve my code. I know that my code is okay, but as with most things there is always room for improvement. I use XAMPP on my local machine, for simplicity's sake: Apache Friends XAMPP (Basic Package) version 1.7.2.

    So I've updated my php.ini: error_reporting is now E_ALL | E_STRICT to help with the code standard. I've also enabled the XDebug extension,

        zend_extension = "C:\xampp\php\ext\php_xdebug.dll"

    which seems to be working, having tested some broken code and got the nice standard orange error notice.

    However, having read this question, http://stackoverflow.com/questions/133686/what-is-the-best-way-to-profile-php-code, and enabled the profiler, I cannot seem to create a cachegrind file. Many of the guides that I've looked at seem to think you need to install XDebug in XAMPP, which leads me to think they are out of date, as XDebug is bundled with XAMPP these days.

    So I would appreciate it if anyone could help point me in the right direction with both configuring XDebug to output grind files, and/or just a great set of default settings for the XDebug config in XAMPP. It seems there is very little documentation to go on. If people have tips on integrating these tools with Netbeans, that would be awesomesauce.

    I'm happy to get suggestions on other things that I can do to help tighten up my PHP code, both syntactically and performance-wise. Thanks, and apologies for the rambling question(s)!

    Ninja edit: I should mention that I'm using named vhosts as my Apache configuration, which I think is why running XDebug on port 9000 isn't working for me. I guess I'd need to edit my vhost to include port 9000.

    Read the article

  • Remove duplicate records/objects uniquely identified by multiple attributes

    - by keruilin
    I have a model called HeroStatus with the following attributes: id, user_id, recordable_type, hero_type (can be NULL!), recordable_id, created_at.

    There are over 100 hero_statuses, and a user can have many hero_statuses, but can't have the same hero_status more than once. A user's hero_status is uniquely identified by the combination of recordable_type + hero_type + recordable_id. What I'm trying to say essentially is that there can't be a duplicate hero_status for a specific user. Unfortunately, I didn't have a validation in place to assure this, so I got some duplicate hero_statuses for users after I made some code changes. For example:

        user_id = 18, recordable_type = 'Evil', hero_type = 'Halitosis', recordable_id = 1, created_at = '2010-05-03 18:30:30'
        user_id = 18, recordable_type = 'Evil', hero_type = 'Halitosis', recordable_id = 1, created_at = '2009-03-03 15:30:00'
        user_id = 18, recordable_type = 'Good', hero_type = 'Hugs',      recordable_id = 1, created_at = '2009-02-03 12:30:00'
        user_id = 18, recordable_type = 'Good', hero_type = NULL,        recordable_id = 2, created_at = '2009-012-03 08:30:00'

    (The last two are obviously not dups. The first two are.)

    So what I want to do is get rid of the duplicate hero_status. Which one? The one with the most recent date. I have three questions:

    1. How do I remove the duplicates using a SQL-only approach?
    2. How do I remove the duplicates using a pure Ruby solution? Something similar to this: http://stackoverflow.com/questions/2790004/removing-duplicate-objects.
    3. How do I put a validation in place to prevent duplicate entries in the future?

    Read the article

  • Representing complex scheduled recurrence in a database

    - by David Pfeffer
    I have the interesting problem of representing complex schedule data in a database. As a guideline, I need to be able to represent the entirety of what the iCalendar (ics) format can represent, but in a database. I'm not actually implementing anything relating to ics, but it gives a good scope of the type of rules I need to be able to model.

    I need to allow representation of a single event or a recurring event based on multiple times per day, days of the week, week of a month, month, year, or some combination of those. For example: the third Thursday in November annually, or the 25th of December annually, or every two weeks starting November 2 and continuing until September 8 the following year.

    I don't care about insertion efficiency, but query efficiency is critical. The operation I will be doing most often is providing either a single date/time or a date/time range, and trying to determine if the defined schedule matches any part of the date/time range. Other operations can be slower. For example, given January 15, 2010 at 10:00 AM through January 15, 2010 at 11:00 AM, find all schedules that match at least part of that time. (i.e. a schedule that covers 10:30 - 11:00 still matches.)

    Any suggestions? I looked at http://stackoverflow.com/questions/1016170/how-would-one-represent-scheduled-events-in-an-rdbms but it doesn't cover the scope of the type of recurrence rules I'd like to model.

    Read the article

  • Customizing the TFS 2008 build sequence to avoid compilation and deploy SSRS

    - by Andrew
    I'm trying to create a CI process for SQL Server Reporting Services. I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild with TeamCity, so the whole build process is more or less custom. Here lies the start of my problems: as the solution I am deploying only contains Report Server projects (rds), no compilation is required.

    I thought that I would override the first default task that TFS runs (EndToEndIteration) to override the default TFS build sequence and inject my own. The first snag that I have come across is that the build always fails; how can I set the status of the build to success? Currently the EndToEndIteration task is very light and only has a message.

    Is this the best method to create a custom build process in TFS where compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx (i.e. AfterCompile)?

    The core steps that I'd like to achieve are:

    • Bundle the RDL and datasource files
    • Connect to the host server to register/deploy the reports
    • Re-apply any subscriptions that previously existed
    • Run tests to verify the deployment succeeded and is returning results as expected

    I have found another article on Reporting Services deployment: http://stackoverflow.com/questions/88710/reporting-services-deployment But it doesn't mention the best practice for customizing the standard build process. Any help would be appreciated.

    Read the article

  • Function pointer callbacks in C

    - by robUK
    Hello,

    I have started to review callbacks. I found this link: http://stackoverflow.com/questions/142789/what-is-a-callback-in-c-and-how-are-they-implemented which has a good example of a callback which is very similar to what we use at work. However, I have tried to get it to work, but I have many errors.

        #include <stdio.h>

        /* Is the actual function pointer? */
        typedef void (*event_cb_t)(const struct event *evt, void *user_data);

        struct event_cb {
            event_cb_t cb;
            void *data;
        };

        int event_cb_register(event_ct_t cb, void *user_data);

        static void my_event_cb(const struct event *evt, void *data)
        {
            /* do some stuff */
        }

        int main(void)
        {
            event_cb_register(my_event_cb, &my_custom_data);

            struct event_cb *callback;
            callback->cb(event, callback->data);

            return 0;
        }

    I know that callbacks use function pointers to store the address of a function. But there are a few things that I don't understand. What is meant by "registering the callback" and "event dispatcher"?

    Many thanks for any advice,

    Read the article

  • What's the best way to resolve this scope problem?

    - by Peter Stewart
    I'm writing a program in Python that uses genetic techniques to optimize expressions. Constructing and evaluating the expression tree is the time consumer, as it can happen billions of times per run. So I thought I'd learn enough C++ to write it and then incorporate it in Python using Cython or ctypes.

    I've done some searching on stackoverflow and learned a lot. This code compiles, but leaves the pointers dangling. I tried 'this_node = new Node(...'. It didn't seem to work. And I'm not at all sure how I'd delete all the references, as there would be hundreds. I'd like to use variables that stay in scope, but maybe that's not the C++ way. What is the C++ way?

        class Node
        {
        public:
            char *cargo;
            int depth;
            Node *left;
            Node *right;
        };

        Node make_tree(int depth)
        {
            depth--;
            if (depth <= 0)
            {
                Node tthis_node("value", depth, NULL, NULL);
                return tthis_node;
            }
            else
            {
                Node this_node("operator", depth, &make_tree(depth), &make_tree(depth));
                return this_node;
            }
        };

    Read the article

  • What is wrong with this recursive Windows CMD script? It won't do Ackermann properly

    - by boost
    I've got this code that I'm trying to get to calculate the Ackermann function so that I can post it up on RosettaCode. It almost works. I thought maybe there'd be a few batch file wizards on StackOverflow.

        ::echo off
        set depth=0

        :ack
        if %1==0 goto m0
        if %2==0 goto n0

        :else
        set /a n=%2-1
        set /a depth+=1
        call :ack %1 %n%
        set t=%errorlevel%
        set /a depth-=1
        set /a m=%1-1
        set /a depth+=1
        call :ack %m% %t%
        set t=%errorlevel%
        set /a depth-=1
        if %depth%==0 ( exit %t% ) else ( exit /b %t% )

        :m0
        set/a n=%2+1
        if %depth%==0 ( exit %n% ) else ( exit /b %n% )

        :n0
        set /a m=%1-1
        set /a depth+=1
        call :ack %m% %2
        set t=%errorlevel%
        set /a depth-=1
        if %depth%==0 ( exit %t% ) else ( exit /b %t% )

    I use this script to test it:

        @echo off
        cmd/c ackermann.cmd %1 %2
        echo Ackermann of %1 %2 is %errorlevel%

    A sample output, for Test 1 1, gives:

        >test 1 1
        >set depth=0
        >if 1 == 0 goto m0
        >if 1 == 0 goto n0
        >set /a n=1-1
        >set /a depth+=1
        >call :ack 1 0
        >if 1 == 0 goto m0
        >if 0 == 0 goto n0
        >set /a m=1-1
        >set /a depth+=1
        >call :ack 0 0
        >if 0 == 0 goto m0
        >set/a n=0+1
        >if 2 == 0 (exit 1 ) else (exit /b 1 )
        >set t=1
        >set /a depth-=1
        >if 1 == 0 (exit 1 ) else (exit /b 1 )
        >set t=1
        >set /a depth-=1
        >set /a m=1-1
        >set /a depth+=1
        >call :ack 0 1
        >if 0 == 0 goto m0
        >set/a n=1+1
        >if 1 == 0 (exit 2 ) else (exit /b 2 )
        >set t=2
        >set /a depth-=1
        >if 0 == 0 (exit 2 ) else (exit /b 2 )
        Ackermann of 1 1 is 2

    Read the article
