Search Results

Search found 11325 results on 453 pages for 'supervised methods'.

Page 89/453 | < Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >

  • Is it okay to have many Abstract classes in your application?

    - by JoseK
    We initially wanted to implement a Strategy pattern with varying implementations of the methods in a common interface. These get picked up at runtime based on user inputs. As it's turned out, we have abstract classes implementing 3-5 common methods and only one method left for a varying implementation, i.e. the Strategy. Update: By many abstract classes I mean there are 6 different high-level functionalities, i.e. 6 packages, and each has its Interface + AbstractImpl + (series of Actual Impl). Is this a bad design in any way? Any negative views in terms of later extensibility? I'm preparing for a code/design review with seniors.
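
    For illustration, a minimal Java sketch of the shape being described: one interface whose single method varies, one abstract class carrying the shared methods, and concrete strategies picked at runtime. All names (ExportStrategy and so on) are invented, since the question does not name its domain.

        import java.util.Map;

        // One interface per functional area; only one method actually varies.
        interface ExportStrategy {
            String export(String data);                 // the varying "strategy" method
        }

        // The common 3-5 methods live once, in the abstract implementation.
        abstract class AbstractExportStrategy implements ExportStrategy {
            protected String header()           { return "== export =="; }
            protected String footer()           { return "== end ==";    }
            protected String clean(String data) { return data.trim();    }
        }

        class CsvExport extends AbstractExportStrategy {
            @Override public String export(String data) {
                return header() + "\n" + clean(data) + "\n" + footer();
            }
        }

        class XmlExport extends AbstractExportStrategy {
            @Override public String export(String data) {
                return "<export>" + clean(data) + "</export>";
            }
        }

        public class StrategyDemo {
            public static void main(String[] args) {
                // Picked at runtime from user input, as in the question.
                Map<String, ExportStrategy> strategies =
                        Map.of("csv", new CsvExport(), "xml", new XmlExport());
                String choice = args.length > 0 ? args[0] : "csv";
                System.out.println(strategies.getOrDefault(choice, new CsvExport()).export("  some data  "));
            }
        }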

    Read the article

  • How to configure simple game AI setting with jtable

    - by Asgard
    I'm developing an application that has methods of this kind: attackIfIsFar(); protectIfIsNear(); helpAfterDeadOf(); helpBeforeAttackOf(); etc. The initialization of my application for n players is something like player1.attackIfIsFar(player2); player2.protectIfIsNear(player4); player3.helpAfterDeadOf(player1); player4.helpBeforeAttackOf(player3); etc. I don't know how to configure a JTable that can allow me to set the equivalent of this code block. In other words, I simply need a way to create a JTable with 3 columns and n rows, where I can set the players in columns 1 and 3, and in the central column one of the available methods that each player in column 1 must invoke on the player in column 3.
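
    A minimal Swing sketch of such a table: three columns with combo-box editors so every cell is restricted to a player or to one of the available behaviour methods. The player and behaviour names are taken from the question; reading the rows back and invoking the chosen method on the chosen target (for example via a switch or reflection) is left out.

        import javax.swing.*;
        import javax.swing.table.DefaultTableModel;

        public class BehaviourTableDemo {
            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    String[] players = {"player1", "player2", "player3", "player4"};
                    String[] behaviours = {"attackIfIsFar", "protectIfIsNear",
                                           "helpAfterDeadOf", "helpBeforeAttackOf"};

                    // 3 columns, one row per player: actor | behaviour | target.
                    DefaultTableModel model = new DefaultTableModel(
                            new Object[]{"Player", "Behaviour", "Target"}, players.length);
                    JTable table = new JTable(model);

                    // Combo-box editors restrict each column to the valid choices.
                    table.getColumnModel().getColumn(0)
                         .setCellEditor(new DefaultCellEditor(new JComboBox<>(players)));
                    table.getColumnModel().getColumn(1)
                         .setCellEditor(new DefaultCellEditor(new JComboBox<>(behaviours)));
                    table.getColumnModel().getColumn(2)
                         .setCellEditor(new DefaultCellEditor(new JComboBox<>(players)));

                    JFrame frame = new JFrame("Game AI setup");
                    frame.add(new JScrollPane(table));
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.pack();
                    frame.setVisible(true);
                });
            }
        }

    After editing, each row can be read back with model.getValueAt(row, column) and dispatched to the corresponding player method.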

    Read the article

  • If you need more than 3 levels of indentation, you're screwed?

    - by jokoon
    Per the Linux kernel coding style document: The answer to that is that if you need more than 3 levels of indentation, you're screwed anyway, and should fix your program. What can I deduce from this quote? On top of the fact that overly long methods are hard to maintain, are they hard or impossible to optimize for the compiler? I don't really understand if this quote encourages better coding practice or is really a mathematical / algorithmic sort of truth. I also read in some C++ optimizing guide that "dividing up a program into more functions improves its design" is a common thing taught at school, but it should not be done too much, since it can turn into a lot of JMP calls (even if the compiler can inline some methods by itself).
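
    As an illustration of what the kernel guideline usually amounts to in practice (guard clauses and extracted helpers, rather than any compiler requirement), here is a hypothetical Java example; Order, Item and ship() are made up.

        import java.util.List;

        record Item(String name, boolean inStock) {}
        record Order(boolean isPaid, List<Item> items) {}

        public class Shipping {
            // Nested version: the ship(...) call ends up four indentation levels deep.
            static void processNested(Order order) {
                if (order != null) {
                    if (order.isPaid()) {
                        for (Item item : order.items()) {
                            if (item.inStock()) {
                                ship(item);
                            }
                        }
                    }
                }
            }

            // Flattened version: guard clauses and an extracted helper keep every block shallow.
            static void process(Order order) {
                if (order == null || !order.isPaid()) {
                    return;                      // early return replaces two nesting levels
                }
                for (Item item : order.items()) {
                    shipIfInStock(item);         // the loop body becomes a named step
                }
            }

            static void shipIfInStock(Item item) {
                if (item.inStock()) {
                    ship(item);
                }
            }

            static void ship(Item item) {
                System.out.println("shipping " + item.name());
            }

            public static void main(String[] args) {
                process(new Order(true, List.of(new Item("book", true), new Item("lamp", false))));
                // prints: shipping book
            }
        }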

    Read the article

  • Try/Catch or test parameters

    - by Ondra Morský
    I was recently at a job interview and was given a task to write a simple method in C# to calculate when the trains meet. The code was a simple mathematical equation. What I did was check all the parameters at the beginning of the method to make sure that the code would not fail. My question is: is it better to check the parameters, or use try/catch? Here are my thoughts:

    - Try/catch is shorter.
    - Try/catch will always work, even if you forget about some condition.
    - Catch is slow in .NET.
    - Testing parameters is probably cleaner code (exceptions should be exceptional).
    - Testing parameters gives you more control over return values.

    I would prefer testing parameters in methods longer than roughly 10 lines, but what do you think about using try/catch in simple methods just like this one, i.e. return (a*b)/(c+d); ? There are many similar questions on Stack Exchange, but I am interested in this particular scenario.
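
    The question's example is C#, but the trade-off can be sketched in Java just as well; the method name and the divide-by-zero check below are illustrative assumptions, not the interview's actual requirements.

        public class MeetingTime {
            // Option 1: validate parameters up front (guard clauses) and fail fast with a clear message.
            static int meet(int a, int b, int c, int d) {
                if (c + d == 0) {
                    throw new IllegalArgumentException("c + d must not be zero");
                }
                return (a * b) / (c + d);
            }

            // Option 2: let the runtime detect the bad input and translate the exception.
            static int meetCaught(int a, int b, int c, int d) {
                try {
                    return (a * b) / (c + d);
                } catch (ArithmeticException e) {   // integer division by zero throws here
                    throw new IllegalArgumentException("c + d must not be zero", e);
                }
            }

            public static void main(String[] args) {
                System.out.println(meet(60, 2, 3, 1));   // 30
            }
        }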

    Read the article

  • "more than 3 levels of indentation, you're screwed" How should I understand this quote ?

    - by jokoon
    The answer to that is that if you need more than 3 levels of indentation, you're screwed anyway, and should fix your program. What can I deduce from this quote? On top of the fact that overly long methods are hard to maintain, are they hard or impossible to optimize for the compiler? I don't really understand if this quote encourages better coding practice or is really a mathematical/algorithmic sort of truth... I also read in some C++ optimizing guide that "dividing up a program into more functions improves its design" is a common thing taught at school, but it should not be done too much, since it can turn into a lot of JMP calls (even if the compiler can inline some methods by itself).

    Read the article

  • What is this "interface"-like structure/pattern called?

    - by Sebastian Negraszus
    Let's assume we have an XmlDoc class that contains basic functionality for dealing with an XML data structure and saving/loading data to/from a file. Now we have several subclasses, A, B and C. They all inherit from XmlDoc and add component-specific methods for setting and getting lots of data. They are like "interfaces" but also add an implementation for the signatures. Finally, we have an ABCDoc class that joins all the "interfaces" via virtual multiple inheritance and adds some ABCDoc-specific stuff, such as using XmlDoc methods to set an appropriate doc type. We may also have an ADoc class for saving only A data. What is this pattern called? "Interface" is not really the right word since interfaces usually do not contain an implementation. Bonus points for C++ code conventions.
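
    The question asks for C++ conventions; Java has no virtual multiple inheritance, but the structure being described (a base document class, several implementation-carrying "interfaces", and one class joining them) can be roughly approximated with default methods. This is only a sketch: the method bodies are placeholders, and unlike the C++ version the "interfaces" here cannot hold state or reach into XmlDoc internals.

        // Base class: the shared XML/file plumbing (reduced to placeholders here).
        class XmlDoc {
            protected void setDocType(String type) { System.out.println("doctype = " + type); }
            public void save(String path)          { System.out.println("saving " + path);    }
        }

        // The "interfaces that also carry implementation", approximated with default methods.
        interface A { default void setAData(String v) { System.out.println("A = " + v); } }
        interface B { default void setBData(String v) { System.out.println("B = " + v); } }
        interface C { default void setCData(String v) { System.out.println("C = " + v); } }

        // The joining class: inherits the plumbing once and mixes in all three "interfaces".
        class ABCDoc extends XmlDoc implements A, B, C {
            ABCDoc() { setDocType("abc"); }    // ABCDoc-specific use of an XmlDoc method
        }

        public class MixinDemo {
            public static void main(String[] args) {
                ABCDoc doc = new ABCDoc();
                doc.setAData("1");
                doc.setBData("2");
                doc.setCData("3");
                doc.save("abc.xml");
            }
        }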

    Read the article

  • Does OO, TDD, and Refactoring to Smaller Functions affect Speed of Code?

    - by Dennis
    In the Computer Science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is:

    - write smaller, more testable code
    - refactor existing code into smaller and smaller chunks of code until most of your methods/functions are just a few lines long
    - write functions that only do one thing (which makes them smaller again)

    This is a change compared to the "old" or "bad" code practices where you have methods spanning 2500 lines, and big classes doing everything. My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code with a variety of small-to-tiny functions generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead, without adding actual "useful" code. I also imagine that good optimizations can be done to ASM before it is actually run on the hardware, but that optimization can only do so much too. Hence, my question: how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce compared to having "one big method that contains everything"? UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes to the code will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of ASM's MOV instruction - loading CPU registers with the proper variables, not doing the actual computation. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering if I should be concerned about this, and how much, if at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices. UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular I've noted that one of the old classes was a ~3000 line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded; they are loaded as needed, and disk caching and memory caching options exist, and yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.
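
    As a small, hypothetical illustration of the kind of call being worried about: a trivial accessor invoked in a hot loop. A JIT compiler such as HotSpot will normally inline a hot, monomorphic call site like this, so the PUSH/POP-style linkage cost described above applies mostly to cold or interpreted code rather than to hot paths.

        public class InlineDemo {
            // A "small-to-tiny" method of the kind discussed above: one field load and a return.
            static final class Point {
                private final int x;
                Point(int x) { this.x = x; }
                int x() { return x; }
            }

            public static void main(String[] args) {
                Point[] points = new Point[1_000_000];
                for (int i = 0; i < points.length; i++) {
                    points[i] = new Point(i);
                }
                long sum = 0;
                for (Point p : points) {
                    sum += p.x();   // hot, monomorphic call site: HotSpot normally inlines it,
                }                   // so no call/linkage code remains in the compiled loop
                System.out.println(sum);
            }
        }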

    Read the article

  • Best Practice to return responses from service

    - by A9S6
    I am writing a SOAP-based ASP.NET Web Service with a number of methods to deal with Client objects, e.g.:

    - int AddClient(Client c) - returns the Client ID when successful
    - List<Client> GetClients()
    - Client GetClientInfo(int clientId)

    In the above methods, the return value/object of each method corresponds to the "all good" scenario, i.e. a client ID will be returned if AddClient was successful, or a List<Client> of Client objects will be returned by GetClients. But what if an error occurs - how do I convey the error message to the caller? I was thinking of having a Response class: Response { StatusCode, StatusMessage, Details }, where Details will hold the actual response, but in that case the caller will have to cast the response every time. What are your views on the above? Is there a better solution? ---------- UPDATED ----------- Is there something new in WCF for the above? What difference will it make if I change the ASP.NET Web Service to a WCF Service?
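
    The casting concern is usually handled with a generic response wrapper. The question is about C#/.NET, but the idea is the same in Java; the sketch below is illustrative, and how such a generic type maps onto a SOAP/WSDL contract depends on the toolkit.

        // A typed response wrapper: callers get a strongly typed payload instead of casting Details.
        public class Response<T> {
            private final boolean success;
            private final String statusMessage;
            private final T details;

            private Response(boolean success, String statusMessage, T details) {
                this.success = success;
                this.statusMessage = statusMessage;
                this.details = details;
            }

            public static <T> Response<T> ok(T details)         { return new Response<>(true, "OK", details); }
            public static <T> Response<T> error(String message) { return new Response<>(false, message, null); }

            public boolean isSuccess()       { return success; }
            public String getStatusMessage() { return statusMessage; }
            public T getDetails()            { return details; }
        }

        // Hypothetical service signatures using the wrapper:
        //   Response<Integer>       addClient(Client c)
        //   Response<List<Client>>  getClients()
        //   Response<Client>        getClientInfo(int clientId)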

    Read the article

  • Should I implement an interface directly or have the superclass do it?

    - by c_maker
    Is there a difference between

        public class A extends AbstractB implements C {...}

    versus...

        public class A extends AbstractB {...}
        AbstractB implements C {...}

    I understand that in both cases, class A will end up conforming to the interface. In the second case, AbstractB can provide implementation for interface methods in C. Is that the only difference? If I do NOT want to provide an implementation for any of the interface methods in AbstractB, which style should I be using? Does using one or the other have some hidden 'documentation' purpose?

    Read the article

  • Imperative vs. component based programming [closed]

    - by AlexW
    I've been thinking about how programming, and more specifically the teaching of programming, is advocated amongst the community (online). Often I've heard that Ruby and RoR are an ideal platform for learning to program. I completely disagree... RoR and Ruby are based on the application of the component-based paradigm, which means they are ideal for rapid application development. This is much like the MVC model in PHP and ASP.NET. But learning a proper imperative language like Java or C/C++ (or even Perl and PHP) is the only way for a new programmer to explore logic itself, and not get too bogged down in architectural concerns like the need for separation of concerns, and the preference for components. Maybe it's a personal preference thing. I rather think that the most interesting aspects of programming are the procedural bits of code I write that actually do stuff, rather than the project planning and modelling that comes about from fully object-oriented engineering or simply using the MVC model. I know this may sound confusing to some of you. I feel strongly, though, that the best way for programming to be taught is through imperative and procedural methods. Architectural (component) methods come later, if at all. After all, none of the amazing algorithms that exist were based on OOP practice! It's all procedural code when it comes to the 'magic'. OOP is useful in creating products and utilities. Algorithms are what make things happen, and move data around, and so imperative (and/or procedural) code is what matters most. When I see programmers recommending Ruby on Rails to newbie developers, I think it's just so wrong. Just because you write less code with Ruby does not make it easier to do! It's the opposite... you have to know loads more to appreciate its succinct nature. New coders who really want to understand the nuts and bolts of coding need to go away and figure out writing methods/functions (i.e. imperative programming) and working in a procedural style, in order to grasp the fundamentals first, before looking into architectural ways of working. So, my question is: should Ruby ever be recommended as a first language? I think no (obviously)... what arguments are there for it?

    Read the article

  • Need Information On Importing Data Into The Oracle Product Hub?

    - by LuciaC
    One of the key challenges of implementing a Master Data Management solution is importing data into the system. Oracle Product Hub offers numerous ways of importing the setup data and the actual product data. Review all available methods to import data in the White Paper Doc ID 1504980.1, which provides details and examples of each method, discusses special cases, and provides some troubleshooting tips. The methods reviewed include:

    - FNDLOAD
    - iSetup
    - Interfaces and Public APIs
    - Import from Excel
    - Web Application Desktop Integrator
    - Webservices

    Read the article

  • "Collection Wrapper" pattern - is this common?

    - by Prog
    A different question of mine had to do with encapsulating member data structures inside classes. In order to understand this question better, please read that question and look at the approach discussed. One of the guys who answered that question said that the approach is good, but - if I understood him correctly - there should be a class existing just for the purpose of wrapping the collection, instead of an ordinary class offering a number of public methods just to access the member collection. For example, instead of this:

        class SomeClass {
            // downright exposing the concrete collection.
            Thing[] someCollection;
            // other stuff omitted
            Thing[] getCollection() { return someCollection; }
        }

    Or this:

        class SomeClass {
            // encapsulating the collection, but inflating the class' public interface.
            Thing[] someCollection;
            // class functionality omitted.
            public Thing getThing(int index) { return someCollection[index]; }
            public int getSize() { return someCollection.length; }
            public void setThing(int index, Thing thing) { someCollection[index] = thing; }
            public void removeThing(int index) { someCollection[index] = null; }
        }

    We'll have this:

        // encapsulating the collection - in a different class, dedicated to this.
        class SomeClass {
            CollectionWrapper someCollection;
            CollectionWrapper getCollection() { return someCollection; }
        }

        class CollectionWrapper {
            Thing[] someCollection;
            public Thing getThing(int index) { return someCollection[index]; }
            public int getSize() { return someCollection.length; }
            public void setThing(int index, Thing thing) { someCollection[index] = thing; }
            public void removeThing(int index) { someCollection[index] = null; }
        }

    This way, the inner data structure in SomeClass can change without affecting client code, and without forcing SomeClass to offer a lot of public methods just to access the inner collection. CollectionWrapper does this instead. E.g. if the collection changes from an array to a List, the internal implementation of CollectionWrapper changes, but client code stays the same. Also, the CollectionWrapper can hide certain things from the client code - for example, it can disallow mutation of the collection by not having the methods setThing and removeThing. This approach to decoupling client code from the concrete data structure seems IMHO pretty good. Is this approach common? What are its downfalls? Is this used in practice?
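
    On the "is this used in practice" question: in Java, a common lighter-weight alternative to a hand-written CollectionWrapper is to keep the collection private and expose a read-only view of it. A minimal sketch, assuming the field is a List:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        class Thing { }

        // Keep the mutable list private and hand out a read-only view of it.
        class SomeClass {
            private final List<Thing> things = new ArrayList<>();

            public void addThing(Thing t) { things.add(t); }     // mutation stays behind SomeClass

            public List<Thing> getThings() {
                return Collections.unmodifiableList(things);     // callers can read but not modify
            }
        }

        public class WrapperDemo {
            public static void main(String[] args) {
                SomeClass owner = new SomeClass();
                owner.addThing(new Thing());
                System.out.println(owner.getThings().size());    // 1
                // owner.getThings().add(new Thing());           // would throw UnsupportedOperationException
            }
        }

    This gives the same property the question is after: clients cannot mutate the collection, and the internal structure can change without affecting them.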

    Read the article

  • If you need more than 3 levels of indentation, you're screwed?

    - by jokoon
    Per the Linux kernel coding style document: The answer to that is that if you need more than 3 levels of indentation, you're screwed anyway, and should fix your program. What can I deduce from this quote? On top of the fact that overly long methods are hard to maintain, are they hard or impossible to optimize for the compiler? I don't really understand if this quote encourages better coding practice or is really a mathematical / algorithmic sort of truth. I also read in some C++ optimizing guide that "dividing up a program into more functions improves its design" is frequently taught in CS courses, but it should not be done too much, since it can turn into a lot of JMP calls (even if the compiler can inline some methods by itself).

    Read the article

  • What is the better design decision approach?

    - by palm snow
    I have two classes (MyFoo1 and MyFoo2) that share some common functionality. So far it does not seem like I need any polymorphic inheritance, but at this point I am considering the following options:

    - Have the common functionality in a utility class. Both of these classes call these methods from that utility class.
    - Have an abstract class and implement the common methods in that abstract class. Then derive MyFoo1 and MyFoo2 from that abstract class.

    Any suggestion on what would be a better design decision?
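
    A minimal Java sketch of the two options, using a hypothetical shared normalize() method; neither version is tied to MyFoo1's or MyFoo2's actual functionality.

        // Option 1: the shared code lives in a stateless utility class.
        final class FooUtils {
            private FooUtils() {}
            static String normalize(String input) { return input.trim().toLowerCase(); }
        }

        class MyFoo1 {
            String label(String raw) { return "foo1:" + FooUtils.normalize(raw); }
        }

        // Option 2: the shared code lives in an abstract base class.
        abstract class AbstractFoo {
            protected String normalize(String input) { return input.trim().toLowerCase(); }
        }

        class MyFoo2 extends AbstractFoo {
            String label(String raw) { return "foo2:" + normalize(raw); }
        }

        public class FooDemo {
            public static void main(String[] args) {
                System.out.println(new MyFoo1().label("  Hello  "));   // foo1:hello
                System.out.println(new MyFoo2().label("  Hello  "));   // foo2:hello
            }
        }

    One practical difference: the utility-class route leaves MyFoo1 and MyFoo2 free to extend something else, while the abstract base tends to become attractive once the shared code needs shared state or needs to call back into the subclasses.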

    Read the article

  • OOP PHP make separate classes or one

    - by user2956219
    I'm studying OOP PHP and working on a small personal project, but I have a hard time grasping some concepts. Let's say I have a list of items; each item belongs to a subcategory, and each subcategory belongs to a category. Should I make separate classes for the category (with methods to list all categories, add a new category, delete a category), the subcategories and the items? Or should I make creating, listing and deleting categories methods of the item class? Both category and subcategory are very simple and basically consist of ID, Name and parentID (for the subcategory).

    Read the article

  • System Virtualization

    Methods and applications of system virtualization using Intel virtualization technology.

    Read the article

  • iPhone SDK / Objective C Syntax Question

    - by Koppo
    To all, I was looking at the sample project from http://iphoneonrails.com/ and I saw they took the NSObject class and added methods to it for the SOAP calls. Their sample project can be downloaded from here http://iphoneonrails.com/downloads/objective_resource-1.01.zip. My question is related to my lack of knowledge on the following syntax as I haven't seen it yet in a iPhone project. There is a header file called NSObject+ObjectiveResource.h where they declare and change NSObject to have extra methods for this project. Is the "+ObjectiveResource.h" in the name there a special syntax or is that just the naming convention of the developers. Finally inside the class NSObject we have the following #import <Foundation/Foundation.h> @interface NSObject (ObjectiveResource) // Response Formats typedef enum { XmlResponse = 0, JSONResponse, } ORSResponseFormat; // Resource configuration + (NSString *)getRemoteSite; + (void)setRemoteSite:(NSString*)siteURL; + (NSString *)getRemoteUser; + (void)setRemoteUser:(NSString *)user; + (NSString *)getRemotePassword; + (void)setRemotePassword:(NSString *)password; + (SEL)getRemoteParseDataMethod; + (void)setRemoteParseDataMethod:(SEL)parseMethod; + (SEL) getRemoteSerializeMethod; + (void) setRemoteSerializeMethod:(SEL)serializeMethod; + (NSString *)getRemoteProtocolExtension; + (void)setRemoteProtocolExtension:(NSString *)protocolExtension; + (void)setRemoteResponseType:(ORSResponseFormat) format; + (ORSResponseFormat)getRemoteResponseType; // Finders + (NSArray *)findAllRemote; + (NSArray *)findAllRemoteWithResponse:(NSError **)aError; + (id)findRemote:(NSString *)elementId; + (id)findRemote:(NSString *)elementId withResponse:(NSError **)aError; // URL construction accessors + (NSString *)getRemoteElementName; + (NSString *)getRemoteCollectionName; + (NSString *)getRemoteElementPath:(NSString *)elementId; + (NSString *)getRemoteCollectionPath; + (NSString *)getRemoteCollectionPathWithParameters:(NSDictionary *)parameters; + (NSString *)populateRemotePath:(NSString *)path withParameters:(NSDictionary *)parameters; // Instance-specific methods - (id)getRemoteId; - (void)setRemoteId:(id)orsId; - (NSString *)getRemoteClassIdName; - (BOOL)createRemote; - (BOOL)createRemoteWithResponse:(NSError **)aError; - (BOOL)createRemoteWithParameters:(NSDictionary *)parameters; - (BOOL)createRemoteWithParameters:(NSDictionary *)parameters andResponse:(NSError **)aError; - (BOOL)destroyRemote; - (BOOL)destroyRemoteWithResponse:(NSError **)aError; - (BOOL)updateRemote; - (BOOL)updateRemoteWithResponse:(NSError **)aError; - (BOOL)saveRemote; - (BOOL)saveRemoteWithResponse:(NSError **)aError; - (BOOL)createRemoteAtPath:(NSString *)path withResponse:(NSError **)aError; - (BOOL)updateRemoteAtPath:(NSString *)path withResponse:(NSError **)aError; - (BOOL)destroyRemoteAtPath:(NSString *)path withResponse:(NSError **)aError; // Instance helpers for getting at commonly used class-level values - (NSString *)getRemoteCollectionPath; - (NSString *)convertToRemoteExpectedType; //Equality test for remote enabled objects based on class name and remote id - (BOOL)isEqualToRemote:(id)anObject; - (NSUInteger)hashForRemote; @end What is the "ObjectiveResource" in the () mean for NSObject? What is that telling Xcode and the compiler about what is happening..? After that things look normal to me as they have various static and instance methods. I know that by doing this all user classes that inherit from NSObject now have all the extra methods for this project. 
My question is: what are the parentheses after NSObject doing? Are they referencing a header file, or letting the compiler know that this class is being overridden? Thanks again, and my apologies ahead of time if this is a dumb question; I'm just trying to learn what I lack.

    Read the article

  • Agile: User Stories for Machine Learning Project?

    - by benjismith
    I've just finished up with a prototype implementation of a supervised learning algorithm, automatically assigning categorical tags to all the items in our company database (roughly 5 million items). The results look good, and I've been given the go-ahead to plan the production implementation project. I've done this kind of work before, so I know how the functional components of the software. I need a collection of web crawlers to fetch data. I need to extract features from the crawled documents. Those documents need to be segregated into a "training set" and a "classification set", and feature-vectors need to be extracted from each document. Those feature vectors are self-organized into clusters, and the clusters are passed through a series of rebalancing operations. Etc etc etc etc. So I put together a plan, with about 30 unique development/deployment tasks, each with time estimates. The first stage of development -- ignoring some advanced features that we'd like to have in the long-term, but aren't high enough priority to make it into the development schedule yet -- is slated for about two months worth of work. (Keep in mind that I already have a working prototype, so the final implementation is significantly simpler than if the project was starting from scratch.) My manager said the plan looked good to him, but he asked if I could reorganize the tasks into user stories, for a few reasons: (1) our project management software is totally organized around user stories; (2) all of our scheduling is based on fitting entire user stories into sprints, rather than individually scheduling tasks; (3) other teams -- like the web developers -- have made great use of agile methodologies, and they've benefited from modelling all the software features as user stories. So I created a user story at the top level of the project: As a user of the system, I want to search for items by category, so that I can easily find the most relevant items within a huge, complex database. Or maybe a better top-level story for this feature would be: As a content editor, I want to automatically create categorical designations for the items in our database, so that customers can easily find high-value data within our huge, complex database. But that's not the real problem. The tricky part, for me, is figuring out how to create subordinate user stories for the rest of the machine learning architecture. Case in point... I know that the algorithm requires two major architectural subdivisions: (A) training, and (B) classification. And I know that the training portion of the architecture requires construction of a cluster-space. All the Agile Development literature I've read seems to indicate that a user story should be the "smallest possible implementation that provides any business value". And that makes a lot of sense when designing a piece of end-user software. Start small, and then incrementally add value when users demand additional functionality. But a cluster-space, in and of itself, provides zero business value. Nor does a crawler, or a feature-extractor. There's no business value (not for the end-user, or for any of the roles internal to the company) in a partial system. A trained cluster-space is only possible with the crawler and feature extractor, and only relevant if we also develop an accompanying classifier. 
I suppose it would be possible to create user stories where the subordinate components of the system act as the users in the stories: As a supervised-learning cluster-space construction routine, I want to consume data from a feature extractor, so that I can exist. But that seems really weird. What benefit does it provide me as the developer (or our users, or any other stakeholders, for that matter) to model my user stories like that? Although the main story can be easily divided along architectural-component boundaries (crawler, trainer, classifier, etc), I can't think of any useful decomposition from a user's perspective. What do you guys think? How do you plan Agile user stories for sophisticated, indivisible, non-user-facing components?

    Read the article

  • Dual AJAX Requests at different times

    - by Nik
    Alright, I'm trying to make an AJAX Chat system that polls the chat database every 400ms. That part is working, the part of which isn't is the Active User List. When I try to combine the two requests, the first two requests are made, then the whole thing snowballs and the usually timed (12 second) Active User List request starts updating every 1ms and the first request NEVER happens again. Displayed is the entire AJAX code for both requests: var waittime=400;chatmsg=document.getElementById("chatmsg"); room = document.getElementById("roomid").value; chatmsg.focus() document.getElementById("chatwindow").innerHTML = "loading..."; document.getElementById("userwindow").innerHTML = "Loading User List..."; var xmlhttp = false; var xmlhttp2 = false; var xmlhttp3 = false; function ajax_read(url) { if(window.XMLHttpRequest){ xmlhttp=new XMLHttpRequest(); if(xmlhttp.overrideMimeType){ xmlhttp.overrideMimeType('text/xml'); } } else if(window.ActiveXObject){ try{ xmlhttp=new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) { try{ xmlhttp=new ActiveXObject("Microsoft.XMLHTTP"); } catch(e){ } } } if(!xmlhttp) { alert('Giving up :( Cannot create an XMLHTTP instance'); return false; } xmlhttp.onreadystatechange = function() { if (xmlhttp.readyState==4) { document.getElementById("chatwindow").innerHTML = xmlhttp.responseText; setTimeout("ajax_read('methods.php?method=r&room=" + room +"')", waittime); } } xmlhttp.open('GET',url,true); xmlhttp.send(null); } function user_read(url) { if(window.XMLHttpRequest){ xmlhttp3=new XMLHttpRequest(); if(xmlhttp3.overrideMimeType){ xmlhttp3.overrideMimeType('text/xml'); } } else if(window.ActiveXObject){ try{ xmlhttp3=new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) { try{ xmlhttp3=new ActiveXObject("Microsoft.XMLHTTP"); } catch(e){ } } } if(!xmlhttp3) { alert('Giving up :( Cannot create an XMLHTTP instance'); return false; } xmlhttp3.onreadystatechange = function() { if (xmlhttp3.readyState==4) { document.getElementById("userwindow").innerHTML = xmlhttp3.responseText; setTimeout("ajax_read('methods.php?method=u&room=" + room +"')", 12000); } } xmlhttp3.open('GET',url,true); xmlhttp3.send(null); } function ajax_write(url){ if(window.XMLHttpRequest){ xmlhttp2=new XMLHttpRequest(); if(xmlhttp2.overrideMimeType){ xmlhttp2.overrideMimeType('text/xml'); } } else if(window.ActiveXObject){ try{ xmlhttp2=new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) { try{ xmlhttp2=new ActiveXObject("Microsoft.XMLHTTP"); } catch(e){ } } } if(!xmlhttp2) { alert('Giving up :( Cannot create an XMLHTTP instance'); return false; } xmlhttp2.open('GET',url,true); xmlhttp2.send(null); } function submit_msg(){ nick = document.getElementById("chatnick").value; msg = document.getElementById("chatmsg").value; document.getElementById("chatmsg").value = ""; ajax_write("methods.php?method=w&m=" + msg + "&n=" + nick + "&room=" + room + ""); } function keyup(arg1) { if (arg1 == 13) submit_msg(); } var intUpdate = setTimeout("ajax_read('methods.php')", waittime); var intUpdate = setTimeout("user_read('methods.php')", waittime);

    Read the article

< Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >