Search Results

Search found 77 results on 4 pages for 'srp'.

Page 1/4 | 1 2 3 4  | Next Page >

  • Composite-like pattern and SRP violation

    - by jimmy_keen
    Recently I've noticed myself implementing a pattern similar to the one described below. Starting with an interface:

        public interface IUserProvider
        {
            User GetUser(UserData data);
        }

    The GetUser method's sole job is to somehow return a user (that would be an operation, speaking in composite terms). There might be many implementations of IUserProvider, which all do the same thing - return a user based on input data. How they do it doesn't really matter, as they are only leaves in composite terms and that's fairly simple. Now, my leaves are used by a single own-them-all composite class, which at the moment follows this implementation:

        public interface IUserProviderComposite : IUserProvider
        {
            void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider);
        }

        public class UserProviderComposite : IUserProviderComposite
        {
            public User GetUser(UserData data) ...
            public void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider) ...
        }

    The idea behind UserProviderComposite is simple. You register providers, and this class acts as a reusable entry point. When GetUser is called, it uses whichever registered provider's predicate matches the requested user data (if that helps, it stores a key-value map of predicates and providers internally). Now, what confuses me is whether the RegisterProvider method (which brings to mind the composite's add operation) should be part of that class. It expands its responsibilities from providing a user to also managing the provider collection. As far as my understanding goes, this violates the Single Responsibility Principle... or am I wrong here? I thought about extracting the registration part into a separate entity and injecting it into the composite. While that looks decent on paper (in terms of SRP), it feels a bit awkward because: I would essentially be injecting a Dictionary (or other key-value map), or a silly wrapper around it doing nothing more than adding entries; and it won't follow composite anymore (as add won't be part of the composite). What exactly is the presented pattern called? Composite felt natural to compare it with, but I realize it's not exactly that, however nothing else rings any bells. Which approach would you take - stick with SRP or stick with the "composite"/pattern? Or is the design here flawed, and given the problem, can this be done in a better way?
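    One way to reconcile the two concerns is to hand the composite its provider map fully built, so the composite only resolves users and something else owns registration. A minimal sketch of that idea (UserProviderRegistry and its members are invented names, not from the question; User, UserData and IUserProvider are as defined above):

        using System;
        using System.Collections.Generic;

        // Hypothetical registry: owns the predicate -> provider entries and nothing else.
        public class UserProviderRegistry
        {
            private readonly List<KeyValuePair<Predicate<UserData>, IUserProvider>> entries =
                new List<KeyValuePair<Predicate<UserData>, IUserProvider>>();

            public void Register(Predicate<UserData> predicate, IUserProvider provider)
            {
                entries.Add(new KeyValuePair<Predicate<UserData>, IUserProvider>(predicate, provider));
            }

            public IUserProvider Resolve(UserData data)
            {
                foreach (var entry in entries)
                    if (entry.Key(data))
                        return entry.Value;
                throw new InvalidOperationException("No provider matches the given user data.");
            }
        }

        // The composite now only provides users; registration lives elsewhere.
        public class UserProviderComposite : IUserProvider
        {
            private readonly UserProviderRegistry registry;

            public UserProviderComposite(UserProviderRegistry registry)
            {
                this.registry = registry;
            }

            public User GetUser(UserData data)
            {
                return registry.Resolve(data).GetUser(data);
            }
        }

    Whether that is still a "composite" is debatable; it reads more like a strategy selector with its selection table injected, which is one answer to the naming part of the question.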

    Read the article

  • Design patterns to avoid breaking the SRP while performing heavy data logging

    - by Kazark
    A class that performs both computations and data logging seems to have at least two responsibilities. Given a system whose specifications require heavy data logging, what kind of design patterns or architectural patterns can be used to avoid bloating all the classes with logging calls every time they compute something? The decorator pattern could be used (e.g. Interpolator decorated into LoggingInterpolator), but it seems that would result in a situation hardly more desirable, in which almost every major class would need a logging decorator.
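    For reference, the decorator shape being described would look roughly like this (Interpolator is the question's example; the interface and members are invented for illustration):

        using System;

        public interface IInterpolator
        {
            double Interpolate(double x);
        }

        public class Interpolator : IInterpolator
        {
            public double Interpolate(double x)
            {
                // ... actual computation (identity used as a stand-in) ...
                return x;
            }
        }

        // Decorator: adds logging around the wrapped computation without touching it.
        public class LoggingInterpolator : IInterpolator
        {
            private readonly IInterpolator inner;
            private readonly Action<string> log;

            public LoggingInterpolator(IInterpolator inner, Action<string> log)
            {
                this.inner = inner;
                this.log = log;
            }

            public double Interpolate(double x)
            {
                double result = inner.Interpolate(x);
                log($"Interpolate({x}) = {result}");
                return result;
            }
        }

    The downside the poster points out is that every class wanting logging needs such a wrapper; interception/AOP frameworks or a single logging policy at the composition root are the usual ways people avoid writing those wrappers by hand.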

    Read the article

  • When following SRP, how should I deal with validating and saving entities?

    - by Kristof Claes
    I've been reading Clean Code and various online articles about SOLID lately, and the more I read about it, the more I feel like I don't know anything. Let's say I'm building a web application using ASP.NET MVC 3, and I have a UsersController with a Create action like this:

        public class UsersController : Controller
        {
            public ActionResult Create(CreateUserViewModel viewModel)
            {
            }
        }

    In that action method I want to save a user to the database if the data that was entered is valid. Now, according to the Single Responsibility Principle an object should have a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility. Since validation and saving to the database are two separate responsibilities, I guess I should create two separate classes to handle them, like this:

        public class UsersController : Controller
        {
            private ICreateUserValidator validator;
            private IUserService service;

            public UsersController(ICreateUserValidator validator, IUserService service)
            {
                this.validator = validator;
                this.service = service;
            }

            public ActionResult Create(CreateUserViewModel viewModel)
            {
                ValidationResult result = validator.IsValid(viewModel);
                if (result.IsValid)
                {
                    service.CreateUser(viewModel);
                    return RedirectToAction("Index");
                }
                else
                {
                    foreach (var errorMessage in result.ErrorMessages)
                    {
                        ModelState.AddModelError(String.Empty, errorMessage);
                    }
                    return View(viewModel);
                }
            }
        }

    That makes some sense to me, but I'm not at all sure that this is the right way to handle things. It is, for example, entirely possible to pass an invalid instance of CreateUserViewModel to the IUserService class. I know I could use the built-in DataAnnotations, but what about when they aren't enough? Imagine that my ICreateUserValidator checks the database to see if there already is another user with the same name... Another option is to let the IUserService take care of the validation, like this:

        public class UserService : IUserService
        {
            private ICreateUserValidator validator;

            public UserService(ICreateUserValidator validator)
            {
                this.validator = validator;
            }

            public ValidationResult CreateUser(CreateUserViewModel viewModel)
            {
                var result = validator.IsValid(viewModel);
                if (result.IsValid)
                {
                    // Save the user
                }
                return result;
            }
        }

    But I feel I'm violating the Single Responsibility Principle here. How should I deal with something like this?
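    One common compromise (a sketch only, not from the question) is to let the controller validate for user feedback, while the service enforces validity as a precondition so an invalid view model can never reach persistence:

        using System;

        public class UserService
        {
            private readonly ICreateUserValidator validator;

            public UserService(ICreateUserValidator validator)
            {
                this.validator = validator;
            }

            public void CreateUser(CreateUserViewModel viewModel)
            {
                // Guard clause: persisting only valid users is part of this class's
                // one job, even though the rules themselves live in the validator.
                if (!validator.IsValid(viewModel).IsValid)
                    throw new ArgumentException("View model failed validation.", "viewModel");

                // Save the user
            }
        }

    Under that reading, the service's single responsibility is "create users correctly", and delegating the rule checking to an injected validator keeps the rules in one place.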

    Read the article

  • DRY and SRP

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/06/11/dry-and-srp.aspx

    Kent Beck's XP Simplicity Rules (aka Four Rules of Simple Design) are a prioritized list of rules that, when applied to your code, generally yield a great design. As you'll see from the above link, the list has slightly evolved over time. I find today they are usually listed as:

    1. All Tests Pass
    2. Don't Repeat Yourself (DRY)
    3. Express Intent
    4. Minimalistic

    These are prioritized. If your code doesn't work (rule 1) then everything else is forfeit. Go back to rule one and get the code working before worrying about anything else. Over the years the community has debated whether the priority of rules 2 and 3 should be reversed. Some say a little duplication in the code is OK as long as it helps express intent. I've debated it myself. This recent post got me thinking about this again, hence this post.

    I don't think it is fair to compare "Expressing Intent" against "DRY". This is a comparison of apples to oranges. "Expressing Intent" is a principle of code quality. "Repeating Yourself" is a code smell. A code smell is merely an indicator that there might be something wrong with the code. It takes further investigation to determine if a violation of an underlying principle of code quality has actually occurred. For example "using nouns for method names", "using verbs for property names", or "using Booleans for parameters" are all code smells that indicate that code probably isn't doing a good job at expressing intent. They are usually very good indicators. But what principle is the code smell of Duplication pointing to, and how good an indicator is it?

    Duplication in the code base is bad for a couple of reasons. If you need to make a change and that change needs to be made in a number of locations, it is difficult to know if you have caught all of them. This can lead to bugs if/when one of those locations is overlooked. By refactoring the code to remove all duplication you will be left with only one place to change, thereby eliminating this problem.

    With most projects the code becomes the single source of truth for the project. If a production code base is inconsistent with a five-year-old requirements or design document, the production code that people are currently living with is usually declared as the current reality (or truth). Requirement or design documents at this age in a project life cycle are usually of little value. Although comparing production code to external documentation is usually straightforward, duplication within the code base muddles this declaration of truth. When code is duplicated, small discrepancies will creep in between the two copies over time. The question then becomes: which copy is correct? As different factions debate how the software should work, trust in the software and the team behind it erodes.

    The code smell of Duplication points to a violation of the "Single Source of Truth" principle. Let me define that as: a stakeholder's requirement for a software change should never cause more than one class to change. Violation of the Single Source of Truth principle will always result in duplication in the code. However, the inverse is not always true. Duplication in the code does not necessarily indicate that there is a violation of the Single Source of Truth principle. To illustrate this, let's look at a retail system where the system will (1) send a transaction to a bank and (2) print a receipt for the customer.
    Although these are two separate features of the system, they are closely related. The reason for printing the receipt is usually to provide an audit trail back to the bank transaction. Both features use the same data: amount charged, account number, transaction date, customer name, retail store name, et cetera. Because both features use much of the same data, there is likely to be a lot of duplication between them. This duplication can be removed by making both features use the same data access layer.

    Then come the divergent requirements. The receipt stakeholder wants a change so that the account number has the last few digits masked out to protect the customer's privacy. That can be solved with a small IF statement whilst still eliminating all duplication in the system. Then the bank wants to take a picture of the customer as well as capture their signature and/or PIN number for enhanced security. Then the receipt owner wants to pull data from a completely different system to report the customer's loyalty program point total.

    After a while you realize that the two stakeholders have somewhat similar, but ultimately different, responsibilities. They have their own reasons for pulling the data access layer in different directions. Then it dawns on you, the Single Responsibility Principle: there should never be more than one reason for a class to change. In this example we have two stakeholders giving two separate reasons for the data access class to change. It is a clear violation of the Single Responsibility Principle. That's a problem because it can often lead to the project owner pitting the two stakeholders against each other in a vain attempt to get them to work out a mutual single source of truth. But that doesn't exist. There are two completely valid truths that the developers need to support. How is this to be supported while honouring the Single Responsibility Principle? The solution is to duplicate the data access layer and let each stakeholder control their own copy.

    The Single Source of Truth and Single Responsibility Principles are very closely related. SST tells you when to remove duplication; SRP tells you when to introduce it. They may seem to be fighting each other, but really they are not. The key is to clearly identify the different responsibilities (or sources of truth) over a system. Sometimes there is a single person with that responsibility, other times there are many. This can be especially difficult if the same person has dual responsibilities. They might not even realize they are wearing multiple hats.

    In my opinion Single Source of Truth should be listed as the second rule of simple design, with Express Intent at number three. Investigation of the DRY code smell should yield the proper application of SST, without violating SRP. When necessary, leave duplication in the system and let the class names express the different people that are responsible for controlling them. Knowing all the people with responsibilities over a system is the higher priority, because you'll need to know this before you can express it. Although it may be a code smell when there is duplication in the code, it does not necessarily mean that the coder has chosen to be expressive over DRY or that the code is bad.
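    To make the retail example concrete, here is a rough sketch (class and member names invented, not from the post) of what "duplicating the data access layer and letting each stakeholder control their own copy" might look like once the responsibilities diverge:

        using System;

        // Owned by the bank-transaction stakeholder.
        public class BankTransactionData
        {
            public decimal AmountCharged { get; set; }
            public string AccountNumber { get; set; }        // full number; signature/PIN capture added later
            public DateTime TransactionDate { get; set; }
        }

        // Owned by the receipt stakeholder.
        public class ReceiptData
        {
            public decimal AmountCharged { get; set; }
            public string MaskedAccountNumber { get; set; }  // last digits masked for privacy
            public DateTime TransactionDate { get; set; }
            public int LoyaltyPoints { get; set; }           // pulled from a different system
        }

    The two classes start out nearly identical (deliberate duplication), but each now has exactly one reason to change: the stakeholder who owns it.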

    Read the article

  • TLS-SRP ciphersuites support in browsers

    - by dag
    I'm doing some research on how browsers support TLS-SRP (RFC 5054). I know that TLS-SRP is implemented in GnuTLS, OpenSSL as of release 1.0.1, Apache mod_gnutls, cURL, TLS Lite and SecureBlackbox. I can't find any fresh source of information, only this from 2011: http://sim.ivi.co/2011/07/compare-tls-cipher-suites-for-web.html

    I'm testing them manually at the moment, but as far as I know no browser seems to support it. Beyond the current state, I'm interested in whether browsers are planning to support these ciphersuites in the future. Current findings (I'm sorry I can't include more than 2 links):

    Firefox: Bugzilla bug id 405155
    IE: Microsoft Connect bug ID 788412, dated 22/05/2013 (closed)
    Chromium/Chrome: the interesting work by Quinn Slack, http://qslack.com/2011/04/tls-srp-in-chrome-announcement/ and Chromium code review 6804032

    Any other help?

    Read the article

  • Single Responsibility Principle - How Can I Avoid Code Fragmentation?

    - by Dean Chalk
    I'm working on a team where the team leader is a virulent advocate of SOLID development principles. However, he lacks a lot of experience in getting complex software out of the door. We have a situation where he has applied SRP to what was already quite a complex code base, which has now become very highly fragmented and difficult to understand and debug. We now have a problem not only with code fragmentation, but also with encapsulation, as methods within a class that may have been private or protected have been judged to represent a 'reason to change' and have been extracted to public or internal classes and interfaces, which is not in keeping with the encapsulation goals of the application. We have some class constructors which take over 20 interface parameters, so our IoC registration and resolution is becoming a monster in its own right.

    I want to know if there is any 'refactor away from SRP' approach we could use to help fix some of these issues. I have read that it doesn't violate SOLID if I create a number of empty coarser-grained classes that 'wrap' a number of closely related classes to provide a single point of access to the sum of their functionality (i.e. mimicking a less overly SRP'd class implementation). Apart from that, I cannot think of a solution which will allow us to pragmatically continue with our development efforts, while keeping everyone happy. Any suggestions?
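    The "coarser-grained wrapper" idea the poster mentions is essentially a facade. A minimal sketch (all names invented for illustration) of what that might look like over a cluster of over-extracted classes:

        using System;

        public class Order { /* domain data */ }

        // Hypothetical fine-grained pieces produced by aggressive SRP extraction.
        public interface IOrderValidator { bool Validate(Order order); }
        public interface IOrderPricer { decimal Price(Order order); }
        public interface IOrderPersister { void Save(Order order, decimal total); }

        // Facade: one injection point and one public surface for callers,
        // while the fine-grained classes stay intact behind it.
        public class OrderProcessor
        {
            private readonly IOrderValidator validator;
            private readonly IOrderPricer pricer;
            private readonly IOrderPersister persister;

            public OrderProcessor(IOrderValidator validator, IOrderPricer pricer, IOrderPersister persister)
            {
                this.validator = validator;
                this.pricer = pricer;
                this.persister = persister;
            }

            public void Process(Order order)
            {
                if (!validator.Validate(order))
                    throw new InvalidOperationException("Order failed validation.");
                persister.Save(order, pricer.Price(order));
            }
        }

    Consumers then depend on OrderProcessor alone, which also shrinks the constructor parameter lists that currently leak into every caller.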

    Read the article

  • May I give a single class multiple responsibilities if only one will ever be reusable?

    - by lnluis
    To the extent that I understand the Single Responsibility Principle, a single class must have only one responsibility. We follow this so that we can reuse individual pieces of functionality in other classes without affecting the whole class. My question is: what if the entity has only one purpose that really interacts with the system, and that purpose won't change? Do you have to separate the implementations of your methods into another class and just instantiate those from your entity class? Or, to put it another way: is it OK to break the SRP if you know those functions will not be reusable in the future? Or is it better to assume that we do not know whether the functionalities of these methods will be reusable, and so abstract them into other classes anyway?

    Read the article

  • Single Responsibility Principle Implementation

    - by Mike S
    In my spare time, I've been designing a CMS in order to learn more about actual software design and architecture, etc. Going through the SOLID principles, I already notice that ideas like "MVC", "DRY", and "KISS" pretty much fall right into place. That said, I'm still having problems deciding if one of two implementations is the best choice when it comes to the Single Responsibility Principle.

    Implementation #1:

        class User
            getName
            getPassword
            getEmail
            // etc...

        class UserManager
            create
            read
            update
            delete

        class Session
            start
            stop

        class Login
            main

        class Logout
            main

        class Register
            main

    The idea behind this implementation is that all user-based actions are separated out into different classes (creating a possible case of the aptly-named Ravioli Code), but following the SRP to a "tee", almost literally. But then I thought that it was a bit much, and came up with this next implementation:

        class UserView extends View
            getLogin      // Returns the html for the login screen
            getShortLogin // Returns the html for an inline login bar
            getLogout     // Returns the html for a logout button
            getRegister   // Returns the html for a register page
            // etc... as needed

        class UserModel extends DataModel implements IDataModel
            // Implements no new methods yet, outside of the interface methods
            // Haven't figured out anything special to go here at the moment
            // All CRUD operations are handled by DataModel
            // through methods implemented by the interface

        class UserControl extends Control implements IControl
            login
            logout
            register
            startSession
            stopSession

        class User extends DataObject
            getName
            getPassword
            getEmail
            // etc...

    This is obviously still very organized, and still very "single responsibility". The User class is a data object that I can manipulate data on and then pass to the UserModel to save it to the database. All the user data rendering (what the user will see) is handled by UserView and its methods, and all the user actions are in one place in UserControl (plus some automated stuff required by the CMS to keep a user logged in or to ensure that they stay out). Personally I feel that both are effectively correct, but I can't decide which one would be easier to maintain and extend as life goes on (despite leaning towards Implementation #1). So what about you guys? What are your opinions on this? Which one is better? What basics (or otherwise, nuances) of that principle have I missed in either design?

    Read the article

  • SRP & "axis of change"?

    - by lance
    I'm reading Bob Martin's principles of OOD, specifically the SRP text, and I understand the spirit of what it's saying pretty well, but I don't quite understand a particular phrasing from page 2 of the link (page 150 of the book), which I paraphrase: "It is important to separate these two responsibilities into separate classes because each responsibility is an axis of change." What exactly is meant here by "axis of change"?

    Read the article

  • Aren't Information Expert / Tell Don't Ask at odds with the Single Responsibility Principle?

    - by moffdub
    It is probably just me, which is why I'm asking the question. Information Expert, Tell Don't Ask, and SRP are often mentioned together as best practices, but I think they are at odds. Here is what I'm talking about.

    Code that favors SRP but violates Tell Don't Ask / Info Expert:

        Customer bob = ...;
        // TransferObjectFactory has to use Customer's accessors to do its work,
        // which violates Tell Don't Ask
        CustomerDTO dto = TransferObjectFactory.createFrom(bob);

    Code that favors Tell Don't Ask / Info Expert but violates SRP:

        Customer bob = ...;
        // Now Customer is doing more than just representing the domain concept of Customer,
        // which violates SRP
        CustomerDTO dto = bob.toDTO();

    If they are indeed at odds, that's a vindication of my OCD. Otherwise, please fill me in on how these practices can co-exist peacefully. Thank you.

    Edit: someone asked for a definition of the terms:

    Information Expert: objects that have the data needed for an operation should host the operation.
    Tell Don't Ask: don't ask objects for data in order to do work; tell the objects to do the work.
    Single Responsibility Principle: each object should have a narrowly defined responsibility.

    Read the article

  • How do I prove or disprove that "god" objects are wrong?

    - by honestduane
    Problem Summary: Long story short, I inherited a code base and a development team I am not allowed to replace, and the use of God Objects is a big issue. Going forward, I want to have us refactor things, but I am getting push-back from the teams who want to do everything with God Objects "because it's easier", and this means I would not be allowed to refactor. I pushed back, citing my years of dev experience and the fact that I'm the new boss who was hired to know these things, etc., and so did the third-party offshore company's account sales rep. This is now at the executive level, my meeting is tomorrow, and I want to go in with a lot of technical ammo to advocate best practices, because I feel it will be cheaper for the company in the long run (and I personally feel that is what the third party is worried about). My issue is that, at a technical level, I know it's good long term, but I'm having trouble with the ultra short term and the 6-month term, and while it's something I "know", I can't prove it with references and cited resources outside of one person (Robert C. Martin, aka Uncle Bob); that matters because I have been told that having data from one person, and only one person, is not a good enough argument.

    Question: What are some resources I can cite directly (title, year published, page number, quote) by well-known experts in the field that explicitly say this use of "God" Objects/Classes/Systems is bad (or good, since we are looking for the most technically valid solution)?

    Research I have already done: I have a number of books here and I have searched their indexes for the use of the words "god object" and "god class". I found that, oddly, it's almost never used; the copy of the GoF book I have, for example, never uses it (at least according to the index in front of me). I have found it in the two books listed below, but I want more I can use. I checked the Wikipedia page for "God Object" and it's currently a stub with few reference links, so although I personally agree with what it says, it doesn't have much I can use in an environment where personal experience is not considered valid. The book cited there is also considered too old to be valid by the people I am debating these technical points with, as the argument they are making is that "it was once thought to be bad but nobody could prove it, and now modern software says 'god' objects are good to use". I personally believe that this statement is incorrect, but I want to prove the truth, whatever it is.

    In Robert C. Martin's "Agile Principles, Patterns, and Practices in C#" (ISBN: 0-13-185725-8, hardcover), page 266 states: "Everybody knows that god classes are a bad idea. We don't want to concentrate all the intelligence of a system into a single object or a single function. One of the goals of OOD is the partitioning and distribution of behavior into many classes and many function." -- and then goes on to say it is sometimes better to use God Classes anyway (citing micro-controllers as an example).

    In Robert C. Martin's "Clean Code: A Handbook of Agile Software Craftsmanship", page 136 (and only this page) talks about the "God class" and calls it out as a prime example of a violation of the "classes should be small" rule he uses to promote the Single Responsibility Principle, starting on page 138.

    The problem I have is that all my references and citations come from the same person (Robert C. Martin), a single source. I am being told that because he is just one guy, my desire to not use "God Classes" is invalid and not accepted as a standard best practice in the software industry. Is this true? Am I doing things wrong from a technical perspective by trying to keep to the teachings of Uncle Bob?

    God Objects and Object Oriented Programming and Design: The more I think of this, the more I think this is something you learn when you study OOP and it's never explicitly called out; my thinking is that it's implicit to good design (feel free to correct me, please, as I want to learn). The problem is I "know" this, but not everybody does, so in this case it's not considered a valid argument, because I am effectively calling it out as universal truth when in fact most people are statistically ignorant of it, since statistically most people are not programmers.

    Conclusion: I am at a loss on what to search for to get the best additional results to cite, since they are making a technical claim and I want to know the truth and be able to prove it with citations like a real engineer/scientist, even if I am biased against god objects due to my personal experience with code that used them. Any assistance or citations would be deeply appreciated.

    Read the article

  • IValidatableObject vs Single Responsibility

    - by Boris Yankov
    I like the extensibility point of MVC that allows view models to implement IValidatableObject and add custom validation. I try to keep my controllers lean, having this be the only validation logic:

        if (!ModelState.IsValid)
            return View(loginViewModel);

    For example, a login view model implements IValidatableObject and gets an ILoginValidator object via constructor injection:

        public interface ILoginValidator
        {
            bool UserExists(string email);
            bool IsLoginValid(string userName, string password);
        }

    It seems that injecting instances into view models with Ninject isn't really a common practice, and may even be an anti-pattern? Is this a good approach? Is there a better one?
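    For context, the shape being described is roughly this (the LoginViewModel members are my guesses, not the poster's code; ILoginValidator is as defined above):

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        public class LoginViewModel : IValidatableObject
        {
            private readonly ILoginValidator loginValidator;

            public LoginViewModel(ILoginValidator loginValidator)
            {
                this.loginValidator = loginValidator;
            }

            public string Email { get; set; }
            public string Password { get; set; }

            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
            {
                if (!loginValidator.UserExists(Email))
                    yield return new ValidationResult("Unknown user.", new[] { "Email" });
                else if (!loginValidator.IsLoginValid(Email, Password))
                    yield return new ValidationResult("Invalid credentials.", new[] { "Password" });
            }
        }

    The friction the poster hints at is that the default model binder creates view models through a parameterless constructor, so getting Ninject to supply ILoginValidator here takes extra wiring - one reason many people move this kind of check into a service or a dedicated validator instead.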

    Read the article

  • Designing and refactoring of payment logic

    - by jokklan
    I'm currently working on an application that helps users coordinate dinner clubs and all the related accounting. (A dinner club is where people in a group take turns cooking for the rest, and you pay a small amount to participate. This is pretty normal in dorms and colleges where I'm from.) However, there are several different models that all have a price, so the accounting aspect is spread out a little. We have both DinnerClub and ShoppingItem, and we are about to implement a third, Payment, for when users pay their debts (or get refunded for expenses). Each of these has a "price" attribute, and a user's expense (the amount he or she needs refunded) is calculated as the total of these "prices" minus what other users have bought that he or she has used/participated in. My question, then, is whether anyone has hints on refactoring this to bring all this behaviour together in one place. For now I have thought about a Transaction class that is responsible for this behaviour, but I'm a little worried about the performance impact of having to query for another polymorphic record each time I want to show the price on dinner clubs and shopping items (I have a standard index page with a list for both, so that's a lot of extra records being queried)...

    Read the article

  • Single Responsibility Principle: Responsibility unknown

    - by lurkerbelow
    I store sessions in a SessionManager. The session manager has a dependency on ISessionPersister.

    SessionManager:

        private readonly ISessionPersister sessionPersister;

        public SessionManager(ISessionPersister sessionPersister)
        {
            this.sessionPersister = sessionPersister;
        }

    ISessionPersister:

        public interface ISessionPersister : IDisposable
        {
            void PersistSessions(Dictionary<string, ISession> sessions);
        }

    Q: If my application shuts down, how / where do I call PersistSessions? Who is responsible?

    First approach: use Dispose in SessionManager:

        protected virtual void Dispose(bool disposing)
        {
            if (disposing)
            {
                if (this.sessionPersister != null && this.sessionMap != null && this.sessionMap.Count > 0)
                {
                    this.sessionPersister.PersistSessions(this.sessionMap);
                }
            }
        }

    Is that the way to go, or are there any better solutions?
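    One alternative to overloading Dispose (shown purely as a sketch, not the poster's code) is to give the "persist on shutdown" job to whoever owns the application lifetime, e.g. at the composition root:

        // At application startup, after the SessionManager is created:
        var sessionManager = new SessionManager(sessionPersister);

        AppDomain.CurrentDomain.ProcessExit += (sender, e) =>
        {
            // The host, not SessionManager, decides when sessions are persisted.
            sessionManager.PersistAll();
        };

    Here PersistAll() is a hypothetical explicit method on SessionManager that forwards its session map to ISessionPersister; making that call explicit keeps Dispose free for releasing resources and makes the shutdown behaviour visible at the composition root.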

    Read the article

  • I know of three ways in which SRP helps reduce coupling. Are there even more? [closed]

    - by user1483278
    I'd like to figure out all the possible ways SRP helps us reduce coupling. Thus far I can think of three:

    1) If class A has more than one responsibility, these responsibilities become coupled, and as such changes to one of these responsibilities may require changes to A's other responsibilities.

    2) Related functionality usually needs to be changed for the same reason, and by grouping it together in a single class the changes can be made in as few places as possible (at best, changes only need to be made to the class which groups together these functionalities).

    3) Assuming class A performs two tasks (and thus may change for two reasons), the number of classes utilising A will be greater than if A performed just a single task (the reason being that some classes will need A to perform the first task, others will need A for the second task, and still others will utilise it for both). This also means that when A breaks, the number of classes (utilising A) being impaired will be greater than if A performed just a single task.

    Can SRP also help reduce coupling in any other way not described above? Thank you.

    Read the article

  • Shared library under Ubuntu

    - by Hema Joshi
    Hi, I have compiled srp-2.1.2 under Ubuntu using make, and it created a file libsrp.a. Can anyone tell me how I can use libsrp.a as a shared library? I want to use libsrp in a C# file under Ubuntu by using DllImport. Thanks.
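    For what it's worth, DllImport can only load a shared object (.so), not a static archive (.a), so the usual route is to relink the library as a shared object and then declare its entry points in C#. A rough sketch follows; the exported function name is purely a placeholder, since it depends on libsrp's actual API:

        // Build step (shell): relink the static archive into a shared object, e.g.
        //   gcc -shared -o libsrp.so -Wl,--whole-archive libsrp.a -Wl,--no-whole-archive
        // (the objects must have been compiled with -fPIC for this to work;
        //  otherwise rebuild the library with -fPIC first)

        using System;
        using System.Runtime.InteropServices;

        static class Srp
        {
            // "SRP_initialize_library" is a placeholder - use the real symbol
            // exported by libsrp (check with: nm -D libsrp.so)
            [DllImport("libsrp.so", CallingConvention = CallingConvention.Cdecl)]
            public static extern int SRP_initialize_library();
        }

        class Program
        {
            static void Main()
            {
                Console.WriteLine(Srp.SRP_initialize_library());
            }
        }

    Under Mono on Ubuntu, the runtime resolves "libsrp.so" through the normal dynamic-loader paths, so the .so needs to sit in a directory the loader searches (or be mapped via a dllmap entry).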

    Read the article

  • How do you determine how coarse or fine-grained a 'responsibility' should be when using the single responsibility principle?

    - by Mark Rogers
    In the SRP, a 'responsibility' is usually described as 'a reason to change', so that each class (or object?) should have only one reason someone would have to go in there and change it. But if you take this to the extreme fine grain, you could say that an object adding two numbers together is a responsibility and a possible reason to change. Therefore the object should contain no other logic, because that would produce another reason for change. I'm curious whether anyone out there has strategies for 'scoping' the single responsibility principle that are slightly less subjective.

    Read the article

  • Are form validation and business validation together too much?

    - by Robert Cabri
    I've got a question about form validation and business validation. I see a lot of frameworks that use some sort of form validation library: you submit some values and the library validates the values from the form. If they're not OK, it shows some errors on your screen. If all goes to plan, the values are set into domain objects. There the values will be - or, better said, should be - validated (again), most likely with the same validation as in the validation library. I know two PHP frameworks with this kind of construction: Zend and Kohana. When I look at programming and some principles like Don't Repeat Yourself (DRY) and the single responsibility principle (SRP), this isn't a good way: as you can see, it validates twice. Why not create domain objects that do the actual validation?

    Example: a form with username and email is submitted. The values of the username field and the email field will be populated into two different domain objects, Username and Email:

        class Username {}
        class Email {}

    These objects validate their data and, if not valid, throw an exception. Do you agree? What do you think about this approach? Is there a better way to implement validations? I'm confused about how a lot of frameworks/developers handle this stuff. Are they all wrong, or am I missing a point?

    Edit: I know there should also be client-side validation. That is a different ballgame in my opinion. If you have some comments on this and a way to deal with this kind of stuff, please provide them.
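    The "domain objects that do the actual validation" idea is essentially self-validating value objects. A tiny sketch of that idea (written in C# here purely for illustration; the poster's context is PHP, and the email rule is deliberately simplistic):

        using System;

        public sealed class Email
        {
            public string Value { get; private set; }

            public Email(string value)
            {
                // The object cannot exist in an invalid state:
                // construction is the single place the rule is enforced.
                if (string.IsNullOrWhiteSpace(value) || !value.Contains("@"))
                    throw new ArgumentException("Not a valid email address.", "value");
                Value = value;
            }
        }

    The form layer then only has to catch construction failures and translate them into field errors, instead of repeating the rules themselves.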

    Read the article

  • Pseudo-Backwards Builder Pattern?

    - by Avid Aardvark
    In a legacy codebase I have a very large class with far too many fields/responsibilities. Imagine it is a Pizza object. It has highly granular fields like hasPepperoni, hasSausage, and hasBellPeppers. I know that when these three fields are true, we have a Supreme pizza. However, this class is not open for extension or change, so I can't add a PizzaType, or isSupreme(), etc. Folks throughout the codebase duplicate the same "if (a && b && c) then isSupreme" logic all over the place. This issue comes up for quite a few concepts, so I'm looking for a way to deconstruct this object into many sub-objects, e.g. a pseudo-backwards Builder pattern:

        PizzaType pizzaType = PizzaUnbuilder.buildPizzaType(pizza); // PizzaType.SUPREME
        Dough dough = PizzaUnbuilder.buildDough(pizza);

    Is this the right approach? Is there a pattern for this already? Thanks!
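    What the poster describes sounds close to a static mapper (or, in C#, extension methods) that derives richer concepts from the closed class without touching it. A rough sketch, with Pizza's boolean fields taken from the question and everything else invented:

        public enum PizzaType { Plain, Supreme /* , ... */ }

        public static class PizzaExtensions
        {
            // The "if (a && b && c) then isSupreme" rule now lives in exactly one place.
            public static PizzaType GetPizzaType(this Pizza pizza)
            {
                if (pizza.hasPepperoni && pizza.hasSausage && pizza.hasBellPeppers)
                    return PizzaType.Supreme;
                return PizzaType.Plain;
            }
        }

        // Usage:
        //   PizzaType type = pizza.GetPizzaType();

    This isn't really a builder run backwards; "mapper" or "adapter over a legacy god object" is probably the closer name for it.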

    Read the article

  • Graph limitations - Should I use Decorator?

    - by Nick Wiggill
    I have a functional AdjacencyListGraph class that adheres to a defined interface GraphStructure. In order to layer limitations on top of this (e.g. acyclic, non-null, unique vertex data, etc.), I can see two possible routes, each making use of the GraphStructure interface:

    1. Create a single class ("ControlledGraph") that has a set of bit flags specifying various possible limitations. Handle all limitations in this class. Update the class if new limitation requirements become apparent.

    2. Use the decorator pattern (DI, essentially) to create a separate class implementation for each individual limitation that a client class may wish to use. The benefit here is that we are adhering to the Single Responsibility Principle.

    I would lean toward the latter, but, by Jove, I hate the Decorator pattern. It is the epitome of clutter, IMO. Truthfully it all depends on how many decorators might be applied in the worst case -- in mine so far, the count is seven (the number of discrete limitations I've recognised at this stage). The other problem with decorator is that I'm going to have to do interface method wrapping in every... single... decorator class. Bah. Which would you go for, if either? Or, if you can suggest some more elegant solution, that would be welcome.

    EDIT: It occurs to me that using the proposed ControlledGraph class with the strategy pattern may help here... some sort of template method / functors setup, with individual bits applying separate controls in the various graph-canonical interface methods. Or am I losing the plot?
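    For concreteness, one of the proposed decorators might look roughly like this (only the GraphStructure name comes from the question; its members and everything else are invented for the sketch):

        using System;

        public interface GraphStructure<T>
        {
            void AddVertex(T data);
            void AddEdge(T from, T to);
        }

        // Decorator enforcing a single limitation; other limitations would stack the same way.
        public class NonNullVertexGraph<T> : GraphStructure<T> where T : class
        {
            private readonly GraphStructure<T> inner;

            public NonNullVertexGraph(GraphStructure<T> inner)
            {
                this.inner = inner;
            }

            public void AddVertex(T data)
            {
                if (data == null)
                    throw new ArgumentNullException("data");
                inner.AddVertex(data);
            }

            public void AddEdge(T from, T to)
            {
                inner.AddEdge(from, to);   // pure pass-through: the wrapping overhead the poster dislikes
            }
        }

    Usage is nesting, e.g. new NonNullVertexGraph<string>(new AdjacencyListGraph<string>()), with further decorators wrapped around the outside, which also illustrates why seven limitations mean seven layers of method forwarding.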

    Read the article

  • Is this a violation of the single responsibility principle?

    - by L. Moser
    I have the following method and interface:

        public object ProcessRules(List<IRule> rules)
        {
            foreach (IRule rule in rules)
            {
                if (EvaluateExpression(rule.Exp) == true)
                    return rule.Result;
            }
            // Some error handling here for not hitting any rules
        }

        public interface IRule
        {
            Expression Exp { get; }
            object Result { get; }
            int Precedence { get; }
        }

    Because rules have a precedence, they should actually never be processed out of order. This leaves me with (I think) three solutions:

    1. Sort rules before passing them into the evaluator.
    2. Change the parameter type to something that enforces a sort order.
    3. Sort within the evaluator.

    I like option 3 because it always ensures that the rules are sorted, and I like option 1 because it seems more cohesive. Option 2 seems like a good compromise. Is a scenario like this context-specific/subjective, or is there really a best practice to be applied here?
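    As an illustration of option 3 (sorting inside the evaluator, so callers cannot get the order wrong) - a sketch only, reusing IRule and EvaluateExpression from the question:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public object ProcessRules(IEnumerable<IRule> rules)
        {
            // Ordering is treated as an invariant of rule evaluation, so it is enforced here.
            foreach (IRule rule in rules.OrderBy(r => r.Precedence))
            {
                if (EvaluateExpression(rule.Exp))
                    return rule.Result;
            }
            throw new InvalidOperationException("No rule matched.");
        }

    Whether this counts as a second responsibility is the crux of the question; one reading is that "evaluate rules in precedence order" is still a single responsibility, just stated precisely.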

    Read the article

  • Help understanding the Single Responsibility Principle

    - by user204588
    I'm trying to understand what a responsibility actually is, so I want to use an example of something I'm currently working on. I have an app that imports product information from one system into another system. The user of the app gets to choose various settings for which product fields from one system they want to use in the other system. So I have a class, say ProductImporter, and its responsibility is to import products. This class is large, probably too large. The methods in this class are complex; one example is getDescription. This method doesn't simply grab a description from the other system, but builds a product description based on various settings chosen by the user. If I were to add a setting and a new way to get a description, this class could change. So, is that two responsibilities? Is there one that imports products and one that builds a description? It would seem that way, but then almost every method I have would be in its own class, and that seems like overkill. I really need a good description of this principle because it's hard for me to completely understand. I don't want needless complexity.
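    One common way to frame it (a sketch with invented names, not the poster's code) is that "import products" and "decide what the description looks like" are separate reasons to change, so the latter becomes a single collaborator rather than a pile of methods inside the importer:

        public class SourceProduct { /* fields from the source system */ }
        public class ImportSettings { /* options chosen by the user */ }
        public class TargetProduct { public string Description { get; set; } /* ... */ }

        public interface IDescriptionBuilder
        {
            string BuildDescription(SourceProduct source, ImportSettings settings);
        }

        public class ProductImporter
        {
            private readonly IDescriptionBuilder descriptionBuilder;

            public ProductImporter(IDescriptionBuilder descriptionBuilder)
            {
                this.descriptionBuilder = descriptionBuilder;
            }

            public TargetProduct Import(SourceProduct source, ImportSettings settings)
            {
                return new TargetProduct
                {
                    Description = descriptionBuilder.BuildDescription(source, settings)
                    // ... other field mappings ...
                };
            }
        }

    Adding a new description-related setting then changes only the IDescriptionBuilder implementation, while the importer changes only when the import flow itself changes - without every small method needing its own class.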

    Read the article

1 2 3 4  | Next Page >