
  • danielsaidi 9:37 pm on February 27, 2012 Permalink | Reply
Tags: dependency injection, interface, inversion of control, sub routines

    Dependency Injection gone too far? 

I am currently working on a new version of a hobby console app of mine that executes certain actions depending on its input arguments. Now, I wonder if I am taking the concept of dependency injection too far.

    Which dependencies should you inject, and which should you keep as invisible dependencies?

    How does Cloney work?

    The Cloney console application will do different things depending on the input arguments it is provided with. For instance, if I enter

     cloney --clone --source=c:/MyProject --target=d:/MyNewProject

    Cloney will clone the solution according to certain rules.

To keep the design clean and flexible, I introduced the concept of sub routines. A sub routine is a class that implements the ISubRoutine interface, which means that it can be executed with the input arguments of the console application. For instance, the CloneRoutine listens for the input arguments above, while the HelpRoutine triggers on cloney --help.

When the application starts, Cloney fetches all ISubRoutine implementations and tries to execute each of them with the input arguments. Some may trigger, some may not. If no routine triggers, Cloney displays a help message.
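The setup described above could be sketched roughly like this. This is a minimal, hypothetical illustration, not Cloney's actual code: the ISubRoutine interface and its Run signature come from the post, while Program, RunAll, HelpRoutine and the messages are assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// The interface each sub routine implements (signature from the post).
public interface ISubRoutine
{
    // Returns true if the routine triggered on the given arguments.
    bool Run(IEnumerable<string> args);
}

// Hypothetical example routine that triggers on "--help".
public class HelpRoutine : ISubRoutine
{
    public bool Run(IEnumerable<string> args)
    {
        if (!args.Contains("--help")) return false;
        Console.WriteLine("usage: cloney --clone --source=<path> --target=<path>");
        return true;
    }
}

public static class Program
{
    // Finds every ISubRoutine implementation via reflection (which is why
    // each routine must have a default constructor) and runs each one.
    public static bool RunAll(string[] args)
    {
        var routines = Assembly.GetExecutingAssembly().GetTypes()
            .Where(t => typeof(ISubRoutine).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract)
            .Select(t => (ISubRoutine)Activator.CreateInstance(t));
        // Run every routine; count how many triggered.
        return routines.Count(r => r.Run(args)) > 0;
    }

    public static void Main(string[] args)
    {
        if (!RunAll(args))
            Console.WriteLine("No routine triggered - showing help message.");
    }
}
```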

    So, what is the problem?

    Well, there is really no problem…just different ways to do things.

For instance, when it comes to parsing the input arguments and making them convenient to handle, I use a class called CommandLineArgumentParser, which implements ICommandLineArgumentParser. The class transforms the default string array into a dictionary, which makes it easy to map an argument key to an argument value.
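In essence, such a parser could look something like this. A minimal sketch, assuming `--key=value` syntax and flags mapped to "true"; the real CommandLineArgumentParser implementation may well differ.

```csharp
using System;
using System.Collections.Generic;

public static class ArgumentParsing
{
    // Turns e.g. ["--clone", "--source=c:/MyProject"] into a dictionary
    // so that a routine can look up argument values by key.
    public static IDictionary<string, string> Parse(IEnumerable<string> args)
    {
        var result = new Dictionary<string, string>();
        foreach (var arg in args)
        {
            if (!arg.StartsWith("--")) continue;
            // Split "key=value" at the first '=' only.
            var parts = arg.Substring(2).Split(new[] { '=' }, 2);
            // Value-less flags (e.g. --clone) are mapped to "true".
            result[parts[0]] = parts.Length == 2 ? parts[1] : "true";
        }
        return result;
    }
}
```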

Using the class is a choice each sub routine must make. The interface just defines the following method:

     bool Run(IEnumerable<string> args)

    Yeah, that’s right. Each sub routine just acts like a program of its own. As far as the master program is concerned, it just delegates the raw argument array it receives to each sub routine. How the routine handles the arguments is entirely up to it.

    The old design – DI for all (too much?)

    Previously, the CloneRoutine had two constructors:

   public CloneRoutine()
      : this(Default.Console, Default.Translator, Default.CommandLineArgumentParser, Default.SolutionCloner) { ... }

   public CloneRoutine(IConsole console, ITranslator translator, ICommandLineArgumentParser argumentParser, ISolutionCloner solutionCloner) { ... }

Since a sub routine is created with reflection, it must provide a default constructor. Here, the default constructor uses default implementations of each interface, while the custom constructor is used in the unit tests and supports full dependency injection. Each dependency is exposed and pluggable.

    So, what is the problem?

Well, I just feel that since the command line arguments define what the routine should do, letting the class behavior be entirely determined by how another component parses those arguments makes the class unreliable.

If I provide the class with a parser implementation that returns an invalid set of arguments, the class may explode due to that invalid implementation, even if I provide the routine with arguments that it should trigger on (especially considering that the return value is a do-as-you-please IDictionary).

Is that not bad?

    The new design – DI where I think it’s needed (enough?)

Instead of the old design, is this not better:

   public CloneRoutine()
      : this(Default.Console, Default.Translator, Default.SolutionCloner) { ... }

   public CloneRoutine(IConsole console, ITranslator translator, ISolutionCloner solutionCloner) {
      ...
      this.argumentParser = Default.CommandLineArgumentParser;
      ...
   }

This way, I depend on the choice of ICommandLineArgumentParser implementation that I have made in the Default class, but if that implementation is incorrect, my unit tests will break. The other three injections are (IMO) the ones that should be exchangeable. The argument parser should not be.

Is this good design, or am I doing something terribly bad by embedding a hard dependency, especially since all other component dependencies can be injected? Please provide me with your comments regarding this situation.

     
    • Henrik 12:05 am on February 28, 2012 Permalink | Reply

I think this looks good! What you can do is either Property Injection (i.e. make argumentParser a public property which can be set by the test code when needed) or the Extract and Override technique (i.e. make argumentParser a protected virtual property, then make a testable class that inherits from your “production” class (e.g. CloneRoutineStub : CloneRoutine) and override that virtual property).

      Or am I getting your question wrong?

    • danielsaidi 7:21 am on February 28, 2012 Permalink | Reply

Thanks for commenting, Henrik! I guess my question was this: when should you allow behavior to be injected, and when should you not go down that path?

For the example above, I think injecting the IConsole, ITranslator and ISolutionCloner implementations is fine, since they define responsibilities that SHOULD be delegated by the class.

      However, I think that acting on the input arguments received in the Run method should be the responsibility of the class, and should not be injectable.

That the routine chooses a certain component to parse the arguments is absolutely fine (and I kind of have a DI model, since the routine uses the Default.CommandLineArgumentParser), but it should not be exposed.

      If I allow the argument parsing behavior to be injectable, I can make the class stop working in really strange ways, since the parser has to parse the arguments in a very specific way. IMO, the argument parser differs from the other three components.

      So….do you agree? 🙂

    • Henrik 8:18 am on February 28, 2012 Permalink | Reply

      I agree! I think it’s perfectly okay! Context is king!
Requiring dependencies to be injected is not an end in itself.

      Maybe you want me to disagree, so we get this lovely war feeling? 🙂

      • danielsaidi 8:41 am on February 28, 2012 Permalink | Reply

        I do love the war feeling, but sometimes, getting along is a wonderful thing as well. 🙂

        System design sure is tricky sometimes. I really appreciate having you and other devs to discuss these things with.

    • Daniel Lee 9:42 am on February 28, 2012 Permalink | Reply

      I can’t see why you would ever need to switch out the command line argument parser. And as you say yourself, it feels more like core logic than a dependency.

      So you made the right decision.

      (Although, in such a small codebase as Cloney, I don’t know if this really matters all that much?)

    • danielsaidi 10:01 am on February 28, 2012 Permalink | Reply

      Nice, thanks Daniel 🙂

I think I’ll roll with this design for now and replace it whenever I see the need to. I do not like all these hard dependencies on the Default class – it’s like making the classes dependent on the existence of StructureMap.

However, as you say, it IS core logic. Cloney is not a general library, so these kinds of dependencies may not be as bad as if I had the same design in, say, a general lib like .NExtra.

    • Johan Driessen 12:39 pm on February 28, 2012 Permalink | Reply

If your main concern is that your unit tests will be more fragile and break if your implementation of the argument parser is incorrect (or just changes), why don’t you just make the argumentParser field protected and virtual?

Then you can just create a “TestableCloneRoutine” in your tests that inherits from CloneRoutine and replaces the argumentParser with a stub, so that your tests don’t become dependent on the actual implementation, while still not making the internals visible to other classes.

      AKA “extract and override”.

      • Johan Driessen 12:41 pm on February 28, 2012 Permalink | Reply

        Actually, you would have to make argumentParser a property (still protected and virtual) and have it return Default.CommandLineArgumentParser in CloneRoutine.
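The extract and override technique that Henrik and Johan describe could look something like this. A sketch under stated assumptions: the ICommandLineArgumentParser members, the simplified Default and parser classes, and the Run body are illustration names, not Cloney's actual code.

```csharp
using System.Collections.Generic;
using System.Linq;

public interface ICommandLineArgumentParser
{
    IDictionary<string, string> Parse(IEnumerable<string> args);
}

// Hypothetical, simplified parser: maps "--flag" to a "true" entry.
public class DictionaryArgumentParser : ICommandLineArgumentParser
{
    public IDictionary<string, string> Parse(IEnumerable<string> args)
    {
        return args.Where(a => a.StartsWith("--"))
                   .ToDictionary(a => a.TrimStart('-'), a => "true");
    }
}

// Stand-in for the Default class mentioned in the post.
public static class Default
{
    public static readonly ICommandLineArgumentParser CommandLineArgumentParser =
        new DictionaryArgumentParser();
}

public class CloneRoutine
{
    // Extract and override: the parser is a protected virtual property,
    // so it stays invisible to other classes, but a test subclass can
    // still replace it.
    protected virtual ICommandLineArgumentParser ArgumentParser
    {
        get { return Default.CommandLineArgumentParser; }
    }

    public bool Run(IEnumerable<string> args)
    {
        // Illustrative trigger condition only.
        return ArgumentParser.Parse(args).ContainsKey("clone");
    }
}

// Test code: inherits and overrides, keeping the production API unchanged.
public class TestableCloneRoutine : CloneRoutine
{
    private readonly ICommandLineArgumentParser stub;
    public TestableCloneRoutine(ICommandLineArgumentParser stub) { this.stub = stub; }
    protected override ICommandLineArgumentParser ArgumentParser
    {
        get { return stub; }
    }
}
```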

        • danielsaidi 2:39 pm on February 28, 2012 Permalink

          Thanks for your input, Johan. I will try to clarify my main concern regarding the design.

If we look at the unit tests, I think that the following expression is a good test scenario:

“If I start Cloney with the argument array [“--help”], the HelpRoutine class should trigger and write to the console”

          In my unit tests, for instance, I can then trigger the Run method with various arguments and see that my IConsole mock receives a call to WriteLine only when I provide the method with valid input.
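Such a test could be expressed roughly like this, with a hand-rolled fake console standing in for the IConsole mock. A sketch only: the HelpRoutine internals and the FakeConsole are assumptions, and the real tests may use a mocking library instead.

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IConsole
{
    void WriteLine(string text);
}

// Hand-rolled fake that records everything written to it, so a test
// can verify that WriteLine was (or was not) called.
public class FakeConsole : IConsole
{
    public readonly List<string> Lines = new List<string>();
    public void WriteLine(string text) { Lines.Add(text); }
}

// Hypothetical HelpRoutine internals, for illustration.
public class HelpRoutine
{
    private readonly IConsole console;
    public HelpRoutine(IConsole console) { this.console = console; }

    public bool Run(IEnumerable<string> args)
    {
        if (!args.Contains("--help")) return false;
        console.WriteLine("Cloney usage: ...");
        return true;
    }
}
```

The test then only asserts on the resulting behavior (did the routine trigger, did it write to the console), not on how the arguments were parsed internally.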

If, on the other hand, the argument parsing behavior is exposed, the HelpRoutine will communicate that it has an IArgumentParser that parses a string[] into an IDictionary. IMO, this is not relevant.

          Furthermore, if I make the parser injectable, my test scenario would rather be expressed like this:

“If I start Cloney with the argument array [“--help”], the HelpRoutine class should trigger and write to the console, if the argument parser it uses parses the array into an IDictionary where the “help” key is set to true.”

          I am not sure which test scenario I prefer. The second one is more honest, since the routine’s behavior IS based on the parser’s behavior…but is that really what I want to test?

I considered re-adding the IArgumentParser as a constructor parameter, just to make it possible to inject it, but I am not really sure. I see benefits with this, as I do with keeping it completely internal.

          IMO, the fact that the routine uses an ArgumentParser to parse the arguments should not be of any concern to anyone but the class. It’s the resulting behavior that should matter.

          But I have a split feeling about it all.

  • danielsaidi 4:12 pm on December 8, 2011 Permalink | Reply
Tags: circular namespace dependencies, implementation, interface

    NDepend getting my juices flowing 

    I have been using NDepend to analyse the latest version of .NExtra. The code is spotless (this is where you should detect the irony), but the analysis is highlighting some really interesting design flaws that I will probably fix in the next major version.

First of all, almost all cases where method or type names are too long are where I have created facades or abstractions for native .NET entities, like Membership. Here, I simply add exceptions so that NDepend will not raise any warnings, since the whole point of the facades is that they should mirror the native classes, to simplify switching them out for your own implementations, mocks etc.

Second, NDepend warns about naming conventions that I do not agree with, such as that method names should start with m_. Here, I simply remove the rules, since I do not find them valid. In hindsight, though, I could have kept the rules, renamed them and made them check that my own conventions are followed. I will probably do so later on.

Third, and the thing I learned the most from today: it turned out that I had circular namespace dependencies, despite having put a lot of effort into avoiding circular namespace and assembly dependencies. The cause turned out to be a dependency between the X and X/Abstractions namespaces.

[Figure: Circular dependency graph – the base and abstractions namespaces depend on each other]

    For instance, have a look at NExtra.Geo, which contains geo location utility classes. The namespace contains entities like Position and enums like DistanceUnits, as well as implementations of the interfaces in the Abstractions sub namespace.

    Now, what happens is that the interfaces define methods that use and return types and entities in the base namespace. At the same time, the implementations in the base namespace implement the interfaces in the Abstractions namespace. And there you go, circular namespace dependencies.
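Boiled down, the cycle looks something like this. The namespace and type names are borrowed from the post, but the members and the placeholder distance calculation are assumptions for illustration.

```csharp
using System;

namespace NExtra.Geo
{
    using NExtra.Geo.Abstractions;

    // Entity in the base namespace (property names are assumptions).
    public class Position
    {
        public double Latitude { get; set; }
        public double Longitude { get; set; }
    }

    // The implementation in the base namespace depends on the
    // Abstractions sub namespace...
    public class PositionHandler : IPositionHandler
    {
        public double CalculateDistance(Position p1, Position p2)
        {
            // Simplified placeholder calculation, for illustration only.
            var dLat = p1.Latitude - p2.Latitude;
            var dLon = p1.Longitude - p2.Longitude;
            return Math.Sqrt(dLat * dLat + dLon * dLon);
        }
    }
}

namespace NExtra.Geo.Abstractions
{
    using NExtra.Geo;

    // ...while the interface in Abstractions uses types from the base
    // namespace. There is the circular namespace dependency.
    public interface IPositionHandler
    {
        double CalculateDistance(Position p1, Position p2);
    }
}
```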

Now, in this particular case, I do not think that it is too big a deal, but it highlights something that has annoyed me a bit while working with the .NExtra library.

In the library’s unit tests, the test classes only know about the interfaces, but the setup method selects either a concrete implementation or a mock for each interface. For an example, look here. This forces me to reference both the base namespace and the abstractions namespace. Once again, this is not that bad, but it raises the question…

In the next minor version of .NExtra, I will probably get rid of the abstractions namespaces, since they add nothing but complexity. Sure, they tuck away all the interfaces, but why should they?

     
    • Daniel Lee 6:40 pm on December 8, 2011 Permalink | Reply

Were you inspired by Greg Young’s session at Öredev? I have to get into NDepend as well. Did you buy a license, or are you testing out the trial version?

    • danielsaidi 9:30 am on December 9, 2011 Permalink | Reply

I have been using NDepend for a while, but yeah, Greg’s session opened my eyes to other ways of looking at the metrics. And even if I remove some of them (like methods starting with m_, static members with s_ etc.), it is really striking how some of the metrics highlight design flaws that are easily corrected.
