Tagged: tdd

  • danielsaidi 9:14 am on October 18, 2011 Permalink | Reply
    Tags: decorator pattern, tdd

    Am I writing bad tests? 

    To grow as a developer, there is nothing as good as lowering your guard and inviting others to criticize your potential flaws…as well as reading a book every now and then.

    In real life, you have to be a pragmatic programmer (provided that you are, in fact, a programmer) and do what is “best” for the project, even if that means getting a project released instead of developing it to the point of perfection.

    In hobby projects, however, I more often than not find myself reaching for this very perfection.

    (Note: My earlier attempts to reach perfection involved having separate, standardized regions for variables, properties, methods, constructors etc. as well as writing comments for all public members of every class. I was ruthlessly beaten out of this bad behavior by http://twitter.com/#!/nahojd – who has my eternal gratitude)

    Now, I suspect that I have become trapped in another bad behavior – the unit test everything trap. At its worst, I may not even be writing unit tests, so I invite all readers to comment on whether I am on a bad streak here.

    The standard setup

    In the standard setup, I have:

    • IGroupInviteService – the interface for the service has several methods, e.g. AcceptInvite and CreateInvite.
    • GroupInviteService – a standard implementation of IGroupInviteService that handles the core processes, with no extra addons.
    • GroupInviteServiceBehavior – a test class that tests every little part of the standard implementation.

    This setup works great. It is the extended e-mail setup below that leaves me a bit suspicious.

    The extended e-mail setup

    In the extended e-mail sending setup, I have:

    • EmailSendingGroupInviteService – facades any IGroupInviteService and sends out an e-mail when an invite is created.
    • EmailSendingGroupInviteServiceBehavior – a test class that…well, that is the problem.

    The EmailSendingGroupInviteService class

    Before moving on, let’s take a look at how the EmailSendingGroupInviteService class is implemented.

    Code for parts of the e-mail sending implementation.

    As you can see, the e-mail sending part is not yet developed 😉

    As you can also see, the methods only call the base instance. Now, let’s look at some tests.
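    The screenshot is not reproduced here, but the structure it showed – a decorator that facades a base instance and only adds behavior in CreateInvite – can be sketched roughly as follows. The original was C#; this TypeScript sketch uses camelCase and invented member shapes (the GroupInvite type and the email parameter), so treat it as an illustration rather than the actual implementation:

```typescript
interface GroupInvite {
  groupId: string;
  email: string;
}

interface IGroupInviteService {
  acceptInvite(invite: GroupInvite): boolean;
  createInvite(groupId: string, email: string): GroupInvite;
}

// Decorator that facades any IGroupInviteService and adds e-mail
// sending when an invite is created. Every other method just forwards.
class EmailSendingGroupInviteService implements IGroupInviteService {
  constructor(private baseService: IGroupInviteService) {}

  acceptInvite(invite: GroupInvite): boolean {
    // Pure forwarding – no added behavior.
    return this.baseService.acceptInvite(invite);
  }

  createInvite(groupId: string, email: string): GroupInvite {
    const invite = this.baseService.createInvite(groupId, email);
    this.sendEmail(invite);
    return invite;
  }

  private sendEmail(invite: GroupInvite): void {
    // The e-mail sending part is "not yet developed" in the post either.
  }
}
```

    Any IGroupInviteService implementation can be wrapped this way, which is the point of the decorator pattern.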

    The EmailSendingGroupInviteServiceBehavior class

    Let’s take a look at some of the EmailSendingGroupInviteServiceBehavior tests.

    Image showing parts of the test class

    As you can see, all that I can test is that the base instance is called properly, and that the base instance result is returned.
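    A hedged sketch of what such a forwarding test can look like, here with a hand-rolled recording fake instead of a mocking framework (the types are repeated so the sketch is self-contained, and all names except those mentioned in the post are invented):

```typescript
interface GroupInvite {
  groupId: string;
  email: string;
}

interface IGroupInviteService {
  acceptInvite(invite: GroupInvite): boolean;
  createInvite(groupId: string, email: string): GroupInvite;
}

// The decorator under test – forwards everything to a base instance.
class EmailSendingGroupInviteService implements IGroupInviteService {
  constructor(private baseService: IGroupInviteService) {}
  acceptInvite(invite: GroupInvite): boolean {
    return this.baseService.acceptInvite(invite);
  }
  createInvite(groupId: string, email: string): GroupInvite {
    return this.baseService.createInvite(groupId, email);
  }
}

// A recording fake: remembers which methods were called and
// returns canned results.
class FakeGroupInviteService implements IGroupInviteService {
  calls: string[] = [];
  acceptInvite(invite: GroupInvite): boolean {
    this.calls.push("acceptInvite");
    return true;
  }
  createInvite(groupId: string, email: string): GroupInvite {
    this.calls.push("createInvite");
    return { groupId, email };
  }
}

// The kind of test the post questions: all it can verify is that the
// base instance was called and that its result was returned.
function acceptInvite_ShouldCallBaseInstance(): boolean {
  const fake = new FakeGroupInviteService();
  const service = new EmailSendingGroupInviteService(fake);
  const result = service.acceptInvite({ groupId: "g1", email: "a@b.se" });
  return result === true && fake.calls.includes("acceptInvite");
}
```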


    Testing the decorator class like this is really time-consuming, and for each new method I add, I have to write more of these tests for each decorator class. That could become a lot of useless tests.

    Well, the tests are not useless…they are just…well…I just hate having to write them 🙂

    So, this raises my final question:

    • Would it not be better to only test the things that differ? In this case, that would mean only keeping CreateInvite_ShouldSendEmail.

    Let me know what you think!
    • Daniel Lee 10:52 am on October 18, 2011 Permalink | Reply

      I am going through a similar thought process. I would say that those tests that only test that the base instance was called are not worth it. There is no new logic being tested at all. It would be different if you changed the parameters and then called the base instance. If you already have tests on your base instance then that is good enough IMO.

      But I can totally understand where you are coming from. When practicing TDD it feels wrong to write a bunch of methods without writing tests for them. Maybe more coarse-grained tests in the layer above the email service class would solve this?

      This is really a .NET thing, if this was Ruby code then you’d just do this as a mixin. It’s only ceremony and not really providing any value. But I don’t know of a way to avoid this in .NET unfortunately.

      • danielsaidi 11:03 am on October 18, 2011 Permalink | Reply

        I totally agree with you…and also think it depends on the situation. In this case, where I work alone and my classes are rather clean, or when you work with people that share your coding principles, then I agree that only testing the altered behavior is sufficient. However, if that is not the case, then perhaps thorough tests like these are valuable…

        …which maybe signals that you have greater problems than worrying over code coverage 🙂

        Thanks for your thoughts…and for a fast reply!

    • Daniel Lee 11:41 am on October 18, 2011 Permalink | Reply

      It’s a judgement thing (the classic ‘it depends’ answer). If you think that those forwarding methods are probably never going to change then those tests are mostly just extra baggage. But if you are pretty sure this is only step 1 then they could be valuable in the future.

      But if you have more coarse-grained tests (tests that test more than one layer) above these tests then they should break if someone changes the code here. For code this simple you don’t have to take such small steps with the tests. What do you think?

    • Daniel Persson 12:47 pm on October 18, 2011 Permalink | Reply

      If you really would like to test it, I would say do only one test – the setup, and an assert (one assert) on the expected result. There is no need for a verify, since it is already verified by the setup-to-assert test.

      And as for whether you should do the tests or not, I agree with Daniel Lee. It’s probably not worth it, unless there is good reason. The behavior that differs from the base is the one that should primarily be tested. If you overspecify, the tests will be harder to maintain and the solution itself will be harder and more time-consuming to change.

      • danielsaidi 1:00 pm on October 18, 2011 Permalink | Reply

        …which is exactly the situation I faced this Friday, when I ended up spending the entire afternoon adjusting one test class…which on the other hand probably had more to do with me writing bad tests 😉

    • Petter Wigle 8:11 pm on October 19, 2011 Permalink | Reply

      I think the tests tell you that the design could be improved.
      In this case I think it would have been better if the EmailSendingGroupInviteService would inherit from GroupInviteService. Then you would get rid of all the tedious delegations.
      Or you can rewrite the code in Ruby 🙂
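      Petter’s inheritance suggestion could look roughly like this (a TypeScript stand-in for the C# classes, with invented member shapes): only the behavior that differs is overridden, and the tedious delegations disappear.

```typescript
interface GroupInvite {
  groupId: string;
  email: string;
}

// The standard implementation with the core process.
class GroupInviteService {
  acceptInvite(invite: GroupInvite): boolean {
    return true; // core accept logic would go here
  }
  createInvite(groupId: string, email: string): GroupInvite {
    return { groupId, email }; // core create logic would go here
  }
}

// Inherit instead of decorate: acceptInvite is inherited untouched,
// so there is nothing left to "forward-test".
class EmailSendingGroupInviteService extends GroupInviteService {
  createInvite(groupId: string, email: string): GroupInvite {
    const invite = super.createInvite(groupId, email);
    this.sendEmail(invite);
    return invite;
  }
  private sendEmail(invite: GroupInvite): void {
    // e-mail sending goes here
  }
}
```

      The trade-off against the decorator is that inheritance fixes the base class at compile time, while the decorator can wrap any IGroupInviteService.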

      Otherwise I don’t think it is worth the effort to write tests for code like this that is very unlikely to break. But if you do TDD, then you’re using the tests to drive the design. The process is valuable even if the tests that come out of it are quite pointless.

  • danielsaidi 9:34 pm on September 15, 2011 Permalink | Reply
    Tags: pear, phpunit, tdd

    Getting PEAR and PHPUnit to work with MAMP 

    When I recently decided to start re-developing a PHP project of mine from scratch, I decided to replace SimpleTest with PHPUnit and PHPCover.

    To get familiar with PEAR, I found this great tutorial:


    However, since I managed to screw up my PEAR configurations when playing around with it a month ago, the tutorial did not work.

    Turns out that PEAR was missing from where I wanted it to be installed, and that multiple PEAR installations were scattered all over the file system. The config file pointed to one of these locations, which made the installer believe that I had the latest release installed.

    Once the invalid PEAR settings were fixed and the stray installations were cleaned up, I could finally follow the tutorial at the top of this post. Installing PHPUnit was then a breeze.

    You also need to grab PHPCover, which is described at https://github.com/sebastianbergmann/phpunit/

    Now, you have to set the include paths in /etc/php.ini. Mine looks like this:

    • include_path = ".:/php/includes:/usr/lib/php:/usr/lib/php/pear"

    However, since I use MAMP, this is not the file that is actually used. You need to modify the php.ini file in MAMP’s configuration area and add the paths there as well.

    Now, PHPUnit will work, but I am yet to be convinced. SimpleTest seems easier to set up and flexible enough to cover all the test cases I need, including mocking…plus, I can ship the testing framework with the development bundle.

    Any thoughts regarding this?

  • danielsaidi 11:54 am on March 17, 2011 Permalink | Reply
    Tags: infragistics, ioc, tdd

    Infragistics – not that TDD friendly 

    I am currently cleaning up a WPF application that is tightly coupled to Infragistics UI components. I am SOLIDifying, Dependency Injecting, IoC:ing and all that high-fashion stuff. Does that turn me into a state-of-the-art developer? Time will tell 🙂

    However, since no unit tests existed when I took over responsibility for the solution, I decided to take the opportunity to wrap all code that I am rewriting in unit tests. Sadly, Infragistics does not seem to want you to test functionality that is based on their classes, since almost every class that I’ve come across so far has been internal. No public constructors, no interfaces…nothing.

    To work around the problem, I am currently wrapping everything within facade classes as well. It feels cheap and gives me a looot of additional work, but when I am done, I will at least have a shot at becoming the Joel Abrahamsson of the Infragistics universe. Not bloody likely, I know, but at least, I will probably put the resulting facade classes up for download.
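    The facade approach can be sketched as follows – all names here are invented for illustration (the real Infragistics classes are internal, which is exactly why the wrapper has to own its own interface):

```typescript
// The interface is owned by my code, so consumers and tests can
// depend on it instead of on the vendor class.
interface IGridFacade {
  setDataSource(items: unknown[]): void;
}

// Stand-in for a third-party class that, in the real library, is
// internal/sealed and therefore cannot be mocked or implemented.
class VendorGrid {
  items: unknown[] = [];
  load(items: unknown[]): void {
    this.items = items;
  }
}

// The facade wraps the vendor class and exposes only the owned interface.
class VendorGridFacade implements IGridFacade {
  constructor(private grid: VendorGrid) {}
  setDataSource(items: unknown[]): void {
    this.grid.load(items);
  }
}
```

    Code written against IGridFacade can then be unit tested with a fake, while only the thin facade itself remains untestable glue.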

    • Ivar 7:55 am on March 25, 2011 Permalink | Reply

      Another solution is to go into the assembly and add [InternalsVisibleTo], pointing out your test library.

      P.S. Have a nice time in Barcelona!

      • danielsaidi 10:27 pm on March 25, 2011 Permalink | Reply

        Aaah, okay 🙂 I don’t think I will do that, since the project is rather short, but thanks for the tip!

  • danielsaidi 11:37 am on July 5, 2010 Permalink | Reply
    Tags: constraintexception, tdd

    ConstraintException thrown when posting empty data for non-nullable properties 

    I am currently working with model validation, using an EF4 entity model, DataAnnotations and partial classes with MetadataType connections.

    In my model, I have an Employee entity for which some of the properties are non-nullable. I have also created a partial class and a meta data class for model validation, as is described in this blog post.

    This works great. The Employee class is validated properly, with minimum effort. My entities are validated with standard validation attributes as well as custom ones. Lovely.

    However, the application crashes when I post empty text input elements in my Create/Edit views. A ConstraintException is thrown before my view controller actions are executed, which means that I cannot act on the constraint exception within my action.

    The exception is caused by the fact that empty posted data will cause the corresponding model properties to be set to null, which conflicts with the non-nullable properties in the entity model.

    However, since I have custom model validation classes in which I add Required attributes to mandatory properties, I do not need the non-nullable attributes in my entity model. As such, I set the nullable property to (None)…and the ConstraintException is history!

    • Thomas 3:29 pm on September 24, 2010 Permalink | Reply

      Surely this is the incorrect approach, as it means you have no constraints set at the root database level. Which seems wrong.

      There is actually no problem with a constraint exception being thrown, that’s exactly what is supposed to happen.

      • danielsaidi 11:57 am on October 4, 2010 Permalink | Reply

        Hi Thomas! Thank you for your input, you are absolutely right. I have updated the post according to your feedback.


    • Hector 4:47 pm on June 3, 2012 Permalink | Reply

      I know, I know, it has been almost two years since this was posted.
      But just in case someone else is looking here for a solution, I just managed to get around it.
      You may use the “DisplayFormat” DataAnnotation to override the default behavior (empty posted data causes the corresponding model properties to be set to null), like this:

      [DisplayFormat(ConvertEmptyStringToNull = false)]

      Add the annotation for each non-nullable attribute of the model. Then the EntityException is gone, and validation still works.

      Best regards.

      • danielsaidi 4:53 pm on June 3, 2012 Permalink | Reply

        Yeah, the post is rather old…and invalid. I should update it, but…well 😉

        Thank you so much for your comment!

  • danielsaidi 1:03 am on July 5, 2010 Permalink | Reply
    Tags: callwithmodelvalidation, modelstate, tdd

    DataAnnotations and MetadataType fails in unit tests 

    This post describes how to solve the problem that model validation does not work in ASP.NET MVC2 (.NET 4.0) when testing a model that uses DataAnnotations and MetadataType to describe its validation.

    First of all, ModelState.IsValid is always true, since the functionality that sets it to false if the model is invalid is never executed during the test cycle. This will cause your controllers to behave incorrectly during your tests.

    Second, the MetadataType binding is ignored during the test cycle as well. This causes the validation within it to be ignored, which in turn causes the model to be considered valid even when the object is invalid.

    My situation

    I am currently writing tests for a Create method in one of my controllers. I use NUnit as test framework. I have an EF4 Entity Model, in which I have a couple of entities. For instance, I have an Employee entity with FirstName, LastName and Ssn properties.

    To enable model validation, I create a partial Employee class in the same namespace as the EF4 entity model, then create a MetadataType class, which handles validation for the class. This approach is fully described in this blog post.

    In my EmployeeController, I have a Create method that takes an employee and tries to save it. If ModelState.IsValid is false, the controller returns the Create view again and displays the errors. If the model is valid, however, I create the employee and return the employee list.

    Easy enough. Well, when I started to write tests, I realized that ModelState.IsValid is always true, even if I provide the method with an invalid employee. Turns out that model validation is not triggered by the unit test.

    Trigger model validation within a test

    This blog post describes the ModelState.IsValid problem and provides a slick solution – the CallWithModelValidation Controller extension method.

    I added this extension method to my MVC2 project and used it instead of calling Create, as such:

       // Instead of calling Create directly:
       var result = controller.Create(new Employee());

       // ...call it through the CallWithModelValidation extension method:
       var result = controller.CallWithModelValidation(c => c.Create(new Employee()), new Employee());

    And sure enough, this causes the test to trigger model validation. The only problem is that the model validation does not catch any errors within the model, even if the model is invalid.

    After some fiddling, I noticed that this error only occurs for partial classes that use MetadataType to specify model validation. A class that defines its validation attributes directly is validated correctly.

    Turns out that the MetadataType class is ignored within test context. Thus, the model is always considered to be valid.

    Register MetadataType connections before testing

    This blog post describes the MetadataType problem and provides a slick solution – the InstallForThisAssembly method.

    This method must be placed within the same assembly as the model – in other words, not in the test project. I placed it in a ControllerExtensions class file and call it at the beginning of CallWithModelValidation. This works, but will break if you move the extension to another project.

    Run it before your tests, and everything will work “as it should”.

    Hope this helps.

  • danielsaidi 12:29 pm on June 12, 2010 Permalink | Reply
    Tags: equals, same, tdd

    Equals vs. same in QUnit 

    When asserting whether or not two associative arrays (or objects) are identical, I first tried to use equals:

       equals({"foo":"bar"}, {"foo":"bar"}, "");

    Turns out, this does not work (see the assertEquals definition). Instead, use same:

       same({"foo":"bar"}, {"foo":"bar"}, "");

    Still, I like the error message you receive when using equals:

    failed, expected: { "foo": "bar" } result: { "foo": "bar" }
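    The reason equals fails here can be seen in plain JavaScript: two object literals with identical contents are still two distinct references, so the printed values look the same although the comparison fails.

```typescript
// Two object literals with the same contents are distinct references,
// so reference equality fails even though the contents match.
const a = { foo: "bar" };
const b = { foo: "bar" };
console.log(a === b);                                  // false – different references
console.log(JSON.stringify(a) === JSON.stringify(b));  // true – same contents
```

    A deep comparison (like QUnit’s same) walks the contents instead of comparing references, which is why it succeeds where equals does not.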

  • danielsaidi 6:45 pm on June 3, 2010 Permalink | Reply
    Tags: tdd

    Hide successful QUnit tests 

    I am now rolling with QUnit as my TDD framework for JavaScript development. It’s not as sophisticated as, say, NUnit for .NET or SimpleTest for PHP, but it’s reaaally easy to get started with.

    However, one strange design decision in the test result presentation is that QUnit lists all tests, not just the ones that fail. With just a few executing tests, the resulting page looks like this:

    QUnit - Full test result presentation

    By default, QUnit lists all executing tests in a test suite

    The test suite above only includes 14 tests – imagine having maybe a hundred or so! In my opinion, this way of presenting the test result hides the essence of testing – to discover tests that fail.

    I understand that one must be able to confirm that all tests are executed, but the number of executed tests is already listed in the result footer. So, I would prefer to only list the tests that fail.

    If anyone knows a built-in way of achieving this, please let me know. I chose the following approach (applies to jQuery 1.4.2 – let me know if this is out of date):

    1. Open the qunit.js file
    2. Find the block that begins with the line:
      var li = document.createElement("li");
    3. Wrap the entire block within:
      if (bad) { ... }

    This will make QUnit only append the list element if the test is “bad”, that is if it failed. The result will look like this:

    After fiddling with the code, QUnit only lists failing tests

    Maybe there is a built-in way of making QUnit behave like this. If you know how, please leave a comment.

  • danielsaidi 11:53 am on May 28, 2010 Permalink | Reply
    Tags: jsunit, tdd

    JsUnit vs. QUnit 

    I am rewriting an old JavaScript project and will apply TDD principles when developing the new version. When browsing for various JavaScript TDD frameworks, JsUnit and QUnit seem like the two most promising candidates.

    JsUnit uses a syntax that appeals to me, as an NUnit lover. However, since I am also a big fan of jQuery, QUnit could be a better alternative, although the framework seems quite small (then again, maybe ok, equals and same are sufficient?).

    Does anyone have any experience with these two frameworks, and could you recommend either?

    • Raj 1:31 am on September 26, 2010 Permalink | Reply

      Hi Daniel
      I had the same question myself. Which one is better? I haven’t tried JsUnit much, but I have used QUnit. It seems to me that QUnit is easier to use than JsUnit. Please see the below post for some info.


      • danielsaidi 7:40 pm on October 3, 2010 Permalink | Reply

        Well, as I wrote, I think that the JsUnit syntax feels more “for real”, but I decided to go with QUnit and I have only had good experiences with it. I think that the ok keyword – ok(shouldBeTrue) and ok(!shouldBeFalse) – is a bit cheesy, but it really does the job with minimum setup. Also, it makes testing async functionality really smooth. However, I decided to tweak QUnit a bit, so that it only displays failing tests…a loooong list of everything that went ok is really not that informative to me 🙂

      • danielsaidi 7:41 pm on October 3, 2010 Permalink | Reply

        By the way, what did you think of JsSpec? Have you had the time to try it out?

  • danielsaidi 11:43 pm on April 26, 2010 Permalink | Reply
    Tags: tdd

    SimpleTest – Test Driven Development in PHP 

    In my daily work, working with .NET/C#, I love the way TDD and NUnit simplifies my work and allows me to focus on implementation instead of worrying about whether or not my new code will break anything already existing.

    When I develop PHP applications at home, however, I have previously had no natural way of testing my code, since I have not had any good tool for testing PHP…and have not really put any effort into finding one either.

    Today, however, this has changed. When starting to develop Wigbi 1.0.0, I decided to browse the web and instantly found my dream tool – SimpleTest (which can be downloaded at http://simpletest.org).

    It took me less than 5 minutes to download SimpleTest and use it to run my first test. It is simple (duh) and plain fantastic. If you are not into TDD, I strongly advise you to try it out…test is bliss.
