Updates from October, 2011

  • danielsaidi 9:14 am on October 18, 2011
    Tags: decorator pattern

    Am I writing bad tests? 

    To grow as a developer, there is nothing as good as lowering your guard and inviting others to criticize your potential flaws…as well as reading a book every now and then.

    In real life, you have to be a pragmatic programmer (provided that you are, in fact, a programmer) and do what is “best” for the project, even if that means getting the project released instead of developing it to the point of perfection.

    In hobby projects, however, I more often than not find myself reaching for this very perfection.

    (Note: My earlier attempts to reach perfection involved having separate, standardized regions for variables, properties, methods, constructors etc. as well as writing comments for all public members of every class. I was ruthlessly beaten out of this bad behavior by http://twitter.com/#!/nahojd – who has my eternal gratitude)

    Now, I suspect that I have become trapped in another bad behavior – the unit-test-everything trap. At its worst, I may not even be writing unit tests, so I invite all readers to comment on whether I am on a bad streak here.

    The standard setup

    In the standard setup, I have:

    • IGroupInviteService – the interface for the service has several methods, e.g. AcceptInvite and CreateInvite (sketched after this list).
    • GroupInviteService – a standard implementation of IGroupInviteService that handles the core processes, with no extra addons.
    • GroupInviteServiceBehavior – a test class that tests every little part of the standard implementation.
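
    For reference, a minimal sketch of what the interface might look like – the member names are taken from this post, but the exact signatures and the GroupInvite/Group types are hypothetical:

       public interface IGroupInviteService
       {
          // Hypothetical signatures – only the member names appear in this post
          GroupInvite CreateInvite(Group group, string email);
          bool AcceptInvite(GroupInvite invite);
       }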

    This setup works great. It is the extended e-mail setup below that leaves me a bit suspicious.

    The extended e-mail setup

    In the extended e-mail sending setup, I have:

    • EmailSendingGroupInviteService – wraps any IGroupInviteService and sends out an e-mail when an invite is created.
    • EmailSendingGroupInviteServiceBehavior – a test class that…well, that is the problem.

    The EmailSendingGroupInviteService class

    Before moving on, let’s take a look at how the EmailSendingGroupInviteService class is implemented.

    [Image: code for parts of the e-mail sending implementation]
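
    In essence, the class might look something like this (a sketch; the signatures and the SendEmail helper are hypothetical):

       public class EmailSendingGroupInviteService : IGroupInviteService
       {
          private readonly IGroupInviteService baseService;

          public EmailSendingGroupInviteService(IGroupInviteService baseService)
          {
             this.baseService = baseService;
          }

          public bool AcceptInvite(GroupInvite invite)
          {
             // Pure delegation to the wrapped base instance
             return baseService.AcceptInvite(invite);
          }

          public GroupInvite CreateInvite(Group group, string email)
          {
             var invite = baseService.CreateInvite(group, email);
             SendEmail(invite);
             return invite;
          }

          private void SendEmail(GroupInvite invite)
          {
             // E-mail sending goes here
          }
       }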

    As you can see, the e-mail sending part is not yet developed 😉

    As you can also see, the methods only call the base instance. Now, let’s look at some tests.

    The EmailSendingGroupInviteServiceBehavior class

    Let’s take a look at some of the EmailSendingGroupInviteServiceBehavior tests.

    [Image: parts of the test class]
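
    A sketch of what such a delegation test might look like, assuming NUnit and Moq (hypothetical – the actual test class may be set up differently):

       using Moq;
       using NUnit.Framework;

       [TestFixture]
       public class EmailSendingGroupInviteServiceBehavior
       {
          private Mock<IGroupInviteService> baseServiceMock;
          private EmailSendingGroupInviteService service;

          [SetUp]
          public void SetUp()
          {
             baseServiceMock = new Mock<IGroupInviteService>();
             service = new EmailSendingGroupInviteService(baseServiceMock.Object);
          }

          [Test]
          public void AcceptInvite_ShouldCallBaseInstanceAndReturnItsResult()
          {
             var invite = new GroupInvite();
             baseServiceMock.Setup(x => x.AcceptInvite(invite)).Returns(true);

             var result = service.AcceptInvite(invite);

             // All we can verify is the delegation and the returned result
             baseServiceMock.Verify(x => x.AcceptInvite(invite), Times.Once());
             Assert.IsTrue(result);
          }
       }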

    As you can see, all that I can test is that the base instance is called properly, and that the base instance result is returned.

    Conclusion

    Testing the decorator class like this is really time-consuming, and for each new method I add, I have to write more of these tests for each decorator class. That could become a lot of useless tests.

    Well, the tests are not useless…they are just…well…I just hate having to write them 🙂

    So, this raises my final question:

    • Would it not be better to only test the stuff that differs? In this case, only keep CreateInvite_ShouldSendEmail.
    Let me know what you think!
     
    • Daniel Lee 10:52 am on October 18, 2011

      I am going through a similar thought process. I would say that those tests that only test that the base instance was called are not worth it. There is no new logic being tested at all. It would be different if you changed the parameters and then called the base instance. If you already have tests on your base instance then that is good enough IMO.

      But I can totally understand where you are coming from. When practicing TDD it feels wrong to write a bunch of methods without writing tests for them. Maybe more coarse-grained tests in the layer above the email service class would solve this?

      This is really a .NET thing; if this were Ruby code, you’d just do this as a mixin. It’s only ceremony, not really providing any value. But I don’t know of a way to avoid this in .NET, unfortunately.

      • danielsaidi 11:03 am on October 18, 2011

        I totally agree with you…and also think it depends on the situation. In this case, where I work alone and my classes are rather clean, or when you work with people who share your coding principles, I agree that only testing the altered behavior is sufficient. However, if that is not the case, then perhaps thorough tests like these are valuable…

        …which maybe signals that you have greater problems than worrying over code coverage 🙂

        Thanks for your thoughts…and for a fast reply!

    • Daniel Lee 11:41 am on October 18, 2011

      It’s a judgement thing (the classic ‘it depends’ answer). If you think that those forwarding methods are probably never going to change then those tests are mostly just extra baggage. But if you are pretty sure this is only step 1 then they could be valuable in the future.

      But if you have more coarse-grained tests (tests that test more than one layer) above these tests then they should break if someone changes the code here. For code this simple you don’t have to take such small steps with the tests. What do you think?

    • Daniel Persson 12:47 pm on October 18, 2011

      If you really would like to test it, I would say do only one test – the setup, plus an assert (one assert) on the expected result. There is no need for a verify, since the call is already verified by going from setup to assert.

      And as for whether you should write the tests or not, I agree with Daniel Lee. It is probably not worth it, unless there is good reason. The behavior that changes from the base is what should primarily be tested. If you overspecify, the tests will be harder to maintain and the solution itself will be harder and more time-consuming to change.

      • danielsaidi 1:00 pm on October 18, 2011

        …which is exactly the situation I faced this Friday, when I ended up spending the entire afternoon adjusting one test class…which, on the other hand, probably had more to do with me writing bad tests 😉

    • Petter Wigle 8:11 pm on October 19, 2011

      I think the tests tell you that the design could be improved.
      In this case, I think it would have been better if EmailSendingGroupInviteService inherited from GroupInviteService. Then you would get rid of all the tedious delegations (sketched below).
      Or you can rewrite the code in Ruby 🙂
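
      (For illustration, an inheritance-based variant might look like the sketch below – hypothetical signatures, and it assumes that GroupInviteService declares its methods as virtual:)

         public class EmailSendingGroupInviteService : GroupInviteService
         {
            public override GroupInvite CreateInvite(Group group, string email)
            {
               var invite = base.CreateInvite(group, email);
               SendEmail(invite);
               return invite;
            }

            private void SendEmail(GroupInvite invite) { /* hypothetical e-mail helper */ }
         }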

      Otherwise I don’t think it is worth the effort to write tests for code like this, which is very unlikely to be broken. But if you do TDD, then you’re using the tests to drive the design. The process is valuable even if the tests that come out of it are quite pointless.

  • danielsaidi 9:16 am on October 6, 2011
    Tags: cql, static

    Tweaking the NDepend CQL rules to leverage awesome power 

    I have previously written about automating and scheduling NDepend for a set of .NET solutions.

    After getting into the habit of using NDepend to find code issues instead of going through the code by hand (which I still will do, but a little help does not hurt), the power of CQL has really grown on me.

    For instance, one big problem that I have wrestled with is that our legacy code contains static fields for properties that should not be static. In a web context. Enough said.

    Prior to using CQL, I used to search for “static” across the entire solution, go through the search results (which, naturally, also included perfectly valid static methods and properties) and…well, it really did not work.

    Yesterday, when digging into the standard CQL rules to get a better understanding of the NDepend analysis, I noticed the following standard CQL:

    // <Name>Static fields should be prefixed with a 's_'</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     !NameLike "^s_" AND 
     IsStatic AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName AND 
     !IsEventDelegateObject 
    
    // This naming convention provokes debate.
    // Don't hesitate to customize the regex of 
    // NameLike to your preference.

    Although NDepend’s naming conventions do not quite fit my own, this rule is just plain awesome. I just had to edit the CQL to:

    // <Name>Static fields should not exist...mostly</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     IsStatic AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName AND 
     !IsEventDelegateObject 
    
    // Tweaked from the standard rule: the NameLike
    // condition is removed, so any static field
    // (except literals etc.) triggers a warning.

    and voilà – NDepend will now automatically find all static fields within my solution…and ignore any naming convention.

    Since this got me going, I also went ahead and modified the following rule

    // <Name>Instance fields should be prefixed with a 'm_'</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     !NameLike "^m_" AND 
     !IsStatic AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName AND 
     !IsEventDelegateObject 
    
    // This naming convention provokes debate.
    // Don't hesitate to customize the regex of 
    // NameLike to your preference.

    to instead require that fields are camel cased (ignoring the static condition as well):

    // <Name>Instance fields should be camelCase</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     !NameLike "^[a-z]" AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName AND 
     !IsEventDelegateObject
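
    To illustrate, here is how the two tweaked rules would treat the fields of a small, hypothetical class:

       public class Person
       {
          private string name;          // OK – matches "^[a-z]"
          private string _email;        // warned – does not start with a lowercase letter
          private static bool loaded;   // warned – static fields should not exist...mostly
       }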

    Two small changes to the original setup, but awesomely helpful. Another great thing is that when you edit the queries in VisualNDepend, you get immediate, visual feedback on how the rule applies to the entire solution.

    So, now I can start tweaking the standard CQL rules so that they conform to my conventions. However, since my versions of the two rules above should apply to all future NDepend projects that I create from now on, is there some way to globally replace the standard CQL rules with my alternatives?

    I will investigate this further and write a blog post if I happen to solve it.

     
    • Patrick Smacchia 2:31 pm on October 6, 2011

      >However, when looking at the two rules above, where my versions should apply to all future NDepend projects that I create from now on, is there some way to globally replace the standard CQLs with my alternatives?

      There is no simple way yet to share a set of custom rules across several projects.
      The options are:
      A) to copy/paste them across running NDepend instances,
      B) to tweak the simple .ndproj project XML somehow to share the rules.

      We are aware of this limitation and are working on a feature that will make this easier.

      • danielsaidi 10:41 pm on October 10, 2011

        Hi Patrick! I just wanted to let you know that today’s analysis worked great, without any problems whatsoever. My computer started up the bat script automatically, which checked out and built the latest source code, then ran an analysis for each of the seven solutions…and finally posted the result on an internal server. Good stuff! 🙂

    • danielsaidi 3:44 pm on October 6, 2011

      It is really not a problem – if a rule addresses some convention that the code does not share, one will get numerous warnings and can then decide whether or not the warnings are an issue. Also, I have modified the rules a bit differently in some projects, so maybe it is a good thing that one has to do these modifications for each project.

      I will gather my rules in simple text files that I keep under version control, together with my NDepend projects. I will probably also add the more general ones as resources to my .NExtra project.

      This is fun! 🙂

  • danielsaidi 9:51 pm on October 5, 2011
    Tags: branch, github api, tag

    Grab the latest version number from a GitHub repository 

    I am currently hosting several repositories at GitHub. For each, I have also created a GitHub gh-pages branch, so that I have a public site for the project (the gh-pages branch), as well as a project site (the main repository page).

    The goal of the public site is to present the repository and make it easy to download the latest release, while the goal of the project web site is to provide people who have already begun using the repository with information, code etc.

    One example where I do this is the public Cloney web site.

    Each such web site features a Download button. Before tonight, none featured a version number. So, my thought was “how do I grab the latest version number from the GitHub repository?”. The answer is really simple – use the GitHub API.

    First – publish new versions as tags

    For the example in this post to work, each version must be pushed as a tag to the GitHub repository. This is really easy to do.

    Let’s say that you have a new version (let’s say…hmmm…2.1.0.5) of your great project, and that the master branch has been pushed to your GitHub repo. Now, just type the following (provided that your GitHub remote is called origin):

       git tag 2.1.0.5
       git push origin 2.1.0.5

    This will create a new tag with the version number and push it to the GitHub repository.

    Also worth mentioning is that this method requires each new tag name to be a string that is greater than every preceding one, since the tags are compared as plain strings. If you name one tag “release 0.1.0.0” and another “2.1.5.0”, the first will be returned…and you do not want that, do you? (For the same reason, “0.10.0” sorts before “0.9.0”.)

    Then – use the GitHub API to grab all tags

    The GitHub API is really slick, and lets you do almost anything. You can find all the info you need at http://develop.github.com/p/repo.html. However, instead of jQuerying the API directly, I decided to try fitzgen’s JavaScript github-api library.

    To grab all tags for a certain repository, you just have to write the following:

       var repo = new gh.repo("danielsaidi", "Facadebook");
       repo.tags(function(result){ alert(JSON.stringify(result)); });

    Wow, that was easy! Now, how do I grab the latest version number from the result?

    Finally – add some candy on top

    Since I will use this solution for all my GitHub repo web sites, I decided to package my custom script in the same style as the rest of the JavaScript library.

    I therefore created another async method on the gh.repo prototype, like this:

       gh.repo.prototype.getLatestRelease = function(callback) {
          this.tags(function(result) {
             // Find the "highest" tag name, using plain string comparison
             var latest = "";
             for (var prop in result.tags) {
                if (prop > latest) {
                   latest = prop;
                }
             }
             callback(latest);
          });
       };

    On each web site, I have a span with the id “version” (i.e. <span id="version"></span>).

    At the end of the github.js file, I finally added the following snippet:

       $(document).ready(function() {
          var repo = new gh.repo("danielsaidi", "Facadebook");
          // Grab the latest tag name and print it into the #version element
          repo.getLatestRelease(function(result){ $("#version").html(result); });
       });

    That is it! When the page loads, I fetch all available repository tags, iterate over them and grab the “highest” tag name (version number) available.

    The result is rather nice:

    [Image: the version number is now displayed within the download button]

    Hope this helps!

     
  • danielsaidi 2:57 pm on October 5, 2011
    Tags: system architecture, task scheduler

    Scheduling NDepend for a set of solutions 

    In a project of mine, I use NDepend to continuously analyze a set of solutions that make up some of the software infrastructure of a major Swedish company.

    By scheduling the analyses to run once a week, using previous analyses as a baseline for comparison, I hope to make it easier to detect less favorable patterns that we want to avoid and to pinpoint good ones that we want to embrace.

    Although we use Team City as our build server, I have set up the scheduled analyses to run from my personal computer during this first test phase. It is not optimal, but for now it will do.

    The analyses are triggered from a simple bat script that does the following:

    • It first checks out each solution from TFS
    • It then builds each solution with devenv
    • It then runs a pre-created NDepend analysis for each solution
    • Each analysis is configured to publish the HTML report to a web server that is available to everyone within the project

    Once I had created the script, I scheduled it using the Task Scheduler. I set it to run every Monday morning at 8.30. Since it runs from my personal computer, I have to be early at work, but with two kids at home, I always am 🙂

    The scheduled script works like a charm. The analyses run each week and everyone is happy (at least I am). Already after the first analysis, we noticed some areas that we could modify to drastically improve the architecture, reduce branch/merge hell, reduce code duplication etc.

    Who knows what we will find after some incremental analyses? It is exciting, to say the least!

    One small tweak

    During the experimentation phase, when the report generation sometimes did not work, I was rather annoyed that NDepend did not run a new analysis when no code had changed. The solution was simple – under Tools/Options/Analysis, tell NDepend to always run a full analysis.

    In most cases, though, the default setting is fine, since it will run a full analysis at least once per day. In this case, however, I keep “Always Run Full Analysis” selected for all NDepend projects.

    One final, small problem – help needed!

    A small problem that is still an issue is that my NDepend projects sometimes begin complaining that the solution DLLs are invalid…although they are not. The last time this happened (after the major architectural changes), it did not matter if I deleted and re-added the DLLs – the project still considered them invalid. I had to delete the NDepend projects and re-create them from scratch to make them work.

    Has anyone had the same problem, and any idea what this could be about? Why do the NDepend projects start complaining about newly built DLLs?

     
    • Patrick Smacchia 4:23 pm on October 5, 2011

      One remark: the incremental analysis option is only valid in the standalone or VS add-in context, not in the build server context of running an analysis through NDepend.Console.exe.

      Next time it tells you an assembly is invalid, take your machine and shake it (but please stay polite with it)!

      If it still doesn’t work, go into the NDepend project Properties panel > Code to analyze > invalid assemblies will appear with a red icon; hovering over an invalid assembly with the mouse will show a tooltip that explains the problem (the problem will also be shown in the info panel).

      My bet is that several different versions of an invalid assembly are present in the set of ndproj project dirs where NDepend searches for assemblies (hence NDepend doesn’t know which version to choose).

      • danielsaidi 8:19 am on October 6, 2011

        Kudos for your fast response, Patrick!

        Since I run the analyses locally, the incremental analysis option will apply if I do not disable it. However, the fact that ND will still run a full analysis at least once a day means that I could enable the option once again after the initial setup phase.

        I had a looong look at the failing assemblies prior to writing this post. ND complained that multiple versions of some assemblies existed, even after I had confirmed that this was in fact not the case. After I recreated the NDepend project and re-added the assemblies, everything worked once again.

        I will have a look at how ND handles the assemblies next Monday, and let you know. I have an SSD in my machine, so I’ll first try to give it a rather rough shake 🙂

        Other than that, I am looking forward to start modifying the CQL rules now. I love the comment in the “Instance fields should be prefixed with a ‘m_'” rule! 🙂

    • Patrick Smacchia 8:46 am on October 6, 2011

      >I will have a look at how ND handles the assemblies next Monday, and let you know

      Ok, sounds good, let us know

      >I love the comment in the “Instance fields should be prefixed with a ‘m_’” rule!

      🙂
