Tagged: cyclomatic complexity

  • danielsaidi 9:03 pm on January 20, 2012 Permalink | Reply
    Tags: afferent coupling, cvs, cyclomatic complexity, efferent coupling, kinect, stack overflow, tim huckaby, windows 8

    Øredev 2011 in the rear-view mirror – Part 5 

    Øredev logo

    This is the fifth part of my sum-up of Øredev 2011. Read more by following the links below:

    So, yeah…this sum-up was supposed to be a rather short thing, but has grown out of proportion. I will try to keep it short, but the sessions deserve to be mentioned.


    2:7 – Greg Young – How to get productive in a project in 24h

    In his second session at Øredev, Greg spoke about how to kick-start yourself in new projects. First of all, do you understand what the company does? Have you used the product or service before? If you do not understand these fundamental facts, how will you deliver value?

    Then, Greg described how he quickly gets up in the saddle. He usually starts off by inspecting the version control system. Projects that have been around for a while and still have tons of check-ins could be suffering from a lot of bugs. Areas that are under the burden of massive amounts of check-ins could be bug hives.

    Note these things; they can quickly tell you where in the project it hurts. Naturally, a lot of check-ins does not automatically indicate bugs or problems. The team could just be developing stuff. However, it will give you someplace to start. At the very least, a lot of check-ins means that someone is working in that particular part of the project.

    These are very simple steps to take, but being able to discuss the project after just an hour or so with the version control history, and maybe even pin-point some problems, will give your customer the impression that you are almost clairvoyant…or at least that you know what you are doing, which is why they pay you.
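
    The churn analysis Greg describes can be sketched in a few lines. The snippet below is a tool-agnostic illustration (not Greg's actual workflow): assuming you can dump one path per committed file change from your version control history, simply counting the paths points you at the hotspots.

```python
from collections import Counter

def churn_hotspots(changed_paths, top=3):
    """Count how often each path appears in a stream of
    changed-file paths and return the most-changed ones."""
    return Counter(changed_paths).most_common(top)

# Hypothetical input: one path per file change, e.g. parsed
# from `git log --name-only` or an equivalent history dump.
changes = [
    "src/Billing/Invoice.cs",
    "src/Billing/Invoice.cs",
    "src/Billing/Invoice.cs",
    "src/Web/Home.cs",
]
print(churn_hotspots(changes, top=2))
# Invoice.cs leads with 3 changes - a candidate bug hive
```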

    If the project is not using continuous integration, at least set it up locally. It does not take long and will help you out tremendously. To be able to yell out to someone who breaks the build the second they do it…well, it will give you pleasure at least.

    Greg then went on to demonstrate how you can dig even deeper, and his tool of the day was NDepend. Greg’s demo was awesome, and Patrick should consider giving him…well, a hug at least. Having demonstrated NDepend in my own organization a while back, without that much success, I could quickly tell that I have a long way to go when it comes to presenting a tool to people.

    With NDepend, Greg demonstrated how to use the various metrics, like cyclomatic complexity and afferent/efferent coupling. He went through the various graphs, describing what they do and how they can be used, and told us to especially look out for black squares in the dependency matrix (they indicate circular references) and concrete couplings (they should be broken up).
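
    For reference, the two coupling metrics are simple to compute from a dependency graph. The sketch below is a hypothetical illustration, not NDepend's implementation: afferent coupling (Ca) counts incoming dependencies, efferent coupling (Ce) counts outgoing ones.

```python
def coupling(deps, module):
    """deps maps each module to the set of modules it uses.
    Ce (efferent) = how many modules `module` depends on;
    Ca (afferent) = how many modules depend on `module`."""
    ce = len(deps.get(module, set()))
    ca = sum(1 for m, uses in deps.items() if module in uses)
    return ca, ce

# Hypothetical three-layer solution.
deps = {
    "Web":    {"Domain", "Utils"},
    "Domain": {"Utils"},
    "Utils":  set(),
}
print(coupling(deps, "Utils"))   # (2, 0): many depend on it, stable
print(coupling(deps, "Web"))     # (0, 2): depends on others, instable
```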

    All in all, a very good session that also gave me a lot of things to aim for when holding presentations myself. As a consultant, you should not miss this video.


    3:1 – Keynote – Jeff Atwood – Stack Overflow: Social Software for the Anti-Social Part II: Electric Boogaloo

    I will not attempt to cover everything said in this keynote. Instead, you should go here and wait for the video. It is filled with fun gems, like when Jeff describes how stuff that is accepted in a web context would be really strange if applied in real life. For instance, FB lets you keep a list of friends. Who has a physical list of friends IRL?

    Anyway, Jeff spoke about gamification and how we can design our services like games, using a set of rules to define how they are meant to be used, rewarding those who follow the rules…and punishing the ones who do not. The basic premise is that games have rules and games are fun…so if we design our web sites as games, they should become fun as well.

    Well, at the very least, rules drastically simplify how we are supposed to behave. They tell us what to do. Sure, it does not work for all kinds of sites, but for social software, gamification should be considered. Games in general make social interaction non-scary, since everyone has to conform to the rules. Just look at the world, and you will know that this is true.

    So, when designing Stack Overflow, Jeff and Joel did so with gamification in mind. You may not notice it at first, but everything there is carefully considered. For instance, people used to complain that you cannot add a new question right on the start page. This is intentional. Before you add a question, Stack Overflow wants you to read other questions, see how people interact and learn the rules.

    Stack Overflow adopts several concepts from the gaming world. Good players are rewarded with achievements and level up as they progress. There are tutorials, unlockables etc. Without first realizing it, Jeff and Joel ended up creating a Q&A game that consists of several layers:

    • The game – ask and answer questions
    • The meta-game – receive badges, level up, become an administrator etc.
    • The end-game – make the Internet a little better

    This design makes it possible for Stack Overflow to allow anonymous users, unlike Facebook, which decided to only allow real names in order to filter out the “idiots”. Since Stack Overflow rewards good players, bad players are automatically sorted out. The community is self-sanitizing. People are awarded admin status if they play well enough. It’s just like Counter-Strike, where you are forced to be a team player. If you are not, the game will kill you 🙂

    I could go on and on, but Jeff says it best himself. Although some parts are simply a shameless Stack Overflow commercial, I recommend that you check out the video.


    3:2 – Tim Huckaby – Building HTML5 Applications with Visual Studio 11 for Windows 8

    Tim has worked with (not at) Microsoft for a loooong time and is one charismatic guy, I must say. What I really appreciated with his session was that it seemed a bit improvised, unlike most sessions at Øredev. What I did not like quite as much, though, was that it seemed too improvised. Due to lack of time and hardware issues, Tim failed to demonstrate what I came to see – HTML5 applications with VS11.

    Tim began by stating that he hates HTML…but that he loves HTML5, which is “crossing the chasm”. This means that it is a safe technology to bet on, because it will be adopted. How do we know? Well, the graph below illustrates when a technology is “crossing the chasm” in relation to how people adopt it:

    The Chasm Graph :)
    So when a technology is “crossing the chasm”, get to work – it will be used 🙂 I wonder how the graph would have looked for HD-DVD? Tim also thanked Apple for inventing the iPad (which he calls a $x couch computer). Thanks to the iPhone and the iPad, Flash and plugins are out and HTML5 is in.

    Large parts of the session were fun anecdotes, like when he spoke about how Adobe went out with a “we <heart> Apple” campaign and Apple responded with a “we <missing plugin> Adobe”. Hilarious, but did we learn anything from these anecdotes? Well, time will tell.

    Tim went through some browser statistics, explained why IE6 is still so widely used (damn those pirated copies of Win XP in China)…and ended up with some small demos, but faced massive hardware problems and promised us some more meat if we stayed a while. I stayed a while (I even attended the next Tim session), but the demos were not that wow.

    So, how did Tim do in his second session? Read on!


    3:3 – Tim Huckaby – Delivering Improved User Experience with Metro Style Win 8 Applications

    Tim started this session talking about NUI – Natural User Interfaces and some new features of Windows 8, like semantic zoom, a desktop mode behind Metro (it looks great, just like Win 7!), smart touch and…a new task manager (he was kinda ironic here).

    Tim demonstrated Tobii on a really cool laptop with two cameras, which allows it to see in 3D. The rest of the session was…enjoyable. I cannot put my finger on it, but I had fun, although I was disappointed with what was demonstrated. The Kinect demo was semi-cool, a great Swedish screen was also interesting, and Tim also hinted at how the new XBOX Loop and a new Kinect will become a small revolution.

    I really do not know what to say about this. Watch the video. You will have fun.

  • danielsaidi 2:57 pm on October 5, 2011 Permalink | Reply
    Tags: cyclomatic complexity, system architecture, task scheduler

    Scheduling NDepend for a set of solutions 

    In a project of mine, I use NDepend to continuously analyze a set of solutions that make up some of the software infrastructure of a major Swedish company.

    By scheduling the analyses to run once a week, using previous analyses as a baseline for comparison, I hope to make it easier to detect less favorable patterns that we want to avoid and to pin-point good ones that we want to embrace.

    Although we use Team City as our build server, I have set up the scheduled analyses to run from my personal computer during this first test phase. It is not optimal, but for now it will do.

    The analyses are triggered from a simple bat script that does the following:

    • It first checks out each solution from TFS
    • It then builds each solution with devenv
    • It then runs a pre-created NDepend analysis for each solution
    • Each analysis is configured to publish the HTML report to a web server that is available to everyone within the project
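
    In pseudo-form, the script boils down to three commands per solution. The sketch below (in Python for readability) just assembles them; the tool names are real (tf, devenv, NDepend.Console.exe), but the exact flags and paths are placeholders, not the actual script:

```python
def build_commands(solution, ndproj):
    """Return the three commands the weekly script runs, in order:
    get sources, build the solution, run the NDepend analysis.
    Each could then be executed with subprocess.run(cmd, check=True)."""
    return [
        ["tf", "get", solution, "/recursive"],       # check out from TFS
        ["devenv", solution, "/Build", "Release"],   # build with devenv
        ["NDepend.Console.exe", ndproj],             # analyze + publish report
    ]

# Hypothetical solution/project pair.
for cmd in build_commands(r"C:\src\Billing.sln", r"C:\src\Billing.ndproj"):
    print(" ".join(cmd))
```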
    Once I had created the script, I scheduled it using the Task Scheduler. I set it to run every Monday morning at 8.30. Since it runs from my personal computer, I have to be at work early, but with two kids at home, I always am 🙂

    The scheduled script works like a charm. The analyses run each week and everyone is happy (at least I am). Already after the first analysis, we noticed some areas that we could modify to drastically improve the architecture, reduce branch/merge hell, code duplication etc.

    Who knows what we will find after some incremental analyses? It is exciting, to say the least!

    One small tweak

    During the experimentation phase, when the report generation sometimes did not work, I was rather annoyed that NDepend did not run a new analysis, since no code had changed. The solution was simple – under Tools/Options/Analysis, tell NDepend to always run a full analysis:

    In most cases, though, the default setting is correct, since it will run a full analysis at least once per day. However, in this case, I keep the “Always Run Full Analysis” selected for all NDepend projects.

    One final, small problem – help needed!

    A small problem that is still an issue is that my NDepend projects sometimes begin complaining that the solution DLLs are invalid…although they are not. The last time this happened (after the major architectural changes), it did not matter if I deleted and re-added the DLLs – the project still considered them to be invalid. I had to delete the NDepend projects and re-create them from scratch to make them work.

    Has anyone had the same problem, and any idea what this could be about? Why do the NDepend projects start complaining about newly built DLLs?

    • Patrick Smacchia 4:23 pm on October 5, 2011 Permalink | Reply

      One remark: the incremental analysis option is only valid in the standalone or VS add-in context, not in the build server context of running an analysis through NDepend.Console.exe.

      Next time it tells you an assembly is invalid, take your machine and shake it (but please stay polite with it)!

      If it still doesn’t work, go in the NDepend project Properties panel > Code to analyze > invalid assemblies should appear with a red icon, hovering an invalid assembly with the mouse will show you a tooltip that explains the problem (the problem will also be shown in the info panel).

      My bet is that several different versions of an invalid assembly are present in the set of the ndproj project dirs, where NDepend searches for assemblies (hence NDepend doesn’t know which version to choose).

      • danielsaidi 8:19 am on October 6, 2011 Permalink | Reply

        Kudos for your fast response, Patrick!

        Since I run the analyses locally, the incremental analysis option will apply if I do not disable it. However, the fact that ND will still run a full analysis at least once a day, means that I could enable the option once again, after the initial setup phase.

        I had a looong look at the failing assemblies prior to writing this post. ND complained that multiple versions of some assemblies existed, even after I confirmed that the specified paths in fact contained no duplicates. After I recreated the NDepend project and re-added the assemblies, everything worked once again.

        I will have a look at how ND handles the assemblies next Monday, and let you know. I have an SSD in my machine, so I’ll first try to give it a rather rough shake 🙂

        Other than that, I am looking forward to start modifying the CQL rules now. I love the comment in the “Instance fields should be prefixed with a ‘m_'” rule! 🙂

    • Patrick Smacchia 8:46 am on October 6, 2011 Permalink | Reply

      >I will have a look at how ND handles the assemblies next Monday, and let you know

      Ok, sounds good, let us know

      >I love the comment in the “Instance fields should be prefixed with a ‘m_’” rule!


  • danielsaidi 8:45 pm on October 7, 2010 Permalink | Reply
    Tags: assembly metrics, constraints, cyclomatic complexity, instability, metrics, review, type metrics

    Getting started with NDepend 

    So, after quite some time, I have finally got my thumb out and added an NDepend project to my new .NET Extensions 2.0 solution, in order to get some analyzing done before releasing it.

    The first thing that hit me was how easy it was to attach a new NDepend project to my solution. I just had to:

    1. Install Visual NDepend
    2. Open my .NET Extensions solution
    3. Under the new “NDepend” main menu item, select “Attach new NDepend project to solution”
    4. Select all projects of interest (I chose all of them)
    5. Press the OK button and pray to god.

    Once NDepend is attached to your solution, the menu will change and look like this:

    The NDepend menu

    …but before that, it will perform a first-time analysis of all projects that it is set to handle. This is done automatically, so just bind NDepend to your solution, and it will perform the analysis…

    …after which Firefox (or, naturally, your default browser of choice) will come to life and display a complete analysis summary, which is generated in a folder called NDependOut:

    The generated NDependOut folder


    So, what does the report have to say?

    Report sections

    The report is divided into certain sections, some of which (to me) are more interesting than others.

    The various sections of the NDepend report


    Application metrics

    Well…first of all, a complete textual summary of application metrics is presented:

    The NDepend Application Metrics summary


    Now, this summary contains a couple of interesting metrics.

    Note, for instance, the comment ratio (51%). I have always taken pride in commenting my code, but lately, I have focused on writing readable code instead 🙂  However, I have decided to overdo the commenting in this project, since it must be understandable for users that only get their hands on the DLL.

    Since .NET Extensions is mainly an extension project, I think that this summary is quite what I expected…even if I maybe could do with a few more interfaces. Note that not much is going on “under the hood” – almost everything is public (in some cases for unit test purposes, which I will change #beginner-error).

    Also, since I am of the belief that one should NEVER work directly against an object’s fields, I am happy that I have no public fields at all 🙂

    The last row (cut off) displays the method/function with the worst cyclomatic complexity:

    The worst CC in the solution

    However, when I analyze the method with the Visual Studio analyzer, it says that the method has a CC of 13! It turns out that 25 is the ILCC (Intermediate Language Code Complexity) – the CC calculated on the compiled IL rather than on the source code. I will write a new blog post later on, in which I’ll use this information to improve the GetHtml() method. Stay tuned 🙂


    Assembly metrics + abstraction/stability summary

    After the interesting application metrics summary come some assembly metrics (also quite interesting) as well as information about the stability of the different assemblies.

    First of all, everything is presented in a textual grid:

    The NDepend Assembly metrics table

    This information is then displayed in various graphical components, such as the Visual NDepend view (in Visual Studio, you can use NDepend to navigate through this view)…

    The Visual NDepend View

    …as well as the Abstractness vs. Instability view…

    The Abstractness vs. Instability view


    Now, let’s stop for a moment and discuss this graph. The word “instability” first made me feel like I had written the worst piece of junk there is, but I think that the word is quite misleading.

    As I’ve mentioned, the analyzed solution consists of a lot of extension and helper classes, which are almost never independent – they mostly depend on other base classes, since that is their purpose. If I have understood the term “instability” correctly, this is exactly what it means. The solution is unstable since it depends on a lot of other components…

    …but for this kind of solution, it is hard to have it any other way. After reflecting over the graph a bit, and enjoying the green color (except for the so far empty build project), I understood what the view intends to display.
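
    For the record, the metrics behind this view come from Robert Martin's formulas: instability I = Ce / (Ca + Ce), and the distance from the "main sequence" D = |A + I − 1|, where A is abstractness. A tiny sketch (the numbers below are made-up, not from my report):

```python
def instability(ca, ce):
    """I = Ce / (Ca + Ce): 1.0 means it depends on everything,
    0.0 means everything depends on it."""
    total = ca + ce
    return ce / total if total else 0.0

def distance_from_main_sequence(abstractness, inst):
    """D = |A + I - 1|; 0 means the assembly sits on the ideal
    line between abstract-and-stable and concrete-and-instable."""
    return abs(abstractness + inst - 1)

# A concrete extension library: few incoming deps, many outgoing.
i = instability(ca=1, ce=9)                          # 0.9 - "instable" by design
print(round(distance_from_main_sequence(0.05, i), 2))  # 0.05 - still near the line
```

    This is why a green dot near I = 1 is fine for an extensions library: high instability with low abstractness still lands close to the main sequence.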


    Dependencies, build order etc.

    This part of the report is probably a lot more interesting if you intend to delve into a solution whose development you have not been part of earlier on.

    However, for this particular situation, this part of the report really did not give me anything that I did not already know.


    Amazing final part – constraints

    Finally, NDepend displays an amazing part where the code is evaluated according to all existing constraints.

    For instance…


    One of the vast number of constraint summaries


    …this part displays a constraint that selects all functions that:

    • Have more than 30 lines of code OR
    • Have more than 200 IL instructions OR
    • Have a cyclomatic complexity over 20 OR
    • Have an IL cyclomatic complexity over 50 OR
    • Have an IL nesting depth larger than 4 OR
    • Have more than 5 parameters OR
    • Have more than 8 variables OR
    • Have more than 6 overloads

    For instance, the first item in the list – Split() – is there because it has more than 8 variables.
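
    Checking code against such OR-combined thresholds is straightforward. The sketch below is a hypothetical re-implementation of the rule's logic, not NDepend's actual CQL; the metric names are made up for illustration:

```python
# Thresholds from the report's default constraint (OR-combined).
LIMITS = {
    "lines": 30, "il_instructions": 200, "cc": 20, "il_cc": 50,
    "il_nesting": 4, "params": 5, "variables": 8, "overloads": 6,
}

def violates(metrics):
    """Return the names of all limits a method exceeds;
    a non-empty result means the constraint flags the method."""
    return [name for name, limit in LIMITS.items()
            if metrics.get(name, 0) > limit]

# Split() was flagged for its variable count alone:
print(violates({"lines": 12, "variables": 9}))   # ['variables']
```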

    Some of the default constraints are perhaps a bit harsh, but most of them are really useful. Just having a look at these constraints, and at how your code measures up against them, gives you a deeper understanding of how you (or your team) write code.


    Type metrics

    Finally comes an exhaustive, thorough grid with ALL the information you can imagine about every single type in the solution.

    Type metrics

    The “worst” cells in each category are highlighted, which makes it really easy to get an overview of the entire framework…although the information is quite massive.



    Well, having only connected an NDepend project to my solution, I have barely scratched the surface of what NDepend can offer. To be able to extract this much info by just pressing a button is quite impressive.

    I wrote this blog post yesterday, and have rewritten large parts of it today. During that time span, my stance towards it has shifted a bit.

    Yesterday, since I did not yet understand parts of the report, I was under the impression that your own hobby project is not the best context in which to use NDepend…and that it comes to better use when working in a role (e.g. tech lead / lead developer) that requires you to be able to quickly extract data about the systems for which you are responsible. In such a context, NDepend is greaaat.

    However, after getting some time to “feel” how NDepend works for me as a developer, I have started to see benefits even for a solution such as this extensions solution. As I will show in a future blog post, I can use the information I extract from NDepend to detect the “worst” parts of my framework…which makes it easy to adjust them, re-analyze them and see how my implementation grows better.

    It is a bit like comparing my iPhone with my iPad. ReSharper was like my phone – as soon as I started using it, I could not live without it. NDepend, on the other hand, is much like the iPad. At first, I really could not see the use, but after some time, it finds its way right into your day-to-day life and…after a while…becomes natural.

    I will scratch my way further down into NDepend. Stay tuned.

    • danielsaidi 9:11 pm on October 7, 2010 Permalink | Reply

      As you can see, this theme sucks for wide content…and code. Any advice??

    • Mattias 10:31 pm on October 7, 2010 Permalink | Reply

      The mobile theme on my iPhone doesn’t make it better. 😉

    • Mattias 6:59 am on October 8, 2010 Permalink | Reply

      Have you tried google.com? 😉

    • danielsaidi 8:06 am on October 8, 2010 Permalink | Reply

      Nooooo, but I often get letmegooglethatforyou.com links from my friends and colleagues 😉 Hmmm…maybe it’s worth paying those dollars to WordPress to unlock the custom CSS feature. 😛
