Updates from February, 2012

  • danielsaidi 1:03 pm on February 22, 2012 Permalink | Reply
    Tags: assembly version, boo, nextra, nuget package explorer, phantom

    Use Phantom/Boo to automatically build, test, analyze and publish to NuGet and GitHub 

    When developing my NExtra .NET library hobby project, I used to handle the release process manually. Since a release involved executing unit tests, bundling all files, zipping and uploading the bundle to GitHub, creating new git tags etc., the process was quite time-consuming and error-prone.

    But things did not end there. After adding NExtra to NuGet, every release also involved refreshing and publishing six NuGet packages. Since I used the NuGet Package Explorer, I had to refresh the file and dependency specs for each package. It took time, and the error risk was quite high.

    Since releasing new versions involved so many steps, I used to release NExtra quite seldom.

    Laziness was killing it.

    The solution

    I realized that something had to be done. At work, we use TeamCity for all our solutions, but for a hobby project, a build server felt like overkill. Maybe a build script could automate the build and release process instead?

    So with this conclusion, I defined what the script must be able to help me out with:

    • Build and test all projects in the solution
    • Automatically extract the resulting version
    • Create a release folder or zip with all files
    • Create a new release tag and push it to GitHub
    • Create a NuGet package for each project and publish to NuGet

    The only part of the release process not covered by the script was uploading the release zip to GitHub, but that would be a walk in the park once the build script generated the zip.

    The biggest step was not developing the build script; it is in fact quite a simple creation. The biggest step was coming to the conclusion that I needed one.

    Selecting a build system

    In order to handle my release process, I needed a build system. I decided to go with Phantom, since I use it at work as well. It is a convenient tool (although a new, official version would be nice) that works well, but it left me with an annoying problem, which I will describe further down.

    So, I simply added Phantom 0.3 to a subfolder under the solution root. No config is needed – the build.bat and build.boo files (read on) take care of everything.

    The build.bat file

    build.bat is the file that I use to trigger a build, build a .zip or perform a full publish from the command prompt. I placed it in the solution root, and it looks like this:

    @echo off
    :: Change to the directory that this batch file is in
    for /f %%i in ("%0") do set curpath=%%~dpi
    cd /d %curpath%
    :: Fetch input parameters
    set target=%1
    set config=%2
    :: Set default target and config if needed
    if "%target%"=="" set target=default
    if "%config%"=="" set config=release
    :: Execute the boo script with input params - accessible with env("x")
    resources\phantom\phantom.exe -f:build.boo %target% -a:config=%config%


    Those of you who read Joel Abrahamsson’s blog probably recognize the first part. It changes to the directory that contains the .bat file, so that everything is executed from there.

    The second section fetches the input parameters. The target param determines the operation to launch (build, deploy, zip or publish), and the config param determines which build configuration to use (debug, release etc.).

    The third section handles param fallback in case I did not define some of the input parameters. This means that if I only provide a target, config will fall back to “release”. If I define no params at all, target will fall back to “default”.

    Finally, the bat file calls phantom.exe, using the build.boo file. It tells build.boo to launch the provided “target” and also sends “config” as an environment variable (the -a:config part).

    All in all, the build.bat file is really simple. It sets a target and config and uses the values to trigger the build script.

    The build.boo file

    The build.boo build script file is a lot bigger than the .bat file. It is also located in the solution root and looks like this:

    import System.IO

    project_name = "NExtra"
    assembly_file = "SharedAssemblyInfo.cs"
    build_folder = "_tmpbuild_/"
    build_version = ""
    build_config = env('config')
    test_assemblies = (
     // one entry per .NExtra test assembly (list omitted)
    )

    target default, (compile, test):
     pass

    target zip, (compile, test, copy):
     zip("${build_folder}", "${project_name}.${build_version}.zip")

    target deploy, (compile, test, copy):
     with FileList(build_folder):
      .ForEach def(file):
       pass // copies each file to the release folder (body omitted)

    target publish, (zip, publish_nuget, publish_github):
     pass

    target compile:
     msbuild(file: "${project_name}.sln", configuration: build_config, version: "4")
     //Probably a really crappy way to retrieve the assembly
     //version, but I cannot use System.Reflection, since
     //Phantom is old, and if I recompile Phantom it does
     //not work. Also, since Phantom is old, it does not
     //find my plugin that can read new assembly versions.
     content = File.ReadAllText("${assembly_file}")
     start_index = content.IndexOf("AssemblyVersion(") + 17
     content = content.Substring(start_index)
     end_index = content.IndexOf("\"")
     build_version = content.Substring(0, end_index)

    target test:
     nunit(assemblies: test_assemblies, enableTeamCity: true, toolPath: "resources/phantom/lib/nunit/nunit-console.exe", teamCityArgs: "v4.0 x86 NUnit-2.5.5")
     exec("del TestResult.xml")

    target copy:
     File.Copy("README.md", "${build_folder}/README.txt", true)
     File.Copy("Release-notes.md", "${build_folder}/Release-notes.txt", true)
     with FileList(""):
      .ForEach def(file):
       File.Copy(file.FullName, "${build_folder}/${file.Name}", true)

    target publish_nuget:
     File.Copy("README.md", "Resources\\README.txt", true)
     File.Copy("Release-notes.md", "Resources\\Release-notes.txt", true)
     exec("nuget", "pack ${project_name}\\${project_name}.csproj -prop configuration=release")
     exec("nuget", "pack ${project_name}.web\\${project_name}.web.csproj -prop configuration=release")
     exec("nuget", "pack ${project_name}.mvc\\${project_name}.mvc.csproj -prop configuration=release")
     exec("nuget", "pack ${project_name}.wpf\\${project_name}.wpf.csproj -prop configuration=release")
     exec("nuget", "pack ${project_name}.webforms\\${project_name}.webforms.csproj -prop configuration=release")
     exec("nuget", "pack ${project_name}.winforms\\${project_name}.winforms.csproj -prop configuration=release")
     exec("nuget push ${project_name}.${build_version}.nupkg")
     exec("nuget push ${project_name}.web.${build_version}.nupkg")
     exec("nuget push ${project_name}.mvc.${build_version}.nupkg")
     exec("nuget push ${project_name}.wpf.${build_version}.nupkg")
     exec("nuget push ${project_name}.webforms.${build_version}.nupkg")
     exec("nuget push ${project_name}.winforms.${build_version}.nupkg")
     exec("del *.nupkg")
     exec("del Resources\\README.txt")
     exec("del Resources\\Release-notes.txt")

    target publish_github:
     exec("git add .")
     exec('git commit . -m "Publishing ${project_name} ' + "${build_version}" + '"')
     exec("git tag ${build_version}")
     exec("git push origin master")
     exec("git push origin ${build_version}")

    At the top, we see a system import, which lets us use System.IO for file operations. After that, I define some variables and the list of test assemblies that I want to test.

    Two variables worth mentioning are build_version, which is set in the compile step, and build_config, which is set from the input parameter defined in build.bat.

    The next section of the file defines all public targets that are intended to be callable by the user. These map directly to the target param in build.bat.

    Of course, all targets further down can be called as well – there is no such thing as public or private targets. Still, calling them directly would probably not be a very good idea.

    If we look at the public targets, we have:

    • default – Executes “compile” and “test”
    • zip – Executes “compile” and “test”, then creates a zip file
    • deploy – Executes “compile” and “test”, then creates a release folder
    • publish – Executes “zip”, then publishes to NuGet and GitHub

    If we look at the private targets (that do the real work) we have:

    • compile – Compiles the solution and extracts the version number
    • test – Runs NUnit on the .NExtra test assemblies
    • copy – Copies all relevant files to the temporary build_folder
    • publish_nuget – Packs and publishes each .NExtra project to NuGet
    • publish_github – Commits all changes, creates a tag and pushes it

    It is not that complicated, but it is rather a lot. You could take the .bat and .boo files, tweak them, and they would probably work for your projects as well.

    However, read on for some hacks that I had to do to get the build process working as smoothly as it does.

    One assembly file to rule them all

    A while ago, I decided to extract common information from each of the .NExtra projects into a shared assembly file.

    The shared assembly file looks like this:

    using System.Reflection;

    // General Information about an assembly is controlled through the following
    // set of attributes. Change these attribute values to modify the information
    // associated with an assembly.
    [assembly: AssemblyCompany("Daniel Saidi")]
    [assembly: AssemblyProduct("NExtra")]
    [assembly: AssemblyCopyright("Copyright © Daniel Saidi 2009-2012")]
    [assembly: AssemblyTrademark("")]

    // Make it easy to distinguish Debug and Release (i.e. Retail) builds;
    // for example, through the file properties window.
    #if DEBUG
    [assembly: AssemblyConfiguration("Debug")]
    #else
    [assembly: AssemblyConfiguration("Retail")]
    #endif

    // Version information for an assembly consists of the following four values:
    //  Major Version
    //  Minor Version
    //  Build Number
    //  Revision
    // You can specify all the values or you can default the Build and Revision
    // numbers by using the '*' as shown below:
    [assembly: AssemblyVersion("")]
    [assembly: AssemblyFileVersion("")]

    The file defines shared assembly information, like the version, so that I only have to specify it once for all projects. I link this file into each project and then remove the corresponding information from each project-specific assembly info file.

    Since the .NExtra version management is a manual process (the way I want it to be), I manage the .NExtra version here and parse the file during the build process to retrieve the version number. The best way would be to use System.Reflection to analyze the library files, but this does not work, since Phantom uses .NET 3.5.

    I tried re-compiling Phantom to solve this, but then other things started to crash. So… the file parsing approach is ugly, but it works.
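    The parsing step itself is trivial. Here it is sketched in Python (illustrative only; the real script does the same thing in Boo with IndexOf/Substring, and the version string below is made up):

    ```python
    import re

    def read_assembly_version(content):
        """Extract the value inside [assembly: AssemblyVersion("...")]."""
        match = re.search(r'AssemblyVersion\("([^"]*)"\)', content)
        return match.group(1) if match else None

    # Made-up file content; the real file is SharedAssemblyInfo.cs
    sample = '[assembly: AssemblyVersion("2.6.0.0")]'
    print(read_assembly_version(sample))  # → 2.6.0.0
    ```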

    Tweaking NuGet

    After installing NuGet, typing “nuget” in the command prompt will still display a warning, since “nuget” is not a recognized command.

    To solve this, either add the path to the NuGet executable to PATH, or be lazy and use the nuget.exe command line bootstrapper, which finds NuGet for you. You can download it from CodePlex or grab it from the .NExtra Resources root folder.

    Each project’s nuspec file was easily created by calling “nuget spec x”, where x is the path to the project file. I then added information that cannot be extracted from the assembly, like the project URL, icon etc., to each of the generated spec files.
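    For context, a spec generated by “nuget spec” contains replacement tokens that NuGet fills in from the assembly at pack time; the hand-added fields sit next to them. A rough, illustrative sketch (not the actual NExtra spec):

    ```xml
    <?xml version="1.0"?>
    <package>
      <metadata>
        <id>$id$</id>
        <version>$version$</version>
        <authors>$author$</authors>
        <description>$description$</description>
        <!-- added by hand; cannot be extracted from the assembly -->
        <projectUrl>…</projectUrl>
        <iconUrl>…</iconUrl>
      </metadata>
    </package>
    ```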


    This post became rather long, but I hope it explains how I handle the .NExtra release process.

    Using the build script, I can now call build.bat in the following ways:

    • build – builds and tests the solution
    • build zip – builds and tests the solution and generates a nextra.<version>.zip file
    • build deploy – builds and tests the solution and generates a nextra.<version> release folder
    • build publish – the same as build zip, but also publishes to NuGet and GitHub

    The build script has saved me an immense amount of work. It saves time, increases quality by reducing the amount of manual work, and makes releasing new versions of .NExtra a breeze.

    I still have to upload the zip to the GitHub download area, but I find this to be a minor task compared to all the other steps. Maybe I’ll automate this one day as well, but it will do for now.

    I strongly recommend using a build script in all projects, even small ones where a build server is a bit overkill. Automating the release process is a ticket to heaven.

    Or very close to that.

    • Markus Johansson 10:07 am on July 16, 2013 Permalink | Reply

      Great post! Looks like a really nice release script! Thanks for sharing your experience!

      • danielsaidi 8:33 am on July 20, 2013 Permalink | Reply

        Thanks! 🙂 Once all pieces are in place, publishing new releases is a breeze.

  • danielsaidi 5:31 pm on January 18, 2012 Permalink | Reply
    Tags: microsoft commerce server   

    Microsoft Commerce Server, anyone? 

    I am currently working on an e-commerce solution that is based on Microsoft Commerce Server 2007 SP2. Without prior experience of MSCS, and without being the one setting up the solution, I am at a loss regarding some issues that we are trying to solve.

    Anonymous baskets

    A big problem for us, and a strange one to solve, is that 200,000 anonymous baskets are automatically generated every night! This occurs in all environments – locally, on the test and stage servers, as well as in production. The baskets are created at the same time every night, which (duh) indicates a scheduled event of some kind.

    My developers have not been able to track down what is causing this. Instead, they have created a scheduled task that empties anonymous baskets that are not used. So, we have not fixed the problem; we are just cleaning up the mess.

    These auto-generated baskets caused the MSCS database to grow to insane levels. Our scheduled task has brought it back to normal, but the dream scenario would naturally be to track down what is happening and solve the actual problem. Since it happens locally as well, we can rule out the import jobs that run continuously, as well as any externally exposed web services.

    Has anyone experienced this behavior with MSCS 2007 before? If so, I would appreciate a push in the right direction.

    Slow data operations

    Our load tests show that the site has become a bit slower since the new version was launched in May. Sure, the devs have added a lot of new functionality, but when I analyze the data operations that take the longest to execute, it turns out that MSCS is the real bottleneck. Profile creation can take up a large part of the execution time when a view is built, and product sub-categories are really slow to load.

    For a system like MSCS, is it really realistic that the database has become that much slower in just six months? The MSCS database has not undergone any optimization during this time, but should that really be necessary? We are bringing in an SQL optimizer, but if anyone has experienced MSCS slowing down due to bad indexes or the like, I’d love to hear more about it.

    • Ben Taylor 1:14 pm on January 23, 2012 Permalink | Reply

      I would wager that you create an anonymous basket each time you get a new visitor. You probably then store something in a cookie and pull the anon basket out each time they return. Problem is, this fails when you are hit 200,000 times by a web crawler that does not support cookies 🙂

      If you’ve not been working on your SQL housekeeping and tuning, then I’m sure that will be part of the slowdown issue. You may also be using more expensive API calls. I would suggest you profile the site using a good profiler. A good caching strategy is also a winner. However, caching CS objects is memory intensive. You may want to just cache the bits of data you need for the page.

      • danielsaidi 4:12 pm on January 23, 2012 Permalink | Reply

        Ben, thank you SO much for your comment! I believe that you pin-pointed the problem with the anonymous baskets and gave the developers a kick in the right direction.

        We have not confirmed it yet, but we do have an external search service that crawls through the site every night. When the developers read your response, they immediately started investigating whether or not that service is what could be causing the problem. We will know more tomorrow 🙂

        Also, big thanks for your other advice. We will allocate resources for optimizing the databases, which have been cluttered with anonymous baskets (and cleaned up continuously) for over half a year. I think that this will make the databases a bit faster.

    • Ben Taylor 3:40 pm on January 24, 2012 Permalink | Reply

      Glad to have (hopefully) been of assistance.

      If you guys ever need an awesome promotion engine for Commerce Server check us out http://www.enticify.com/

      Good luck!

    • ikilic 9:09 am on May 27, 2013 Permalink | Reply

      We are having problems with Microsoft Commerce Server 2009 freetextsearch; the problem is searching for a single character. For example, we can search for iphone 4S but not for iphone 4. We are using an AND clause.

      Hope you can help us.

      • danielsaidi 8:28 pm on May 28, 2013 Permalink | Reply

        Hi, I sadly cannot help you guys with this, since I first of all have not had that particular problem and also have not worked with MS Commerce Server for a looong time. Best of luck, though!

  • danielsaidi 4:12 pm on December 8, 2011 Permalink | Reply
    Tags: circular namespace dependencies, implementation

    NDepend getting my juices flowing 

    I have been using NDepend to analyse the latest version of .NExtra. The code is spotless (this is where you should detect the irony), but the analysis is highlighting some really interesting design flaws that I will probably fix in the next major version.

    First of all, almost all cases where method or type names are too long are where I have created facades or abstractions for native .NET entities, like Membership. Here, I simply add exceptions so that NDepend will not raise any warnings, since the whole point of the facades is that they should mirror the native classes, to simplify switching them out for your own implementations, mocks etc.

    Second, NDepend warns about naming conventions that I do not agree with, such as that instance fields should start with m_. Here, I simply remove the rules, since I do not find them valid. On second thought, I could have kept the rules and adjusted them to check that my own conventions are followed. I will probably do so later on.

    Third, and the thing I learned the most from today: it turned out that I had circular namespace dependencies, despite having put a lot of effort into avoiding circular namespace and assembly dependencies. The cause turned out to be dependencies between the X and X/Abstractions namespaces.

    Circular dependency graph: the base and abstraction namespaces depend on each other.

    For instance, have a look at NExtra.Geo, which contains geo location utility classes. The namespace contains entities like Position and enums like DistanceUnits, as well as implementations of the interfaces in the Abstractions sub-namespace.

    Now, what happens is that the interfaces define methods that use and return types from the base namespace, while the implementations in the base namespace implement the interfaces in the Abstractions namespace. And there you go: circular namespace dependencies.
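    The cycle can be sketched like this, with Python classes standing in for the C# namespaces (the calculator names are made up for illustration; only Position is from the post):

    ```python
    from abc import ABC, abstractmethod

    class Position:  # base namespace (NExtra.Geo): a plain entity
        def __init__(self, lat, lng):
            self.lat, self.lng = lat, lng

    class IDistanceCalculator(ABC):  # Abstractions: its signature references Position,
        @abstractmethod              # so the abstraction depends on the base namespace
        def distance(self, p1: Position, p2: Position) -> float: ...

    class DistanceCalculator(IDistanceCalculator):  # base namespace: implements the
        def distance(self, p1, p2):                 # interface, so it depends on Abstractions
            # planar distance, purely illustrative
            return ((p1.lat - p2.lat) ** 2 + (p1.lng - p2.lng) ** 2) ** 0.5
    ```

    Each half looks reasonable on its own; the circle only shows up when the two namespaces are analyzed together.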

    Now, in this particular case, I do not think it is too big a deal, but it highlights something that has annoyed me a bit while working with the .NExtra library.

    In the library’s unit tests, the test classes only know about the interfaces, but the setup method selects either a concrete implementation or a mock for each interface. For an example, look here. This forces me to reference both the base namespace and the Abstractions namespace. Once again, this is not that bad, but it raises the question…

    In the next minor version of .NExtra, I will probably get rid of the abstraction namespaces, since they add nothing but complexity. Sure, they tuck away all interfaces, but why should they?

    • Daniel Lee 6:40 pm on December 8, 2011 Permalink | Reply

      Were you inspired by Greg Young’s session at Öredev? I have to get into NDepend as well. Did you buy a license or are you testing out the trial version?

    • danielsaidi 9:30 am on December 9, 2011 Permalink | Reply

      I have been using NDepend for a while, but yeah, Greg’s session opened my eyes to other ways of looking at the metrics. And even if I remove some metrics (like methods should start with m_, static members with s_ etc.), it is really striking how some of the metrics highlight design flaws that are easily corrected.

  • danielsaidi 9:14 am on October 18, 2011 Permalink | Reply
    Tags: decorator pattern

    Am I writing bad tests? 

    To grow as a developer, there is nothing as good as lowering your guard and inviting others to criticize your potential flaws… as well as reading a book every now and then.

    In real life, you have to be a pragmatic programmer (provided that you are, in fact, a programmer) and do what is “best” for the project, even if that means getting a project released instead of developing it to the point of perfection.

    In hobby projects, however, I quite often find myself reaching for this very perfection.

    (Note: My earlier attempts to reach perfection involved having separate, standardized regions for variables, properties, methods, constructors etc. as well as writing comments for all public members of every class. I was ruthlessly beaten out of this bad behavior by http://twitter.com/#!/nahojd – who has my eternal gratitude)

    Now, I suspect that I have become trapped in another bad behavior – the unit-test-everything trap. At its worst, I may not even be writing unit tests, so I invite all readers to comment on whether I am on a bad streak here.

    The standard setup

    In the standard setup, I have:

    • IGroupInviteService – the interface for the service has several methods, e.g. AcceptInvite and CreateInvite.
    • GroupInviteService – a standard implementation of IGroupInviteService that handles the core processes, with no extra addons.
    • GroupInviteServiceBehavior – a test class that tests every little part of the standard implementation.

    This setup works great. It is the extended e-mail setup below that leaves me a bit suspicious.

    The extended e-mail setup

    In the extended e-mail sending setup, I have:

    • EmailSendingGroupInviteService – facades any IGroupInviteService and sends out an e-mail when an invite is created.
    • EmailSendingGroupInviteServiceBehavior – a test class that…well, that is the problem.

    The EmailSendingGroupInviteService class

    Before moving on, let’s take a look at how the EmailSendingGroupInviteService class is implemented.

    Code for parts of the e-mail sending implementation.

    As you can see, the e-mail sending part is not yet developed 😉
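    The shape of the class can be sketched like this, in Python for brevity (the real class is C#; the send_email hook is an assumed stand-in, since the e-mail part is not yet developed):

    ```python
    class GroupInviteService:
        """Standard implementation that handles the core invite process."""
        def create_invite(self, group, user):
            return f"invite:{group}:{user}"

    class EmailSendingGroupInviteService:
        """Facades any invite service and sends an e-mail when an invite is created."""
        def __init__(self, base_service, send_email):
            self.base_service = base_service
            self.send_email = send_email  # stand-in for the not-yet-developed e-mail part

        def create_invite(self, group, user):
            # just call the base instance...
            result = self.base_service.create_invite(group, user)
            # ...then add the one piece of extra behavior
            self.send_email(f"You have been invited to {group}")
            return result
    ```

    Every other method on the decorator simply forwards straight to the base instance.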

    As you also can see, the methods only call the base instance. Now, let’s look at some tests.

    The EmailSendingGroupInviteServiceBehavior class

    Let’s take a look at some of the EmailSendingGroupInviteServiceBehavior tests.

    Image showing parts of the test class

    As you can see, all that I can test is that the base instance is called properly, and that the base instance result is returned.
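    Such a forwarding test can be sketched with Python’s unittest.mock (illustrative; the real tests are C# with a mocked IGroupInviteService, and the names below are assumptions):

    ```python
    from unittest.mock import Mock

    class EmailSendingGroupInviteService:
        """Minimal stand-in for the decorator: AcceptInvite only forwards."""
        def __init__(self, base_service):
            self.base_service = base_service

        def accept_invite(self, invite_id):
            return self.base_service.accept_invite(invite_id)  # pure forwarding

    def test_accept_invite_calls_base_and_returns_its_result():
        base = Mock()
        base.accept_invite.return_value = True
        svc = EmailSendingGroupInviteService(base)
        assert svc.accept_invite(42) is True            # the base result is returned
        base.accept_invite.assert_called_once_with(42)  # the base was called properly

    test_accept_invite_calls_base_and_returns_its_result()
    ```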


    Testing the decorator class like this is really time-consuming, and for each new method I add, I have to write more of these tests for each decorator class. That could become a lot of useless tests.

    Well, the tests are not useless…they are just…well…I just hate having to write them 🙂

    So, this raises my final question:

    • Would it not be better to only test the stuff that differs? In this case, only keep CreateInvite_ShouldSendEmail.

    Let me know what you think!
    • Daniel Lee 10:52 am on October 18, 2011 Permalink | Reply

      I am going through a similar thought process. I would say that those tests that only test that the base instance was called are not worth it. There is no new logic being tested at all. It would be different if you changed the parameters and then called the base instance. If you already have tests on your base instance then that is good enough IMO.

      But I can totally understand where you are coming from. When practicing TDD it feels wrong to write a bunch of methods without writing tests for them. Maybe more coarse-grained tests in the layer above the email service class would solve this?

      This is really a .NET thing, if this was Ruby code then you’d just do this as a mixin. It’s only ceremony and not really providing any value. But I don’t know of a way to avoid this in .NET unfortunately.

      • danielsaidi 11:03 am on October 18, 2011 Permalink | Reply

        I totally agree with you… and I also think it depends on the situation. In this case, where I work alone and my classes are rather clean, or when you work with people who share your coding principles, I agree that only testing the altered behavior is sufficient. However, if that is not the case, then perhaps thorough tests like these are valuable…

        …which maybe signals that you have greater problems than worrying over code coverage 🙂

        Thanks for your thoughts…and for a fast reply!

    • Daniel Lee 11:41 am on October 18, 2011 Permalink | Reply

      It’s a judgement thing (the classic ‘it depends’ answer). If you think that those forwarding methods are probably never going to change then those tests are mostly just extra baggage. But if you are pretty sure this is only step 1 then they could be valuable in the future.

      But if you have more coarse-grained tests (tests that test more than one layer) above these tests then they should break if someone changes the code here. For code this simple you don’t have to take such small steps with the tests. What do you think?

    • Daniel Persson 12:47 pm on October 18, 2011 Permalink | Reply

      If you really would like to test it, I would say only do one test: the setup, and an assert (one assert) on the expected result. No need for a verify, since it is verified by the setup-to-assert test.

      And whether you should do the tests or not, I agree with Daniel Lee. It is probably not worth it, unless there is good reason. The behavior that changes from the base is what should primarily be tested. If you overspecify, the tests will be harder to maintain and the solution itself will be harder and more time-consuming to change.

      • danielsaidi 1:00 pm on October 18, 2011 Permalink | Reply

        …which is exactly the situation I faced this Friday, when I ended up spending the entire afternoon adjusting one test class…which on the other hand probably had to do more with me writing bad tests 😉

    • Petter Wigle 8:11 pm on October 19, 2011 Permalink | Reply

      I think the tests tell you that the design could be improved.
      In this case, I think it would have been better if EmailSendingGroupInviteService inherited from GroupInviteService. Then you would get rid of all the tedious delegations.
      Or you can rewrite the code in Ruby 🙂

      Otherwise, I don’t think it is worth the effort to write tests for code like this, which is very unlikely to break. But if you do TDD, then you are using the tests to drive the design. The process is valuable even if the tests that come out of it are quite pointless.

  • danielsaidi 9:16 am on October 6, 2011 Permalink | Reply
    Tags: cql, static

    Tweaking the NDepend CQL rules to leverage awesome power 

    I have previously written about automating and scheduling NDepend for a set of .NET solutions.

    After getting into the habit of using NDepend to find code issues instead of going through the code by hand (which I still will do, but a little help does not hurt), the power of CQL grows on me.

    For instance, one big problem that I have wrestled with is that our legacy code contains static fields for things that should be non-static properties. In a web context. Enough said.

    Prior to using CQL, I used to search for “static” across the entire solution, go through the search results (which naturally also included fully valid static methods and properties) and… well, it really did not work.

    Yesterday, when digging into the standard CQL rules to get a better understanding of the NDepend analysis, I noticed the following standard CQL:

    // <Name>Static fields should be prefixed with a 's_'</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     !NameLike "^s_" AND 
     IsStatic AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName
    // This naming convention provokes debate.
    // Don't hesitate to customize the regex of 
    // NameLike to your preference.
    Although NDepend’s naming conventions do not quite fit my conventions, this rule is just plain awesome. I just had to edit the CQL to

    // <Name>Static fields should not exist...mostly</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     IsStatic AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName

    and voilà – NDepend now automatically finds all static fields within my solution… ignoring any naming convention.

    Since this got me going, I also went ahead and modified the following rule

    // <Name>Instance fields should be prefixed with a 'm_'</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     !NameLike "^m_" AND 
     !IsStatic AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName
    // This naming convention provokes debate.
    // Don't hesitate to customize the regex of 
    // NameLike to your preference.

    to instead require that fields are camel cased (ignoring the static condition as well):

    // <Name>Instance fields should be camelCase</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     !NameLike "^[a-z]" AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName

    Two small changes to the original setup, but awesomely helpful. Another great thing is that when you edit the queries in VisualNDepend, you get immediate visual feedback on how the rule applies to the entire solution.

    So, now I can start tweaking the standard CQL rules so that they conform to my conventions. However, since my versions of the two rules above should apply to all future NDepend projects that I create, is there some way to globally replace the standard CQL rules with my alternatives?

    I will investigate this further and write a blog post if I happen to solve it.

    • Patrick Smacchia 2:31 pm on October 6, 2011 Permalink | Reply

      >However, when looking at the two rules above, where my versions should apply to all future NDepend projects that I create from now on, is there some way to globally replace the standard CQLs with my alternatives?

      There is no simple way yet to share a set of custom rules across several projects.
      The options are:
      A) to copy/paste them across NDepend running instances,
      B) to tweak the simple .ndproj project XML somehow to share the rules.

      We are aware of this limitation and are working on a feature that will make this easier.

      • danielsaidi 10:41 pm on October 10, 2011 Permalink | Reply

        Hi Patrick! I just wanted to let you know that today’s analysis worked great, without any problems whatsoever. My computer started up the bat script automatically, which checked out and built the latest source code, then ran an analysis for each of the seven solutions…and finally posted the result on an internal server. Good stuff! 🙂

    • danielsaidi 3:44 pm on October 6, 2011 Permalink | Reply

      It is really not a problem – if the rule addresses some convention that the code does not share, one will get numerous warnings and can then decide whether the warnings are an issue. Also, I have modified the rules a bit differently in some projects, so maybe it is a good thing that one has to do these modifications for each project.

      I will gather my rules in simple text files that I keep under version control, together with my NDepend projects. I will probably also add the more general ones as resources to my .NExtra project.

      This is fun! 🙂

  • danielsaidi 2:57 pm on October 5, 2011 Permalink | Reply
    Tags: , , , system architecture, task scheduler   

    Scheduling NDepend for a set of solutions 

    In a project of mine, I use NDepend to continuously analyze a set of solutions that make up some of the software infrastructure of a major Swedish company.

    By scheduling the analyses to run once a week, using previous analyses as a baseline for comparison, I hope that this will make it easier to detect less favorable patterns that we want to avoid and pin-point good ones that we want to embrace.

    Although we use Team City as our build server, I have set up the scheduled analyses to run from my personal computer during this first test phase. It is not optimal, but for now it will do.

    The analyses are triggered from a simple bat script that does the following:

    • It first checks out each solution from TFS
    • It then builds each solution with devenv
    • It then runs a pre-created NDepend analysis for each solution
    • Each analysis is configured to publish its HTML report to a web server that is available to everyone within the project
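    A trimmed-down sketch of the bat script could look like this – tf.exe, devenv and NDepend.Console.exe are the real tools involved, but all paths and solution names below are made up for illustration:

```bat
REM Check out the latest source from TFS
tf.exe get C:\Projects\SolutionA /recursive

REM Build the solution with devenv
devenv.exe C:\Projects\SolutionA\SolutionA.sln /build Release

REM Run the pre-created NDepend analysis
NDepend.Console.exe C:\Projects\SolutionA\SolutionA.ndproj

REM Publish the HTML report to the internal web server
xcopy C:\Projects\SolutionA\NDependOut\*.* \\webserver\reports\SolutionA\ /E /Y
```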

    Once I had created the script, I scheduled it with the Task Scheduler and set it to run every Monday morning at 8.30. Since it runs from my personal computer, I have to be early at work, but with two kids at home, I always am 🙂

    The scheduled script works like a charm. The analyses run each week and everyone is happy (at least I am). Already after the first analysis, we noticed some areas that we could modify to drastically improve the architecture, reduce branch/merge hell, code duplication etc.

    Who knows what we will find after some incremental analyses? It is exciting, to say the least!

    One small tweak

    During the experimentation phase, when the report generation sometimes did not work, I was rather annoyed that NDepend did not run a new analysis, since no code had changed. The solution was simple – under Tools/Options/Analysis, tell NDepend to always run a full analysis.

    In most cases, though, the default setting is correct, since it will run a full analysis at least once per day. In this case, however, I keep “Always Run Full Analysis” selected for all NDepend projects.

    One final, small problem – help needed!

    A small problem that is still an issue is that my NDepend projects sometimes begin complaining that the solution DLLs are invalid…although they are not. The last time this happened (after the major architectural changes), it did not matter if I deleted and re-added the DLLs – the projects still considered them invalid. I had to delete the NDepend projects and re-create them from scratch to make them work.

    Has anyone had the same problem, and any idea what this could be about? Why do the NDepend projects start complaining about newly built DLLs?

    • Patrick Smacchia 4:23 pm on October 5, 2011 Permalink | Reply

      One remark: the incremental analysis option is only valid in the standalone or VS add-in context, not in the build server context of running an analysis through NDepend.Console.exe

      Next time it tells you an assembly is invalid, take your machine and shake it (but please stay polite with it)!

      If it still doesn’t work, go in the NDepend project Properties panel > Code to analyze > invalid assemblies should appear with a red icon, hovering an invalid assembly with the mouse will show you a tooltip that explains the problem (the problem will also be shown in the info panel).

      My bet is that several different versions of an invalid assembly are present in the set of the ndproj project dirs, where NDepend searches for assemblies (hence NDepend doesn’t know which version to choose).

      • danielsaidi 8:19 am on October 6, 2011 Permalink | Reply

        Kudos for your fast response, Patrick!

        Since I run the analyses locally, the incremental analysis option will apply if I do not disable it. However, the fact that ND will still run a full analysis at least once a day means that I could enable the option again after the initial setup phase.

        I had a looong look at the failing assemblies prior to writing this post. ND complained that multiple versions of some assemblies existed, even after I confirmed that the specified paths contained no duplicates. After I recreated the NDepend project and re-added the assemblies, everything worked once again.

        I will have a look at how ND handles the assemblies next Monday, and let you know. I have an SSD in my machine, so I’ll first try to give it a rather rough shake 🙂

        Other than that, I am looking forward to start modifying the CQL rules now. I love the comment in the “Instance fields should be prefixed with a ‘m_'” rule! 🙂

    • Patrick Smacchia 8:46 am on October 6, 2011 Permalink | Reply

      >I will have a look at how ND handles the assemblies next Monday, and let you know

      Ok, sounds good, let us know

      >I love the comment in the “Instance fields should be prefixed with a ‘m_’” rule!


  • danielsaidi 4:49 pm on August 28, 2011 Permalink | Reply
    Tags: , editorblockfor, editorfor, , html helper, labelfor,   

    EditorBlockFor HTML helper 

    In ASP.NET MVC, Microsoft has done a great job with the various HTML helpers that can be used in a form context, such as LabelFor, EditorFor, ValidationMessageFor etc.

    However, despite these helpers, the HTML markup still tends to become rather tedious and repetitive. For instance, this HTML generates a form that can be used to create groups in a web application that I am currently working on:

        @using (Html.BeginForm())
        {
            <div class="editor-label">
                @Html.LabelFor(model => model.Name)
            </div>
            <div class="editor-field">
                @Html.EditorFor(model => model.Name)
                @Html.ValidationMessageFor(model => model.Name)
            </div>
            <div class="editor-label">
                @Html.LabelFor(model => model.CollectionName)
            </div>
            <div class="editor-field">
                @Html.EditorFor(model => model.CollectionName)
                @Html.ValidationMessageFor(model => model.CollectionName)
            </div>
            <div class="form-buttons">
                <input type="submit" value="@this.GlobalResource(Resources.Language.Create)" />
            </div>
        }

    That is quite a lot of code for handling two single properties…and the two editor blocks look rather similar, don’t you think?

    I therefore decided to write a small HTML helper extension method – EditorBlockFor – that can be used to generate an editor block (label, editor and validation message).
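    A sketch of how such an extension method could be implemented – this illustrates the approach rather than my exact implementation:

```csharp
using System;
using System.Linq.Expressions;
using System.Web;
using System.Web.Mvc;
using System.Web.Mvc.Html;

public static class HtmlHelperExtensions
{
    // Renders the label, editor and validation message for a model property,
    // wrapped in the same divs that the auto-generated MVC form code uses.
    public static IHtmlString EditorBlockFor<TModel, TProperty>(
        this HtmlHelper<TModel> helper,
        Expression<Func<TModel, TProperty>> expression)
    {
        var html =
            "<div class=\"editor-label\">" + helper.LabelFor(expression) + "</div>" +
            "<div class=\"editor-field\">" +
            helper.EditorFor(expression) +
            helper.ValidationMessageFor(expression) +
            "</div>";
        return new HtmlString(html);
    }
}
```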

    Using this new helper, the resulting form becomes a lot shorter and a lot easier to handle:

        @using (Html.BeginForm())
        {
            @Html.EditorBlockFor(model => model.Name)
            @Html.EditorBlockFor(model => model.CollectionName)
            <div class="form-buttons">
                <input type="submit" value="@this.GlobalResource(Resources.Language.Create)" />
            </div>
        }

    As you can see, the method is only useful if you want to follow the conventions used by the auto-generated ASP.NET MVC form code. But if you do…you can save a lot of keystrokes.

    I am not that familiar with the MvcHtmlString type, which the native methods return, so returning an IHtmlString instead of an MvcHtmlString could be a big no-no that I do not know about.

    Please let me know if I have ruined the order of the universe.

  • danielsaidi 10:06 pm on August 10, 2011 Permalink | Reply
    Tags: assemblyinfo, sharedassemblyinfo   

    How to put shared .NET solution assembly information into one single file 

    When working with .NET solutions that contain several projects, I have found it to be a real hassle to manage version numbers and other shared information for the various assemblies. Each time I want to change the version, I have had to open the AssemblyInfo.cs file of each project and edit the changed information.

    Well, from now on, it’s going to be a walk in the park, since I finally sat down for ten minutes and figured out how to share assembly information between projects. Ten minutes, that’s all it took to find the piece of info I needed and apply it to my .NET Extensions library.

    So, how to do it? The answer is here, thanks Jeremy!

    In short, the secret is to create a shared assembly information file (e.g. SharedAssemblyInfo.cs in the solution root folder or another solution folder) and link it into each project. To keep things tidy, move the link into the Properties folder of each project after adding it. Once the shared information file is in place, make sure to delete any duplicated information from each project's original AssemblyInfo.cs file.
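    For reference, "linking" a file in Visual Studio (Add > Existing Item > Add As Link) ends up as a Compile item with a Link element in each .csproj file – the paths below are illustrative:

```xml
<ItemGroup>
  <Compile Include="..\SharedAssemblyInfo.cs">
    <Link>Properties\SharedAssemblyInfo.cs</Link>
  </Compile>
</ItemGroup>
```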

    That is all there is to it. Quite simple, right?

  • danielsaidi 10:40 pm on July 24, 2011 Permalink | Reply
    Tags: clone, , copy,   

    Cloney – clone your .NET solutions in no time 

    When working with .NET, I sometimes find myself wanting to clone a solution. I am not talking about code duplication, although I know that cloning a .NET solution also means duplicating code, but just hear me out.

    For instance, you may want to clone a similar project or a project stub, where you can reuse code that should not be extracted into a base library, a convenient, frequently used project structure, 3rd party component setup etc.

    Even if you have never felt this need, feel free to read on if you find the topic interesting 🙂

    In my opinion, the biggest problem with cloning a .NET solution by copying it to a new folder, is that you have to replace everything that has to do with the old namespace. For instance, if you have a solution called X, where X is the base namespace, X can contain several projects, such as X.Core, X.Domain etc. If you clone X and call the clone Y, the solution must be renamed along with all projects and everything that relates to the old X namespace.

    I therefore decided to create a small application that makes cloning a .NET solution a walk in the park. It is currently just a try-out beta that can be downloaded at https://danielsaidi.github.com/Cloney or at http://github.com/danielsaidi/cloney

    With Cloney, you just have to point out a source folder that contains the solution you want to clone, as well as a target folder to where you want to clone the solution. When you then press “Clone”, Cloney will:

    • Copy all folders and files from the source folder
    • Ignore certain folders, such as bin, obj, .git, .svn, _Resharper*
    • Ignore certain file types, such as *.suo, *.user, *.vssscc
    • Replace the old namespace with the new one everywhere
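    The core of such a tool is a recursive copy that renames as it goes. A minimal sketch of the idea – this is not Cloney's actual source, and for brevity it treats every file as text, which a real implementation must not:

```csharp
using System;
using System.IO;
using System.Linq;

public static class CloneSketch
{
    static readonly string[] IgnoredFolders = { "bin", "obj", ".git", ".svn" };
    static readonly string[] IgnoredExtensions = { ".suo", ".user", ".vssscc" };

    public static void Clone(string source, string target, string oldNs, string newNs)
    {
        Directory.CreateDirectory(target);
        foreach (var file in Directory.GetFiles(source))
        {
            if (IgnoredExtensions.Contains(Path.GetExtension(file)))
                continue;
            // Replace the old namespace in both the file name and the file content
            var newName = Path.GetFileName(file).Replace(oldNs, newNs);
            File.WriteAllText(
                Path.Combine(target, newName),
                File.ReadAllText(file).Replace(oldNs, newNs));
        }
        foreach (var dir in Directory.GetDirectories(source))
        {
            var name = Path.GetFileName(dir);
            if (IgnoredFolders.Contains(name) || name.StartsWith("_Resharper"))
                continue;
            Clone(dir, Path.Combine(target, name.Replace(oldNs, newNs)), oldNs, newNs);
        }
    }
}
```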

    You then end up with a fresh, clean solution without a trace of old settings, version control-related folders and files etc.

    Feel free to download Cloney and give it a try. If you like it, let me know. If you hate it…well, I guess you should let me know that as well.

    • Adam Webber 3:50 am on December 27, 2014 Permalink | Reply

      Smart utility! Microsoft should simply buy out the program and install ‘Cloney’ in each instance of Visual Studio. I was able to simply name the target folder of destination. No pre-make directory required! Source folder was easy to navigate w built in finder. Once satisfied with name of folder source, and name of folder target, the program took over and performed the perfect clone. Saved me hours of porting over via add existing files, etc. Simply point to your My Websites directory, select the name of your source and assign the name to your target. Can’t get much simpler than that! 4-stars for ease of use … 5-stars if I experience no bugs when working on my new solution.

      • danielsaidi 5:40 pm on January 17, 2015 Permalink | Reply

        So happy to hear that Cloney worked so well for you. Was it a 5 star experience in the end? 🙂

    • Adam Webber 9:11 pm on January 18, 2015 Permalink | Reply

      Still working w new target … no bugz yet … Must be a winner!

    • Ray 6:01 pm on February 15, 2015 Permalink | Reply

      I have used Cloney again and again and it works like a charm every time. Thanks for sharing such a great utility. I love the simplicity and the fact that IT WORKS. Thanks again.

    • Will 5:00 am on February 19, 2016 Permalink | Reply

      Maybe I’m missing something, but it’s just copying the solution and changing the solution file name – none of the project names or namespaces are changing. The solution is named X while the projects are named X.Web, X.Api, etc. I was expecting it to change the solution to Y and the projects/namespaces to Y.Web, Y.Api. I checked the command line parameters and I didn’t see any switches to change project/namespaces. This is a VS2015 solution…could that be why?

      • danielsaidi 10:54 pm on March 1, 2016 Permalink | Reply

        Hi Will! No, in that case something is wrong. It replaces the solution name everywhere it appears – files, folders, namespaces, projects etc. So, if your solution is named Solution1 and has two projects – Solution1.Foo and Solution1.Bar, and you clone it to a folder called Solution2, the new solution should be named Solution2 and the projects Solution2.Foo and Solution2.Bar.

        It seems like you have named your projects in that manner and that you have read up on how the cloner works, but it really should not depend on any VS version, since it only sweeps through the file system. I will have a look whenever I find the time. Thanks for getting in touch!

  • danielsaidi 1:48 pm on July 7, 2011 Permalink | Reply
    Tags: il instructions,   

    NDepend and IL instructions 

    I am currently running NDepend on various .NET solutions, to get into the habit of interpreting the various metric results.

    Today, NDepend warned me that a GetHtml method was too complex. It had 206 IL instructions, where 200 is NDepend’s default upper limit. Since this was the only method that NDepend considered too complex, I decided to check it out.

    The method was quite straightforward, but one obvious improvement would be to replace all conditional appends. However, this only reduced the number of IL instructions by one per removal and resulted in a method with almost twice as many lines as the original.

    I therefore decided to restore the method to its original state and look elsewhere.

    Next, an IsNullOrEmpty string extension method turned out to be a small contributor. Instead of being a plain shortcut to String.IsNullOrEmpty, it automatically handled string trimming as well, which made it a bit more complex.

    I now realized that this auto-trimming was crazy, bad and really, really wrong, so I decided to rewrite the method and watched the number of IL instructions drop to 202. Still, I needed to improve the code further.
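    After dropping the auto-trimming, the extension becomes a plain pass-through – something like this sketch:

```csharp
public static class StringExtensions
{
    // A plain shortcut to String.IsNullOrEmpty, without any trimming magic.
    public static bool IsNullOrEmpty(this string str)
    {
        return string.IsNullOrEmpty(str);
    }
}
```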

    Another improvement was to tweak how comparisons were made. After re-running the analysis, I had a green light! The number of instructions was now 197. NDepend was satisfied. Still…197? That seems like a lot of instructions for a method with 15 lines of code.

    After some internal discussions, a colleague of mine suggested that I should extract the various attribute handlings to separate methods…and the number of IL instructions dropped to 96.
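    To illustrate the kind of refactoring that made the difference, here is a made-up example (not the actual GetHtml method) of extracting the attribute handling into a helper:

```csharp
using System.Text;

public static class HtmlBuilder
{
    // The calling method stays short; each conditional append
    // now lives in a single, reusable helper.
    public static string GetHtml(string name, string cssClass, string title)
    {
        var sb = new StringBuilder("<div");
        AppendAttribute(sb, "name", name);
        AppendAttribute(sb, "class", cssClass);
        AppendAttribute(sb, "title", title);
        return sb.Append("></div>").ToString();
    }

    private static void AppendAttribute(StringBuilder sb, string attribute, string value)
    {
        // Only append the attribute if it has a value
        if (string.IsNullOrEmpty(value))
            return;
        sb.Append(' ').Append(attribute).Append("=\"").Append(value).Append('"');
    }
}
```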

    Can we reduce them further? 🙂
