Tagged: ndepend

  • danielsaidi 9:03 pm on January 20, 2012 Permalink | Reply
    Tags: afferent coupling, cvs, efferent coupling, kinect, ndepend, stack overflow, tim huckaby, windows 8

    Øredev 2011 in the rear-view mirror – Part 5 

    Øredev logo

    This is the fifth part of my sum-up of Øredev 2011. Read more by following the links below:

    So, yeah…this sum-up was supposed to be a rather short thing, but it has grown out of proportion. I will try to keep it short, but the sessions deserve to be mentioned.

     

    2:7 – Greg Young – How to get productive in a project in 24h

    In his second session at Øredev, Greg spoke about how to kick-start yourself in new projects. First of all, do you understand what the company does? Have you used the product or service before? If you do not have an understanding of these fundamental facts, how will you deliver value?

    Then, Greg described how he quickly gets up in the saddle. He usually starts off by inspecting the CVS. Projects that have been around for a while and still have tons of check-ins could well be suffering from a lot of bugs. Certain areas that are under the burden of massive amounts of check-ins could be bug hives.

    Note these things; they can quickly tell you where in the project it hurts. Naturally, a lot of check-ins does not automatically indicate bugs or problems. The team could just be developing stuff. However, it will give you someplace to start. At the very least, a lot of check-ins means that someone is working in that particular part of the project.

    These are very simple steps to take, but being able to discuss the project after just an hour or so with the CVS, and maybe even pin-point some problems, will give your customer the impression that you are almost clairvoyant…or at least that you know what you are doing, which is why they pay you.

    If the project is not using continuous integration, at least set it up locally. It does not take long and will help you out tremendously. To be able to yell out to someone who breaks the build the second they do it…well, it will give you pleasure at least.

    Greg then went on to demonstrate how you can dig even deeper, and his tool of the day was NDepend. Greg’s demo was awesome, and Patrick should consider giving him…well, a hug at least. I, who demonstrated NDepend in my organization a while back without much success, could quickly tell that I have a long way to go when it comes to presenting a tool to people.

    With NDepend, Greg demonstrated how to use the various metrics, like cyclomatic complexity and afferent/efferent coupling. He went through the various graphs, describing what they do and how they can be used, and told us to especially look out for black squares in the dependency matrix (they indicate circular references) and concrete couplings (they should be broken up).

    All in all, a very, very good session that also gave me a lot of things to aim for when holding presentations myself. As a consultant, you should not miss this video.

     

    3:1 – Keynote – Jeff Atwood – Stack Overflow: Social Software for the Anti-Social Part II: Electric Boogaloo

    I will not attempt to cover everything said in this keynote. Instead, you should go here and wait for the video. It is filled with fun gems, like when Jeff describes how stuff that is accepted in a web context would be really strange if applied in real life. For instance, FB lets you keep a list of friends. Who has a physical list of friends IRL?

    Anyway, Jeff spoke about gamification and how we can design our services like games, using a set of rules to define how they are meant to be used, awarding those who follow the rules…and punishing the ones who do not. The basic premise is that games have rules and games are fun…so if we design our web sites as games, they should become fun as well.

    Well, at the very least, rules drastically simplify how we are supposed to behave. They tell us what to do. Sure, it does not work for all kinds of sites, but for social software, gamification should be considered. Games in general make social interaction non-scary, since everyone has to conform to the rules. Just look at the world, and you will know that this is true.

    So, when designing Stack Overflow, Jeff and Joel did so with gamification in mind. You may not notice it at first, but everything there is carefully considered. For instance, people used to complain that you cannot add a new question from the start page. This is intentional. Before you add a question, Stack wants you to read other questions, see how people interact and learn the rules.

    Stack adopts several concepts from the gaming world. Good players are awarded achievements and level up as they progress. There are tutorials, unlockables etc. Without first realizing it, Jeff and Joel ended up creating a Q&A game that consists of several layers:

    • The game – ask and answer questions
    • The meta-game – receive badges, level up, become an administrator etc.
    • The end-game – make the Internet a little better

    This design makes it possible for Stack Overflow to allow anonymous users, unlike Facebook, which decided to only allow real names in order to filter out the “idiots”. Since Stack Overflow awards good players, bad players are automatically sorted out. The community is self-sanitizing. People are awarded admin status if they play well enough. It’s just like Counter Strike, where you are forced to be a team player. If you are not, the game will kill you 🙂

    I could go on and on, but Jeff says it best himself. Although some parts are simply a shameless Stack Overflow commercial, I recommend checking out the video.

     

    3:2 – Tim Huckaby – Building HTML5 Applications with Visual Studio 11 for Windows 8

    Tim has worked with (not at) Microsoft for a loooong time and is one charismatic guy, I must say. What I really appreciated about his session was that it seemed a bit improvised, unlike most sessions at Øredev. What I did not like quite as much, though, was that it seemed too improvised. Due to lack of time and hardware issues, Tim failed to demonstrate what I came to see – HTML5 applications with VS11.

    Tim began by stating that he hates HTML…but that he loves HTML5, which is “crossing the chasm”. This means that it is a safe technology to bet on, because it will be adopted. How do we know? Well, the graph below illustrates when a technology is “crossing the chasm” in relation to how people adopt it:

    The Chasm Graph :)
    So when a technology is “crossing the chasm”, get to work – it will be used 🙂 I wonder how the graph would have looked for HD-DVD? Tim also thanked Apple for inventing the iPad (which he calls a $x couch computer). Thanks to the iPhone and the iPad, Flash and plugins are out and HTML5 is in.

    Large parts of the session were fun anecdotes, like when he spoke about how Adobe went out with a “we <heart> Apple” campaign and Apple responded with a “we <missing plugin> Adobe”. Hilarious, but did we learn anything from these anecdotes? Well, time will tell.

    Tim went through some browser statistics, explained why IE6 is still so widely used (damn those pirated copies of Win XP in China)…and ended up with some small demos, but faced massive hardware problems and promised us some more meat if we stayed a while. I stayed a while (I even attended the next Tim session), but the demos were not that wow.

    So, how did Tim do in his second session? Read on!

     

    3:3 – Tim Huckaby – Delivering Improved User Experience with Metro Style Win 8 Applications

    Tim started this session by talking about NUI – Natural User Interfaces – and some new features of Windows 8, like semantic zoom, a desktop mode behind Metro (it looks great, just like Win 7!), smart touch and…a new task manager (he was kinda ironic here).

    Tim demonstrated Tobii on a really cool laptop with two cameras, which allows it to see in 3D. The rest of the session was…enjoyable. I cannot put my finger on it, but I had fun, although I was disappointed at what was demonstrated. The Kinect demo was semi-cool, a great Swedish screen was also interesting, and Tim also hinted at how the new XBOX Loop and a new Kinect will become a small revolution.

    I really do not know what to say about this. Watch the video. You will have fun.

     
  • danielsaidi 4:12 pm on December 8, 2011 Permalink | Reply
    Tags: circular namespace dependencies, implementation, ndepend

    NDepend getting my juices flowing 

    I have been using NDepend to analyse the latest version of .NExtra. The code is spotless (this is where you should detect the irony), but the analysis is highlighting some really interesting design flaws that I will probably fix in the next major version.

    First of all, almost all cases where method or type names are too long occur where I have created facades or abstractions for native .NET entities, like Membership. Here, I simply add exceptions so that NDepend will not raise any warnings, since the whole point of the facades is that they should mirror the native classes, to make it easy to switch them out for your own implementations, mocks etc.

    Second, NDepend warns about naming conventions that I do not agree with, such as that instance field names should start with m_. Here, I simply remove the rules, since I do not find them valid. In hindsight, though, I could have kept them and rewritten them to check that my own conventions are followed instead. I will probably do so later on.

    Third, and the thing I learned the most from today, was that I turned out to have circular namespace dependencies, despite having put a lot of effort into avoiding circular namespace and assembly dependencies. The cause of the circular dependencies turned out to be between each X namespace and its X/Abstractions sub-namespace.

    Circular dependency graph

    The base and abstraction namespaces depend on each other

    For instance, have a look at NExtra.Geo, which contains geo location utility classes. The namespace contains entities like Position and enums like DistanceUnits, as well as implementations of the interfaces in the Abstractions sub-namespace.

    Now, what happens is that the interfaces define methods that use and return types and entities from the base namespace. At the same time, the implementations in the base namespace implement the interfaces in the Abstractions namespace. And there you go: circular namespace dependencies.
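
    To make the cycle concrete, here is a minimal sketch of the situation. The namespaces and the Position entity come from NExtra.Geo, but the interface and member names are made up for illustration:

    // Entities and implementations live in the base namespace.
    namespace NExtra.Geo
    {
        public class Position
        {
            public double Latitude { get; set; }
            public double Longitude { get; set; }
        }
    }

    // The abstractions need the base namespace to describe their members...
    namespace NExtra.Geo.Abstractions
    {
        using NExtra.Geo;

        public interface IDistanceCalculator
        {
            double CalculateDistance(Position from, Position to);
        }
    }

    // ...and the base namespace needs the abstractions to implement them.
    namespace NExtra.Geo
    {
        using NExtra.Geo.Abstractions;

        public class DistanceCalculator : IDistanceCalculator
        {
            public double CalculateDistance(Position from, Position to)
            {
                // Real distance math omitted; the dependency cycle is the point.
                return 0;
            }
        }
    }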

    Now, in this particular case, I do not think that it is too big a deal, but it highlights something that has annoyed me a bit while working with the .NExtra library.

    In the library’s unit tests, the test classes only know about the interfaces, but the setup method selects either a concrete implementation or a mock for the various interfaces. For an example, look here. This forces me to reference both the base namespace and the abstractions namespace. Once again, this is not that bad, but it raises the question…
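
    In sketch form, reusing the hypothetical types from the example above and assuming an NUnit-style test, both namespaces end up among the usings:

    using NExtra.Geo;                // concrete implementation
    using NExtra.Geo.Abstractions;   // interface
    using NUnit.Framework;

    [TestFixture]
    public class DistanceCalculatorTests
    {
        // The test class only knows about the interface...
        private IDistanceCalculator calculator;

        [SetUp]
        public void SetUp()
        {
            // ...but the setup has to pick a concrete type (or a mock),
            // which drags in the base namespace as well.
            calculator = new DistanceCalculator();
        }

        [Test]
        public void CalculateDistance_ReturnsZero_ForSamePosition()
        {
            var position = new Position { Latitude = 55.6, Longitude = 13.0 };

            Assert.AreEqual(0, calculator.CalculateDistance(position, position));
        }
    }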

    In the next minor version of .NExtra, I will probably get rid of the abstraction namespaces, since they add nothing but complexity. Sure, they tuck away all interfaces, but why should they?

     
    • Daniel Lee 6:40 pm on December 8, 2011 Permalink | Reply

      Were you inspired by Greg Young’s session at Öredev? I have to get into NDepend as well. Did you buy a license or are you testing out the trial version?

    • danielsaidi 9:30 am on December 9, 2011 Permalink | Reply

      I have been using NDepend for a while, but yeah, Greg’s session opened my eyes to other ways of looking at the metrics. And even if I remove some rules (like that fields should start with m_, static members with s_ etc.), it is really striking how some of the metrics highlight design flaws that are easily corrected.

  • danielsaidi 9:16 am on October 6, 2011 Permalink | Reply
    Tags: cql, ndepend, static

    Tweaking the NDepend CQL rules to leverage awesome power 

    I have previously written about automating and scheduling NDepend for a set of .NET solutions.

    After getting into the habit of using NDepend to find code issues instead of going through the code by hand (which I will still do, but a little help does not hurt), the power of CQL is really growing on me.

    For instance, one big problem that I have wrestled with is that our legacy code contains static fields for things that should have been non-static properties. In a web context. Enough said.
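
    To illustrate why this hurts in a web context (a hypothetical example, not taken from the actual legacy code): a static field is shared by every request that hits the application, so state can leak between users:

    public class OrderService
    {
        // Bad: one value shared across all requests and threads in the
        // web application, so one user's data can leak into another's.
        private static string currentCustomerId;

        // Better: instance state, created and thrown away per request.
        private string customerId;

        public void SetCustomer(string id)
        {
            customerId = id;
        }
    }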

    Prior to using CQL, I used to search for “static” across the entire solution, go through the search results (which, naturally, also included fully valid static methods and properties) and…well, it really did not work.

    Yesterday, when digging into the standard CQL rules to get a better understanding of the NDepend analysis, I noticed the following standard CQL:

    // <Name>Static fields should be prefixed with a 's_'</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     !NameLike "^s_" AND 
     IsStatic AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName AND 
     !IsEventDelegateObject 
    
    // This naming convention provokes debate.
    // Don't hesitate to customize the regex of 
    // NameLike to your preference.

    Although NDepend’s naming conventions do not quite fit my conventions, this rule is just plain awesome. I just had to edit the CQL to

    // <Name>Static fields should not exist...mostly</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     IsStatic AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName AND 
     !IsEventDelegateObject 
    
    // This naming convention provokes debate.
    // Don't hesitate to customize the regex of 
    // NameLike to your preference.

    and voilà – NDepend will now automatically find all static fields within my solution…and ignore any naming convention.

    Since this got me going, I also went ahead and modified the following rule

    // <Name>Instance fields should be prefixed with a 'm_'</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     !NameLike "^m_" AND 
     !IsStatic AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName AND 
     !IsEventDelegateObject 
    
    // This naming convention provokes debate.
    // Don't hesitate to customize the regex of 
    // NameLike to your preference.

    to instead require that fields are camel cased (ignoring the static condition as well):

    // <Name>Instance fields should be camelCase</Name>
    WARN IF Count > 0 IN SELECT FIELDS WHERE 
     !NameLike "^[a-z]" AND 
     !IsLiteral AND 
     !IsGeneratedByCompiler AND 
     !IsSpecialName AND 
     !IsEventDelegateObject

    Two small changes to the original setup, but awesomely helpful. Another great thing is that when you edit the queries in VisualNDepend, you get immediate, visual feedback on how the rule applies to the entire solution.

    So, now I can start tweaking the standard CQL rules so that they conform to my conventions. However, since my versions of the two rules above should apply to all future NDepend projects that I create from now on, is there some way to globally replace the standard CQL rules with my alternatives?

    I will investigate this further and write a blog post if I happen to solve it.

     
    • Patrick Smacchia 2:31 pm on October 6, 2011 Permalink | Reply

      >However, when looking at the two rules above, where my versions should apply to all future NDepend projects that I create from now on, is there some way to globally replace the standard CQLs with my alternatives?

      There is no simple way yet to share a set of custom rules across several projects.
      The options are:
      A) to copy/paste them across NDepend running instances,
      B) to tweak the simple .ndproj project XML somehow to share the rules.

      We are aware of this limitation and are working on a feature that will make this easier.

      • danielsaidi 10:41 pm on October 10, 2011 Permalink | Reply

        Hi Patrick! I just wanted to let you know that today’s analysis worked great, without any problems whatsoever. My computer started the bat script automatically, which checked out and built the latest source code, then ran an analysis for each of the seven solutions…and finally posted the result on an internal server. Good stuff! 🙂

    • danielsaidi 3:44 pm on October 6, 2011 Permalink | Reply

      It is really not a problem – if a rule addresses some convention that the code does not share, one will get numerous warnings and can then decide whether or not the warnings are an issue. Also, I have modified the rules a bit differently in some projects, so maybe it is a good thing that one has to do these modifications for each project.

      I will gather my rules in simple text files that I keep under version control, together with my NDepend projects. I will probably also add the more general ones as resources to my .NExtra project.

      This is fun! 🙂

  • danielsaidi 2:57 pm on October 5, 2011 Permalink | Reply
    Tags: ndepend, system architecture, task scheduler

    Scheduling NDepend for a set of solutions 

    In a project of mine, I use NDepend to continuously analyze a set of solutions that make up some of the software infrastructure of a major Swedish company.

    By scheduling the analyses to run once a week, using previous analyses as a baseline for comparison, I hope that this will make it easier to detect less favorable patterns that we want to avoid and pin-point good ones that we want to embrace.

    Although we use TeamCity as our build server, I have set up the scheduled analyses to run from my personal computer during this first test phase. It is not optimal, but for now it will do.

    The analyses are triggered from a simple bat script that does the following:

    • It first checks out each solution from TFS
    • It then builds each solution with devenv
    • It then runs a pre-created NDepend analysis for each solution
    • Each analysis is configured to publish the HTML report to a web server that is available to everyone within the project

    Once I had created the script, I scheduled it using the Task Scheduler. I set it to run every Monday morning at 8.30. Since it runs from my personal computer, I have to be early at work, but with two kids at home, I always am 🙂

    The scheduled script works like a charm. The analyses run each week and everyone is happy (at least I am). Already after the first analysis, we noticed some areas that we could modify to drastically improve the architecture, reduce branch/merge hell, code duplication etc.

    Who knows what we will find after some incremental analyses? It is exciting, to say the least!

    One small tweak

    During the experimentation phase, when the report generation sometimes did not work, I was rather annoyed that NDepend did not run a new analysis, since no code had changed. The solution was simple – under Tools/Options/Analysis, tell NDepend to always run a full analysis:

    In most cases, though, the default setting is correct, since it will run a full analysis at least once per day. However, in this case, I keep the “Always Run Full Analysis” selected for all NDepend projects.

    One final, small problem – help needed!

    A small problem that is still an issue is that my NDepend projects sometimes begin complaining that the solution DLLs are invalid…although they are not. The last time this happened (after the major architectural changes), it did not matter if I deleted and re-added the DLLs – the project still considered them invalid. I had to delete the NDepend projects and re-create them from scratch to make them work.

    Has anyone had the same problem, and any idea what this could be about? Why do the NDepend projects start complaining about newly built DLLs?

     
    • Patrick Smacchia 4:23 pm on October 5, 2011 Permalink | Reply

      One remark: the incremental analysis option is only valid in the standalone or VS add-in context, not in the build server context of running an analysis through NDepend.Console.exe.

      Next time it tells you an assembly is invalid, take your machine and shake it (but please stay polite with it)!

      If it still doesn’t work, go to the NDepend project Properties panel > Code to analyze > invalid assemblies should appear with a red icon; hovering over an invalid assembly with the mouse will show a tooltip that explains the problem (the problem will also be shown in the info panel).

      My bet is that several different versions of an invalid assembly are present in the set of the ndproj project dirs, where NDepend searches for assemblies (hence NDepend doesn’t know which version to choose).

      • danielsaidi 8:19 am on October 6, 2011 Permalink | Reply

        Kudos for your fast response, Patrick!

        Since I run the analyses locally, the incremental analysis option will apply if I do not disable it. However, the fact that ND will still run a full analysis at least once a day means that I could enable the option again after the initial setup phase.

        I had a looong look at the failing assemblies prior to writing this post. ND complained that multiple versions of some assemblies existed, even after I had confirmed that the specified paths did not contain any such duplicates. After I recreated the NDepend project and re-added the assemblies, everything worked once again.

        I will have a look at how ND handles the assemblies next Monday, and let you know. I have an SSD in my machine, so I’ll first try to give it a rather rough shake 🙂

        Other than that, I am looking forward to start modifying the CQL rules now. I love the comment in the “Instance fields should be prefixed with a ‘m_'” rule! 🙂

    • Patrick Smacchia 8:46 am on October 6, 2011 Permalink | Reply

      >I will have a look at how ND handles the assemblies next Monday, and let you know

      Ok, sounds good, let us know

      >I love the comment in the “Instance fields should be prefixed with a ‘m_’” rule!

      🙂

  • danielsaidi 1:48 pm on July 7, 2011 Permalink | Reply
    Tags: il instructions, ndepend   

    NDepend and IL instructions 

    I am currently running NDepend on various .NET solutions, to get into the habit of interpreting the various metric results.

    Today, NDepend warned me that a GetHtml method was too complex. It had 206 IL instructions, where 200 is NDepend’s default upper limit. Since this was the only method that NDepend considered too complex, I decided to check it out.

    The method was quite straightforward, but one obvious improvement was to replace all conditional appends. However, this only reduced the number of IL instructions by one per removal and resulted in a method with almost twice as many lines as the original one.

    I therefore decided to restore the method to its original state and look elsewhere.

    Next, an IsNullOrEmpty string extension method turned out to be a small contributor. Instead of being a plain shortcut to String.IsNullOrEmpty, it automatically handled string trimming as well, which made it a bit more complex.

    I now realized that this auto-trimming was crazy, bad and really, really wrong, so I decided to rewrite the method and watched the number of IL instructions drop to 202. Still, the code needed further improvement.
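
    A sketch of the kind of rewrite involved (the actual .NExtra signature may differ; this only illustrates dropping the trimming behavior):

    public static class StringExtensions
    {
        // Before (sketch): the extension silently trimmed the string,
        // doing more than its name promises.
        public static bool IsNullOrEmptyWithTrim(this string value)
        {
            return value == null || value.Trim().Length == 0;
        }

        // After: a plain shortcut to String.IsNullOrEmpty.
        public static bool IsNullOrEmpty(this string value)
        {
            return string.IsNullOrEmpty(value);
        }
    }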

    Another improvement was to tweak how comparisons were made. After re-running the analysis, I had a green light! The number of instructions was now 197, and NDepend was satisfied. Still…197? That seems like a lot of instructions for a method with 15 lines of code.

    After some internal discussions, a colleague of mine suggested that I extract the various attribute handlings into separate methods…and the number of IL instructions dropped to 96.
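
    In sketch form, this is the kind of extraction we ended up with (the GetHtml method is real, but the class, properties and helper names here are made up):

    using System.Text;

    public class HtmlElement
    {
        public string Id { get; set; }
        public string CssClass { get; set; }

        public string GetHtml()
        {
            var sb = new StringBuilder("<div");

            // Each attribute is handled by a small helper instead of a long
            // chain of conditional appends inside GetHtml itself.
            AppendIdAttribute(sb);
            AppendCssClassAttribute(sb);

            sb.Append(" />");
            return sb.ToString();
        }

        private void AppendIdAttribute(StringBuilder sb)
        {
            if (!string.IsNullOrEmpty(Id))
                sb.AppendFormat(" id=\"{0}\"", Id);
        }

        private void AppendCssClassAttribute(StringBuilder sb)
        {
            if (!string.IsNullOrEmpty(CssClass))
                sb.AppendFormat(" class=\"{0}\"", CssClass);
        }
    }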

    Can we reduce them further? 🙂

     
  • danielsaidi 8:45 pm on October 7, 2010 Permalink | Reply
    Tags: assembly metrics, constraints, instability, metrics, ndepend, review, type metrics

    Getting started with NDepend 

    So, after quite some time, I have finally got my thumb out and added an NDepend project to my new .NET Extensions 2.0 solution, in order to get some analyzing done before releasing it.

    The first thing that hit me was how easy it was to attach a new NDepend project to my solution. I just had to:

    1. Install Visual NDepend
    2. Open my .NET Extensions solution
    3. Under the new “NDepend” main menu item, select “Attach new NDepend project to solution”
    4. Select all projects of interest (I chose all of them)
    5. Press the OK button and pray to god.

    Once NDepend is attached to your solution, the menu will change and look like this:

    The NDepend menu

    …but before that, it will perform a first-time analysis of all projects that it is set to handle. This is done automatically, so just bind NDepend to your solution, and it will perform the analysis…

    …after which Firefox (or, naturally, your default browser of choice) will come to life and display a complete analysis summary, which is generated in a folder called NDependOut:

    The generated NDependOut folder

     

    So, what does the report have to say?

    Report sections

    The report is divided into several sections, some of which are (to me) more interesting than others.

    The various sections of the NDepend report

     

    Application metrics

    Well…first of all, a complete textual summary of application metrics is presented:

    The NDepend Application Metrics summary

     

    Now, this summary contains a couple of interesting metrics.

    Note, for instance, the comment ratio (51%). I have always taken pride in commenting my code, but lately, I have focused on writing readable code instead 🙂 However, I have decided to overdo the commenting in this project, since it must be understandable for users who only get their hands on the DLL.

    Since .NET Extensions is mainly an extension project, I think this summary is quite what I expected…even if I maybe could do with a few more interfaces. Note that not much is going on “under the hood” – almost everything is public (in some cases for unit test purposes, which I will change #beginner-error).

    Also, since I am of the belief that one should NEVER work directly against an object’s fields, I am happy that I have no public fields at all 🙂
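
    In other words, the style I aim for looks something like this (a trivial sketch, not actual .NET Extensions code):

    public class Settings
    {
        // A public field would expose internal state directly, so the
        // field stays private and a property wraps it instead.
        private int timeout;

        public int Timeout
        {
            get { return timeout; }
            set { timeout = value; }
        }
    }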

    The last row (cut off) displays the method/function with the worst cyclomatic complexity:

    The worst CC in the solution

    However, when I analyze the method with the Visual Studio analyzer, it says that the method has a CC of 13! It turns out that 25 is the ILCC (Intermediate Language Code Complexity) – the CC of the generated IL rather than of the source code. However, I will write a new blog post later on, in which I’ll use this information to improve the GetHtml() method. Stay tuned 🙂

     

    Assembly metrics + abstraction/stability summary

    After the interesting application metrics summary come some assembly metrics (also quite interesting), as well as information about the stability of the different assemblies.

    First of all, everything is presented in a textual grid:

    The NDepend Assembly metrics table

    This information is then displayed in various graphical components, such as the Visual NDepend view (in Visual Studio, you can use NDepend to navigate through this view)…

    The Visual NDepend View

    …as well as the Abstractness vs. Instability view…

    The Abstractness vs. Instability view

     

    Now, let’s stop for a moment and discuss this graph. The word “instability” first made me feel like I had written the worst piece of junk there is, but I think that the word is quite misleading.

    As I’ve mentioned, the analyzed solution consists of a lot of extension and helper classes, which are almost never independent – they mostly depend on other base classes, since that is their purpose. If I have understood the term “instability” correctly, this is exactly what it means. The solution is unstable since it depends on a lot of other components…

    …but for this kind of solution, it is hard to have it any other way. After reflecting over the graph a bit, and enjoying the green color (except for the so far empty build project), I understood what the view intends to display.

     

    Dependencies, build order etc.

    This part of the report is probably a lot more interesting if you intend to delve into a solution whose development you have not been a part of earlier on.

    However, for this particular situation, this part of the report really did not give me anything that I did not already know.

     

    Amazing final part – constraints

    Finally, NDepend displays an amazing part where the code is evaluated against all existing constraints.

    For instance…

     

    One of the vast number of constraint summaries

     

    …this part displays a constraint that selects all functions that:

    • Has more than 30 lines of code OR
    • Has more than 200 IL instructions OR
    • Has a cyclomatic complexity over 20 OR
    • Has an IL cyclomatic complexity over 50 OR
    • Has an IL nesting depth that is larger than 4 OR
    • Has more than 5 parameters OR
    • Has more than 8 variables OR
    • Has more than 6 overloads

    For instance, the first item in the list – Split() – is there because it has more than 8 variables.

    Some of the default constraints are perhaps a bit harsh, but most of them are really useful. Just having a look at these constraints, and at how your code measures up against them, gives you a deeper understanding of how you (or your team) write code.

     

    Type metrics

    Finally comes an exhaustive, thorough grid with ALL the information you can imagine about every single type in the solution.

    Type metrics

    The “worst” cells in each category are highlighted, which makes it really easy to get an overview of the entire framework…although the information is quite massive.

     

    Conclusion

    Well, having only connected an NDepend project to my solution, I have barely scratched the surface of what NDepend can offer. To be able to extract this much info by just pressing a button is quite impressive.

    I wrote this blog post yesterday, and have rewritten large parts of it today. During that time span, my stance towards it has shifted a bit.

    Yesterday, since I did not yet understand parts of the report, I was under the impression that your own hobby project is not the best context in which to use NDepend…and that it comes to better use when working in a role (e.g. tech lead / lead developer) that requires you to quickly extract data about the systems for which you are responsible. In such a context, NDepend is greaaat.

    However, after getting some time to “feel” how NDepend works for me as a developer, I have started to see benefits even for a solution such as this extensions solution. As I will show in a future blog post, I can use the information I extract from NDepend to detect the “worst” parts of my framework…which makes it easy to adjust them, re-analyze them and see how my implementation improves.

    It is a bit like comparing my iPhone with my iPad. ReSharper was like my phone – as soon as I started using it, I could not live without it. NDepend, on the other hand, is much like the iPad. At first, I really could not see the use, but after some time, it finds its way right into your day-to-day life and…after a while…becomes natural.

    I will dig myself further down into NDepend. Stay tuned.

     
    • danielsaidi 9:11 pm on October 7, 2010 Permalink | Reply

      As you can see, this theme sucks for wide content…and code. Any advice?

    • Mattias 10:31 pm on October 7, 2010 Permalink | Reply

      The mobile theme on my iPhone doesn’t make it better. 😉

    • Mattias 6:59 am on October 8, 2010 Permalink | Reply

      Have you tried google.com? 😉

    • danielsaidi 8:06 am on October 8, 2010 Permalink | Reply

      Nooooo, but I often get letmegooglethatforyou.com links from my friends and colleagues 😉 Hmmm…maybe it’s worth paying those dollars to WordPress to unlock the custom CSS feature. 😛

  • danielsaidi 9:45 am on August 16, 2010 Permalink | Reply
    Tags: ndepend

    Getting started with NDepend 

    I have just installed NDepend v3 Professional and will have a look at it. It seems to contain some cool features (and the integration with Visual Studio is slick) but I will look for some good tutorials that can help me get started.

    The features video that is available at the NDepend web site is quite thorough (although the voice seems to be generated), but if anyone has worked with NDepend v3, feel free to provide me with information about how to best make use of this seemingly great tool.

    Now, let’s analyze some previous code and find flaws that R# has missed 🙂

     
  • danielsaidi 8:01 am on August 3, 2010 Permalink | Reply
    Tags: ndepend   

    Getting started with NDepend 

    I have been meaning to check out NDepend for quite some time, but now is finally the time to do so. Although I am excited to check out the new (well, fairly new) v3, I will start with actually reading…not so common these days 🙂

    This blog post by Patrick is a couple of years old, but the introduction that he mentions is, I believe, a good start to get an idea of the main features. After that, the new features of v3 will not be too far away. I will let you know how things progress.

     