Updates from danielsaidi

  • danielsaidi 8:31 am on January 24, 2017 Permalink | Reply  

    This blog is moving 

    I have finally gotten around to moving this blog to my own website, where I get much more control over my code-focused posts. You can find the new blog at my personal site:


    While WordPress has a bunch of great tools for blogging, being able to write posts in plain text and Markdown will be a real boost for me.

    I will delete blog posts as I move them. All discussions will be copied over with the blog.

  • danielsaidi 2:10 pm on May 4, 2012 Permalink | Reply
    Tags: code, source code

    Posting source code on a WordPress hosted blog 

    As you probably have noticed, this is a blog about code…mostly. I post a lot of code here…but the code parts of the blog have been terribly formatted.

    The reason for this is that I host my blog at wordpress.com. This means that I operate under some limitations, such as only getting to select from a set of predefined themes (granted, there are tons of themes to choose from), not being able to add my own plugins etc. So, when it comes to code, I have not found a plugin that could handle the code formatting. To make the code stand out, I have instead wrapped it in pre tags and highlighted it with a distinct color, but I have disliked that approach a lot.

    A couple of weeks ago, however, I found this blog post by the WordPress team themselves. It turns out that posting source code and getting it formatted correctly is a walk in the park, using the sourcecode markup.

    By wrapping your code in a block like this:

    [code language="css"]
    your code here
    [/code]

    WordPress will format the source code correctly. There are a number of languages to choose from, such as C#, Java, JavaScript, Python, CSS and XML. See the original blog post for examples.

    Thanks, WordPress!

  • danielsaidi 8:28 pm on April 16, 2012 Permalink | Reply
    Tags: xcode

    Adding older iOS simulators to Xcode 

    When developing my latest app, I have a device and a simulator that run iOS 4.1. However, I also have to be able to test the app on older iOS versions.

    Luckily, you can install more iOS simulators. In Xcode, just choose

    Xcode > Preferences > Downloads

    Here, you can install more simulators, debuggers etc.

    So, as you can see…I am off to multi-simulator heaven!

  • danielsaidi 9:03 pm on March 13, 2012 Permalink | Reply
    Tags: photo library

    Adding photos to the iPhone simulator 

    I am currently developing an iPhone app that will make use of the camera. Since I also test it in the simulator, where the camera is missing, I want to be able to select pictures from the photo library as well.

    However, once I open up the UIImagePickerControllerSourceTypePhotoLibrary in the iPhone simulator, I am presented with the following screen:


    No Photos iPhone screen

    No Photos – You can sync photos and videos onto your iPhone using iTunes.


    Uhm, can I (sync photos and videos to the simulator using iTunes)? I have not found a way, but there is an easy workaround that lets you fill the simulator with photos.

    Just open up Finder and drag any image you want to add onto the simulator window. When you see the green plus icon, release the image and it will open in Safari, like this:


    Safari browser screenshot

    The Safari browser shows the image that was dragged to the simulator.


    Now, click the image and keep the mouse button pressed, and you will get an option to save it to the simulator:


    Save option

    Press and hold the left mouse button to open the save and copy action sheet


    That’s it! If you open up the photo library, you will see the image in your list of saved images:


    Photo library

    The photo is added to the photo library


    Hope it helps!

  • danielsaidi 7:49 pm on February 29, 2012 Permalink | Reply
    Tags: gloss, icon, remove, shine   

    Remove gloss effect for iOS app icon 

    Update: there is a really easy alternative to the approach below. The property list approach still works, but if you select the project root (the topmost item with the blue icon in the project navigator) and select the “Summary” tab, you can set the iPhone and iPad icons. Check the “Prerendered” box to disable the automatic gloss effect.

    After a long time away, I started looking at iOS development once again…great fun.

    However, what is less fun and really unintuitive is how you remove the “gloss” effect from your application icon.

    This is how you do it:

    • Right-click your application Info.plist file.
    • Select “Open As / Source Code”
    • Add the following two lines anywhere (I add them after <key>CFBundleIconFiles</key><array/>):
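    For reference, the two lines in question set the UIPrerenderedIcon key, which is Apple's documented Info.plist key for telling iOS that the icon is already rendered and should not get the gloss effect (it is the same setting as the “Prerendered” checkbox mentioned in the comments):

```xml
<key>UIPrerenderedIcon</key>
<true/>
```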

    Voilà – you’re done!

    This must be one of the most hidden features I have ever come across. Does anyone know another way to do this?

    • zubii 5:57 am on January 28, 2013 Permalink | Reply

      Help! I did exactly as you did but when I tried to open it as a property list again it said “The data couldn’t be read because it has been corrupted”. It also wouldn’t build to run on the iOS Simulator. Am I doing something wrong?

      • danielsaidi 6:33 am on January 28, 2013 Permalink | Reply

        Oh, there is actually a really easy way of doing this now. The property list approach still works, but if you select the project root (the topmost item with the blue icon in the project navigator) and select the “Summary” tab, you can set the iPhone and iPad icons. Check the “Prerendered” box to disable the automatic gloss effect.

        As for the corruption warning, you probably entered some invalid piece of XML. If you right-click the file and select “Open as > Property List”, this will probably not work either, because the file contains invalid XML. Can you undo your changes? If so, undo them and disable the gloss effect as I described above. If not, remove everything you manually added so that the file is back to the state in which it worked.

        Good luck.

    • zubii 6:55 am on January 28, 2013 Permalink | Reply

      Thanks so much, it worked. Yes, I was able to erase the changes, thanks. Thanks again.

  • danielsaidi 9:37 pm on February 27, 2012 Permalink | Reply
    Tags: dependency injection, inversion of control, sub routines

    Dependency Injection gone too far? 

    I am currently working on a new version of a hobby console app of mine, which should execute certain actions depending on the input arguments. Now, I wonder if I am taking the concept of dependency injection too far.

    Which dependencies should you inject, and which should you keep as invisible dependencies?

    How does Cloney work?

    The Cloney console application will do different things depending on the input arguments it is provided with. For instance, if I enter

     cloney --clone --source=c:/MyProject --target=d:/MyNewProject

    Cloney will clone the solution according to certain rules.

    To keep the design clean and flexible, I introduced the concept of sub routines. A sub routine is a class that implements the ISubRoutine interface, which means that it can be executed using the input arguments of the console application. For instance, the CloneRoutine listens for the input arguments above, while the HelpRoutine triggers on cloney --help.

    When the application starts, Cloney fetches all ISubRoutine implementations and tries to execute each one with the input arguments. Some may trigger, some may not. If no routine triggers, Cloney displays a help message.

    So, what is the problem?

    Well, there is really no problem…just different ways to do things.

    For instance, when it comes to parsing the input arguments and making them convenient to handle, I use a class called CommandLineArgumentParser, which implements ICommandLineArgumentParser. The class transforms the default string array to a dictionary, which makes it easy to map an argument key to an argument value.
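    The transformation such a parser performs can be sketched like this (a minimal Python illustration of the idea, not the actual C# CommandLineArgumentParser implementation):

```python
# Sketch of the parsing described above: turn a raw argument array
# into a key/value dictionary. Flags without a value map to True.
def parse_args(args):
    result = {}
    for arg in args:
        # "--source=c:/MyProject" -> ("source", "c:/MyProject")
        # "--clone"               -> ("clone", True)
        key, _, value = arg.lstrip("-").partition("=")
        result[key] = value if value else True
    return result
```

    For instance, parse_args(["--clone", "--source=c:/MyProject"]) yields {"clone": True, "source": "c:/MyProject"}.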

    Using the class is a choice each sub routine must make. The interface just defines the following method:

     bool Run(IEnumerable<string> args)

    Yeah, that’s right. Each sub routine just acts like a program of its own. As far as the master program is concerned, it just delegates the raw argument array it receives to each sub routine. How the routine handles the arguments is entirely up to it.
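    The delegation described above can be sketched as follows (a Python illustration with a hypothetical routine; the real Cloney is C#):

```python
# Sketch of the sub routine dispatch: each routine decides on its own
# whether the raw argument array concerns it, returning True if it ran.
class HelpRoutine:
    def run(self, args):
        if "--help" not in args:
            return False
        print("usage: cloney --clone --source=<dir> --target=<dir>")
        return True

def run_program(routines, args):
    # Delegate the raw argument array to every routine; if none
    # triggers, show a help message instead.
    triggered = [routine.run(args) for routine in routines]
    if not any(triggered):
        print("No routine matched - showing help.")
    return any(triggered)
```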

    The old design – DI for all (too much?)

    Previously, the CloneRoutine had two constructors:

       public CloneRoutine()
          : this(Default.Console, Default.Translator, Default.CommandLineArgumentParser, Default.SolutionCloner) { ... }

       public CloneRoutine(IConsole console, ITranslator translator, ICommandLineArgumentParser argumentParser, ISolutionCloner solutionCloner) { ... }

    Since a sub routine is created with reflection, it must provide a default constructor. Here, the default constructor uses default implementations of each interface, while the custom constructor is used in the unit tests and supports full dependency injection. Each dependency is exposed and pluggable.

    So, what is the problem?

    Well, I just feel that since the command line arguments define what the routine should do, letting the class behavior be entirely determined by how another component parses those arguments makes the class unreliable.

    If I provide the class with a parser implementation that returns an invalid set of arguments, the class may explode, even when I provide the routine with arguments that it should trigger on (especially considering that the return value is a do-as-you-please IDictionary).

    Is that not bad?

    The new design – DI where I think it’s needed (enough?)

    Instead of the old design, is this not better:

       public CloneRoutine()
          : this(Default.Console, Default.Translator, Default.SolutionCloner) { ... }

       public CloneRoutine(IConsole console, ITranslator translator, ISolutionCloner solutionCloner) {
          this.argumentParser = Default.CommandLineArgumentParser;
          ...
       }

    This way, I depend on the choice of ICommandLineArgumentParser implementation that I have made in the Default class, but if that implementation is incorrect, my unit tests will break. The other three injections are (IMO) the ones that should be exchangeable. The argument parser should not be.

    Is this good design, or am I doing something terribly bad by embedding a hard dependency, especially since all other component dependencies can be injected? Please share your thoughts on this situation.

    • Henrik 12:05 am on February 28, 2012 Permalink | Reply

      I think this looks good! What you can do is either Property Injection (i.e. make argumentParser a public property which can be set by the test code when needed) or the Extract and Override technique (i.e. make argumentParser a protected virtual property, then make a testable class that inherits from your “production” class (e.g. CloneRoutineStub : CloneRoutine) and overrides that virtual property).

      Or am I getting your question wrong?

    • danielsaidi 7:21 am on February 28, 2012 Permalink | Reply

      Thanks for commenting, Henrik! I guess my question was this: when should you allow behavior to be injected, and when should you not go down that path?

      For the example above, I think injecting the IConsole, ITranslator and ISolutionCloner implementations is fine, since they define responsibility that SHOULD be delegated by the class.

      However, I think that acting on the input arguments received in the Run method should be the responsibility of the class, and should not be injectable.

      That the routine chooses a certain component to parse arguments is absolutely fine (and I kind of have a DI model, since the routine uses the Default.CommandLineArgumentParser), but it should not be exposed.

      If I allow the argument parsing behavior to be injectable, I can make the class stop working in really strange ways, since the parser has to parse the arguments in a very specific way. IMO, the argument parser differs from the other three components.

      So….do you agree? 🙂

    • Henrik 8:18 am on February 28, 2012 Permalink | Reply

      I agree! I think it’s perfectly okay! Context is king!
      It’s not a self-purpose to require dependencies to be injected.

      Maybe you want me to disagree, so we get this lovely war feeling? 🙂

      • danielsaidi 8:41 am on February 28, 2012 Permalink | Reply

        I do love the war feeling, but sometimes, getting along is a wonderful thing as well. 🙂

        System design sure is tricky sometimes. I really appreciate having you and other devs to discuss these things with.

    • Daniel Lee 9:42 am on February 28, 2012 Permalink | Reply

      I can’t see why you would ever need to switch out the command line argument parser. And as you say yourself, it feels more like core logic than a dependency.

      So you made the right decision.

      (Although, in such a small codebase as Cloney, I don’t know if this really matters all that much?)

    • danielsaidi 10:01 am on February 28, 2012 Permalink | Reply

      Nice, thanks Daniel 🙂

      I think I’ll roll with this design for now, and replace it whenever I see the need to. I do not like all these hard dependencies to the Default class – it’s like making the classes dependent on the existence of StructureMap.

      However, as you say, it IS core logic. Cloney is not a general library, so these kinds of dependencies may not be as bad as if I’d have the same design in, say, a general lib like .NExtra.

    • Johan Driessen 12:39 pm on February 28, 2012 Permalink | Reply

      If your main concern is that your unit tests will be more fragile, and break if your implementation of the argument parser is incorrect (or just changes), why don’t you just make the argumentParser-field protected virtual?

      Then you can just create a “TestableCloneRoutine” in your tests, which inherits from CloneRoutine and replaces the argument parser with a stub, so that your tests do not become dependent on the actual implementation, while still not making the internals visible to other classes.

      AKA “extract and override”.

      • Johan Driessen 12:41 pm on February 28, 2012 Permalink | Reply

        Actually, you would have to make argumentParser a property (still protected and virtual) and have it return Default.CommandLineArgumentParser in CloneRoutine.

        • danielsaidi 2:39 pm on February 28, 2012 Permalink

          Thanks for your input, Johan. I will try to clarify my main concern regarding the design.

          If we look at the unit tests, I think that the following expression is a good test scenario:

          “If I start Cloney with the argument array [“–help”], the HelpRoutine class should trigger and write to the console”

          In my unit tests, for instance, I can then trigger the Run method with various arguments and see that my IConsole mock receives a call to WriteLine only when I provide the method with valid input.

          If, on the other hand, the argument parsing behavior is exposed, the HelpRoutine will communicate that it has an IArgumentParser that parses a string[] to an IDictionary. IMO, this is not relevant.

          Furthermore, if I make the parser injectable, my test scenario would rather be expressed like this:

          “If I start Cloney with the argument array [“–help”], the HelpRoutine class should trigger and write to the console if the argument parser it uses parses the array to an IDictionary where the “help” key is set to true.”

          I am not sure which test scenario I prefer. The second one is more honest, since the routine’s behavior IS based on the parser’s behavior…but is that really what I want to test?

          I considered re-adding the IArgumentParser as a constructor parameter, just to make it possible to inject it, but I am not really sure. I see benefits with this, as I do with keeping it completely internal.

          IMO, the fact that the routine uses an ArgumentParser to parse the arguments should not be of any concern to anyone but the class. It’s the resulting behavior that should matter.

          But I have a split feeling about it all.

  • danielsaidi 1:03 pm on February 22, 2012 Permalink | Reply
    Tags: assembly version, boo, nextra, nuget package explorer, phantom

    Use Phantom/Boo to automatically build, test, analyze and publish to NuGet and GitHub 

    When developing my NExtra .NET library hobby project, I used to handle the release process manually. Since a release involved executing unit tests, bundling all files, zipping and uploading the bundle to GitHub, creating new git tags etc. the process was quite time-consuming and error-prone.

    But things did not end there. After adding NExtra to NuGet, every release also involved refreshing and publishing six NuGet packages. Since I used the NuGet Package Explorer, I had to refresh the file and dependency specs for each package. It took time, and the error risk was quite high.

    Since releasing new versions involved so many steps, I used to release NExtra quite seldom.

    Laziness was killing it.

    The solution

    I realized that something had to be done. At work we use TeamCity for all solutions, but for this hobby project I found a build server to be a bit overkill. However, maybe I could use a build script to automate the build and release process?

    So with this conclusion, I defined what the script must be able to help me out with:

    • Build and test all projects in the solution
    • Automatically extract the resulting version
    • Create a release folder or zip with all files
    • Create a new release tag and push it to GitHub
    • Create a NuGet package for each project and publish to NuGet

    The only piece of the release process not covered by the script was uploading the release zip to GitHub, but that would be a walk in the park once the script generated a release zip.

    The biggest step was not developing the build script. In fact, it is quite a simple creation. Instead, the biggest step was to come to the conclusion that I needed one.

    Selecting a build system

    In order to handle my release process, I needed a build system. I decided to go with Phantom, since I use it at work as well. It is a convenient tool (although a new, official version would be nice) that works well, but it left me with an annoying problem, which I will describe further down.

    So, I simply added Phantom 0.3 to a sub folder under the solution root. No config is needed – the build.bat and build.boo (read on) files take care of everything.

    The build.bat file

    build.bat is the file that I use to trigger a build, build a .zip or perform a full publish from the command prompt. I placed it in the solution root, and it looks like this:

    @echo off
    :: Change to the directory that this batch file is in
    for /f %%i in ("%0") do set curpath=%%~dpi
    cd /d %curpath%
    :: Fetch input parameters
    set target=%1
    set config=%2
    :: Set default target and config if needed
    if "%target%"=="" set target=default
    if "%config%"=="" set config=release
    :: Execute the boo script with input params - accessible with env("x")
    resources\phantom\phantom.exe -f:build.boo %target% -a:config=%config%


    Those of you who read Joel Abrahamsson’s blog probably recognize the first part. It changes to the folder that contains the .bat file, so that everything is launched from there.

    The second section fetches any input parameters. The target param determines the operation to launch (build, deploy, zip or publish) and the config param determines what build configuration to use (debug, release etc.).

    The third section handles param fallback in case I did not define some of the input parameters. This means that if I only provide a target, config will fall back to “release”. If I define no params at all, target will fall back to “default”.

    Finally, the bat file calls phantom.exe, using the build.boo file. It tells build.boo to launch the provided “target” and also sends “config” as an environment variable (the -a:config part).

    All in all, the build.bat file is really simple. It sets a target and config and uses the values to trigger the build script.

    The build.boo file

    The build.boo build script file is a lot bigger than the .bat file. It is also located in the solution root and looks like this:

    import System.IO
    project_name = "NExtra"
    assembly_file = "SharedAssemblyInfo.cs"
    build_folder = "_tmpbuild_/"
    build_version = ""
    build_config = env('config')
    test_assemblies = (
    target default, (compile, test):
    target zip, (compile, test, copy):
     zip("${build_folder}", "${project_name}.${build_version}.zip")
    target deploy, (compile, test, copy):
     with FileList(build_folder):
     .ForEach def(file):
    target publish, (zip, publish_nuget, publish_github):
    target compile:
     msbuild(file: "${project_name}.sln", configuration: build_config, version: "4")
     //Probably a really crappy way to retrieve assembly
     //version, but I cannot use System.Reflection since
     //Phantom is old and if I recompile Phantom it does
     //not work. Also, since Phantom is old, it does not
     //find my plugin that can get new assembly versions.
     content = File.ReadAllText("${assembly_file}")
     start_index = content.IndexOf("AssemblyVersion(") + 17
     content = content.Substring(start_index)
     end_index = content.IndexOf("\"")
     build_version = content.Substring(0, end_index)
    target test:
     nunit(assemblies: test_assemblies, enableTeamCity: true, toolPath: "resources/phantom/lib/nunit/nunit-console.exe", teamCityArgs: "v4.0 x86 NUnit-2.5.5")
     exec("del TestResult.xml")
    target copy:
     File.Copy("README.md", "${build_folder}/README.txt", true)
     File.Copy("Release-notes.md", "${build_folder}/Release-notes.txt", true)
     with FileList(""):
     .ForEach def(file):
     File.Copy(file.FullName, "${build_folder}/${file.Name}", true)
    target publish_nuget:
     File.Copy("README.md", "Resources\\README.txt", true)
     File.Copy("Release-notes.md", "Resources\\Release-notes.txt", true)
     exec("nuget" , "pack ${project_name}\\${project_name}.csproj -prop configuration=release")
     exec("nuget" , "pack ${project_name}.web\\${project_name}.web.csproj -prop configuration=release")
     exec("nuget" , "pack ${project_name}.mvc\\${project_name}.mvc.csproj -prop configuration=release")
     exec("nuget" , "pack ${project_name}.wpf\\${project_name}.wpf.csproj -prop configuration=release")
     exec("nuget" , "pack ${project_name}.webforms\\${project_name}.webforms.csproj -prop configuration=release")
     exec("nuget" , "pack ${project_name}.winforms\\${project_name}.winforms.csproj -prop configuration=release")
     exec("nuget push ${project_name}.${build_version}.nupkg")
     exec("nuget push ${project_name}.web.${build_version}.nupkg")
     exec("nuget push ${project_name}.mvc.${build_version}.nupkg")
     exec("nuget push ${project_name}.wpf.${build_version}.nupkg")
     exec("nuget push ${project_name}.webforms.${build_version}.nupkg")
     exec("nuget push ${project_name}.winforms.${build_version}.nupkg")
     exec("del *.nupkg")
     exec("del Resources\\README.txt")
     exec("del Resources\\Release-notes.txt")
    target publish_github:
     exec("git add .")
     exec('git commit . -m "Publishing ${project_name} ' + "${build_version}" + '"')
     exec("git tag ${build_version}")
     exec("git push origin master")
     exec("git push origin ${build_version}")

    Topmost, we see a system import. This will allow us to use System.IO for file operations. After that, I define some variables and a list of test assemblies that I want to test.

    Two variables worth mentioning are build_version, which is set in the compile step, and build_config, which is set by the input parameter defined in build.bat.

    The next section of the file defines all public targets, which are intended to be called by the user. These map directly to the target parameter in build.bat.

    Of course, all targets further down can be called directly as well – there is no such thing as public or private targets. Still, doing so would probably not be a very good idea.

    If we look at the public targets, we have:

    • default – Executes “compile” and “test”
    • zip – Executes “compile” and “test”, then creates a zip file
    • deploy – Executes “compile” and “test” then creates a folder
    • publish – Executes “zip”, then publishes to NuGet and GitHub

    If we look at the private targets (the ones that do the real work), we have:

    • compile – Compiles the solution and extracts the version number
    • test – Runs NUnit with the .NExtra test assemblies
    • copy – Copies all relevant files to the temporary build_folder
    • publish_nuget – Packs and publishes each .NExtra project to NuGet
    • publish_github – Commits all changes, creates a tag, then pushes it

    It is not that complicated, but it is quite a lot. You could take the bat and boo files and tweak them, and they would probably work for your projects as well.

    However, read on for some hacks that I had to do to get the build process working as smoothly as it does.

    One assembly file to rule them all

    A while ago, I decided to extract common information from each of the .NExtra projects into a shared assembly file.

    The shared assembly file looks like this:

    using System.Reflection;
    // General Information about an assembly is controlled through the following
    // set of attributes. Change these attribute values to modify the information
    // associated with an assembly.
    [assembly: AssemblyCompany("Daniel Saidi")]
    [assembly: AssemblyProduct("NExtra")]
    [assembly: AssemblyCopyright("Copyright © Daniel Saidi 2009-2012")]
    [assembly: AssemblyTrademark("")]
    // Make it easy to distinguish Debug and Release (i.e. Retail) builds;
    // for example, through the file properties window.
    #if DEBUG
    [assembly: AssemblyConfiguration("Debug")]
    #else
    [assembly: AssemblyConfiguration("Retail")]
    #endif
    // Version information for an assembly consists of the following four values:
    // Major Version
    // Minor Version
    // Build Number
    // Revision
    // You can specify all the values or you can default the Build and Revision Numbers
    // by using the '*' as shown below:
    [assembly: AssemblyVersion("")]
    [assembly: AssemblyFileVersion("")]

    The file defines shared assembly information like version, to let me specify this once for all projects. I link this file into each project and then remove the information from the project specific assembly info file.

    Since the .NExtra version management is a manual process (the way I want it to be), I manage the .NExtra version here and parse the file during the build process to retrieve the version number. The best way would be to use System.Reflection to analyze the library files, but this does not work, since Phantom uses .NET 3.5.

    I tried re-compiling Phantom to solve this, but then other things started to crash. So…the file parse approach is ugly, but works.
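    The idea of the parse step can be sketched like this (a Python illustration; the real build script does it in Boo, as shown above):

```python
import re

# Sketch of the version extraction described above: pull the version
# string out of the shared assembly info file instead of using
# reflection on the compiled assemblies.
def extract_assembly_version(text):
    # Matches e.g. [assembly: AssemblyVersion("2.6.0.0")]
    match = re.search(r'AssemblyVersion\("([^"]*)"\)', text)
    return match.group(1) if match else None
```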

    Tweaking NuGet

    After installing NuGet, typing “nuget” in the command prompt will still cause a warning message to appear, since “nuget” is unknown.

    To solve this, either add the NuGet executable path to PATH or be lazy and use the nuget.exe command line bootstrapper, which finds NuGet for you. You can download it from CodePlex or grab it from the .NExtra Resources root folder.

    As for each project’s nuspec file, they were easily created by calling “nuget spec x”, where x is the path to the project file. I then added some information that cannot be extracted from the assembly, like the project URL and icon, to each of these generated spec files.


    This post became rather long, but I hope that it explains my way of handling the .NExtra release process.

    Using the build script, I can now call build.bat in the following ways:

    • build – build and test the solution
    • build zip – build and test the solution and generate a nextra.<version>.zip file
    • build deploy – build and test the solution and generate a nextra.<version> release folder
    • build publish – the same as build zip, but also publishes to NuGet and GitHub

    The build script has saved me an immense amount of work. It saves me time, increases quality by reducing the amount of manual work and makes releasing new versions of .NExtra a breeze.

    I still have to upload the zip to the GitHub download area, but I find this to be a minor task compared to all the other steps. Maybe I’ll automate this one day as well, but it will do for now.

    I strongly recommend using a build script in all projects, even small ones where a build server is a bit overkill. Automating the release process is a ticket to heaven.

    Or very close to that.

    • Markus Johansson 10:07 am on July 16, 2013 Permalink | Reply

      Great post! Looks like a really nice release script! Thanks for sharing your experience!

      • danielsaidi 8:33 am on July 20, 2013 Permalink | Reply

        Thanks! 🙂 Once all pieces are in place, publishing new releases is a breeze.

  • danielsaidi 8:43 pm on January 21, 2012 Permalink | Reply
    Tags: doc list, gamification, ginger cake, jim benson, kanban, short software half-life, spike and stabilize

    Øredev 2011 in the rear-view mirror – Part 6 

    Øredev logo

    This is the sixth and final part of my sum-up of Øredev 2011. It covers the last three sessions I attended and concludes my visit to Øredev.


    3:4 – Jim Benson – Healthy Projects

    After gathering some keywords from the audience, Jim defined healthy projects to be:

    • Happy
    • Productive
    • Stress-free
    • Focused
    • Nice to the workers

    He gave a good description of when things tend to go wrong within an organization, visualized with the following organizational structure (a single node topmost and several nodes bottommost):

    • Company (has many portfolios)
    • Portfolios (has many projects)
    • Projects (has many tasks)
    • Tasks

    Imagine someone working at task level being “promoted” to project level, e.g. becoming a product owner. If this person cannot adjust to his new area of work and keeps focusing on the details of the task level, it will lead to micro management. The same goes for moving from project to portfolio level, and from portfolio to company level.

    Speaking about rules, Jim states that when you add a lot of rules, you will also have to introduce a process. If the rules are then hard to follow, people will fail…and when they do, they will blame the process. Methods like Kanban, for instance, visualize the work, minimize the amount of ongoing work and lead to healthier projects.

    A good technique for visualizing how the team feels is to have the team mark the scrum/kanban notes with an illustration that describes how they felt after completing a task. I found this to be a very good idea! It is so simple, yet communicates so clearly how the team is feeling.

    This session grew on me afterwards. While sitting there, I found it hard to stay focused and found large parts of the session rather passable, but after reading my notes afterwards, I found some golden gems.


    3:5 – Doc List – Development is a game

    Okay, so this session was about Doc having an idea…and wanting a lot of stuff to happen. His question was: how do we measure how good we are at what we do, and what are the tools of measurement that we should use? Certificates? Level of success?

    Doc asks us – why can’t life itself be a game? Why can’t we have rewards in our professions (actually, Visual Studio has just introduced achievements, so we are getting there)? Why can’t we have quests? Want to measure how good a person is – give him a quest! Want to measure how good a team is – give them a group quest!

    Doc wants to create a globally applicable system that ranks people according to what they know. With this system, if you need “a level 24 Java developer”, you will have a specification of what a level 24 Java developer knows…and a list of persons who are at that level (since it is measurable). Doc wants to build a global community for this and wants…

    …well, there you have my biggest problem with this session. Doc is a really charming man who has been around a while and has a great reputation, but…he wants a lot of things and talks about them without having created anything so far. So, he just describes a vision.

    I could have found the session interesting, and Doc convincing, if he at least had started. So, I eagerly await Doc proving me wrong by announcing that he has started working on that global system of his. Until then, I will focus my attention elsewhere.


    3:6 – Dan North – Pattern of effective delivery

    With Dan’s keynote being the undeniable highlight of Øredev for everyone I know who saw it (I did not, unfortunately), I really looked forward to this session…as did, it seemed, the rest of Øredev. The room was packed.

    Dan spoke of some exciting new patterns, like:

    • Spike and Stabilize (easy, semi-effective) – try something out, then build it well. Optimize for discovery.
    • Ginger Cake (semi-hard, semi-effective) – break the rules once you go senior…”it’s like a chocolate cake, but with ginger”
    • Short software half-life – how long does it take before you have to fix a bug? Optimize for throwawayability.

    Sadly, I did not find this to be an interesting session at all. In fact, I found it rather pointless, which was a huge disappointment.

    Dan, like many of the big speakers, is very charming and passionate on stage…but I cannot help feeling that I should perhaps choose more concrete sessions than these “inspired and fun” ones the next time I attend one of these conferences. I am obviously not the target audience.

    Please watch the video. Do you disagree with me? Let me know in the comments below.



    Øredev 2011 was a fantastic conference, with high mountains and, unfortunately, some rather deep valleys. Next year, I hope to see even more local talent, and an even odder and more exciting selection of speakers. How about a grave Russian who (in bad English) demonstrates some kick-ass piece of technology, without a single joke being told or charming smile being fired?

    I would like to see that. Maybe next time.

    Anyway, a big, BIG thank you to the Øredev crew – you delivered a really inspiring conference that I still return to mentally.

    • Henrik 9:11 pm on January 21, 2012 Permalink | Reply

      Agree, Dan’s keynote was much better. This was a bit hard to grasp.

  • danielsaidi 9:03 pm on January 20, 2012 Permalink | Reply
    Tags: afferent coupling, cvs, efferent coupling, kinect, stack overflow, tim huckaby, windows 8

    Øredev 2011 in the rear-view mirror – Part 5 

    Øredev logo

    This is the fifth part of my sum-up of Øredev 2011. Read more by following the links below:

    So, yeah…this sum-up was supposed to be a rather short thing, but it has grown out of proportion. I will try to keep it short, but the sessions deserve to be mentioned.


    2:7 – Greg Young – How to get productive in a project in 24h

    In his second session at Øredev, Greg spoke about how to kick-start yourself in new projects. First of all, do you understand what the company does? Have you used the product or service before? If you do not understand these fundamental facts, how will you deliver value?

    Then, Greg described how he quickly gets up in the saddle. He usually starts off by inspecting the CVS. Projects that have been around for a while and still have tons of check-ins could be suffering from a lot of bugs. Areas under the burden of massive amounts of check-ins could be bug hives.

    Note these things; they can quickly tell you where in the project it hurts. Naturally, a lot of check-ins does not automatically indicate bugs or problems. The team could just be developing stuff. However, it gives you somewhere to start. At the very least, a lot of check-ins means that someone is working in that particular part of the project.

    These are very simple steps to take, but being able to discuss the project after just an hour or so with the CVS, maybe even pinpointing some problems, will give your customer the impression that you are almost clairvoyant…or at least that you know what you are doing, which is why they pay you.
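    The check-in analysis Greg described can be sketched in a few lines. This is my own minimal illustration, not Greg’s tooling, and the file names are made up; in a real repository you would feed it the file lists parsed from the version control log (e.g. `git log --name-only` in a git repo):

```python
from collections import Counter

def checkin_hotspots(commits, top=3):
    """Count how often each file is touched across commits and return
    the most frequently changed ones - the candidate 'bug hives'."""
    counts = Counter(path for commit in commits for path in commit)
    return counts.most_common(top)

# Each commit is represented simply as the list of files it touched.
history = [
    ["Basket.cs", "Order.cs"],
    ["Basket.cs"],
    ["Basket.cs", "Checkout.cs"],
    ["Order.cs"],
]
print(checkin_hotspots(history))
# [('Basket.cs', 3), ('Order.cs', 2), ('Checkout.cs', 1)]
```

    As Greg noted, a hotspot does not prove there are bugs; it only tells you where the activity (and therefore the risk) is concentrated.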

    If the project is not using continuous integration, at least set it up locally. It does not take long and will help you tremendously. Being able to yell at someone who breaks the build the second they do it…well, it will give you pleasure, at least.

    Greg then went on to demonstrate how you can dig even deeper, and his tool of the day was NDepend. Greg’s demo was awesome, and Patrick should consider giving him…well, a hug at least. I, who demonstrated NDepend in my organization a while back without much success, could quickly tell that I have a long way to go when it comes to presenting a tool to people.

    With NDepend, Greg demonstrated how to use the various metrics, like cyclomatic complexity and afferent/efferent coupling. He went through the various graphs, describing what they show and how they can be used, and told us to especially look out for black squares in the dependency matrix (they indicate circular references) and concrete couplings (they should be broken up).
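    NDepend computes these metrics for you, but the two coupling numbers are easy to illustrate. A minimal sketch with a hypothetical module graph of my own (not from Greg’s demo): efferent coupling (Ce) counts what a module depends on, afferent coupling (Ca) counts what depends on it, and instability is I = Ce / (Ca + Ce):

```python
def coupling_metrics(dependencies):
    """Compute Ce (outgoing deps), Ca (incoming deps) and instability
    I = Ce / (Ca + Ce) for every module in a dependency graph."""
    modules = set(dependencies) | {d for deps in dependencies.values() for d in deps}
    ce = {m: len(dependencies.get(m, ())) for m in modules}
    ca = {m: sum(m in deps for deps in dependencies.values()) for m in modules}
    return {
        m: {"Ca": ca[m], "Ce": ce[m],
            "I": ce[m] / (ca[m] + ce[m]) if ca[m] + ce[m] else 0.0}
        for m in modules
    }

# Hypothetical graph: module -> modules it uses
graph = {"Web": ["Core", "Data"], "Data": ["Core"], "Core": []}
metrics = coupling_metrics(graph)
print(metrics["Core"])  # {'Ca': 2, 'Ce': 0, 'I': 0.0}
```

    A module with I close to 0 (like "Core" here) is depended upon by many and should be stable and abstract; one with I close to 1 can change more freely.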

    All in all, a very good session that also gave me a lot to aim for when holding presentations myself. As a consultant, you should not miss this video.


    3:1 – Keynote – Jeff Atwood – Stack Overflow: Social Software for the Anti-Social Part II: Electric Boogaloo

    I will not attempt to cover everything said in this keynote. Instead, you should go here and wait for the video. It is filled with fun gems, like when Jeff describes how things that are accepted in a web context would be really strange if applied in real life. For instance, Facebook lets you keep a list of friends. Who keeps a physical list of friends in real life?

    Anyway, Jeff spoke about gamification and how we can design our services like games, using a set of rules to define how they are meant to be used, rewarding those who adapt to the rules…and punishing the ones who do not. The basic premise is that games have rules and games are fun…so if we design our web sites as games, they should become fun as well.

    Well, at the very least, rules drastically simplify how we are supposed to behave. They tell us what to do. Sure, it does not work for all kinds of sites, but for social software, gamification should be considered. Games in general make social interaction non-scary, since everyone has to conform to the rules. Just look at the world, and you will know that this is true.

    So, when designing Stack Overflow, Jeff and Joel did so with gamification in mind. You may not notice it at first, but everything there is carefully considered. For instance, people used to complain that you cannot add a new question right from the start page. This is intentional. Before you add a question, Stack Overflow wants you to read other questions, see how people interact and learn the rules.

    Stack Overflow adopts several concepts from the gaming world. Good players are rewarded with achievements and level up as they progress. There are tutorials, unlockables etc. Without first realizing it, Jeff and Joel ended up creating a Q&A game that consists of several layers:

    • The game – ask and answer questions
    • The meta-game – receive badges, level up, become an administrator etc.
    • The end-game – make the Internet a little better

    This design makes it possible for Stack Overflow to allow anonymous users, unlike Facebook, which decided to only allow real names in order to filter out the “idiots”. Since Stack Overflow rewards good players, bad players are automatically sorted out. The community is self-sanitizing. People are awarded admin status if they play well enough. It’s just like Counter-Strike, where you are forced to be a team player. If you are not, the game will kill you 🙂
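    The reward mechanic behind that self-sanitizing behavior can be sketched as a simple reputation-gated privilege ladder. The thresholds and privilege names below are entirely hypothetical (the real Stack Overflow values differ); the sketch only shows the principle of unlocking powers by playing well:

```python
# Hypothetical thresholds, loosely inspired by reputation-gated privileges.
PRIVILEGES = [
    (0, "ask questions"),
    (50, "comment"),
    (2000, "edit posts"),
    (10000, "moderate"),
]

def unlocked(reputation):
    """Return every privilege a user with this reputation has earned."""
    return [name for threshold, name in PRIVILEGES if reputation >= threshold]

print(unlocked(120))  # ['ask questions', 'comment']
```

    Because moderation power only reaches players who have already proven they follow the rules, bad actors never get the tools to do much damage.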

    I could go on and on, but Jeff says it best himself. Although some parts are shameless Stack Overflow promotion, I recommend that you check out the video.


    3:2 – Tim Huckaby – Building HTML5 Applications with Visual Studio 11 for Windows 8

    Tim has worked with (not at) Microsoft for a looong time and is one charismatic guy, I must say. What I really appreciated about his session was that it seemed a bit improvised, unlike most sessions at Øredev. What I did not like quite as much, though, was that it seemed too improvised. Due to lack of time and hardware issues, Tim failed to demonstrate what I came to see – HTML5 applications with VS11.

    Tim began by stating that he hates HTML…but that he loves HTML5, which is “crossing the chasm”. This means that it is a safe technology to bet on, because it will be adopted. How do we know? Well, the graph below illustrates when a technology is “crossing the chasm” in relation to how people adopt it:

    The Chasm Graph :)
    So when a technology is “crossing the chasm”, get to work – it will be used 🙂 I wonder how the graph would have looked for HD-DVD? Tim also thanked Apple for inventing the iPad (which he calls a $x couch computer). Thanks to the iPhone and the iPad, Flash and plugins are out and HTML5 is in.

    Large parts of the session were fun anecdotes, like when he spoke about how Adobe went out with a “we <heart> Apple” campaign and Apple responded with “we <missing plugin> Adobe”. Hilarious, but did we learn anything from these anecdotes? Well, time will tell.

    Tim went through some browser statistics, explained why IE6 is still so widely used (damn those pirated copies of Win XP in China)…and ended with some small demos, but faced massive hardware problems and promised us more meat if we stayed a while. I stayed a while (I even attended Tim’s next session), but the demos were not that wow.

    So, how did Tim do in his second session? Read on!


    3:3 – Tim Huckaby – Delivering Improved User Experience with Metro Style Win 8 Applications

    Tim started this session talking about NUI – Natural User Interfaces and some new features of Windows 8, like semantic zoom, a desktop mode behind Metro (it looks great, just like Win 7!), smart touch and…a new task manager (he was kinda ironic here).

    Tim demonstrated Tobii on a really cool laptop with two cameras, which allow it to see in 3D. The rest of the session was…enjoyable. I cannot put my finger on it, but I had fun, although I was disappointed at what was demonstrated. The Kinect demo was semi-cool, a great Swedish screen was also interesting, and Tim also hinted at how the new Xbox Loop and a new Kinect will become a small revolution.

    I really do not know what to say about this. Watch the video. You will have fun.

  • danielsaidi 5:31 pm on January 18, 2012 Permalink | Reply
    Tags: microsoft commerce server   

    Microsoft Commerce Server, anyone? 

    I am currently working on an e-commerce solution that is based on Microsoft Commerce Server 2007 SP2. Without prior experience of MSCS, and without being the one setting up the solution, I am at a loss regarding some issues that we are trying to solve.

    Anonymous baskets

    A big problem for us, and a strange one to solve, is that 200,000 anonymous baskets are automatically generated every night! This occurs in all environments – locally, on the test and stage servers, as well as in production. The basket creation occurs at the same time every night, which (duh) indicates a scheduled event of some kind.

    My developers have not been able to track down what is causing this. Instead, they have created a scheduled task that empties anonymous baskets that are not used. So, we have not fixed the problem; we are just cleaning up the mess.

    These auto-generated baskets caused the MSCS database to grow to insane levels. Our scheduled task has brought it back to normal, but the dream scenario would naturally be to track down what is happening and simply solve the problem. Since it happens locally as well, we can exclude the import jobs that run continuously, as well as any externally exposed web services.

    Has anyone experienced this behavior with MSCS 2007 before? If so, I would appreciate a push in the right direction.

    Slow data operations

    Our load tests show that the site has become a bit slower since the new version was launched in May. Sure, the devs have added a lot of new functionality, but when I analyze the data operations that take the longest to execute, it turns out that MSCS is the real bottleneck. Profile creation can take up a large part of the execution time when a view is built, and product sub-categories are really slow to load.

    For a system like MSCS, is it really realistic that the database has become that much slower in just six months? The MSCS database has not undergone any optimizations during this time, but should that really be necessary? We are bringing in a SQL optimizer, but if anyone has experienced MSCS slowing down due to bad indices or similar, I’d love to hear more about it.

    • Ben Taylor 1:14 pm on January 23, 2012 Permalink | Reply

      I would wager that you create an anonymous basket each time you get a new visitor. You probably then store something in a cookie and pull the anonymous basket out each time they return. Problem is, this fails when you are hit 200,000 times by a web crawler that does not support cookies 🙂

      If you’ve not been working on your SQL housekeeping and tuning, then I’m sure that will be part of the slowdown issue. You may also be using more expensive API calls. I would suggest you profile the site using a good profiler. A good caching strategy is also a winner. However, caching CS objects is memory intensive. You may want to just cache the bits of data you need for the page.

      • danielsaidi 4:12 pm on January 23, 2012 Permalink | Reply

        Ben, thank you SO much for your comment! I believe that you pin-pointed the problem with the anonymous baskets and gave the developers a kick in the right direction.

        We have not confirmed it yet, but we do have an external search service that crawls through the site every night. When the developers read your response, they immediately started investigating whether or not that service is what could be causing the problem. We will know more tomorrow 🙂

        Also, big thanks for your other advice. We will allocate resources for optimizing the databases, which have been cluttered with anonymous baskets (and cleaned up continuously) for over half a year. I think this will make the databases a bit faster.

    • Ben Taylor 3:40 pm on January 24, 2012 Permalink | Reply

      Glad to have (hopefully) been of assistance.

      If you guys ever need an awesome promotion engine for Commerce Server check us out http://www.enticify.com/

      Good luck!

    • ikilic 9:09 am on May 27, 2013 Permalink | Reply

      We are having problems with Microsoft Commerce Server 2009 freetextsearch. The problem is searching for a single character.
      For example, we can search for iphone 4S but not for iphone 4. We are using the AND clause.

      Hope you can help us.

      • danielsaidi 8:28 pm on May 28, 2013 Permalink | Reply

        Hi, I sadly cannot help you guys with this, since I first of all have not had that particular problem and also have not worked with MS Commerce Server for a looong time. Best of luck, though!
