Updates from February, 2012

  • danielsaidi 7:49 pm on February 29, 2012 Permalink | Reply
    Tags: gloss, icon, remove, shine   

    Remove gloss effect for iOS app icon 

    Update: There is a really easy alternative to this approach. The property list approach still works, but if you select the project root (the topmost item with the blue icon in the project navigator) and select the “Summary” tab, the iPhone and iPad icons can be set there. Check the “Prerendered” box to disable the automatic gloss effect.

    After a long time away, I started looking at iOS development once again…great fun.

    However, what is less fun and really unintuitive is how you remove the “gloss” effect from your application icon.

    This is how you do it:

    • Right-click your application Info.plist file.
    • Select “Open As / Source Code”
    • Add the following two lines anywhere (I add them after <key>CFBundleIconFiles</key><array/>):
    <key>UIPrerenderedIcon</key>
    <true/>
    

    Voilà – you’re done!

    This must be one of the most hidden features I have ever come across. Does anyone know another way to do this?

     
    • zubii 5:57 am on January 28, 2013 Permalink | Reply

      Help! I did exactly as you did but when I tried to open it as a property list again it said “The data couldn’t be read because it has been corrupted”. It also wouldn’t build to run on the iOS Simulator. Am I doing something wrong?

      • danielsaidi 6:33 am on January 28, 2013 Permalink | Reply

        Oh, there is now actually a really easy way of doing this. The property list approach still works, but if you select the project root (the topmost item with the blue icon in the project navigator) and select the “Summary” tab, the iPhone and iPad icons can be set there. Check the “Prerendered” box to disable the automatic gloss effect.

        As for the corruption warning, you probably entered some invalid piece of XML. If you right-click the file and select “Open as > Property List”, that will probably not work either, since the file contains invalid XML. Can you undo your changes? If so, undo them and disable the gloss effect as I described above. If not, remove everything you manually added, so that the file is restored to the state in which it last worked.

        Good luck.

    • zubii 6:55 am on January 28, 2013 Permalink | Reply

      Thanks so much, it worked. Yes, I was able to erase the changes. Thanks again.

  • danielsaidi 9:37 pm on February 27, 2012 Permalink | Reply
    Tags: dependency injection, inversion of control, sub routines

    Dependency Injection gone too far? 

    I am currently working on a new version of a hobby console app of mine that executes certain actions depending on the input arguments it receives. Now, I wonder if I am taking the concept of dependency injection too far.

    Which dependencies should you inject, and which should you keep as invisible dependencies?

    How does Cloney work?

    The Cloney console application will do different things depending on the input arguments it is provided with. For instance, if I enter

     cloney --clone --source=c:/MyProject --target=d:/MyNewProject

    Cloney will clone the solution according to certain rules.

    To keep the design clean and flexible, I introduced the concept of sub routines. A sub routine is a class that implements the ISubRoutine interface, which means that it can be executed using the input arguments of the console application. For instance, the CloneRoutine listens for the input arguments above, while the HelpRoutine triggers on cloney --help.

    When I start the application, Cloney fetches all ISubRoutine implementations and tries to execute each one with the input arguments. Some may trigger, some may not. If no routine triggers, Cloney displays a help message.
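
    To illustrate the idea, here is a minimal sketch of the concept. The Run signature is quoted further down in this post; the reflection-based discovery loop is an assumed reconstruction of the behavior described above, not Cloney’s actual source:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public interface ISubRoutine
    {
        // Returns true if the routine triggered on the provided arguments.
        bool Run(IEnumerable<string> args);
    }

    public static class Program
    {
        public static void Main(string[] args)
        {
            // Fetch all ISubRoutine implementations with reflection and
            // let each one decide whether to act on the raw arguments.
            var routines = typeof(ISubRoutine).Assembly
                .GetTypes()
                .Where(t => typeof(ISubRoutine).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract)
                .Select(t => (ISubRoutine)Activator.CreateInstance(t));

            var anyTriggered = false;
            foreach (var routine in routines)
                anyTriggered |= routine.Run(args);

            // If no routine triggered, display a help message.
            if (!anyTriggered)
                Console.WriteLine("Invalid arguments - try 'cloney --help'.");
        }
    }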

    So, what is the problem?

    Well, there is really no problem…just different ways to do things.

    For instance, when it comes to parsing the input arguments and making them convenient to handle, I use a class called CommandLineArgumentParser, which implements ICommandLineArgumentParser. The class transforms the default string array into a dictionary, which makes it easy to map an argument key to an argument value.
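
    The post only states what the parser does, so the members below are assumptions; this is roughly how such a transformation could look:

    using System.Collections.Generic;

    public interface ICommandLineArgumentParser
    {
        // Hypothetical member name; the interface body is not shown in this post.
        IDictionary<string, string> Parse(IEnumerable<string> args);
    }

    public class CommandLineArgumentParser : ICommandLineArgumentParser
    {
        public IDictionary<string, string> Parse(IEnumerable<string> args)
        {
            var result = new Dictionary<string, string>();
            foreach (var arg in args)
            {
                if (!arg.StartsWith("--"))
                    continue;

                // "--source=c:/MyProject" becomes { "source", "c:/MyProject" };
                // a value-less flag like "--help" is mapped to "true".
                var pair = arg.Substring(2).Split(new[] { '=' }, 2);
                result[pair[0]] = pair.Length == 2 ? pair[1] : "true";
            }
            return result;
        }
    }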

    Using the class is a choice each sub routine must take. The interface just defines the following method:

     bool Run(IEnumerable<string> args)

    Yeah, that’s right. Each sub routine just acts like a program of its own. As far as the master program is concerned, it just delegates the raw argument array it receives to each sub routine. How the routine handles the arguments is entirely up to it.

    The old design – DI for all (too much?)

    Previously, the CloneRoutine had two constructors:

       public CloneRoutine()
       : this(Default.Console, Default.Translator, Default.CommandLineArgumentParser, Default.SolutionCloner) { ... } 
    
       public CloneRoutine(IConsole console, ITranslator translator, ICommandLineArgumentParser argumentParser, ISolutionCloner solutionCloner) { ... }

    Since a sub routine is created with reflection, it must provide a default constructor. Here, the default constructor uses default implementations of each interface, while the custom constructor is used in the unit tests and supports full dependency injection. Each dependency is exposed and pluggable.

    So, what is the problem?

    Well, I just feel that since the command line arguments define what the routine should do, letting the class’s behavior be entirely determined by how another component parses those arguments makes the class unreliable.

    If I provide the class with a parser implementation that returns an invalid set of arguments, the class may explode due to that invalid implementation, even if I provide the routine with arguments that it should trigger on (especially considering that the parser’s return value is a do-as-you-please IDictionary).

    Is that not bad?

    The new design – DI where I think it’s needed (enough?)

    Instead of the old design, is this not better:

       public CloneRoutine() :this(Default.Console, Default.Translator, Default.SolutionCloner) { ... }
       public CloneRoutine(IConsole console, ITranslator translator, ISolutionCloner solutionCloner) {
          ...
          this.argumentParser = Default.CommandLineArgumentParser;
          ...
       }

    This way, I depend on the choice of ICommandLineArgumentParser implementation that I have made in the Default class, but if that implementation is incorrect, my unit tests will break. The other three injections (IMO) are the ones that should be exchangeable. The argument parser should not be.
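
    For reference, the Default class mentioned above is essentially a static registry of default implementations, roughly along these lines (the concrete types on the right-hand side are placeholders, not the actual implementation names):

    public static class Default
    {
        // Placeholder implementation types; only the property names appear in this post.
        public static IConsole Console { get { return new DefaultConsole(); } }
        public static ITranslator Translator { get { return new DefaultTranslator(); } }
        public static ICommandLineArgumentParser CommandLineArgumentParser { get { return new CommandLineArgumentParser(); } }
        public static ISolutionCloner SolutionCloner { get { return new DefaultSolutionCloner(); } }
    }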

    Is this good design, or am I doing something terribly bad by embedding a hard dependency, especially since all other component dependencies can be injected? Please share your thoughts on this situation.

     
    • Henrik 12:05 am on February 28, 2012 Permalink | Reply

      I think this looks good! What you can do is either Property Injection (i.e. make argumentParser a public property, which can be set by the test code when needed) or the Extract and Override technique (i.e. make argumentParser a protected virtual property, then make a testable class that inherits from your “production” class (e.g. CloneRoutineStub : CloneRoutine) and overrides that virtual property).

      Or am I getting your question wrong?

    • danielsaidi 7:21 am on February 28, 2012 Permalink | Reply

      Thanks for commenting, Henrik! I guess my question was this: when should you allow behavior to be injected, and when should you not go down that path?

      For the example above, I think injecting the IConsole, ITranslator and ISolutionCloner implementations is fine, since they define responsibility that SHOULD be delegated by the class.

      However, I think that acting on the input arguments received in the Run method should be the responsibility of the class, and should not be injectable.

      That the routine chooses a certain component to parse arguments is absolutely fine (and I kind of have a DI model, since the routine uses the Default.CommandLineArgumentParser), but it should not be exposed.

      If I allow the argument parsing behavior to be injectable, I can make the class stop working in really strange ways, since the parser has to parse the arguments in a very specific way. IMO, the argument parser differs from the other three components.

      So….do you agree? 🙂

    • Henrik 8:18 am on February 28, 2012 Permalink | Reply

      I agree! I think it’s perfectly okay! Context is king!
      Requiring dependencies to be injected is not an end in itself.

      Maybe you want me to disagree, so we get this lovely war feeling? 🙂

      • danielsaidi 8:41 am on February 28, 2012 Permalink | Reply

        I do love the war feeling, but sometimes, getting along is a wonderful thing as well. 🙂

        System design sure is tricky sometimes. I really appreciate having you and other devs to discuss these things with.

    • Daniel Lee 9:42 am on February 28, 2012 Permalink | Reply

      I can’t see why you would ever need to switch out the command line argument parser. And as you say yourself, it feels more like core logic than a dependency.

      So you made the right decision.

      (Although, in such a small codebase as Cloney, I don’t know if this really matters all that much?)

    • danielsaidi 10:01 am on February 28, 2012 Permalink | Reply

      Nice, thanks Daniel 🙂

      I think I’ll roll with this design for now, and replace it whenever I see the need to. I do not like all these hard dependencies to the Default class – it’s like making the classes dependent on the existence of StructureMap.

      However, as you say, it IS core logic. Cloney is not a general library, so these kinds of dependencies may not be as bad as if I’d have the same design in, say, a general lib like .NExtra.

    • Johan Driessen 12:39 pm on February 28, 2012 Permalink | Reply

      If your main concern is that your unit tests will be more fragile, and break if your implementation of the argument parser is incorrect (or just changes), why don’t you just make the argumentParser field protected virtual?

      Then you can just create a “TestableCloneRoutine” in your tests, that inherits from CloneRoutine and replaces the argument parser with a stub, so that your tests don’t become dependent on the actual implementation, while still not making the internals visible to other classes.

      AKA “extract and override”.

      • Johan Driessen 12:41 pm on February 28, 2012 Permalink | Reply

        Actually, you would have to make argumentParser a property (still protected and virtual) and have it return Default.CommandLineArgumentParser in CloneRoutine.
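
        Something like this (all names beyond CloneRoutine and Default.CommandLineArgumentParser are assumptions):

        using System.Collections.Generic;

        public class CloneRoutine : ISubRoutine
        {
            // Protected virtual property instead of a private field, so that
            // test code can replace the parser without exposing it publicly.
            protected virtual ICommandLineArgumentParser ArgumentParser
            {
                get { return Default.CommandLineArgumentParser; }
            }

            public bool Run(IEnumerable<string> args)
            {
                var arguments = ArgumentParser.Parse(args); // hypothetical member name
                // ...act on the parsed arguments...
                return false;
            }
        }

        // Lives in the test project and swaps in a stub ("extract and override").
        public class TestableCloneRoutine : CloneRoutine
        {
            private readonly ICommandLineArgumentParser stub;

            public TestableCloneRoutine(ICommandLineArgumentParser stub)
            {
                this.stub = stub;
            }

            protected override ICommandLineArgumentParser ArgumentParser
            {
                get { return stub; }
            }
        }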

        • danielsaidi 2:39 pm on February 28, 2012 Permalink

          Thanks for your input, Johan. I will try to clarify my main concern regarding the design.

          If we look at the unit tests, I think that the following expression is a good test scenario:

          “If I start Cloney with the argument array [“–help”], the HelpRoutine class should trigger and write to the console”

          In my unit tests, for instance, I can then trigger the Run method with various arguments and see that my IConsole mock receives a call to WriteLine only when I provide the method with valid input.
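
          As a rough sketch (assuming NUnit and NSubstitute, and a hypothetical single-dependency constructor for HelpRoutine, which is not shown in the post):

          using System.Collections.Generic;
          using NSubstitute;
          using NUnit.Framework;

          [TestFixture]
          public class HelpRoutineTests
          {
              [Test]
              public void Run_WithHelpArgument_WritesToConsole()
              {
                  // HelpRoutine's real constructor is not shown in the post;
                  // a single IConsole dependency is assumed here.
                  var console = Substitute.For<IConsole>();
                  var routine = new HelpRoutine(console);

                  var triggered = routine.Run(new[] { "--help" });

                  Assert.IsTrue(triggered);
                  console.Received().WriteLine(Arg.Any<string>());
              }
          }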

          If, on the other hand, the argument parsing behavior is exposed, the HelpRoutine will communicate that it has an IArgumentParser that parses a string[] into an IDictionary. IMO, this is not relevant.

          Furthermore, if I make the parser injectable, my test scenario would rather be expressed like this:

          “If I start Cloney with the argument array [“–help”], the HelpRoutine class should trigger and write to the console if the argument parser it uses parses the array to an IDictionary where the “help” key is set to true.”

          I am not sure which test scenario I prefer. The second one is more honest, since the routine’s behavior IS based on the parser’s behavior…but is that really what I want to test?

          I considered re-adding the IArgumentParser as a constructor parameter, just to make it possible to inject it, but I am not really sure. I see benefits with that, as I do with keeping it completely internal.

          IMO, the fact that the routine uses an ArgumentParser to parse the arguments should not be of any concern to anyone but the class. It’s the resulting behavior that should matter.

          But I have a split feeling about it all.

  • danielsaidi 1:03 pm on February 22, 2012 Permalink | Reply
    Tags: assembly version, boo, nextra, nuget package explorer, phantom

    Use Phantom/Boo to automatically build, test, analyze and publish to NuGet and GitHub 

    When developing my NExtra .NET library hobby project, I used to handle the release process manually. Since a release involved executing unit tests, bundling all files, zipping and uploading the bundle to GitHub, creating new git tags etc., the process was quite time-consuming and error-prone.

    But things did not end there. After adding NExtra to NuGet, every release also involved refreshing and publishing six NuGet packages. Since I used the NuGet Package Explorer, I had to refresh the file and dependency specs for each package. It took time, and the error risk was quite high.

    Since releasing new versions involved so many steps, I used to release NExtra quite seldom.

    Laziness was killing it.

    The solution

    I realized that something had to be done. Unlike at work, where we use TeamCity for all solutions, I found a build server to be a bit overkill. However, maybe I could use a build script to automate the build and release process?

    So with this conclusion, I defined what the script must be able to help me out with:

    • Build and test all projects in the solution
    • Automatically extract the resulting version
    • Create a release folder or zip with all files
    • Create a new release tag and push it to GitHub
    • Create a NuGet package for each project and publish to NuGet

    The only piece of the release process not covered by this list was uploading the release zip to GitHub, but that would be a walk in the park once the build script could generate the zip.

    The biggest step was not developing the build script. In fact, it is quite a simple creation. Instead, the biggest step was coming to the conclusion that I needed one.

    Selecting a build system

    In order to handle my release process, I needed a build system. I decided to go with Phantom, since I use it at work as well. It is a convenient tool (although a new, official version would be nice) that works well, but it left me with an annoying problem, which I will describe further down.

    So, I simply added Phantom 0.3 to a sub folder under the solution root. No config is needed – the build.bat and build.boo (read on) files take care of everything.

    The build.bat file

    build.bat is the file that I use to trigger a build, build a .zip or perform a full publish from the command prompt. I placed it in the solution root, and it looks like this:

    @echo off
    
    :: Change to the directory that this batch file is in
    for /f %%i in ("%0") do set curpath=%%~dpi
    cd /d %curpath%
    
    :: Fetch input parameters
    set target=%1
    set config=%2
    
    :: Set default target and config if needed
    if "%target%"=="" set target=default
    if "%config%"=="" set config=release
    
    :: Execute the boo script with input params - accessible with env("x")
    resources\phantom\phantom.exe -f:build.boo %target% -a:config=%config%

     

    Those of you who read Joel Abrahamsson’s blog probably recognize the first part. It moves to the folder that contains the .bat file, so that everything is launched from there.

    The second section fetches any input parameters. The target param determines which operation to launch (build, deploy, zip or publish), while config determines which build configuration to use (debug, release etc.).

    The third section handles param fallback in case I did not define some of the input parameters. This means that if I only provide a target, config will fall back to “release”. If I define no params at all, target will fall back to “default”.

    Finally, the bat file calls phantom.exe, using the build.boo file. It tells build.boo to launch the provided “target” and also sends “config” as an environment variable (the -a:config part).

    All in all, the build.bat file is really simple. It sets a target and config and uses the values to trigger the build script.

    The build.boo file

    The build.boo build script file is a lot bigger than the .bat file. It is also located in the solution root and looks like this:

    import System.IO
    
    project_name = "NExtra"
    assembly_file = "SharedAssemblyInfo.cs"
    
    build_folder = "_tmpbuild_/"
    build_version = ""
    build_config = env('config')
    
    test_assemblies = (
     "${project_name}.Tests/bin/${build_config}/${project_name}.Tests.dll",
     "${project_name}.Web.Tests/bin/${build_config}/${project_name}.Web.Tests.dll",
     "${project_name}.Mvc.Tests/bin/${build_config}/${project_name}.Mvc.Tests.dll",
     "${project_name}.WPF.Tests/bin/${build_config}/${project_name}.WPF.Tests.dll",
     "${project_name}.WebForms.Tests/bin/${build_config}/${project_name}.WebForms.Tests.dll",
     "${project_name}.WinForms.Tests/bin/${build_config}/${project_name}.WinForms.Tests.dll",
    )
     
    
    target default, (compile, test):
     pass
    
    target zip, (compile, test, copy):
     zip("${build_folder}", "${project_name}.${build_version}.zip")
     rmdir(build_folder)
    
    target deploy, (compile, test, copy):
     with FileList(build_folder):
      .Include("**/**")
      .ForEach def(file):
       file.CopyToDirectory("${project_name}.${build_version}")
     rmdir(build_folder)
    
    target publish, (zip, publish_nuget, publish_github):
     pass
     
    
    target compile:
     msbuild(file: "${project_name}.sln", configuration: build_config, version: "4")
    
     //Probably a really crappy way to retrieve assembly
     //version, but I cannot use System.Reflection since
     //Phantom is old and if I recompile Phantom it does
     //not work. Also, since Phantom is old, it does not
     //find my plugin that can get new assembly versions.
     content = File.ReadAllText("${assembly_file}")
     start_index = content.IndexOf("AssemblyVersion(") + 17
     content = content.Substring(start_index)
     end_index = content.IndexOf("\"")
     build_version = content.Substring(0, end_index)
    
    target test:
     nunit(assemblies: test_assemblies, enableTeamCity: true, toolPath: "resources/phantom/lib/nunit/nunit-console.exe", teamCityArgs: "v4.0 x86 NUnit-2.5.5")
     exec("del TestResult.xml")
    
    target copy:
     rmdir(build_folder)
     mkdir(build_folder)
    
     File.Copy("README.md", "${build_folder}/README.txt", true)
     File.Copy("Release-notes.md", "${build_folder}/Release-notes.txt", true)
    
     with FileList(""):
      .Include("**/bin/${build_config}/*.dll")
      .Include("**/bin/${build_config}/*.pdb")
      .Include("**/bin/${build_config}/*.xml")
      .Exclude("**/bin/${build_config}/*.Tests.*")
      .Exclude("**/bin/${build_config}/nunit.framework.*")
      .Exclude("**/bin/${build_config}/nsubstitute.*")
      .ForEach def(file):
       File.Copy(file.FullName, "${build_folder}/${file.Name}", true)
    
    target publish_nuget:
     File.Copy("README.md", "Resources\\README.txt", true)
     File.Copy("Release-notes.md", "Resources\\Release-notes.txt", true)
    
     exec("nuget" , "pack ${project_name}\\${project_name}.csproj -prop configuration=release")
     exec("nuget" , "pack ${project_name}.web\\${project_name}.web.csproj -prop configuration=release")
     exec("nuget" , "pack ${project_name}.mvc\\${project_name}.mvc.csproj -prop configuration=release")
     exec("nuget" , "pack ${project_name}.wpf\\${project_name}.wpf.csproj -prop configuration=release")
     exec("nuget" , "pack ${project_name}.webforms\\${project_name}.webforms.csproj -prop configuration=release")
     exec("nuget" , "pack ${project_name}.winforms\\${project_name}.winforms.csproj -prop configuration=release")
    
     exec("nuget push ${project_name}.${build_version}.nupkg")
     exec("nuget push ${project_name}.web.${build_version}.nupkg")
     exec("nuget push ${project_name}.mvc.${build_version}.nupkg")
     exec("nuget push ${project_name}.wpf.${build_version}.nupkg")
     exec("nuget push ${project_name}.webforms.${build_version}.nupkg")
     exec("nuget push ${project_name}.winforms.${build_version}.nupkg")
    
     exec("del *.nupkg")
     exec("del Resources\\README.txt")
     exec("del Resources\\Release-notes.txt")
    
    target publish_github:
     exec("git add .")
     exec('git commit . -m "Publishing ${project_name} ' + "${build_version}" + '"')
     exec("git tag ${build_version}")
     exec("git push origin master")
     exec("git push origin ${build_version}")
    

    Topmost, we see a system import, which allows the script to use System.IO for file operations. After that, I define some variables and a list of the test assemblies that I want to run.

    Two variables worth mentioning are build_version, which is set in the compile step, and build_config, which is set by the input parameter defined in build.bat.

    The next section of the file defines all public targets, which are intended to be callable by the user. These map directly to the target parameter in build.bat.

    Of course, all targets further down can be called as well – there is no such thing as public or private targets. Still, calling them directly would probably not be a very good idea.

    If we look at the public targets, we have:

    • default – Executes “compile” and “test”
    • zip – Executes “compile” and “test”, then creates a zip file
    • deploy – Executes “compile” and “test” then creates a folder
    • publish – Executes “zip”, then publishes to NuGet and GitHub

    If we look at the private targets (that do the real work) we have:

    • compile – Compiles the solution and extracts the version number
    • test – Runs the NUnit built-in with the .NExtra test assemblies
    • copy – Copies all relevant files to the temporary build_folder
    • publish_nuget – Packs and publishes each .NExtra project to NuGet
    • publish_github – Commits all changes, creates a tag and pushes it

    It is not that complicated, but it is rather a lot. You could take the bat and boo files and tweak them, and they would probably work for your projects as well.

    However, read on for some hacks that I had to do to get the build process working as smoothly as it does.

    One assembly file to rule them all

    A while ago, I decided to extract common information from each of the .NExtra projects into a shared assembly file.

    The shared assembly file looks like this:

    using System.Reflection;
    
    // General Information about an assembly is controlled through the following
    // set of attributes. Change these attribute values to modify the information
    // associated with an assembly.
    [assembly: AssemblyCompany("Daniel Saidi")]
    [assembly: AssemblyProduct("NExtra")]
    [assembly: AssemblyCopyright("Copyright © Daniel Saidi 2009-2012")]
    [assembly: AssemblyTrademark("")]
    
    // Make it easy to distinguish Debug and Release (i.e. Retail) builds;
    // for example, through the file properties window.
    #if DEBUG
    [assembly: AssemblyConfiguration("Debug")]
    #else
    [assembly: AssemblyConfiguration("Retail")]
    #endif
    
    // Version information for an assembly consists of the following four values:
    //
    // Major Version
    // Minor Version
    // Build Number
    // Revision
    //
    // You can specify all the values or you can default the Build and Revision Numbers
    // by using the '*' as shown below:
    [assembly: AssemblyVersion("2.6.3.4")]
    [assembly: AssemblyFileVersion("2.6.3.4")]

    The file defines shared assembly information like version, to let me specify this once for all projects. I link this file into each project and then remove the information from the project specific assembly info file.

    Since the .NExtra version management is a manual process (the way I want it to be), I manage the .NExtra version here and parse the file during the build process to retrieve the version number. The best way would be to use System.Reflection to analyze the library files, but this does not work, since Phantom uses .NET 3.5.

    I tried re-compiling Phantom to solve this, but then other things started to crash. So… the file parsing approach is ugly, but it works.
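
    For reference, the System.Reflection approach would look roughly like the sketch below. It cannot run under Phantom’s .NET 3.5 runtime, which is why the compile target above parses the file as text instead:

    using System;
    using System.Reflection;

    // Reads the version directly from a compiled assembly - what the build
    // script would do if Phantom could run it.
    class PrintAssemblyVersion
    {
        static void Main(string[] args)
        {
            var assembly = Assembly.LoadFrom(args[0]);
            Console.WriteLine(assembly.GetName().Version);
        }
    }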

    Tweaking NuGet

    After installing NuGet, typing “nuget” in the command prompt will still cause a warning message to appear, since “nuget” is unknown.

    To solve this, either add the NuGet executable path to PATH or be lazy and use the nuget.exe command line bootstrapper, which finds NuGet for you. You can download it from CodePlex or grab it from the .NExtra Resources root folder.

    As for each project’s nuspec file, they were easily created by calling “nuget spec x”, where x is the path to the project file. Once each nuspec file was generated, I added some information that cannot be extracted from the assembly, like project URL, icon etc., to each of the generated spec files.

    Conclusion

    This post became rather long, but I hope that it explains my way of handling the .NExtra release process.

    Using the build script, I can now call build.bat in the following ways:

    • build – build and test the solution
    • build zip – build and test the solution and generate a nextra.<version>.zip file
    • build deploy – build and test the solution and generate a nextra.<version> folder
    • build publish – the same as build zip, but also publishes to NuGet and GitHub.

    The build script has saved me an immense amount of work. It saves me time, increases quality by reducing the amount of manual work and makes releasing new versions of .NExtra a breeze.

    I still have to upload the zip to the GitHub download area, but I find this to be a minimal task compared to all the other steps. Maybe I’ll automate this one day as well, but it will do for now.

    I strongly recommend using a build script in all projects, even small ones where a build server would be overkill. Automating the release process is a ticket to heaven.

    Or very close to that.

     
    • Markus Johansson 10:07 am on July 16, 2013 Permalink | Reply

      Great post! Looks like a really nice release script! Thanks for sharing your experience!

      • danielsaidi 8:33 am on July 20, 2013 Permalink | Reply

        Thanks! 🙂 Once all pieces are in place, publishing new releases is a breeze.
