Category: .NET

Strongly typed app settings with Castle DictionaryAdapter

A while ago (almost four years ago, to be precise) Ben Hall wrote a blog post about using Castle DictionaryAdapter to build a simple, strongly typed wrapper around application settings.

While his approach is extremely simple and gets the job done, there are a few ways it can be improved. Along the way, this blog post can also be treated as an introduction to Castle DictionaryAdapter (which sadly has precisely zero documentation).

While the approach is really simple, we're going to take a detailed look and take it slowly, which is why this post is fairly long. Buckle up.

What is DictionaryAdapter

In a nutshell, DictionaryAdapter is a simple tool that provides strongly typed wrappers around IDictionary<string, object>.

Here’s a simple example:

// the strongly typed interface we want to use to read the values
public interface IPerson
{
	string Name { get; }
	int Age { get; }
}

// we have a dictionary...
var dictionary = new Dictionary<string, object>
{
	{ "Name", "Stefan" },
	{ "Age", 30 }
};

// ...and an adapter factory
var factory = new DictionaryAdapterFactory();

// we build the adapter for the dictionary
var adapter = factory.GetAdapter<IPerson>(dictionary);

Debug.Assert(adapter.Name == "Stefan");

Wrapping app settings

While the project is called DictionaryAdapter, it can in fact do a bit more than just wrap dictionaries (as in IDictionary). It can also be used to wrap XmlNodes or NameValueCollections like ConfigurationManager.AppSettings, and this last scenario is what we're going to concentrate on.

The goals

We’ve got a few goals in mind for that project:

  • strongly typed – we don’t want our settings to be all just strings
  • grouped – settings that go together should come together (think username and password being part of a single object)
  • partitioned – settings that are separate should be separate (think SQL database connection string and Azure service bus url)
  • fail fast – when a required setting is missing we want to know ASAP, not much later, deep in the codebase when we first try to use it
  • simple to use – we want the junior developer who joins the team next week to be able to use it properly.

With that in mind, let’s get to it.

Getting started

DictionaryAdapter lives in Castle.Core (just like DynamicProxy), so to get started we need to get the package from Nuget:

Install-Package Castle.Core

Once this is done, and if you're not using ReSharper, make sure to add using Castle.Components.DictionaryAdapter; to bring the right types into scope.

The config file

Here's the config file we'll be dealing with. Notice it follows a simple naming convention, with prefixes that make it easy to group common settings together. This is based on a real project.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <add key="environment-type" value="Local" />
    <add key="environment-log-minimum-level" value="Debug" />
    <add key="auth0-client-id" value="abc123abc" />
    <add key="auth0-client-secret" value="123abc123abc" />
    <add key="auth0-domain" value="abc123.auth0.com" />
    <add key="auth0-database-connection-name" value="ABC123" />
    <add key="auth0-token-expiration-seconds" value="3600" />
  </appSettings>
</configuration>

As you can see we have two sets of settings here. One for general environment configuration, and another one for integration with Auth0.

Config interfaces

Now let's proceed to creating the interfaces that expose the configuration values to our application. We follow a naming convention where the name of each config value corresponds to a type/property pair on the interface. This makes it trivial to see which config value maps to which property on which interface.

For clarity I’d also recommend putting the interfaces in a designated namespace, like MyApp.Configuration.

public interface IEnvironment
{
    EnvironmentType Type { get; }
    LogEventLevel LogMinimumLevel { get; }
}

public interface IAuth0
{
    string ClientId { get; }
    string ClientSecret { get; }
    string Domain { get; }
    string DatabaseConnectionName { get; }
    int TokenExpirationSeconds { get; }
}

Having the interfaces and the config, we can start writing the code to put the two together.

Bare minimum

Let's rewrite the code from the beginning of the post to use our config file and interfaces. If you're not using ReSharper you'll have to manually add a reference to System.Configuration.dll for the following to work.

var factory = new DictionaryAdapterFactory();

var environment = factory.GetAdapter<IEnvironment>(ConfigurationManager.AppSettings);

Debug.Assert(environment.Type == EnvironmentType.Local);

If we run it now, we’ll get a failure. DictionaryAdapter doesn’t know about our naming convention, so we have to teach it how to map the type/property to a config value.

Implementing DictionaryBehaviorAttribute

There are two ways to customise how DictionaryAdapter operates.

  • Transparent, using an overload to GetAdapter and passing a PropertyDescriptor with customisations there.
  • Declarative, using attributes on the adapter interfaces to specify the desired behaviour. If you’ve used Action Filters in ASP.NET MVC, it will feel familiar.

In this example we're going to use the latter. To begin, we need to create a new attribute, inheriting from DictionaryBehaviorAttribute, and apply it to our two interfaces.

public class AppSettingWrapperAttribute : DictionaryBehaviorAttribute, IDictionaryKeyBuilder
{
    private readonly Regex converter = new Regex("([A-Z])", RegexOptions.Compiled);

    public string GetKey(IDictionaryAdapter dictionaryAdapter, string key, PropertyDescriptor property)
    {
        var name = dictionaryAdapter.Meta.Type.Name.Remove(0, 1) + key;
        var adjustedKey = converter.Replace(name, "-$1").Trim('-');
        return adjustedKey;
    }
}

We create the attribute and implement the IDictionaryKeyBuilder interface, which tells DictionaryAdapter that we want a say in mapping a property to the right key in the dictionary. Using a trivial regular expression we map NameLikeThis to Name-Like-This.
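With the attribute in place, we decorate our two configuration interfaces with it, something along these lines:

[AppSettingWrapper]
public interface IEnvironment
{
    EnvironmentType Type { get; }
    LogEventLevel LogMinimumLevel { get; }
}

[AppSettingWrapper]
public interface IAuth0
{
    // properties as shown earlier
}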

Having done that, if we run the application again, the assertion will pass.

Fail Fast

If we remove the environment-log-minimum-level setting from the config file and run the app again, it will still pass just fine. Since LogEventLevel (which comes from the Serilog logging framework) is an enum, and therefore a value type, reading the property will seem to have worked just fine: the default value for the enum will simply be returned.

This is not the behaviour we want. We want to fail fast; that is, if a value is not present or not valid, we want to know. To do that, we need two modifications to our AppSettingWrapperAttribute.

Eager fetching

First we want the adapter to eagerly fetch values for every property on the interface. That way if something is wrong, we’ll know, even before we try to access the property in our code.

To do that, we implement one more interface – IPropertyDescriptorInitializer. It comes with a single method, which we implement as follows:

public void Initialize(PropertyDescriptor propertyDescriptor, object[] behaviors)
{
    propertyDescriptor.Fetch = true;
}

Validating missing properties

To ensure that each setting has a value, we need to insert some code into the process of reading the values from the dictionary. To do that, there is another interface we need to implement: IDictionaryPropertyGetter. This will allow us to inspect the value that has been read, and throw a helpful exception if there is no value.

public object GetPropertyValue(IDictionaryAdapter dictionaryAdapter, string key, object storedValue, PropertyDescriptor property, bool ifExists)
{
    if (storedValue != null) return storedValue;
    throw new InvalidOperationException(string.Format("App setting \"{0}\" not found!", key.ToLowerInvariant()));
}

If we run the app now, we’ll see that it fails, as we expect, telling us that we forgot to set environment-log-minimum-level in our config file.
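For reference, after both modifications the whole attribute looks roughly like this (just the three pieces shown above, assembled into a single type):

public class AppSettingWrapperAttribute : DictionaryBehaviorAttribute, IDictionaryKeyBuilder, IPropertyDescriptorInitializer, IDictionaryPropertyGetter
{
    private readonly Regex converter = new Regex("([A-Z])", RegexOptions.Compiled);

    // maps e.g. IEnvironment.LogMinimumLevel to the "Environment-Log-Minimum-Level" key
    public string GetKey(IDictionaryAdapter dictionaryAdapter, string key, PropertyDescriptor property)
    {
        var name = dictionaryAdapter.Meta.Type.Name.Remove(0, 1) + key;
        return converter.Replace(name, "-$1").Trim('-');
    }

    // forces every property to be read as soon as the adapter is built
    public void Initialize(PropertyDescriptor propertyDescriptor, object[] behaviors)
    {
        propertyDescriptor.Fetch = true;
    }

    // fails fast when a setting is missing from the config file
    public object GetPropertyValue(IDictionaryAdapter dictionaryAdapter, string key, object storedValue, PropertyDescriptor property, bool ifExists)
    {
        if (storedValue != null) return storedValue;
        throw new InvalidOperationException(string.Format("App setting \"{0}\" not found!", key.ToLowerInvariant()));
    }
}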

Wrapping up

That's it. A single attribute and a minimal amount of bootstrap code is all that's needed to get nice, simple-to-use wrappers around application settings.

An additional bonus is that this is trivial to integrate with your favourite IoC container.

All code is available in this gist.

On strongly typed application settings with Castle DictionaryAdapter

Every non-trivial .NET application ends up using a configuration file for its settings. It's the standard mechanism and it's fairly well adopted across the community. That doesn't mean, however, that it's simple to deal with.

There have been many approaches to dealing with the problem, including some from Microsoft. There are a few open source ones too, including one from my colleague, Andrew.

Yet another (not even original) approach

This brings us to Castle DictionaryAdapter – a part of the Castle project that never got much traction, partially due to poor (read: non-existent) documentation. Somewhat similar to DynamicProxy, DictionaryAdapter concentrates on dynamically generating strongly typed wrappers around dictionaries or XML. The idea of using it for wrapping configuration values is not new. Ben blogged about it a few years ago, and I always liked the simplicity and readability of the approach, and how little effort it required.

I recently started working on a project that has a whole bunch of different sets of config values, which led me to add some small improvements to the approach.

Not just one big Config God Object

One common mistake (regardless of the approach) is that all the unrelated settings end up in a single massive configuration God Object. There is nothing inherent in the DictionaryAdapter approach forcing you to go down that path. You can split your configuration logically across a few configuration interfaces:

public interface SmtpConfiguration
{
    string Name { get; set; }
    int Port { get; set; }
}

public interface SomeOtherConfig
{
    //stuff
}

// and in the config file:
<appSettings>
    <add key="name" value="value" />
    <add key="port" value="25"/>
    <!--  stuff -->
</appSettings>

Taking it a bit further, we might explicitly partition the values using prefixes:

<appSettings>
    <add key="smtp:name" value="value" />
    <add key="smtp:port" value="25"/>
    <add key="stuff:something" value="bla"/>
    <!--  other stuff -->
</appSettings>

DictionaryAdapter knows how to properly resolve prefixed values using KeyPrefixAttribute.

[KeyPrefix("smtp:")]
public interface SmtpConfiguration
{
    string Name { get; set; }
    int Port { get; set; }
}

Fail fast

One other missing bit is shortening the feedback loop. We don't want to learn we have an invalid or missing value at some later point in the application's lifecycle, when we try to read it for the first time. We want to know about it as soon as possible. Preferably, when the application starts up (and in a test).

The first problem we can solve with FetchAttribute, which forces a property to be read when the adapter is constructed, therefore forcing an exception in cases where, for example, your property is of type TimeSpan but your config value is not a valid representation of a time span.

To solve the other problem we need a little bit of code. In fact, to simplify things, we can merge that with what KeyPrefixAttribute and FetchAttribute provide, to have all the functionality we need in a single type.

public class AppSettingsAttribute : KeyPrefixAttribute, IDictionaryPropertyGetter, IPropertyDescriptorInitializer
{
    public AppSettingsAttribute(string keyPrefix) : base(keyPrefix)
    {
    }

    public object GetPropertyValue(IDictionaryAdapter dictionaryAdapter, string key, object storedValue,
        PropertyDescriptor property, bool ifExists)
    {
        if (storedValue == null && IsRequired(ifExists))
        {
            throw new ArgumentException("No valid value for '" + key + "' found");
        }
        return storedValue;
    }

    public void Initialize(PropertyDescriptor propertyDescriptor, object[] behaviors)
    {
        propertyDescriptor.Fetch = true;
    }

    private static bool IsRequired(bool ifExists)
    {
        return ifExists == false;
    }
}

Now our configuration interface changes to:

[AppSettings("smtp:")]
public interface SmtpConfiguration
{
    string Name { get; set; }
    int Port { get; set; }
}

Not only do we get nice, readable, testable, strongly typed access to our settings that can easily be put in an IoC container – we also get an early exception if we put an invalid value in the config, or forget to set it at all.
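Wiring it up looks just like before – a minimal sketch, assuming the smtp:-prefixed keys shown above are present in the config file:

var factory = new DictionaryAdapterFactory();
var smtpConfig = factory.GetAdapter<SmtpConfiguration>(ConfigurationManager.AppSettings);

// the adapter is a plain object, so registering it in your container of choice is trivial,
// e.g. in Windsor: container.Register(Component.For<SmtpConfiguration>().Instance(smtpConfig));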

The full code is on github.

On Castle Windsor and open generic component arity

In the previous post I said there’s one more new feature in Windsor 3.2 related to open generic components.

Take the following class for example:

public class Foo<T, T2> : IFoo<T>
{
}

Notice it has arity of 2 (two generic parameters, T and T2) and the interface it implements has arity of 1.
If we have a generic component for this class what should be supplied for T2 when we want to use it as IFoo<Bar>?

By default, if we just register the component and then try to use it we’ll be greeted with an exception like the following:

Requested type GenericsAndWindsor.IFoo`1[GenericsAndWindsor.Bar] has 1 generic parameter(s), whereas component implementation type GenericsAndWindsor.Foo`2[T,T2] requires 2.
This means that Windsor does not have enough information to properly create that component for you.
You can instruct Windsor which types it should use to close this generic component by supplying an implementation of IGenericImplementationMatchingStrategy.
Please consult the documentation for examples of how to do that.
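For context, the kind of registration and resolution that produces this message might look roughly like this (a sketch; IFoo<T> and Bar are assumed to be declared elsewhere):

container.Register(Component.For(typeof (IFoo<>))
	.ImplementedBy(typeof (Foo<,>))
	.LifestyleTransient());

// Windsor has no way of knowing what to close T2 over, hence the exception
var foo = container.Resolve<IFoo<Bar>>();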

Specifying implementation generic arguments: IGenericImplementationMatchingStrategy

The IGenericImplementationMatchingStrategy interface allows you to plug in your own logic telling Windsor how to close the implementation type for a given requested service. The following trivial implementation simply uses string for the other argument, therefore allowing the component to be successfully constructed.

public class UseStringGenericStrategy : IGenericImplementationMatchingStrategy
{
	public Type[] GetGenericArguments(ComponentModel model, CreationContext context)
	{
		return new[]
		{
			context.RequestedType.GetGenericArguments().Single(),
			typeof (string)
		};
	}
}

The contract is quite simple: given a ComponentModel and a CreationContext (which will tell you what the requested closed type is), you return the right types to use as generic arguments when closing the implementation type of the model.

You hook it up in exactly the same way as IGenericServiceStrategy (and yes, there’s an overload that allows you to specify both).

container.Register(Component.For(typeof (IFoo<>))
	.ImplementedBy(typeof (Foo<,>), new UseStringGenericStrategy())
	.LifestyleTransient());

Now the service will resolve successfully.
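A quick sanity check (again assuming a Bar class):

var foo = container.Resolve<IFoo<Bar>>();
// foo is now an instance of Foo<Bar, string>, string having been supplied by our strategy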

On Nuget, Git and unignoring packages

Git has a helpful ability to ignore certain files. When working on .NET projects, you normally want to ignore your .suo file, bin and obj folders, etc.

If you’re using the excellent GitExtensions it will even provide a default, reasonable .gitignore file for you.

The problem is that while you want to ignore certain patterns in most of your project, generally you want none of those rules to apply to your NuGet packages.

Everything in the packages folder should be committed. Always. No exceptions.

To do that, you need to leverage a child .gitignore file.

Create a .gitignore file inside your packages folder and add the following line to it:

!*

This tells Git to disregard any ignore rules of the parent file. Now all your packages will be committed completely, without missing any of their functionality.
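To illustrate, the two files might end up looking something like this (the root rules are just typical examples):

# .gitignore at the repository root
bin/
obj/
*.suo
*.user

# packages/.gitignore
!*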

To constructor or to property dependency?

We developers are a bit like that comic strip (from the always great xkcd):

Crazy Straws

We can endlessly debate tabs versus spaces (don't even get me started), whether or not to use optional semicolons, and other seemingly irrelevant topics. We can have heated, informed debates with a lot of merit, or (much more often) not very constructive exchanges of opinions.

I mention that to explicitly point out that, while this post might be perceived as one such discussion, its goal is not to be flamewar bait, but to share some experience.

So having said that, let’s cut to the chase.

Dependencies, mechanics and semantics

Writing software in an object oriented language like C#, following established practices (SOLID etc.), we tend to end up with many types, each concentrated on doing one thing and delegating the rest to others.

Let's take an artificial example of a class that is supposed to handle the scenario where a user moved and we need to update their address.

public class UserService: IUserService
{
   // some other members

   public void UserMoved(UserMovedCommand command)
   {
      var user = session.Get<User>(command.UserId);

      logger.Info("Updating address for user {0} from {1} to {2}", user.Id, user.Address, command.Address);

      user.UpdateAddress(command.Address);

      bus.Publish(new UserAddressUpdated(user.Id, user.Address));
   }
}

There are four lines of code in this method, and three different dependencies are in use: session, logger and bus. As the author of this code, you have a few options for supplying those dependencies. The two we're going to concentrate on (by far the most popular) are constructor dependencies and property dependencies.

Traditional school of thought

The traditional approach to that problem among C# developers goes something like this:

Use constructor for required dependencies and properties for optional dependencies.

This option is by far the most popular and I used to follow it myself for a long time. Recently however, at first unconsciously, I moved away from it.

While in theory it’s a neat, arbitrary rule that’s easy to follow, in practice I found it is based on a flawed premise. The premise is this:

By looking at the class's constructor and properties you will be able to easily see the minimal set of required dependencies (those that go into the constructor) and the optional set that can be supplied but that the class doesn't require (those exposed as properties).

Following the rule, we might build our class like this:

public class UserService: IUserService
{
   // some other members

   public UserService(ISession session, IBus bus)
   {
      //the obvious
   }

   public ILogger Logger {get; set;}
}

This assumes that session and bus are required and logger is not (usually it would be initialised to some sort of NullLogger).

In practice I noticed a few things that make the usefulness of this rule questionable:

  • It ignores constructor overloads. If I have another constructor that takes just session, does it mean bus is an optional or a mandatory dependency? Even without overloading, in C# 4 or later, I can default my constructor parameters to null. Does that make them required or optional?
  • It ignores the fact that in reality I very rarely, if ever, have really optional dependencies. Notice the first code sample assumes all of its dependencies, including logger, are not null. If it was truly optional, I should probably protect myself from NullReferenceExceptions, and in the process completely destroy readability of the method, allowing it to grow in size for no real benefit.
  • It ignores the fact that I will almost never construct instances of those classes myself, delegating this task to my IoC container. Most mature IoC containers are able to handle constructor overloads and defaulted constructor parameters, as well as making properties required, rendering the argument about required versus optional moot.

Another, more practical rule, is this:

Use constructor for not-changing dependencies and properties for ones that can change during object’s lifetime.

In other words, session and bus would end up being private readonly fields – the C# compiler would enforce that once we set them in the constructor (optionally validating them first), we are guaranteed to be dealing with the same (correct, not null) objects ever after. On the other hand, Logger is up in the air, since technically at any point in time someone can swap it for a different instance, or set it to null. Therefore, what usually logically follows is that property dependencies should be avoided and everything should go through the constructor.

I used to be a follower of this rule until quite recently, but it does have its flaws as well.

  • It leads to some nasty code in subclasses when the base class has dependencies. One example I saw recently was a WPF view model base class with a dependency on the dispatcher. Because of that, every single view model inheriting from it (and there were many) needed to declare a constructor that takes the dispatcher and passes it up to the base constructor. Now imagine what happens when you find you need an event aggregator in every view model. You will need to alter every single view model class you have to add that, and that's a refactoring ReSharper will not aid you with.
  • It trusts the compiler more than developers. Just because a setter is public doesn't mean developers will write code setting it to random values all over the place. It is all a matter of the conventions used in your particular team. On the team I'm currently working with, the rule that everybody knows and follows is that we do not use properties, even settable ones, to reassign dependencies – we use methods for that. Therefore, based on the assumption that developers can be trusted, when reading the code and seeing a property I know it won't be used to change state, so readability and immediate understanding of the code do not suffer.

In reality, I don't think the current project, or the last few I worked on, even had a requirement to swap service dependencies. Generally you will direct your dependency graphs from more to less volatile objects (that is, objects will depend on objects with an equal or longer lifespan). In the few cases where that's not true, the dependencies would be pulled from a factory and used only within the scope of a single method, therefore not being available to the outside world. The only scenario I can think of where a long-lived dependency would be swapped is some sort of failover or circuit-breaker, but then again, that would likely be dealt with internally, inside the component.

So, looking back at the code I tend to write, all dependencies tend to be mandatory, and all of them tend to not change after the object has been created.

What then?

This robs the aforementioned approaches of their advertised benefits. As such I tend to draw the division along different lines. It has worked quite well for me so far and, in my mind, offers a few benefits. The rule I follow these days can be explained as follows:

Use constructor for application-level dependencies and properties for infrastructure or integration dependencies.

In other words, the dependencies that are part of the essence of what the type is doing go into the constructor, and dependencies that are part of the “background” go via properties.

Using our example from before, with this approach the session would come via the constructor. It is part of the essence of what the class is doing, being used to obtain the very user whose address information we want to update. Logging and publishing the information onto the bus are somewhat less essential to the task, being more of an infrastructure concern, therefore bus and logger would be exposed as properties.
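Applied to the example from the beginning of the post, the class might be shaped roughly like this (a sketch, reusing the same dependencies):

public class UserService : IUserService
{
    // essential, application-level dependency
    private readonly ISession session;

    public UserService(ISession session)
    {
        this.session = session;
    }

    // infrastructure/integration dependencies
    public ILogger Logger { get; set; }
    public IBus Bus { get; set; }
}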

  • This approach draws a clear distinction between what is essential business logic and what is bookkeeping. Properties and fields have different naming conventions, and (especially with ReSharper's 'Color identifiers' option) in my code colouring scheme have significantly different colours. This makes it much easier to find what really is important in the code.
  • Given that infrastructure-level dependencies tend to be pushed up to base classes (like in the view model example with the dispatcher and event aggregator), the inheritors end up being leaner and more readable.

In closing

So there you have it. This approach may be less pure, but it trades purity for readability and reduces the amount of boilerplate code, and that's a tradeoff I am happy to make.

IoC concepts: Service

As part of preparing for the release of Windsor 3.1 I decided to revisit parts of Windsor's documentation and try to make it more approachable to someone completely new to IoC. This and a few following posts are excerpts from that new documentation. As such I would appreciate any feedback, especially around how clearly the concepts in question are explained for someone who has had no prior exposure to them.

As with every technology, Windsor has certain basic concepts that you need to understand in order to use it properly. Fear not – they may have scary, complicated names and abstract definitions, but they are quite simple to grasp.

Service

The first concept that you'll see over and over in the documentation and in Windsor's API is service. The actual definition goes something like this: "a service is an abstract contract describing some cohesive unit of functionality".

Service in Windsor and WCF service
The term service is extremely overloaded and has become even more so in recent years. Service as used in this documentation is a broader term than, for example, a WCF service.

Now in plain language, let’s imagine you enter a coffee shop you’ve never been to. You talk to the barista, order your coffee, pay, wait and enjoy your cup of perfect Cappuccino. Now, let’s look at the interactions you went through:

  • specify the coffee you want
  • pay
  • get the coffee

They are the same for pretty much every coffee shop on the planet. They are the coffee shop service. Does it start making a bit more sense now? The coffee shop has clearly defined, cohesive functionality it exposes – it makes coffee. The contract is pretty abstract and high level. It doesn't concern itself with "implementation details": what sort of coffee machine and beans the coffee shop has, how big it is, what the barista's name is, or the colour of her shirt. You, as a user, don't care about those things; you only care about getting your cappuccino, so all the things that don't directly impact you getting your coffee do not belong to the service.

Hopefully you’re getting a better picture of what it’s all about, and what makes a good service. Now back in .NET land we might define a coffee shop as an interface (since interfaces are by definition abstract and have no implementation details you’ll often find that your services will be defined as interfaces).

public interface ICoffeeShop
{
   Coffee GetCoffee(CoffeeRequest request);
}

The actual details obviously can vary, but it has all the important aspects. The service defined by our ICoffeeShop is high level. It defines all the aspects required to successfully order a coffee, and yet it doesn’t leak any details on who, how or where prepares the coffee.

If coffee is not your thing, you can find examples of good contracts in many areas of your codebase. Take IController in ASP.NET MVC: it defines all the details required by the ASP.NET MVC framework to successfully plug your controller into its processing pipeline, yet gives you all the flexibility you need to implement the controller, whether you're building a social networking site or an e-commerce application.

If that’s all clear and simple now, let’s move to the next important concept (in the next post).

Using ConventionTests

About conventions

I’m a big fan of using conventions when developing applications. I blogged about it in the past (here, here and later here). I also gave a talk about my experience with this approach and how I currently use it at NDC last week (slides are available here, video is here).

 

One problem I faced when trying to build convention validation tests was the lack of a simple API that would let me build the validation rules for my tests. I built a spike of such a library (called Norman) last year. It was meant to be a full-featured, generic convention-validation engine, built on top of Mono.Cecil, that would be able to validate rules inside method bodies (think rules like "no ViewModel can call data access code"). I quickly abandoned it, not being able to come up with a simple and concise API that would at the same time be generic enough to cover every possible scenario.

 

About ConventionTests

Having failed with Norman's approach, on a later project I approached the problem from a completely different angle. This is based on an observation about the conventions I tend to use – in the majority of cases they are based on information about types and the internal models of the frameworks I use. In other words – information that can be easily obtained via plain reflection.

Limiting myself to very few scenarios also allowed me to come up with a focused, simple and concise API. That's how the ConventionTests project was born.

 

Using ConventionTests

ConventionTests is a simple code-only NuGet package that provides a very minimalistic and deliberately limited API, enforcing a certain structure when writing convention tests and integrating with NUnit (more on that below). Installing it will add two .cs files to the project and a few dependencies (NUnit, Windsor and ApprovalTests).


The ConventionTests.NUnit file is where all the relevant code is located, and the __Run file is the file that runs your tests (why that is, later).

The approach is to create a file per convention and name them descriptively, so that you can tell what conventions you have in the project just by looking at the files in your Conventions folder, without having to open them.

 

Each convention test inherits (directly or indirectly) from the IConventionTest interface. There's an abstract implementation of the interface, ConventionTestBase, and a few specialized implementations for common scenarios provided out of the box: a type-based one (ConventionTest) and two for Windsor (WindsorConventionTest, non-generic, and a generic one for diagnostics-based tests).

 

Type-based convention tests

The most common and most generic group of conventions are the ones based on types and type information. Conventions like "every controller's name ends with 'Controller'" or "every method on a WCF service contract must have OperationContractAttribute" are examples of such conventions.

You write them by creating a class inheriting ConventionTest, which forces you to override one method. Here’s a minimal example:

public class Controllers_have_Controller_suffix_in_type_name : ConventionTest
{
    protected override ConventionData SetUp()
    {
        return new ConventionData
            {
                Types = t => t.IsConcrete<IController>(),
                Must = t => t.Name.EndsWith("Controller")
            };
    }
}

Hopefully the code is self-explanatory (it was designed to be). You specify two predicates: one to find the types your convention concerns, and the other specifying what your convention demands of them. If you now run the tests in your project (or in the __Run.cs file) the convention test will fail.


In this case I had a single class implementing IController, named Foo, which obviously doesn't follow the convention – that's why the test failed, giving me a generic failure message which just lists the types that failed the test.

To make it more readable I may want to make the test failure message more descriptive and actionable so that whoever sees a failing test like this will be able to know why their code is wrong and how to fix it.

protected override ConventionData SetUp()
{
    return new ConventionData
        {
            Types = t => t.IsConcrete<IController>(),
            Must = t => t.Name.EndsWith("Controller"),
            FailDescription = "Routing will not work unless controllers are named {something}Controller",
            FailItemDescription = t => t.ToString() + ". Name should be " + t.Name + "Controller perhaps?"
        };
}

The two additional properties will make the failure message considerably more descriptive.


It tells the developer who sees the failing test not only where the problem is, but also why it is a problem, and suggests a solution.

For scenarios where the convention is not all black and white and there are some legitimate exceptions to the rule, there is built-in integration with the ApprovalTests framework (I blogged about ApprovalTests here). I know the convention I used here as an example is pretty black and white, but assuming it wasn't, changing the code to:

protected override ConventionData SetUp()
{
    return new ConventionData
        {
            Types = t => t.IsConcrete<IController>(),
            Must = t => t.Name.EndsWith("Controller"),
            FailDescription = "Routing will not work unless controllers are named {something}Controller",
            FailItemDescription = t => t.ToString() + ". Name should be " + t.Name + "Controller perhaps?"
        }.WithApprovedExceptions("Let's say Foo has special routing or something");
}

will turn it into an approval test, where the approved file contains the output of the test.


Windsor-based convention tests

Another common set of convention tests I tend to write are tests regarding my IoC container (I tend to use Castle Windsor, so that's the one supported out of the box). The structure of the tests and the API are similar, with the difference being that instead of types we're dealing with Windsor's component handlers.

public class List_classes_registered_in_Windsor : WindsorConventionTest
{
    protected override WindsorConventionData SetUp()
    {
        return new WindsorConventionData(new WindsorContainer()
                                                .Install(FromAssembly.Containing<AuditedAction>()))
            {
                FailDescription = "All Windsor components",
                FailItemDescription = h => BuildDetailedHandlerDescription(h)+" | "+
                    h.ComponentModel.GetLifestyleDescription(),
            }.WithApprovedExceptions("We just list all of them.");

    }
}

In this case we're dealing with slightly different types, but we follow the same pattern. We initialize the container and then specify predicates (if necessary) on its handlers. If we don't, all handlers from the container will be considered. This is useful for creating an (approval) test like this one, which just lists all the components in the container, along with the services they expose and their lifestyles. With that you get quick insight into what's in your container just by looking at the approved file. Moreover, every change to the list of components will cause the test to fail, drawing your attention to it, so you can hopefully validate that the changes are indeed what you were expecting.

Windsor diagnostics-based convention tests

Windsor has some built-in diagnostics, and there's a base class for tests that take advantage of them. For example, the following test will list all potentially misconfigured components.

public class List_misconfigured_components_in_Windsor : 
    WindsorConventionTest<IPotentiallyMisconfiguredComponentsDiagnostic>
{
    protected override WindsorConventionData<IHandler> SetUp()
    {
        return
            new WindsorConventionData<IHandler>(
                new WindsorContainer().Install(FromAssembly.Containing<AuditedAction>()))
                {
                    FailDescription = "The following components come up as potentially unresolvable",
                    FailItemDescription = MisconfiguredComponentDescription
                }.WithApprovedExceptions("it's *potentially* for a reason");
    }
}

In this case, instead of inspecting all handlers, we go through the handlers returned by the diagnostic (there's another base class, with two generic parameters, for diagnostics that do not deal with handlers). We make it an approval test so we can have an approved list of false positives.

 

Those are the basic scenarios provided out of the box; however, you can extend it to support more custom scenarios. For example, here's a case that happened on my current project just this week. We're using NHibernate and FluentNHibernate (by convention), and the default cascading on collections is set to "cascade all delete orphans". This is not always the right choice, and for those cases we use mapping overrides.

 

Custom convention tests

We wanted to create a convention test (with approval) that lists all our collections where we do cascade deletes, so that when we add a new collection the test fails, reminding us of the issue and forcing us to pay attention to how we structure relationships in the application. To do this we could create a base NHibernateConventionTest and NHibernateConventionData to create a similar structure, or just build a simple one-class convention like this:

public class List_collection_that_cascade_deletes:ConventionTestBase
{
    public override void Execute()
    {
        // NH Bootstrapper is our custom class to set up NH
        var bootstrapper = new NHibernateBootstrapper();
        var configuration = bootstrapper.BuildConfiguration();

        var message = new StringBuilder("Collections with cascade delete orphan");
        foreach (var @class in configuration.ClassMappings)
        {
            foreach (var property in @class.PropertyIterator)
            {
                if(property.CascadeStyle.HasOrphanDelete)
                {
                    message.AppendLine(@class.NodeName + "." + property.Name);
                }
            }
        }
        Approve(message.ToString());
    }
}

This is obviously much more imperative and much less readable than the other tests, so I suggest that in most cases you make the effort to keep your tests as terse, readable and declarative as possible, so that they truly read like good executable documentation.

 

And if you're not sure how something works, remember the whole thing is a code-only NuGet package, so you can just read the code.

 

Get it here.

Windsor’s Nuget package and code – your input needed

As we're nearing the release of the next version of Castle Windsor, I'm also thinking about how we could leverage NuGet better.

One idea I had was to include elements I (and most people) end up writing in every project using Windsor – a Bootstrapper class which is responsible for setting up the container, a folder for installers, and an installer in it.

This is what a solution would look like after creating an empty project and installing Windsor via Nuget:

Visual Studio after installing Windsor via NuGet, including the bootstrapping code

The goal of this would be to put the bare-bones useful minimum of code in your solution, with the intention that you'd customize it as needed. Most people end up adding at least the TypedFactoryFacility and the CollectionResolver, so the bootstrapper would look like this:

using Castle.Facilities.TypedFactory;
using Castle.MicroKernel.Resolvers.SpecializedResolvers;
using Castle.Windsor;
using Castle.Windsor.Installer;

namespace WpfApplication1
{
    public class WindsorBootstrapper
    {
        public IWindsorContainer BuildContainer()
        {
            IWindsorContainer container = Instantiate();
            Configure(container);
            InstallComponents(container);
            return container;
        }

        private IWindsorContainer Instantiate()
        {
            return new WindsorContainer();
        }

        private void Configure(IWindsorContainer container)
        {
            container.AddFacility<TypedFactoryFacility>();

            var resolver = new CollectionResolver(container.Kernel, allowEmptyCollections: false);
            container.Kernel.Resolver.AddSubResolver(resolver);
        }

        private void InstallComponents(IWindsorContainer container)
        {
            container.Install(FromAssembly.This());
        }
    }
}

The installer is a tougher nut to crack. Because installers are so specific to your application, it's impossible to create a generic one that will be useful. I'm thinking about either shooting for the widest scenario and creating an empty one with a generic name like RepositoriesInstaller, or going one step further and adding some realistic code, but commented out, like this:

using Castle.MicroKernel.Registration;
using Castle.MicroKernel.SubSystems.Configuration;
using Castle.Windsor;

namespace WpfApplication1.Installers
{
    public class RepositoriesInstaller : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            // Customize to the needs of your application

            //container.Register(Classes.FromThisAssembly()
            //                       .BasedOn(typeof (IRepository<>))
            //                       .WithService.Base()
            //                       .LifestyleTransient());
        }
    }
}

My hope is this would give advanced users a jump-start, provide a gentle introduction for people new to Windsor, and push them towards established practices when using Windsor (closer to the pit of success).

This is just an idea, and I would like to hear your feedback. What do you think about it? Do you think it should be included at all in the Castle.Windsor package, or should we create a separate package called something like Castle.Windsor-wc (for “with code”) for this?

Testing framework is not just for writing… tests

Quick question – off the top of your head, without running the code, what is the result of:

var foo = -00.053200000m; 
var result = foo.ToString("##.##");

Or a different one:

var foo = "foo"; 
var bar = "bar"; 
var foobar = "foo" + "bar"; 
var concatenated = new StringBuilder(foo).Append(bar).ToString(); 

var result1 = AreEqual(foobar, concatenated); 
var result2 = Equals(foobar, concatenated);


public static bool AreEqual(object one, object two) 
{ 
    return one == two; 
}

How about this one from NHibernate?

var parent = session.Get<Parent>(1); 

DoSomething(parent.Child.Id); 

var result = NHibernateUtil.IsInitialized(parent.Child);

The point being?

Well, if you can answer all of the above without running the code, we're hiring. I can't, and I suspect most people can't either. That's fine. The question is – what are you going to do about it? What do you do when some third-party library, or part of the standard library, exhibits unexpected behaviour? How do you go about learning whether what you think should happen is really what does happen?

Scratchpad

I've seen people open up Visual Studio, create ConsoleApplication38, write some code using the API in question with plenty of Console.WriteLine along the way (curse whoever decided Client Profile should be the default for console applications – switch to the full .NET profile), compile, run and discard the code. And then repeat the process with ConsoleApplication39 next time.

 

The solution I'm using feels a bit more lightweight, and has worked well for me over the years. It is very simple – I leverage my existing test framework and test runner, and create an empty test fixture called Scratchpad.


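A minimal sketch of such a fixture (NUnit here, but any framework will do), using the first question from the top of the post as an example:

using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class Scratchpad
{
    [Test]
    public void Decimal_formatting()
    {
        var foo = -00.053200000m;
        var result = foo.ToString("##.##");

        // inspect the output (or turn it into an assertion once you know the answer)
        Debug.WriteLine(result);
    }
}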

This class gets committed to the VCS repository. That way every member of the team gets their own scratchpad to play with and to validate their theories, ideas and assumptions. However, as the name implies, this is all one-off, throwaway code. After all, you don't really need to test the BCL – one would hope Microsoft already did a good job of that.

If you’re using git, you can easily tell it not to track changes to the file, by running the following command (after you commit the file):

git update-index --assume-unchanged Scratchpad.cs


With this simple setup you will have a quick place to validate your assumptions (and answer questions about API behaviour) with little friction.


So there you have it, a new, useful technique in your toolbelt.

Tests and issue trackers – how to manage the integration

In highly disciplined teams when a bug is discovered the following happens:

  • a test (or more likely a set of tests) is written that fails, exposing the bug
  • a ticket in issue tracking system is created
  • a developer fixes the bug, runs all the tests including the new ones and if everything is green, the ticket is resolved

My question is: what if some time later (say, several weeks after) a developer wants to find which issue relates to the tests he's looking at, or which tests document the bug he's looking at? How do you manage the link between the tests and the issues in your tracker?

Solution one – file per ticket

A quite common solution to this problem is to have a special folder in your test project and, for each ticket, a file named after the ticket's ID (for example CRM_4.cs).

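For illustration, the layout might look something like this (names are hypothetical):

Tests/
    ShippingModuleTests.cs
    IssueRegressions/
        CRM_4.cs
        CRM_11.cs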

That works and is quite easy to discover. However, the downside of this solution is that it introduces fragmentation. If a bug was found in the shipping module, does it really make sense to keep some tests for the shipping module in the ShippingModuleTests class and some others in a CRM_4 class, merely because the latter ones were discovered by a tester and not the original developer?

Solution two – ticket id in method name

To alleviate that, another solution is often used. The tests end up in ShippingModuleTests, but the ID of the issue is encoded in the name of the test method, like this:

[Test]
public void CRM_4_Gold_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

[Test]
public void CRM_4_Silver_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

That's a step in the right direction. It makes the link explicit and you can quickly navigate the relation in either direction. However, I don't like it very much, because most of the time I couldn't care less about the fact that a test documents a long-fixed bug, yet I am constantly reminded about it every time I run my tests.


Solution three – description

The solution I have found myself using most recently is to leverage the description that most testing frameworks let you associate with your tests.

[Test(Description = "CRM_4")]
public void Gold_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

[Test(Description = "CRM_4")]
public void Silver_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

This still makes the association explicit and searchable, but doesn't constantly remind me of it when I don't care.


What about you? What approach do you employ to manage that?