Category: Design

New Castle Windsor feature – debugger diagnostics

What you’re seeing here is a feature in the very early stages of development. It’s very likely to change in the near future, hopefully based on your feedback, which I’m looking forward to.

It is often the case with IoC containers, especially when registering components by convention, that you end up with misconfigured components, or with an exception saying that your component can’t be resolved. To aid working in these situations, StructureMap provides methods like WhatDoIHave and AssertConfigurationIsValid. It’s the only container I know of that provides this kind of diagnostics.

Windsor has an ability similar to StructureMap’s WhatDoIHave method, and it’s very powerful as well: Windsor internally tracks the entire graph of objects, and since it lets you access it, you can for example visualize it.
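Windsor doesn’t ship a method literally called WhatDoIHave, but a rough equivalent is easy to sketch against the kernel. A minimal sketch, assuming the Windsor 2.5 API:

var container = new WindsorContainer().Install(FromAssembly.This());
// every component is assignable to object, so this returns all handlers
foreach (var handler in container.Kernel.GetAssignableHandlers(typeof(object)))
{
    var component = handler.ComponentModel;
    Console.WriteLine("{0} - {1}/{2} ({3})",
        component.Name, component.Service, component.Implementation, component.LifestyleType);
}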

AssertConfigurationIsValid is a tougher nut to crack. The thing is – you can’t really say whether the configuration is valid in any non-trivial situation. You can’t say when it’s invalid either. The reason is that there are multiple dynamic moving parts that you can’t really statically analyse down to a yes/no answer.

What Windsor does

To help with these situations when debugging, in Windsor 2.5 I added debugging proxies to the most important classes in Windsor, so that when looking at them in the debugger you will be presented with a much more helpful view of what’s in the container and what’s potentially not right.

[image: debugger_visualizer]

You will see a very minimalistic view of what’s going on in the container:

  • All components – will give you a list of all existing components in the container, kind of like WhatDoIHave
  • Facilities – will give you the list of the installed facilities
  • Potentially misconfigured components – this is a list of components that don’t look too good to Windsor and may have some dependencies missing. It is also a very simplified view. At first glance it will only tell you the key of the component (in this case the fully qualified name), followed by its service/implementation. In this case both service and implementation are the same, so it won’t repeat the information.
    When you drill down, you will also see the lifestyle of the component and, most importantly, its status, which will tell you why Windsor thinks there may be something wrong with the component.

[image: debugger_visualizer2]

This pretty nicely tells you what might be wrong. If you are sure you’re providing these values dynamically, be it from the call site or via dynamic parameters, you can move on; otherwise it can remind you of the missing dependencies.

I want your feedback

How do you like this feature? How would you change it? What other information do you think would be useful in this view? Leave a comment, or go to the uservoice site to share your ideas.

Thanks

How I use Inversion of Control containers – pulling from the container

As I expected, my previous post prompted a few questions regarding the Three Container Calls pattern I outlined. The major one is how to handle the following scenario:

[image: Container_pulling]

We create our container and install all the components in it. Moments later we resolve the root component; the container goes and creates its entire tree of dependencies for us, does all the configuration and other bookkeeping, injects all the dependencies and hands us back a fresh, ready-to-use object graph that we then use to start off the application.

More often than not, this will not be enough though. At some point later in the lifecycle of the application, a user comes in and wants something from it. Let’s say the user clicks a button saying “Open a file”, and at this point an “OpenFile” view should be loaded along with all of its dependencies and presented to the user. But wait – it gets worse – because now if the user selects a .txt file, a “DisplayText” view should be loaded, but when a .png file gets selected, we should load a “DisplayImage” view instead.

Generalizing – how do we handle situations where we need to request some dependencies unavailable at registration or root resolution time, potentially passing some parameters to the component being resolved, or some contextual information affecting which component will be resolved (or both)?

Factory alive and kicking

The way to tackle this is to use two very old design patterns (from the original GoF patterns book) called Abstract Factory and Factory Method.

Nitpickers’ corner – as I know I will be called out by some purists that what I’m describing here is not purely this or that pattern according to this or that definition – I don’t care, so don’t bother. What’s important is the concept.

It gives us exactly what we need – it provides us with components on demand, allows us to pass some inputs in, and at the same time encapsulates and abstracts how the object is constructed. The encapsulation part means that I don’t need to care at all about how and where the components get created – it’s completely offloaded to the container. The abstraction part means that I can still adhere to the Matrix Principle – the spoon container does not exist.

How does this work?

Using Castle Windsor, this is very simple – all you have to do is declare the factory interface, and Windsor itself will provide the implementation. This is important: your interface lives in your domain assembly and knows nothing about Windsor. The implementation is a detail, which depends on the abstraction you provided, not the other way around. This also means that most of the time all the work you have to do is simply declaring the interface, leaving all the heavy lifting to Windsor, which has some pretty smart defaults it will use to figure this stuff out by itself. For non-standard cases, like deciding which component to load based on the extension of the file provided, you can easily extend the way the factory works by using a custom selector. With a well-thought-out naming convention, customizing Windsor to support this scenario is literally one line of meaningful code.
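For illustration, here’s a minimal sketch of what this might look like with Windsor’s typed factory facility (IView, IViewFactory and the view names are made up for this example):

// the abstraction - lives in your domain assembly, knows nothing about Windsor
public interface IViewFactory
{
    IView GetView(string fileExtension);
    void Release(IView view);
}

// bootstrapping code - Windsor provides the implementation of the interface
container.AddFacility<TypedFactoryFacility>();
container.Register(Component.For<IViewFactory>().AsFactory());
// a custom ITypedFactoryComponentSelector is where you would map ".txt" to
// the DisplayText view and ".png" to the DisplayImage view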

In addition to that (although I prefer the interface-driven approach), Windsor (upcoming version 2.5) and some other containers, like Autofac (which first implemented the concept) and StructureMap (I don’t really know about Ninject – perhaps someone can clarify this in the comments), support delegate-based factories. Here, the mere fact of having a dependency on Func&lt;IFoo&gt; is enough for the container to figure out that it should inject for that dependency a delegate that, when invoked, will call back to the container to resolve the IFoo service. The container will even implement the delegate for you, and it also obviously means that the code you have has no idea (nor does it care) that the actual dependency was created by a container.
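For illustration, a minimal sketch of what such a dependency looks like from the consuming class’s point of view (IFoo and Consumer are placeholder names):

public class Consumer
{
    private readonly Func<IFoo> createFoo;

    public Consumer(Func<IFoo> createFoo)
    {
        // the container injects a delegate that calls back into it
        this.createFoo = createFoo;
    }

    public void DoSomething()
    {
        var foo = createFoo(); // resolved on demand, not at construction time
        // ...use foo
    }
}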

 

This is the beauty of this approach – it has all the pros and none of the cons of Service Locator, and the shift towards supporting it was one of the major changes in the .NET IoC backyard over the last couple of months. So if you are still using Service Locator, you’re doing it wrong.

How I use Inversion of Control containers

Quite regularly I get asked by people how they should use an IoC container in their application. I don’t think I can answer this question once and for all, because every application is different, every application has different needs and every application has different opportunities for leveraging an Inversion of Control container.

However, there are some general rules and patterns that I use, and I thought I would blog about those instead.

While I use the concrete example of Castle Windsor, the discussion here is universal. It applies to all containers.

Inversion of Control means container does not exist

[image: Container_does_not_exist]

The basic difference between an Inversion of Control framework and any other kind of framework is that the control gets inverted. Your application does not call into the framework. Instead, the framework is aware of your application, and it goes and does its stuff with your application’s objects.

Since Inversion of Control containers are Inversion of Control frameworks, that paradigm applies to them as well. They control objects in your application. They instantiate them, manage their lifecycle, invoke methods on them, modify them, configure them, decorate them – do all sorts of stuff with them. The main point here is: the application is completely unaware of that. I feel tempted to put another Matrix analogy here, but hopefully you get the point without it.

The most visible manifestation of this fact, which clearly illustrates the lack of any knowledge about the container, is that I tend not to reference the container in the application at all. The only place where the reference to the container does appear is the root project, which only runs the application. It contains no application logic and serves merely as the application entry point and container bootstrapper.

Not referencing the container at all serves a few purposes. Most importantly, it helps to enforce good OOP design and blocks the temptation to take shortcuts and use the container as a Service Locator.

Three calls pattern of usage

Now, probably the most interesting part is this simple bootstrapper. How do I interact with the container?

I tend to use a pattern of three calls to the container. Yes, you heard it right – I only call the container in three places in the entire application* (conditions apply, but I’ll discuss this below in just a moment).

My entire interaction with the container usually looks like this:

var container = BootstrapContainer();

var finder = container.Resolve<IDuplicateFinder>();
var processor = container.Resolve<IArgumentsParser>();
Execute( args, processor, finder );

container.Dispose();

The three steps are:

  1. Bootstrap the container
  2. Resolve root components
  3. Dispose the container

Let’s go over them in turn:

One Install to rule them all…

Bootstrapping is incredibly simple:

private IWindsorContainer BootstrapContainer()
{
    return new WindsorContainer()
                  .Install( FromAssembly.This() );
}

The most important rule here (which matters in Windsor, but is good to follow with other containers that enable this) is to call Install just once, and to register and configure all the components during this single call. It’s also important to use the Install method and Installers to encapsulate and partition registration and configuration logic. Most other containers have that capability as well – Autofac and Ninject call them modules, StructureMap calls them registries.

As you can see on the screenshot above, I usually have a dedicated folder called Installers in my bootstrap assembly where I keep all my installers. I tend to partition the installers so that each one of them installs a cohesive and small set of services. So I might have ViewModelsInstaller, ControllersInstaller, BackgroundServicesInstaller, LoggingInstaller etc. Each installer is simple, and most of them look similar to this:

public class ArgumentInterpretersInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Kernel.Resolver.AddSubResolver(new ArrayResolver(container.Kernel));
        container.Register(
            Component.For<IArgumentsParser>()
                .ImplementedBy<ArgumentsParser>().LifeStyle.Transient,
            Component.For<IParseHelper>().ImplementedBy<ParseHelper>(),
            AllTypes.FromAssemblyContaining<IArgumentInterpreter>()
                 .BasedOn<IArgumentInterpreter>()
                 .WithService.Base());
    }
}

This partitioning helps keep things tidy, and by leveraging Installers you end up writing less code, as Windsor will now autodiscover them and register them in just a single call to FromAssembly.This(). Another noteworthy fact is that you leverage the Inversion of Control principle to configure the container itself. Instead of passing the container around, which should always raise a red flag, you’re telling Windsor – configure yourself. Nice and tidy.

Another important thing to notice is that I tend to leverage convention-based registration, rather than registering all the components one by one. This greatly cuts down the size of your registration code. It takes the burden of registering each newly added component manually off of your shoulders. It also enforces consistency in your code, because if you’re not consistent, your components won’t get registered.

* yes – I do interact with the container in the installers, so I clearly break the three calls rule, right? No – I interact with the container in objects that extend or modify the container itself – in this case it’s obviously not a bad thing.

…and in one Resolve bind them

Similar to the unit testing principle of one logical assert per test, I follow the rule of allowing an explicit call to Resolve in just one place in the entire application. Usually this will be just one call that pulls the root component (the Controller in an MVC application, the Shell in a WPF application, or whatever the root object in your app is). In the example above I have two root objects, so I have two calls to Resolve. That’s usually OK. However, if you have more than three, you might want to take a closer look at the reasons for that, as it’s quite unlikely you really need them.

The important thing is to have the Resolve calls in just this one single place and nowhere else in your application. Why is that important? To fully leverage the container’s potential, instead of telling it at every step what it should do. Let it spread its wings.

Clean up

It is important to let the container clean up after itself when it’s done doing its job. In this case I can say more than that this is something I do – you should always dispose of your container at the end of your application. Always, no exceptions. This will let the container shut down gracefully, decommission all the components, give them a chance to clean up after themselves and free all the resources they may occupy.

 

What about you? How do you use your container?

Castle Windsor and child containers: Revolutions

Continuing the topic from the previous posts.

What would happen?

The current behavior of Windsor is somewhat flawed. What it will do is resolve foo and provide it with bar. The flaw of this behavior is that now, when we resolve foo via any of the three containers, we’ll get the same instance (since it’s a singleton). This introduces two issues:

  • the component from childA (bar) will now be available via childB and parent, while logically it should not be.
  • we have temporal coupling in our code. Depending on the order of the containers we use to resolve foo, we’ll get different results.

So what should happen?

OK, so now that we’ve agreed that the current behavior is indeed flawed (we did, didn’t we?), what are our options for fixing it? We basically have two (both of which were mentioned in the comments to the previous post).

It all boils down to scoping. If we have a per-web-request object, should a single instance be shared among multiple containers, or should it be per web request and per container? If we have a singleton, should it be a single instance per container, per container hierarchy, or per process?

Let’s consider slightly more complicated picture.

[image: containers_2]

Now we have two components for bar – one in childA and one in parent. We also have one more component, baz, which is registered in childB.

[image: classes_2]

Baz has a dependency on foo; foo still has a dependency on bar. All of these dependencies are optional, and all components are singletons.

There can only be one

We want to scope instances strictly per their container, so that foo is scoped in parent (thus visible from its child containers as well) and bar is scoped per childA. This appears to be the simplest setup, and the most in line with the definition of a singleton, but then we run into the problems outlined above.

We could then add another constraint – dependencies can only point up the container hierarchy. We would then get foo with its dependency on bar pulled from the parent container, consistently, regardless of the container we resolved it from. Moreover, we could resolve baz from childB and its dependency would be properly populated, since it comes from a container higher in the hierarchy, so we’re safe to pull that dependency.

On the other hand, this robs us of one of the nice (and often used) side effects of child containers, that is contextual overriding of components from containers higher up the hierarchy.

If we have bar in childA, we’d expect foo pulled via childA to have that bar, not parent’s bar.

Or perhaps not?

We can approach the problem from another angle altogether. We can narrow down the scope of a component instance to the scope of its outermost dependency. What do I mean by that?

When resolving foo from childA, the outermost dependency of foo would be the bar living in childA. Therefore the instance of foo, although registered in parent, would get childA’s bar and be scoped in childA. Consequently, when we requested foo again, but this time from childB, we’d get another instance of foo, with its dependency on bar pulled from parent. This could potentially lead to problems as well: if you depend on the fact that a single instance of your component lives in the application at any given point in time, you may end up with problems.

So what do you think? Which approach would you prefer, or is there something I’m missing here?

To those who thought I was seriously going to remove support for nested containers – rest assured, this is not going to happen. I still think removing it could be a viable option and I wanted to throw the idea into the air, but it has become such a core feature of every IoC container that Windsor would look crippled if it didn’t have it, regardless of whether or not it’s actually needed.

Castle Windsor and Child containers reloaded

This is a follow up to my previous post. I deliberately didn’t discuss the issues that arise when using container hierarchies to get some feedback on usage first.

So what’s the problem?

Consider a trivial scenario:

[image: two]

We have two components, where foo depends on bar. The dependency is optional, which means that foo can be constructed even when bar is not available.

Then we have the following container hierarchy:

[image: one]

We have a parent container where foo is registered, and two child containers. In one of them we register bar. Both foo and bar are singletons (for the sake of simplicity – the example would work for any non-transient lifestyle).
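In code, the setup could be sketched like this (assuming Windsor’s child container API; Foo and Bar are the classes from the diagrams):

var parent = new WindsorContainer();
var childA = new WindsorContainer();
var childB = new WindsorContainer();
parent.AddChildContainer(childA);
parent.AddChildContainer(childB);

parent.Register(Component.For<Foo>());  // foo lives in parent
childA.Register(Component.For<Bar>());  // bar lives only in childA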

Now we do this

var fooA = childA.Resolve<Foo>();

var fooB = childB.Resolve<Foo>();

What should happen?

Build your conventions

Continuing the theme of conventions in code: I talked about validating the conventions, but I didn’t touch upon one more basic issue. What do you base the conventions on? The short answer is – anything you like (as long as it’s black). The long answer is, well – longer.

The strongly typed nature of .NET gives us a rich set of information we can build conventions on. Let’s go over some of it. I will concentrate mostly on a single scenario as an example – IoC container registration – but the discussion can easily be extrapolated to cover any other scenario.

Location

Assemblies are the main, most coarse-grained building blocks of an application. While they carry quite a bit of information we could use (name, version, strong name, culture), it rarely makes sense to use all of it. The most common usage is pointing directly to our assembly of interest, either by name

GetAssemblyNamed("Acme.Crm.Services");

or in a more strongly typed manner, for example via a type contained in that assembly, in cases where we don’t mind having a strong reference to that assembly in our composition root.

GetAssemblyContaining<Acme.Crm.EmailSenderService>();

Most of the time that is enough (at least at this level, in this situation).

Directly pointing at a certain assembly is generally what we want, and this is the most straightforward and safe way to do it. In some cases (for example extensibility scenarios) you may want to leave the door open for consuming more than one assembly, often unknown at compilation time. In this case it may make sense to use as much information as possible to filter out unwanted assemblies as early as possible. This obviously depends on the specifics of the scenario at hand – for example, who will be authoring the extensions? If only you (meaning your team/company), you can use the information in the assembly name and additionally verify that the assembly is signed with the appropriate key.

GetAssembliesWithNameStartingWith("Acme.Crm.Extensions.")
   .GetAssembliesSignedWithTheSameKeyAs("Acme.Crm.Services");

In cases where third-party vendors will provide extensions, you may just require that the assembly conforms to a certain naming pattern (the name contains “.Crm.Extensions.”). This not only gives you a quick way of filtering out assemblies you’re not interested in – it also helps you keep your project cleaner. With a convention like this, just a quick glance at the directory with your project’s assemblies will tell you how many extension assemblies there are.

This obviously is rarely enough – finding the right assemblies is often just the first step, and there’s a wealth of information elsewhere you can use. Modules are seldom useful, as most of the time you have just one per assembly, so I’m mentioning them only for completeness. Within (and between) assemblies, namespaces are often used to partition types. They have two important characteristics that can be useful when defining conventions – their name and their hierarchy.

You rarely use namespaces on their own. Most often they are one of a few boundaries you set to narrow down the set of types you’re interested in. For example you may find your repositories like this:

GetAssemblyContaining<CustomerRepository>()
   .GetTypesInNamespaceOf<CustomerRepository>()
   .SomeOtherConstraints()...

Broadly speaking, you can find namespaces using similar techniques as with assemblies – strongly typed, as demonstrated above, or via a string.

GetTypesInNamespace("Acme.Crm.Repositories");

While the former is more type-safe and refactoring-friendly, the latter is more explicit and reads better – you get the information you want directly, without having to go through an additional layer of indirection.

It is often valuable to take advantage of the namespace hierarchy, using an additional level in the hierarchy to carry extra information. One such case could be to divide services in the “Acme.Crm.Services” namespace into sub-namespaces based on their lifestyle, such that the singleton CacheProviderService lives in the “Acme.Crm.Services.Global” namespace, and the per-web-request UnitOfWorkService lives in “Acme.Crm.Services.PerRequest”.
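A sketch of what such a convention could buy us at registration time, assuming the Windsor 2.5 registration API (the type and namespace names come from the example above):

container.Register(
    AllTypes.FromAssemblyContaining<CacheProviderService>()
        .Where(t => t.Namespace == "Acme.Crm.Services.Global")
        .WithService.FirstInterface(),   // singletons - Windsor's default lifestyle
    AllTypes.FromAssemblyContaining<CacheProviderService>()
        .Where(t => t.Namespace == "Acme.Crm.Services.PerRequest")
        .WithService.FirstInterface()
        .Configure(c => c.LifeStyle.PerWebRequest));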

Type

Types themselves carry a lot of information we can use for conventions: they have meaningful names (hopefully), inherit other types, implement interfaces, have custom attributes, generic constraints etc. There are two popular usages for type names when dealing with conventions.

We often put meaningful suffixes on – or, more broadly speaking, name our types in a certain, meaningful way. For example, a type implementing IRepository&lt;Customer&gt; would be named CustomerRepository. Or we may require all our services to have a “Service” suffix. While the former helps us keep things clean and consistent, the latter (by some regarded as a certain form of the dreaded Hungarian notation) often serves as a safety-net layer for validation purposes, to help you catch cases where you put a type in an inappropriate namespace by mistake.

The other common usage is using certain prefixes to build decorator chains. You may have the following types: LoggingCustomerRepository, CachingCustomerRepository, CustomerRepository, LoggingProductRepository, CachingProductRepository, ProductRepository etc. When putting consistent, well-known prefixes on your types, you can use that information to build decorator chains, so that Logging*foo* will decorate Caching*foo*, which will decorate the actual *foo* type.
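In Windsor, for instance, a hand-wired version of such a chain could be sketched like this (a convention would discover the types by their prefixes instead). The sketch relies on Windsor not injecting a component into itself, so a decorator’s dependency on IRepository&lt;Customer&gt; is satisfied by the next registered component:

container.Register(
    Component.For<IRepository<Customer>>()
        .ImplementedBy<LoggingCustomerRepository>(),   // outermost decorator
    Component.For<IRepository<Customer>>()
        .ImplementedBy<CachingCustomerRepository>(),   // decorates the real repository
    Component.For<IRepository<Customer>>()
        .ImplementedBy<CustomerRepository>());         // the actual repository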

In the IoC registration case, base types (or interfaces) are very commonly used. The most popular case is the layer supertype.

RegisterTypesInheriting<IController>();

registers all controllers in our application.

RegisterClosedVersionsOf(typeof(IRepository<>));

registers all generic repositories we might have. It may sometimes also be worthwhile to take greater advantage of generics.

Having interfaces:

public interface IHandler<TCommand> where TCommand : ICommand
{
   // ...something
}

public interface ICommand
{
   // ...something
}

we might quite easily tie the two together – handlers to their respective commands.
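With Windsor, such a registration could be sketched like this (assuming the 2.5 AllTypes API; resolving IHandler&lt;SomeCommand&gt; would then return the matching handler):

container.Register(
    AllTypes.FromAssemblyContaining<ICommand>()
        .BasedOn(typeof(IHandler<>))   // matches every closed version of IHandler<>
        .WithService.Base());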

Attributes, marker interfaces

In the early days of IoC containers it was common to see custom attributes being used for configuring components. In Java (which gained attribute-like abilities much later than .NET), containers tend to prefer this approach even now. While this approach is generally discouraged, using domain-specific attributes or marker interfaces for your components may sometimes be beneficial.

You may, for example, use attributes to mark components you want to decorate (put [CachedAttribute] on repositories you want to put a cache around). Marker interfaces can be useful for this as well – attributes are the preferred solution where you want to associate some additional data with the component (for example, cache duration).

Information similar to that outlined in the case of types can be used to build conventions around type members. While it usually makes little sense to do so in the context of IoC, it’s very useful in other cases – like Rob’s MVVM framework wiring up Can*Foo*/*Foo* property/method pairs.

Wrapping up

Anything can be used as the basis for a convention. You only have to know what makes sense in a given context and look for opportunities to improve on that. There are a few things to keep in mind when deciding upon conventions.

Keep them tight – use more than one constraint in order to minimize false positives. For example, use namespace, base type and name suffix to find repositories in your application. You will pay a small additional price in reflection when starting up, but it will save you lots of time trying to figure out why your application is misbehaving later.

Make your conventions well known to the entire team. Put a short description on a board in your team room, in the wiki engine you’re using, or in a conventions.txt file in your solution. In addition to that, make the rules executable to spot potential mistakes and non-conforming code (see my previous post for a more detailed discussion on the topic).

Don’t be afraid to refine the conventions. Once they are decided upon, they don’t get written in stone. If your application grows and you find them inadequate, adjust them (trying not to loosen them up too much).

Validate your conventions

I’m a big proponent of the whole Convention over Configuration idea. I give up some meticulous control over the plumbing of my code, and by virtue of adhering to conventions the code just finds its way into the right place. With great power comes great responsibility, as someone once said. You may give up direct control over the process, but you should still somehow validate that your code adheres to the conventions, to avoid often subtle bugs.

So what is it about?

Say you have a web MVC application, with controllers, views and such. You say controllers have a layer supertype, a common base namespace and a common suffix in the class name. Great, but what happens if you (or the other guy on your team) fail to conform to these rules? There’s no compiler to raise a warning, since the compiler has no idea about the rules you (or someone else) came up with. What happens is that, most likely, your IoC container (if you use one), or perhaps the routing engine of your application, will fail to properly locate the type, and the application will blow up in your face during a demo for a very important customer.

Take another scenario. Say you’re using AutoMapper, and you came up with a convention that entities in namespace Foo.Entities are mapped to DTOs in namespace Foo.Dtos, so that Foo.Entities.Bar gets its mapping to Foo.Dtos.BarDto registered automatically. Again – it won’t pick up the mapping to FizzbuzzDto when you accidentally put the Fizzbuzz entity in Foo.Services. Depending on your test coverage, you will find out sooner or later.

These are usually hard rules, and both AutoMapper and most IoC frameworks will be able to figure out that something is missing and notify you (with an exception). This is a good thing, since you fail fast and quickly get to fix your rules. You won’t be that lucky all the time though. Say you’re writing an MVVM framework, similar to the one Rob built, and create a convention wiring the enabled state of controls bound to a method Execute to a property CanExecute on your view model (if you have no idea what the heck I’m talking about, go see Rob’s presentation from Mix10). When you fail to conform to this rule, no exception will occur and the world will not end. Merely a button will be clickable all the time, instead of only when some validity rule is met. This is a bug though, just one that’s harder to spot, which means it usually won’t be you who finds it – your angry users will.

So I have a problem?

So what to do? You can loosen your conventions. If you had an issue due to a namespace mismatch, perhaps don’t require the namespaces to match, just the type names? Or if you forgot to put a Controller suffix on your controller, just ditch that requirement and it will work just fine, right?

Yes it will (probably) – or at least until you find yourself breaking some other requirement of your convention and feel the need to shed that one as well. This however does not fix the problem – it merely replaces it with another: you will have rotten code.

The problem is not that your convention was too rigid (although sometimes that will indeed be the case as well); you not adhering to the convention was the real problem. Conventions are like compiler rules – follow them or don’t consider your code ‘working’. To get compiler-like behavior you need to take this one step further and, along with finding the code adhering to your conventions, find the code that does not.

Take the AutoMapper example again. We have a designated namespace for our convention – Foo.Dtos. For each type in this namespace we require that a matching type in the Foo.Entities namespace exists. You should check whether an unmatched DTO type exists and send a notification if there is one. Also, since you require DTOs and only DTOs to live in that namespace, you should express that requirement in code – check for types with the Dto suffix in other namespaces, and types without this suffix in the Foo.Dtos namespace, and in each case send an appropriate notification as well.
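One way to make that executable is a simple reflection-based unit test. A sketch (using NUnit; SomeDto stands for any type you know lives in the DTO assembly):

[Test]
public void Types_in_Dtos_namespace_are_dtos_with_matching_entities()
{
    var assembly = typeof(SomeDto).Assembly;
    foreach (var type in assembly.GetTypes().Where(t => t.Namespace == "Foo.Dtos"))
    {
        // only DTOs may live in Foo.Dtos
        Assert.IsTrue(type.Name.EndsWith("Dto"),
            type.Name + " lives in Foo.Dtos but lacks the Dto suffix.");
        // each DTO must have a matching entity
        var entityName = "Foo.Entities." + type.Name.Substring(0, type.Name.Length - "Dto".Length);
        Assert.IsNotNull(assembly.GetType(entityName),
            "No matching entity found for " + type.Name);
    }
}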

This is a form of executable specification. When a new guy or gal comes to your team and starts working on the code base, you don’t have to worry about them breaking the code because they don’t know the conventions you follow. The code will tell them when something is wrong, and what to do to make it right. It will also help you keep your code clean and structured, improving its readability and, in the long run, decreasing the possibility of code rot.

What to do?

I still haven’t touched upon one important thing – how and where do you implement the convention validation rules? There are three places I would put convention validation:

  • The bootstrapping code itself. It often makes sense to take advantage of the fact that you’re scanning for conventions anyway and check for misses while performing the convention bootstrapping – especially when the framework you’re working with has no support for conventions built in and you’re doing it all manually.
  • Unit tests. Add tests that call AutoMapper’s AssertConfigurationIsValid or, in the case of a homegrown framework, reflect over your model and look for deviations from your conventions. It really is very little work upfront compared to the massive return it gives you.
  • NDepend. NDepend’s CQL rules lend themselves quite nicely to this task. They work in a similar manner to unit tests, but are external to your code. Especially with the very nice Visual Studio integration in version 3, it has very low friction and provides immediate feedback. In addition, the rules are pretty easy to read, lending themselves to the task of being executable documentation.

.NET OSS dependency hell

Paul, whom some of you may know as the maintainer of the Horn project, left a comment on my blog that was (or, to be more precise, I think it was) a continuation of a series of his tweets about his dissatisfaction with the state of affairs when it comes to dependencies between various OSS projects in the .NET space, and within the Castle Project in particular.

[image: paul_twitter]

I must say I understand Paul, and he’s got some valid points there, so let’s see what can be done about it.

Problems

One of the goals of the Castle Project from the very beginning has been the modularity of its elements. As the Castle main page says:

Offering a set of tools (working together or independently) and integration with others open source projects, Castle helps you get more done with less code and in less time.

How do you achieve modularity? Say you have two projects, Foo and Bar, that you want to integrate. You could just reference one from the other.

[image]

This however means that whenever you use Foo, you have to drag Bar with you. For example, whenever you want to use MonoRail, you’d need to drag ActiveRecord with it, along with its entire set of dependencies, and their dependencies, and so on.

Instead, you employ Dependency Inversion (not to be confused with Dependency Injection). You make your components depend on abstractions, not on implementations. This however means that, in the .NET assembly model, you need to introduce a third assembly to keep the abstractions in.

[image]

Now we have 3 assemblies instead of 2 to integrate two projects. Within Castle itself, common abstractions are kept in Castle.Core.dll. But what if we want to take more direct advantage of one project in another while still maintaining the decoupled structure? We need to extract the functionality bridging the two projects into yet another assembly. Tick – now we have 4 of them.

[image]

In this case the FooBar project would be something like the ActiveRecord integration facility, which integrates ActiveRecord with Windsor.

When you mix multiple projects together you run into another problem – versioning.

Say you want to integrate a few projects together, some of which are interdependent (via bridging assemblies, not shown here for brevity).

[image]

Now, once a new version of one of the projects is released, you either have to wait for all the other projects to update their dependency to the latest version, do it yourself (possibly with some help from Horn), or stick to the old version. The situation gets even more complicated when some breaking changes were introduced, in which case plain recompilation will not do – some actual code needs to be written to compensate for them.

These are the main issues with this model; let’s now look at possible solutions.

Solutions

The first thing that comes to mind – if having some assemblies means you’ll need even more assemblies, perhaps you should try to minimize that number? This has already crossed our minds. With the last wave of releases we performed some consolidation of projects. EmailSender got integrated into Core – one less assembly. The logging adapters for log4net and NLog were merged into the Core project, which means they are still separate assemblies (as they bridge Castle with 3rd-party projects) but they’re now synced with Core and released with it – one less element in your versioning matrix to worry about. A similar thing happened with the Logging Facility, which is now versioned and released with Windsor itself.

For the next major version, there are suggestions to take this one step further: merge DynamicProxy with DictionaryAdapter and (parts of) Core into a single assembly, and merge Windsor and MicroKernel (and other parts of Core) into another. With that you get from 5 assemblies down to 2.

That reduces Castle’s internal dependencies, but what about other projects that depend on it? After the recent release, we started a log of breaking changes, along with brief explanations and suggested upgrade paths, to make it easier for applications and frameworks to upgrade. We have yet to see how this plays out.

What else can be done?

This is the actual question to you: what do you think can be done – for Castle specifically, but more broadly, for the entire .NET OSS ecosystem – to make the problems Paul mentioned go away, or at least make them easier to sort out?

Thoughts on C# 4.0’s optional parameters

C# 4.0 is just around the corner, and along with it comes a set of nice new additions to the language, including optional parameters. There’s been some historical resistance to adding this feature to the language, but here it is, and I’m glad it’s coming. Or at least I was.

In a few words: optional parameters have their default value specified in the signature of the method. You can then skip them when calling the method, and it will be called with their default values.

So, what’s the deal?

To simplify the discussion, I will refer to the method containing default parameters (Foo in this example) as the called method, and to the method providing the default value (the DateTime.Now getter in the example a few paragraphs below) as the value provider.

Take a look at the code below. Method Foo has two parameters, but we can call it as if it had none.

[image: defaultParameters_0]
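(The original screenshot is not available here; the code it showed would be along these lines.)

public static void Foo(int x = 1, int y = 2)
{
    Console.WriteLine(x + y);
}

public static void Main()
{
    Foo(); // compiles - both arguments get their default values
}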

Looking good so far, right? Let’s take a look at how the Main method looks in Reflector.

[image: defaultArguments_1]

As we can see, there’s no magic here – a simple compiler trick. The compiler binds the invocation to the method, and then puts in the arguments for you. The code looks as if you had written it yourself in an earlier version of C#.

Still good, right? So how exactly does the compiler know what to put on the calling side? Let’s take a look at the compiled signature of the method Foo.

[image: defaultParameters_2]

Ha! The values are encoded in DefaultParameterValueAttribute. Do you see the problem here? No? Then let’s try something else. Let’s change the signature of the method to take a DateTime instead of an int.

[image: defaultParameters_3]

Notice we initialize the parameter to the default value of DateTime.Now. All good? So let’s compile the code, shall we?

[image: defaultParameters_4]

Oops.

Turns out we can’t. What seems like really reasonable code is not allowed. Since the default values for arguments are kept in attributes, they must be compile-time constants, which includes primitive types (numeric types and strings), tokens (typeof, and methodof, which is not exposed in C# though) and nulls. Pretty disappointing, right? This means no new Foo(), no Foo.Bar is allowed. This dramatically limits the range of scenarios where the feature can be used, as most of the time you’ll want not null but something else as your default value, in which case you’ll end up creating overloads anyway.
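To summarize the rules with a quick sketch (Foo stands for any class):

void A(int x = 5) { }                       // fine - numeric constant
void B(string s = "foo") { }                // fine - string constant
void C(object o = null) { }                 // fine - null
void D(DateTime d = default(DateTime)) { }  // fine - default(T) is a constant
// void E(DateTime d = DateTime.Now) { }    // won't compile - not a constant
// void F(Foo f = new Foo()) { }            // won't compile - not a constant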

There are some workarounds, like the one described in this book, but they have their own downsides and it’s not always possible to use them anyway.

This all makes me think – why did the authors of the language decide to provide such a crippled implementation of the feature? I’m not an expert in this matter, but I found a simple way in which the feature could have been implemented to allow for a far greater range of scenarios.

What if…

Let’s re-examine how this feature works.

  • The compiler puts the default values of the arguments in a special attribute type on the called method’s signature.
  • On the calling side, the compiler reads the values and fills in those that were not provided explicitly.

All the work happens at compile time (let’s ignore dynamic for a moment – we’ll get to that as well), so no additional work at runtime is required. Since the compiler is very powerful, why not go one step further?

I think by simply extending this approach the following scenarios could be enabled.

  • using the value returned by a static parameterless method (this includes static property getters) as the default value.
  • using the value returned by an instance parameterless method defined on the same type as (or a base type of) the type the called method is defined on (this would only be applicable to instance methods).
  • using default, parameterless constructors as the default value.
  • or, if we wanted to extend it further: using the value returned by any variant of the above that takes parameters which are allowed in the attribute (including both constants and values of other parameters).

Let us examine how this could (I think) be achieved.

The spoon does not exist

What we would need is a way to store the information about which method, or the constructor of which type, we want to invoke in case no default value is provided. Since tokens are legal in attributes, the existing approach could be extended with something like this:

[image: defaultParameters_5]

We have a way of storing the method. Now the compiler could easily retrieve the token and emit a call to the value provider when compiling a call to the called method.

  • If the value provider is static there’s no problem – just call it, regardless of whether the called method is an instance or a static method.
  • If the value provider is an instance method, and the called method is an instance method as well, invoke the value provider on the instance on which the called method is being invoked (hence the requirement that the value provider must be declared on the same type as the called method, or its base type).
  • If the value provider is an instance method but the called method is static, do not allow (at compile time!) this code to compile, since there’s no instance to call the value provider on.

When it comes to constructors it is even simpler – since we allow only the default constructor, at compile time we would check whether the type does indeed have a default, accessible constructor, and disallow the code from compiling otherwise. Then, when the called method is being invoked, retrieve the type token and call the default constructor on that type.

What about dynamic code?

I don’t have very intimate knowledge of dynamic code yet, but I think this could work with dynamic code as well. The compiler would put all the information on the call site (a new concept introduced in .NET 4.0), and that would be all we need. In C# 1.0, having the MethodInfo of a method, you could invoke it. Likewise, having a System.Type, it is easy to find and invoke its default constructor using a little bit of reflection. I really see no reason why this would not work.

 

What do you think – am I missing part of the picture? Is there something terribly wrong with my idea that would keep it from working? Or maybe the C# team didn’t really want to implement this feature (as they used to say), so they provided only the minimal implementation they needed for other major features – COM interop specifically.


Convention based Dependency Injection with Castle MicroKernel/Windsor

Quiz

Consider the following piece of code (it shows Castle MicroKernel, but since Windsor is built on top of MicroKernel, it works the same way):

IKernel kernel = new DefaultKernel();
kernel.AddComponent("sms", typeof(IAlarmSender), typeof(SmsSender));
kernel.AddComponent("email", typeof(IAlarmSender), typeof(EmailSender));
 
kernel.AddComponent("generator", typeof(AlarmGenerator));
 
AlarmGenerator gen = (AlarmGenerator) kernel["generator"];

Considering AlarmGenerator has a dependency on IAlarmSender, which one of the two registered components will it get?

The answer is: the first one.

In this case we registered SmsSender first, so it will get injected. If we switched the order of registration, EmailSender would get injected. It does not matter whether you specify a name when registering or not.

I don’t know how other containers handle this problem, but this approach is in general as good as any (and by any I mean any other than ‘throw new IDontKnowWhatToDoException()’).

That was easy, wasn’t it?

Now, let’s get to another one. We have a class:

public class Person
{
 public Person( IClock leftHandClock )
 {
  this.LeftHandClock = leftHandClock;
 }
 
 public IClock LeftHandClock { get; private set; }
 public IClock RightHandClock { get; set; }
}

An interface, and two implementing types:

public interface IClock{}
public class IsraelClock : IClock{}
public class WorldClock : IClock{}

Then, if we register these like this:

IWindsorContainer container = new WindsorContainer().
    AddComponent<IClock, WorldClock>( "leftHandClock" ).
    AddComponent<IClock, IsraelClock>( "rightHandClock" ).
    AddComponent<Person>();
var joe = container.Resolve<Person>();
 

What watches will our Joe wear on each hand?

The answer is: the first one. Again.

So in this case Joe will have a WorldClock on both wrists (the same instance, to be exact, since components in MicroKernel are singletons by default, but that’s beside the point).

Idea

That’s not that intuitive, is it? I mean, you could certainly make it work that way, but that would require you to configure the container to do so. This however seems like a perfect place to utilize a Convention over Configuration approach.

I got this idea yesterday and decided to try to implement it in MicroKernel. I also tweeted about it, and Philip Laureano (author of LinFu and, recently, Hiro) and Nate Kohari (author of NInject and, recently, NInject2) seem to like the idea.

So here’s my initial, naive take on it.

Code

Luckily, thanks to the wonderful extensibility of Castle, you don’t have to modify the container itself. All you need to do is extend it with a custom SubDependencyResolver. Here’s my implementation, which I put together in a couple of minutes as a proof of concept.

public class ConventionBasedResolver : ISubDependencyResolver
{
    private readonly IKernel _kernel;
    // caches dependency-to-component-name matches we've already found
    private readonly IDictionary<DependencyModel, string> _knownDependencies = new Dictionary<DependencyModel, string>();

    public ConventionBasedResolver( IKernel kernel )
    {
        if( kernel == null )
        {
            throw new ArgumentNullException( "kernel" );
        }
        this._kernel = kernel;
    }

    public object Resolve(CreationContext context, ISubDependencyResolver contextHandlerResolver, ComponentModel model, DependencyModel dependency)
    {
        string componentName;
        if (!this._knownDependencies.TryGetValue(dependency, out componentName))
        {
            // no cached match - the dependency key itself names the component
            componentName = dependency.DependencyKey;
        }
        return _kernel.Resolve(componentName, dependency.TargetType);
    }

    public bool CanResolve(CreationContext context, ISubDependencyResolver contextHandlerResolver, ComponentModel model, DependencyModel dependency)
    {
        if (this._knownDependencies.ContainsKey(dependency))
            return true;

        var handlers = this._kernel.GetHandlers(dependency.TargetType);

        // if there's just one candidate, there's no ambiguity to settle,
        // so we're not interested
        if (handlers.Length < 2)
            return false;
        foreach( var handler in handlers )
        {
            // match the dependency name (parameter/property name) against
            // the component name, ignoring case
            if( IsMatch( handler.ComponentModel, dependency ) && handler.CurrentState == HandlerState.Valid )
            {
                if( !handler.ComponentModel.Name.Equals( dependency.DependencyKey, StringComparison.Ordinal ) )
                {
                    this._knownDependencies.Add( dependency, handler.ComponentModel.Name );
                }
                return true;
            }
        }
        return false;
    }

    private bool IsMatch( ComponentModel model, DependencyModel dependency )
    {
        return dependency.DependencyKey.Equals( model.Name, StringComparison.OrdinalIgnoreCase );
    }
}

Now, we need to add the sub resolver to the container:

container.Kernel.Resolver.AddSubResolver( new ConventionBasedResolver( container.Kernel ) );

And we’re done.

Wrapping up

We now have basic convention-based injection of both properties and constructor parameters. With a little bit of work we might extend it further to provide pluggable conventions.

For example, matching by namespace (prefer close neighbors), by name (for service IFoo, first check whether a component named Foo is available, then try DefaultFoo, then FooImpl), or any other. The possibilities are endless.

If you have any ideas on how to extend or best use this approach, leave a comment.