What’s new in Windsor 3: Service Locator usage detection

As we’re nearing the release date of Castle Windsor 3.0 (codename Wawel) I will be blogging about some of the new features and improvements introduced in the new version.

One of the features introduced in the current version, 2.5, was support for debugger diagnostic views in Windsor.

In Wawel, one of the improvements is the addition of a new diagnostic – detection of cases where the container is being used as a Service Locator. For those unfamiliar with the term, Service Locator is an approach that breaks the basic rule of Inversion of Control by explicitly calling out to the container from within the application.

Here’s where I tell you Service Locator is bad

I spent the last two days reading through the StackOverflow archive of IoC-related questions, and one of the most common sources of problems is the container being used as a Service Locator.

I have written about why Service Locator (particularly when implemented via an IoC container) is a bad idea on a few occasions (for example here). Mark also has a good overview of the drawbacks of this approach, so I won’t rehash them again.

The good news is that even if someone on your team starts using the container as a Service Locator, Windsor will now help you detect those cases.

Here’s where I give you an example

Take a look at the following class:

[Singleton]
public class ComponentFactory
{
    private readonly IKernel kernel;

    public ComponentFactory(IKernel kernel)
    {
        this.kernel = kernel;
    }

    public object Create(String name)
    {
        return kernel.Resolve<object>(name);
    }
}

This class is part of the application layer (not the infrastructure layer) and as such should not be aware of the container, yet it depends on IKernel. A quick look at its Create method is enough to see that it is most likely passed around throughout the application and used to pull components from the container without much thought, which, as you have surely guessed by now, is a recipe for trouble.
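For completeness: the usual Windsor-sanctioned remedy for a class like this is the Typed Factory Facility, which synthesizes the factory implementation for you so that no application code ever touches IKernel. The facility and the registration API below are real Windsor features; the `IEmailSender`/`IEmailSenderFactory` names are invented for this sketch:

```csharp
using Castle.Facilities.TypedFactory;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

// A hypothetical service the application needs.
public interface IEmailSender
{
    void Send(string to, string body);
}

// Application code depends on this abstraction only - no IKernel in sight.
public interface IEmailSenderFactory
{
    IEmailSender Create();             // resolved from the container by return type
    void Release(IEmailSender sender); // lets Windsor decommission the component
}

public class FactoryInstaller
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        container.AddFacility<TypedFactoryFacility>();
        // Windsor generates the implementation of IEmailSenderFactory.
        container.Register(Component.For<IEmailSenderFactory>().AsFactory());
        return container;
    }
}
```

The factory registration lives in the composition root, so only infrastructure code knows about the container, while the application layer sees nothing but the factory interface.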

Here’s where I show you how it looks

If, in debug mode, you navigate to a container instance holding components like the one described above, a new entry will appear in the debugger view, listing those components.

sl-detection

Here’s where I tell you how it works

Windsor doesn’t have the whole picture of your application, so it can only detect a subset of Service Locator usages. In particular, it is unable to detect the commonly found case where a Service Locator is built completely on top of the container, with the container assigned to a public field of a static class and accessed through that field.

Since Windsor only knows about the components you register with it, it looks for components that depend on the container and do not extend the container’s own infrastructure (as, for example, interceptors do). All such components are flagged as potential Service Locators and presented to you.

I hope you’ll find this feature useful.

Here’s where you tell me what you think

Address bar search: Firefox 4 vs. Chrome 9

I’ve been a long-time Firefox user. Recently though (over the last couple of months) I’ve been using Chrome more and more, until it became my default browser. Both are great browsers, and each has strengths and weaknesses the other doesn’t. One such thing I noticed today is how easily I can find a website I have visited previously by typing a keyword (or part of one) into the address bar.

 

firefox_search

search_chrome

I have actually visited many more NHibernate-related sites in Chrome recently, yet it failed to provide virtually any relevant results. Firefox, on the other hand, simply rocks here, suggesting results based not just on the address but also on the site’s title and some other metrics.

 

That’s a killer feature, right there.

Working with NHibernate without default constructors

In the previous post we established that the default constructor NHibernate requires is, in theory, not necessary. The fact that NHibernate does use it has to do with technical limitations of the CLR. It turns out that in most cases there is a workaround – not a perfect one, but it was a fun experiment to implement.

The code presented here is just an experiment. It is not of production quality – it is merely a proof of concept so if you use it in production and the world blows up, don’t blame me.

Interceptor

The place NHibernate lets you plug into to instantiate entities and components as they are pulled from the database is its IInterceptor interface (not to be confused with Castle DynamicProxy’s IInterceptor) and its Instantiate method. NHibernate also has an IObjectsFactory interface, but that one is used to instantiate everything, including NHibernate’s own infrastructure (IUserTypes, for example), and we want to override only the activation of entities and components.

public class ExtensibleInterceptor : EmptyInterceptor
{
	private readonly INHibernateActivator activator;

	public ISessionFactory SessionFactory { get; set; }

	public ExtensibleInterceptor(INHibernateActivator activator)
	{
		this.activator = activator;
	}

	public override object Instantiate(string clazz, EntityMode entityMode, object id)
	{
		if (entityMode == EntityMode.Poco)
		{
			var type = Type.GetType(clazz);
			if (type != null && activator.CanInstantiate(type))
			{
				var instance = activator.Instantiate(type);
				SessionFactory.GetClassMetadata(clazz).SetIdentifier(instance, id, entityMode);
				return instance;
			}
		}
		return base.Instantiate(clazz, entityMode, id);
	}
}

The code here is straightforward. INHibernateActivator is a custom interface that we’ll discuss in detail below – that’s where the actual activation happens. We’re also using ISessionFactory so that we can properly set the id of the retrieved instance.
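The post never spells out INHibernateActivator itself; reconstructed from its usage above and from the implementation shown later, the contract is simply:

```csharp
using System;

// Custom activation contract used by the interceptor and the proxy factory.
public interface INHibernateActivator
{
    // Can we produce an instance of this type without running a constructor?
    bool CanInstantiate(Type type);

    // Produce the instance (no constructor is invoked).
    object Instantiate(Type type);
}
```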

Proxy validator

It turns out that is not enough. In the interceptor we activate objects that we obtain fully from the database. However, a big part of NHibernate’s performance optimisation is ensuring we don’t load more data than we need (a.k.a. lazy loading), and to do that transparently NHibernate uses proxies.

For every persistent type, NHibernate uses an object called a proxy validator to ensure the type meets all the requirements necessary for it to be proxied properly; that work is done by IProxyValidator.

Part of the proxy validator’s work is to check for a default constructor, so we need to override that logic to reassure NHibernate that we have things under control.

public class CustomProxyValidator : DynProxyTypeValidator
{
	private const bool iDontCare = true;

	protected override bool HasVisibleDefaultConstructor(Type type)
	{
		return iDontCare;
	}
}

Proxy factory

Now that we’ve told NHibernate we can handle proxying types without a default constructor, it’s time to write the code that actually does it. That’s the task of the proxy factory, which uses Castle DynamicProxy under the covers:

public class CustomProxyFactory : AbstractProxyFactory
{
	protected static readonly IInternalLogger log = LoggerProvider.LoggerFor(typeof (CustomProxyFactory));
	private static readonly DefaultProxyBuilder proxyBuilder = new DefaultProxyBuilder();
	private readonly INHibernateActivator activator;

	public CustomProxyFactory(INHibernateActivator activator)
	{
		this.activator = activator;
	}

	public override INHibernateProxy GetProxy(object id, ISessionImplementor session)
	{
		try
		{
			var proxyType = IsClassProxy
				                ? proxyBuilder.CreateClassProxyType(
				                	PersistentClass,
				                	Interfaces,
				                	ProxyGenerationOptions.Default)
				                : proxyBuilder.CreateInterfaceProxyTypeWithoutTarget(
				                	Interfaces[0],
				                	Interfaces,
				                	ProxyGenerationOptions.Default);

			var proxy = activator.Instantiate(proxyType);

			var initializer = new LazyInitializer(EntityName, PersistentClass, id, GetIdentifierMethod, SetIdentifierMethod,
				                                    ComponentIdType, session);
			SetInterceptors(proxy, initializer);
			initializer._constructed = true;
			return (INHibernateProxy) proxy;
		}
		catch (Exception e)
		{
			log.Error("Creating a proxy instance failed", e);
			throw new HibernateException("Creating a proxy instance failed", e);
		}
	}

	public override object GetFieldInterceptionProxy()
	{
		var proxyGenerationOptions = new ProxyGenerationOptions();
		var interceptor = new LazyFieldInterceptor();
		proxyGenerationOptions.AddMixinInstance(interceptor);
		var proxyType = proxyBuilder.CreateClassProxyType(PersistentClass, Interfaces, proxyGenerationOptions);
		var proxy = activator.Instantiate(proxyType);
		SetInterceptors(proxy, interceptor);
		SetMixin(proxy, interceptor);

		return proxy;
	}

	private void SetInterceptors(object proxy, params IInterceptor[] interceptors)
	{
		var field = proxy.GetType().GetField("__interceptors");
		field.SetValue(proxy, interceptors);
	}

	private void SetMixin(object proxy, LazyFieldInterceptor interceptor)
	{
		var fields = proxy.GetType().GetFields(BindingFlags.Public | BindingFlags.Instance);
		var mixin = fields.Where(f => f.Name.StartsWith("__mixin")).Single();
		mixin.SetValue(proxy, interceptor);
	}
}

It is pretty standard and not very interesting. We’re again using INHibernateActivator, which we’ve yet to discuss, to create instances. Also, since we’re not calling any constructors, we have to set inline the proxy dependencies that would otherwise be provided via the constructor. For that we use some internal knowledge of how DynamicProxy lays out its proxy types.

The good news is – we’re almost there. We might also want to override NHibernate’s reflection optimizer. As its name implies, it replaces some actions performed via reflection with faster, compiled versions. The default one assumes a default constructor is available and may fail otherwise, but we can disable that feature for our simple proof-of-concept application.

Proxy factory factory

There’s actually one more step – we need to attach our proxy factory and proxy validator to the session factory, and we do that via the ridiculously named IProxyFactoryFactory:

public class CustomProxyFactoryFactory : IProxyFactoryFactory
{
	public IProxyFactory BuildProxyFactory()
	{
		return new CustomProxyFactory(new NHibernateActivator());
	}

	public bool IsInstrumented(Type entityClass)
	{
		return true;
	}

	public bool IsProxy(object entity)
	{
		return (entity is INHibernateProxy);
	}

	public IProxyValidator ProxyValidator
	{
		get { return new CustomProxyValidator(); }
	}
}

Putting it all together

Now that we have all these pieces, let’s see how to put them together (with some help from Fluent NHibernate):

var config = Fluently.Configure()
	.Mappings(c => c.FluentMappings.AddFromAssemblyOf<Program>())
	.Database(MsSqlConfiguration.MsSql2008
				.ConnectionString(s => s.Database("NHExperiment").Server(".").TrustedConnection())
				.ProxyFactoryFactory<CustomProxyFactoryFactory>())
	.BuildConfiguration();
var interceptor = new ExtensibleInterceptor(new NHibernateActivator());
config.Interceptor = interceptor;
NHibernate.Cfg.Environment.UseReflectionOptimizer = false;
var factory = config.BuildSessionFactory();
interceptor.SessionFactory = factory;

We set up the standard stuff – mappings and connection string – and tell NHibernate to use our custom proxy factory factory. Then we build the configuration object and set our custom interceptor on it. Having done that, we can build the session factory and give the interceptor a reference to it. Certainly not the smoothest ride, but it gets the job done. Oh – and as I mentioned, we disable the reflection optimizer so that we don’t have to override yet another class (two, actually).

That is all it takes to be able to get rid of all the seemingly superfluous constructors and have a much simpler model.

Well not quite – what about the activator

Right – the activator. How do we make that happen? Here’s all the code in that class:

public class NHibernateActivator : INHibernateActivator
{
	public bool CanInstantiate(Type type)
	{
		return true;
	}

	public object Instantiate(Type type)
	{
		return FormatterServices.GetUninitializedObject(type);
	}
}

Remember how I said that NHibernate can be viewed as a fancy serializer? Here we make that obvious by using the System.Runtime.Serialization.FormatterServices class to give us an uninitialized instance of a given type. Uninitialized means that all fields will be null or 0 and no code will be run. That, however, is precisely what we want, for the reasons outlined in the previous post. We then hand the object over to NHibernate’s machinery to perform all the necessary initialization for us, so that when the object is returned from the session it is fully created and usable. We could also implement a mechanism, similar to one of the options standard .NET serialization provides, to allow the object to initialize its volatile state.
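A tiny standalone snippet makes the behaviour concrete – neither the field initializer nor the constructor body runs when FormatterServices creates the instance (the Counter class is invented for this demo):

```csharp
using System;
using System.Runtime.Serialization;

public class Counter
{
    public static int ConstructorCalls;

    // Field initializers are constructor code too - they get skipped as well.
    public int Value = 42;

    public Counter()
    {
        ConstructorCalls++;
    }
}

public static class Demo
{
    public static void Main()
    {
        var constructed = new Counter();
        var hydrated = (Counter)FormatterServices.GetUninitializedObject(typeof(Counter));

        Console.WriteLine(constructed.Value);        // 42
        Console.WriteLine(hydrated.Value);           // 0 - initializer never ran
        Console.WriteLine(Counter.ConstructorCalls); // 1 - only the 'new' above
    }
}
```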

Final words

That is all it takes to make the model a bit more persistence ignorant. As I said, this approach won’t always work. It requires a full-trust environment, and if you have complex volatile state the solution will probably be too simplistic. I would be interested to hear what you think about the approach in general. Can you spot any severe shortcomings or flaws? Do you like it? Would you use it?

NHibernate and default constructors

One of the first things you learn about NHibernate is that, in order for it to be able to construct your instances and take advantage of lazy loading, every persistent class must have a default, parameterless constructor. This leads to entities looking like this (the standard blog-with-posts-and-comments example):

public class Post
{
    private readonly IList<Comment> comments = new List<Comment>();
    private Blog blog;

    [Obsolete("For NHibernate")]
    protected Post()
    {
       
    }

    public Post(string title, string body, Blog blog)
    {
        Blog = blog;
        Title = title;
        Body = body;
        Published = DateTimeOffset.Now;
    }

    public virtual int Id { get; private set; }
    public virtual DateTimeOffset Published { get; private set; }

    public virtual string Title { get; set; }
    public virtual string Body { get; set; }


    public virtual IEnumerable<Comment> Comments
    {
        get { return comments; }
    }

    public virtual Blog Blog
    {
        get { return blog; }
        set
        {
            if (blog != null)
            {
                throw new InvalidOperationException("already set");
            }
            blog = value;
            if (blog != null)
            {
                blog.AddPost(this);
            }
        }
    }

    [EditorBrowsable(EditorBrowsableState.Never)]
    protected internal virtual void AddComment(Comment comment)
    {
        comments.Add(comment);
    }
}

Notice the first constructor. It doesn’t do anything. As the obsolete message (which is there to produce a compilation warning in case some developer accidentally calls this constructor in their code) points out, this constructor exists only so that NHibernate can do its magic. Some people do put initialization logic for collections there (especially when using automatic properties), but I use fields and initialize them inline. In that case the constructor is pretty much useless. I started wondering why it is even there, and whether perhaps it doesn’t really belong to the class at all. But let’s start at the beginning.

What is a constructor

As basic as the question may seem, it is useful to remind ourselves why we need constructors at all. The best book about C# and .NET defines them as follows:

Constructors are special methods that allow an instance of a type to be initialized to a good state.

Notice two important things about this definition. First, it doesn’t say that the constructor creates the instance, or that constructors are the only way to create an instance. Second, constructors initialize a newly created object to its initial state, so that anything that uses the object afterwards deals with a fully constructed, valid object.

Constructors and persistence

The above definition applies very well to the other constructor we have on the Post class. That constructor initializes the Post to a valid state. In this case, valid means the following:

  • A post is part of a blog – we can’t have a post that lives on its own. Our posts need to be part of a blog, and we make this requirement explicit by requiring a Blog instance to be provided when constructing a Post.
  • A post requires a title and a body, which is why we also require those two properties to be provided when constructing a post.
  • Posts are usually displayed in inverse chronological order, hence we set the Published timestamp.

We do none of the above in the other, “NHibernate” constructor. That means that, according to the definition, it is not really doing what a constructor is supposed to do – it is never used to construct an object.

Hydration

Let’s take a step back now. What NHibernate does with objects is, in a nutshell, serialization. You create an object in your code and initialize it using a constructor, do some stuff with it, and then you save the object away so that it can be retrieved later, after your app has been closed, or perhaps on another server instance. You save away the state of the object so that the state representation can live longer than the volatile, in-memory representation. If you follow this path of thought, the next obvious conclusion is that in a load-balanced system where two server instances work with Post#123, they are both dealing with the same object, even though they are two separate machines.

The conclusion is that when NHibernate retrieves an object from the persistent store, it is not constructing it. It is recreating an in-memory representation of an object that was created and persisted previously. We are merely recreating an object that already has a well-known state and has already been initialized, just providing a different representation of it. This process is called hydration.

Persistent and Volatile state

The full picture is a bit more complicated than what I’ve painted so far. The database row and the in-memory object are two representations of the same entity, but they don’t have to map fully one-to-one. Specifically, it is possible for the in-memory representation to have state beyond the persistent state. In other words, the in-memory object may have some properties that are specific to it and not relevant to the in-database representation. A convenient example most people will relate to is a logger. Please don’t quote me as advocating logging in your entities, but a logger is one of the things you may want to have on your in-memory object – you use it while executing code in your application, then let it go once you no longer need the object, and never persist it. If we had one in the Post class, the empty constructor would change to the following:

[Obsolete("For NHibernate")]
protected Post()
{
    logger = LoggerProvider.LoggerFor(typeof(Post));
}

If we don’t use a constructor for recreating the object, how can we get the logger in? How do we make NHibernate honor the constructor semantics and give us a fully initialized object? Remember I said that one way of looking at NHibernate, from the object’s perspective, is as a fancy serializer/deserializer. It turns out the serialization mechanism in .NET offers us four ways (that I know of, possibly more) of tackling this issue:

  • you can use a serialization surrogate that knows how to recreate the full state of the object
  • you can use the deserialization callback interface to be notified when the object has been fully deserialized, and then initialize the volatile state
  • you can use serialization callback attributes to be notified when various steps in the serialization/deserialization process happen, and initialize the volatile state
  • you can use the ISerializable interface and implement a serialization constructor, which is used to deserialize the object
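To make the third option concrete, here is what the serialization-callback-attribute approach looks like with classic .NET binary serialization (period-appropriate here; BinaryFormatter is disabled in modern .NET). An analogous hook is what we’d want NHibernate to give us for re-initializing volatile state such as the logger. The Article class and its string stand-in for a logger are invented for this sketch:

```csharp
using System;
using System.Runtime.Serialization;

[Serializable]
public class Article
{
    public string Title;

    [NonSerialized] // volatile state - never written to the stream
    private string logger;

    public Article(string title)
    {
        Title = title;
        logger = "logger for " + title;
    }

    public string Logger { get { return logger; } }

    [OnDeserialized] // runs after the persistent state has been restored
    private void InitializeVolatileState(StreamingContext context)
    {
        logger = "logger for " + Title;
    }
}
```

After a serialize/deserialize round trip, Title survives via the stream, while Logger is rebuilt by the callback rather than restored from it – exactly the constructor-like semantics we want for hydration.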

Notice that only one of those approaches uses a special constructor. Since, as we discussed, NHibernate doesn’t really need the default constructor (in theory, that is), can we actually get rid of it? Turns out we can (in most cases), and we’ll look at how to do it in the next post.

Testing conventions

I have already blogged about validating conventions in the past (here and here). Doing this has been a fantastic way of keeping consistency across the codebases I’ve worked on, and several of my colleagues at Readify have adopted this approach with great success.

Recently I have found myself using this approach even more often, and in scenarios I didn’t initially think of. Take these two small tests I wrote today for Windsor:

[TestFixture]
public class ConventionVerification
{
	[Test]
	public void Obsolete_members_of_kernel_are_in_sync()
	{
		var message = new StringBuilder();
		var kernelMap = typeof(DefaultKernel).GetInterfaceMap(typeof(IKernel));
		for (var i = 0; i < kernelMap.TargetMethods.Length; i++)
		{
			var interfaceMethod = kernelMap.InterfaceMethods[i];
			var classMethod = kernelMap.TargetMethods[i];
			Scan(interfaceMethod, classMethod, message);
		}

		Assert.IsEmpty(message.ToString(), message.ToString());
	}

	[Test]
	public void Obsolete_members_of_windsor_are_in_sync()
	{
		var message = new StringBuilder();
		var kernelMap = typeof(WindsorContainer).GetInterfaceMap(typeof(IWindsorContainer));
		for (var i = 0; i < kernelMap.TargetMethods.Length; i++)
		{
			var interfaceMethod = kernelMap.InterfaceMethods[i];
			var classMethod = kernelMap.TargetMethods[i];
			Scan(interfaceMethod, classMethod, message);
		}

		Assert.IsEmpty(message.ToString(), message.ToString());
	}

	private void Scan(MethodInfo interfaceMethod, MethodInfo classMethod, StringBuilder message)
	{
		var obsolete = EnsureBothHave<ObsoleteAttribute>(interfaceMethod, classMethod, message);
		if (obsolete.Item3 == false)
		{
			return;
		}
		if (obsolete.Item1.IsError != obsolete.Item2.IsError)
		{
			message.AppendLine(string.Format("Different error levels for {0}", interfaceMethod));
		}
		if (obsolete.Item1.Message != obsolete.Item2.Message)
		{
			message.AppendLine(string.Format("Different message for {0}", interfaceMethod));
			message.AppendLine(string.Format("\t interface: {0}", obsolete.Item1.Message));
			message.AppendLine(string.Format("\t class    : {0}", obsolete.Item2.Message));
		}

		var browsable = EnsureBothHave<EditorBrowsableAttribute>(interfaceMethod, classMethod, message);
		if (browsable.Item3 == false)
		{
			message.AppendLine(string.Format("EditorBrowsable not applied to {0}", interfaceMethod));
			return;
		}
		if (browsable.Item1.State != browsable.Item2.State || browsable.Item2.State != EditorBrowsableState.Never)
		{
			message.AppendLine(string.Format("Different/wrong browsable states for {0}", interfaceMethod));
		}
	}

	private static Tuple<TAttribute, TAttribute, bool> EnsureBothHave<TAttribute>(MethodInfo interfaceMethod, MethodInfo classMethod, StringBuilder message)
		where TAttribute : Attribute
	{
		var fromInterface = interfaceMethod.GetAttributes<TAttribute>().SingleOrDefault();
		var fromClass = classMethod.GetAttributes<TAttribute>().SingleOrDefault();
		var bothHaveTheAttribute = true;
		if (fromInterface != null)
		{
			if (fromClass == null)
			{
				message.AppendLine(string.Format("Method {0} has {1} on the interface, but not on the class.", interfaceMethod, typeof(TAttribute)));
				bothHaveTheAttribute = false;
			}
		}
		else
		{
			if (fromClass != null)
			{
				message.AppendLine(string.Format("Method {0} has {1}  on the class, but not on the interface.", interfaceMethod, typeof(TAttribute)));
			}
			bothHaveTheAttribute = false;
		}
		return Tuple.Create(fromInterface, fromClass, bothHaveTheAttribute);
	}
}

All they do is ensure that whenever I obsolete a method on the container, I do so consistently between the interface and the class that implements it (setting the same warning message and the same warning/error flag). They also validate that I hide the obsolete method from IntelliSense for people who have that option enabled in Visual Studio.

These are the kinds of things that are important, yet cause neither a compiler error nor a warning, nor do they fail at runtime. These are the kinds of things you can validate in a test. They are small things that make a big difference, and having a comprehensive battery of convention tests in your application can greatly improve the confidence and morale of the team.

Tests and issue trackers – how to manage the integration

In highly disciplined teams, when a bug is discovered the following happens:

  • a test (or more likely a set of tests) is written that fails, exposing the bug
  • a ticket is created in the issue tracking system
  • a developer fixes the bug, runs all the tests including the new ones, and if everything is green the ticket is resolved

My question is: what if some time later (say, several weeks on) a developer wants to find which issue relates to the tests he’s looking at, or which tests document the bug he’s looking at? How do you manage the link between the tests and the issues in your tracker?

Solution one – file per ticket

A quite common solution to this problem is to have a special folder in your test project and, for each ticket, a file named after the ticket’s ID, like this:

Tests solution
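The layout typically looks something like this (project and file names are illustrative):

```
Tests/
  ShippingModuleTests.cs
  IssueTests/
    CRM_4.cs      <- tests documenting ticket CRM-4
    CRM_17.cs
```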

That works and is quite discoverable. The downside, however, is that it introduces fragmentation. If a bug was found in the shipping module, does it really make sense to keep some tests for that module in the ShippingModuleTests class and others in a CRM_4 class, merely because the latter were discovered by a tester rather than the original developer?

Solution two – ticket id in method name

To alleviate that, another solution is often used. The tests end up in ShippingModuleTests, but the id of the issue is encoded in the name of the test method, like this:

[Test]
public void CRM_4_Gold_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

[Test]
public void CRM_4_Silver_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

That’s a step in the right direction. It makes the link explicit, and you can quickly navigate the relation in either direction. However, I don’t like it very much, because most of the time I couldn’t care less that a test documents a long-fixed bug, yet I am reminded of it every time I run my tests.

tests

Solution three – description

The solution I have found myself using most often recently is to leverage the description most testing frameworks let you associate with your tests:

[Test(Description = "CRM_4")]
public void Gold_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

[Test(Description = "CRM_4")]
public void Silver_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

This still makes the association explicit and searchable, but doesn’t constantly remind me of it when I don’t care.

tests

What about you? What approach do you employ to manage that?

Unit tests are overrated

Something is rotten in the state of Development. It seems to me we (developers) either ignore testing altogether, leaving it to the QA team when we throw the app over the wall, or we concentrate on unit tests only. Let me make one thing clear before we go any further – unit tests are a fantastic and extremely valuable tool in many cases, and I am by no means trying to discourage anyone from writing them. They have their place in the development pipeline (especially if you’re doing TDD).

While I still usually write a fair number of unit tests, I find the percentage of unit tests in the code I write is getting smaller. I just find that unit tests are not the best value for the buck in many cases where I previously would have written them unconditionally, without really giving it any thought.

Where would that be?

Let’s stop and think for a second about what unit tests are good at. Unit tests exercise single units of functionality and are the most fine-grained tests you write. They test that a method produces the right output. They test that a class responds in the expected manner to the result of invoking a method, potentially on another object. And they really shine when those algorithms and small-scale interactions are complex and/or part of an API that is going to be used extensively, where they do a great job of being executable documentation.

However, for quite a few scenarios unit tests just aren’t the best approach, and that’s my goal with this blog post – to make you stop and think: should I write a unit test, or perhaps an integration test with real, not stubbed-out, dependencies?

To illustrate the point, let me tell you a (real) story. I was recently called to a client to look at an issue they were having in their system. The system was composed of an administration website and a client application that communicated with the server side via a WCF service. The issue was that some information entered on the administrative website wasn’t displayed properly in the client application – yet all the unit tests were passing, for all three elements of the system. As I dug into the codebase, I noticed that some transformation was being applied on the web-app end to the information coming from the web UI before it was saved, and that what was supposed to be the same process in reverse was happening on the WCF service side. Except it wasn’t. The transformation had changed and was updated everywhere except the website (this also reinforces the point I made in my previous blog post, that changes to application code should almost always be accompanied by changes to tests). So the unit tests for the two parts of the system were out of sync, as were their implementations, yet the unit tests weren’t able to detect it. Only a larger-scale integration test, covering both ends of the spectrum – saving the information on the website side and reading it from the same repository on the service side – would have caught this kind of change.

To wrap up, let me share a secret with you – Windsor has very few unit tests. Most of the tests we have (and there are close to 1000 of them) exercise the entire container, with no parts stubbed out, in simulations of real-life scenarios. That works exceptionally well for us and lets us cover a broad range of functionality from the perspective of someone actually using the container in a real application.

So that’s it – do write tests, and do remember to have more than just a single hammer in your tool belt.

Tests are your airbag

A good battery of tests is like good wine – you will not understand its value until you have experienced it. There are few things as liberating in software development as knowing you can refactor fearlessly, confident that your tests will catch any breaking changes.

I don’t have the time not to write tests

I can’t imagine working on Windsor if it weren’t thoroughly covered by tests. On the other hand, most of the problems I’ve faced on various other projects could have been avoided had those projects been better tested. The problem is – they were tested. Most of the projects I’ve worked on had tests, and if you inspected the code coverage of those tests it would usually yield a reasonably high number. Except code coverage doesn’t tell the full story. What’s important is not just that the code is tested, because not all code is equal.

// A simple data-holding entity – straight assignments, no branching, no logic.
public class Event : EntityBase
{
	public Event(string title, string @where, DateTime when, string description, UserInfo generatedBy)
	{
		Title = title;
		Where = @where;
		When = when;
		Description = description;
		GeneratedBy = generatedBy;
	}

	public virtual string Title { get; set; }

	public virtual string Where { get; set; }

	public virtual DateTime When { get; set; }

	public virtual string Description { get; set; }

	public virtual UserInfo GeneratedBy { get; set; }
}

Code like the above has very different testing requirements from, say, the price-calculating logic of an e-commerce application. The higher the cyclomatic complexity and the more critical the code, the more attention it requires when testing. That’s pretty easy to comprehend and agree with.
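By contrast, here is the kind of code where tests earn their keep. PriceCalculator and its discount rule are made up for illustration – the point is that every branch gets its own assertion:

```csharp
using System;

public class PriceCalculator
{
    // Hypothetical rule: orders of 10 items or more get a 10% discount.
    public decimal Total(decimal unitPrice, int quantity)
    {
        if (quantity <= 0) throw new ArgumentOutOfRangeException("quantity");
        var total = unitPrice * quantity;
        return quantity >= 10 ? total * 0.9m : total;
    }
}

public static class PriceCalculatorTests
{
    public static void Main()
    {
        var calc = new PriceCalculator();

        // One assertion per branch of the logic.
        if (calc.Total(5m, 2) != 10m) throw new Exception("no-discount branch broken");
        if (calc.Total(10m, 10) != 90m) throw new Exception("discount branch broken");
        Console.WriteLine("All price tests passed");
    }
}
```

Two branches, two boundary-aware tests – whereas the Event class above would gain almost nothing from a test asserting that a property setter sets a property.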

There’s another angle to it, which is often overlooked – the why of a test. Indeed, like in the tale of the boiled frog, overlooking it is the reason the quality of our tests gradually deteriorates. Why and when to write a test is just as important as what to test. And the answer here is quite simple. So simple, in fact, that it is often overlooked or neglected: whenever something in the code changes, the change ought to be documented in a test. Whenever your code changes it is at its most vulnerable. And isn’t that the premise of Agile – to embrace change?

It is not enough to test your new code. If a change to some code is made and no test was changed or added, that should raise a red flag for you. I certainly am going to make that a rule for myself – any change to non-trivial code must be accompanied by a change to existing tests, and (almost always) by the addition of new tests.

Talking Shop Down Under podcast about Windsor

Not too long ago Richard Banks was kind enough to have me on his podcast – Talking Shop Down Under. We chatted a bit about Windsor, including what we’re working on for the next version. If you have a spare 30-something minutes, head over to the link below to listen to the podcast. And if you’ve never heard of Talking Shop, check out some of the other episodes too – there are some really great interviews with some super smart people.


Episode 46 – Castle Windsor with Krzysztof Kozmic

Rant – Avoid ebooks from Packt Publishing

Since I got my Kindle DX over 6 months ago I’ve been buying lots of ebooks – some from the Kindle store, but most of them directly from publishers: mostly O’Reilly and Pragmatic Bookshelf, but a few others as well. Both publishers offer ebooks in a variety of formats, including .mobi, which works on the Kindle (and the Kindle apps for Windows/iOS and Android), and .epub, which works on the iPad and many other devices. The books are nicely formatted, the source code is laid out well, links are clickable, and so on.

android_kindle_pragProg

Above you can see an example of that, from a Prag Prog book; the screenshot comes from the awesome Kindle for Android app.

I give this introduction to show you that I am used to a high standard of ebooks, and to the fact that most publishers offer books in several formats (.mobi and .epub as a bare minimum, but most of them also offer PDFs that work on mobile devices, and O’Reilly even offers .apk for some books). That’s the quality I’m used to and that I expect. And then I bought a book from another publisher – Packt Publishing.

Please be advised that the following is a rant.

tl;dr version – Packt Publishing absolutely sucks. Don’t buy from them.

Fail one – “yes, we have .epub” (no you don’t)

image

The above comes from their website. I really wanted the .mobi version, to read the book comfortably on my Kindle, but I figured the .epub would do – I can read .epub on my phone, and I thought I’d just convert it to .mobi if I needed to. To my surprise, and contrary to what was advertised, the .epub version of the book was nowhere to be found. I obviously learned that after I paid. To be fair, I checked a few weeks ago and the .epub version was there by then. It’s still a fail, as they didn’t let me know in any way that it had become available.

Fail two – “here’s your epub” (are you kidding me?)

I thought that (finally) getting the .epub version would be the end of my problems. I was wrong – it turns out they screwed up the formatting of the code samples. There’s no indentation at all, which makes reading them a horrible experience (and the book I bought has lots of code samples).

image

The screenshot comes from the fantastic Aldiko app for Android. To make sure the formatting issue wasn’t the fault of the app, I checked on two other apps, including one on my PC – it’s broken everywhere.

Fail three – “you can read the .pdf on Kindle” (no I can’t)

Back to the purchase day – oh well, I thought, at least I have the .pdf version of the book. I opened it on my PC and it looked all right, so I copied it to my Kindle and to my phone to read on the bus the following day. The following day, though, when I tried to open it, it would show me the cover, but when I tried to move forward to where, you know, the content is, the Kindle would freeze and show a white screen, and the .pdf viewer app on my Android would plainly refuse to operate and crash. So the only option I was left with was to read the .pdf on my PC.

Fail four – “Our commitment is to reply to all e-mails within 48 business hours.” (Well, it’s been four months…)

That really pissed me off, so I found their contact info and sent them an email:

image

I’m still waiting for the response.

Fail five – “We won’t spam you anymore. Promise” (oh, btw, here’s our latest offer)

After giving up on them I noticed that they had started sending me spam with offers for the new books they publish. I complained and (to my surprise) quickly got an email with apologies and a promise they wouldn’t spam me again. However, here’s what I found in my inbox today (the top email; below it is the other email from some time earlier):

image 

Bottom line

I now really appreciate the effort other publishers put into the high-quality ebooks they produce, and I’m even more happy to pay for them, knowing how horrible the experience with an ebook publisher can be.