Category: TDD

On testing

Over the last few years I’ve been blogging about various topics. Most often those were related to Windsor: discussing its new and upcoming features, announcing releases, and taking an in-depth look at its underpinning principles.

Another bucket was one-off posts discussing various aspects of software development, mostly out of any larger context and without much regard for whether you, dear reader, had the experience and understanding of the topic or not.

Back to basics

Recently I had an interesting discussion with several people on Twitter about basics. More precisely, about the basics behind writing tests.

It seems to me, and I’m guilty of that myself, that we developers, chasing the latest great thing, forget about the basics. When was the last time you read a blogpost about the importance of testing? When was the last time you read about how to write tests, what to look out for, and how to keep the quality of your tests high and their maintenance cost low?

Several years ago I got a lot of value from reading blogs that taught me what is important in software, and I wouldn’t be the developer I am today if it wasn’t for the people who dedicated their time to sharing their experience and knowledge.

I may not be subscribing to the right blogs anymore, but I’ve noticed those posts are much less frequent these days. To help fill the void I decided to concentrate a bit more on the fundamentals in the future, starting with what will hopefully evolve into a series of blogposts about tests.

Feedback

Do you think it’s a good idea? Any particular topics you would like me to emphasise, or areas to explore? Let me know in the comments. The first post will be coming soon(ish).

Approval testing – value for the money

I am a believer in the value of testing. However, not all tests are equal, and in fact not all tests provide value at all. Raise your hand if you’ve ever seen (unit) tests that exercised every corner case of a trivial piece of code that’s used once in a blue moon in an obscure part of the system. Raise your other hand if that test code was not written by a human but generated.


As with any type of code, test code is a liability. It takes time to write it, and then it takes even more time to read it and maintain it. Considering time is money, rather than blindly unit testing everything we need to constantly ask ourselves how we get the best value for the money – what’s the best way to spend time writing code, to write the least amount of it, and to best cover the widest range of possible failures in the most maintainable fashion.

Notice we’re optimising quite a few variables here. We don’t want to blindly write plenty of code, we don’t want to write sloppy code, and we want the test code to properly fulfil its role as our safety net, alerting us early when things are about to go belly up.

Testing conventions

What many people seem to find challenging to test is the conventions in their code. When all you have is a hammer (unit testing) it’s hard to hit a nail that not only isn’t really a nail, but isn’t even explicitly there to begin with. To make matters worse, the compiler is not going to help you either. How would it know that LoginController not implementing IController is a problem? How would it know that the new dependency you introduced on the controller is not registered in your IoC container? How would it know that the public method on your NHibernate entity needs to be virtual?
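The first of those nails is a good fit for a reflection-based convention test. Here’s a minimal sketch – the names (HomeController, IController) are illustrative, and I’m assuming an ASP.NET MVC app plus the usual NUnit and LINQ imports:

[Test]
public void All_controllers_implement_IController()
{
    // Scan the web assembly for anything following the controller naming convention...
    var offenders = typeof(HomeController).Assembly.GetTypes()
        .Where(t => t.IsClass && !t.IsAbstract && t.Name.EndsWith("Controller"))
        // ...and flag the ones that forgot to implement IController.
        .Where(t => !typeof(IController).IsAssignableFrom(t))
        .Select(t => t.FullName)
        .ToArray();

    Assert.IsEmpty(offenders, "Controllers not implementing IController: " + string.Join(", ", offenders));
}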


In some cases the tool you’re using will provide some level of validation itself. NHibernate knows the methods ought to be virtual and will give you a quite good exception message when you set it up. You can verify that quite easily in a simple test. Not everything is so black and white, however. One of the diagnostics provided by Castle Windsor is called “Potentially misconfigured components”. Notice the vagueness of the first word. They might be misconfigured, but not necessarily are – it all depends on how you’re using them, and the tool itself cannot know that. How do you test that efficiently?
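That simple test for the NHibernate rule can likewise be a reflection sweep – no container or session factory needed. A rough sketch, assuming your entities share a hypothetical Entity base class and the usual System.Reflection and LINQ imports:

[Test]
public void Public_members_of_entities_are_virtual()
{
    var entities = typeof(Entity).Assembly.GetTypes()
        .Where(t => typeof(Entity).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract);

    var offenders = new List<string>();
    foreach (var entity in entities)
    {
        // Property getters and setters show up as methods here too, so this covers properties as well.
        var methods = entity.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);
        offenders.AddRange(methods.Where(m => !m.IsVirtual)
                                  .Select(m => entity.Name + "." + m.Name));
    }

    Assert.IsEmpty(offenders, "Non-virtual public members: " + string.Join(", ", offenders.ToArray()));
}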

Enter approval testing

One possible solution to that, which we’ve been using quite successfully on my current project, is approval testing. The concept is very simple. You write a test that runs, producing an output. Then the output is reviewed by someone and, assuming it’s correct, it’s marked as approved and committed to the VCS repository. On subsequent runs the output is generated again and compared against the approved version. If they differ, the test fails, at which point someone needs to review the change and either mark the new version as approved (when the change is legitimate) or fix the code, if the change is a bug.
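Under the hood the workflow boils down to comparing the test output against a file committed alongside the tests. A simplified sketch of the mechanism – this is just the idea, not the ApprovalTests implementation:

public static void Approve(string received, string testName)
{
    var approvedFile = testName + ".approved.txt"; // reviewed and committed to the VCS
    var receivedFile = testName + ".received.txt"; // scratch output, never committed

    var approved = File.Exists(approvedFile) ? File.ReadAllText(approvedFile) : string.Empty;
    if (approved == received)
    {
        return; // output matches the approved version - the test passes
    }

    // Persist the new output next to the approved file so a diff tool can compare the two.
    File.WriteAllText(receivedFile, received);
    throw new AssertionException("Output differs from " + approvedFile +
                                 ". Review " + receivedFile + " and approve it if the change is legitimate.");
}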


If the explanation above seems dry and abstract, let’s go through an example. Windsor 3 introduced a way to programmatically access its diagnostics. We can therefore write a test looking through the potentially misconfigured components, so that we get notified if anything on the list changes. I’ll be using the ApprovalTests library for that.

[Test]
public void Approved_potentially_misconfigured_components()
{
    var container = new WindsorContainer();
    container.Install(FromAssembly.Containing<HomeController>());

    var handlers = GetPotentiallyMisconfiguredComponents(container);
    var message = new StringBuilder();
    var inspector = new DependencyInspector(message);
    foreach (IExposeDependencyInfo handler in handlers)
    {
        handler.ObtainDependencyDetails(inspector);
    }
    Approvals.Approve(message.ToString());
}

private static IHandler[] GetPotentiallyMisconfiguredComponents(WindsorContainer container)
{
    var host = container.Kernel.GetSubSystem(SubSystemConstants.DiagnosticsKey) as IDiagnosticsHost;
    var diagnostic = host.GetDiagnostic<IPotentiallyMisconfiguredComponentsDiagnostic>();
    var handlers = diagnostic.Inspect();
    return handlers;
}

What’s important here is that we’re setting up the container, getting the misconfigured components out of it, producing readable output from the list, and passing it down to the approval framework to do the rest of the job.

Now, if you’ve set up the framework to pop up a diff tool when the approval fails, you will be greeted with something like this:

[Screenshot: a diff tool comparing the newly generated output against the approved version]

You have all the power of your diff tool to inspect the change. In this case we have one new misconfigured component (HomeController), which has a new parameter, appropriately named missingParameter, that the container doesn’t know how to provide. Now you either slap yourself on the forehead and fix the issue, if it really is an issue, or approve the dependency by copying the diff chunk from the left pane to the right, approved pane. By doing the latter you’re telling the testing framework and your teammates that you know what’s going on, and that it’s not an issue given the way things are going to work. Coupled with a sensible commit message explaining why you chose to approve the difference, you get a pretty good trail of exceptions to the rule and the reasons behind them.


That’s quite an elegant approach to quite a hard problem. We’re using it for quite a few things, and it’s been giving us really good value for the little effort it took to write those tests and maintain them as we keep developing the app and the approved files change.


So there you have it, a new, useful tool in your toolbelt.

Unit tests are overrated

Something is rotten in the state of Development. It seems to me we (developers) either ignore testing altogether, leaving it to the QA team as we throw the app over the wall to them, or we concentrate on unit tests only. Let me make one thing clear before we go any further – unit tests are a fantastic and extremely valuable tool in many cases, and I am by no means trying to discourage anyone from writing unit tests. They have their place in the development pipeline (especially if you’re doing TDD).

While I still usually write a fair amount of unit tests, I find the percentage of unit tests in the code I write is getting smaller. Unit tests are just not the best value for the buck in many cases where previously I would have written them unconditionally, without really giving it any thought.

Where would that be?

Let’s stop and think for a second about what unit tests are good at. Unit tests exercise single units of functionality, and they’re the most fine-grained tests you write. They test that a method produces the right output. They test that a class responds in the expected manner to the result of invoking another method, potentially on another object. And they really shine when those algorithms and small-scale interactions are complex, or are part of an API that is going to be used extensively, where they do a really great job as executable documentation.
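To make the executable-documentation point concrete, a trivial example – the Slug helper is made up, and the test reads as a specification of how it behaves:

[Test]
public void Slug_lowercases_text_and_replaces_spaces_with_dashes()
{
    // Anyone reading this knows exactly what the (hypothetical) API promises.
    Assert.AreEqual("hello-world", Slug.For("Hello World"));
}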

However, for quite a few scenarios unit tests just aren’t the best approach, and that’s my goal with this blog post – to make you stop and think: should I write a unit test, or perhaps an integration test, with real, not stubbed-out dependencies?

To illustrate the point let me tell you a (real) story. I was called in to a client recently to have a look at an issue they were having in their system. The system was composed of an administration website and a client application that communicated with the server side via a WCF service. The issue was that some information entered on the administration website wasn’t displayed properly in the client application. Yet all the unit tests were passing, for all three elements of the system. As I dug into the codebase I noticed that some transformation was applied to the information coming from the web UI on the web-app end before it was saved, and then what was supposed to be the same process in reverse was happening on the WCF service side. Except it wasn’t. The transformation had changed, and was updated everywhere except for the website (this also reinforces the point I made in my previous blogpost, that changes in application code should almost always be accompanied by changes to tests). So the unit tests between the two parts of the system were out of sync, as were their implementations, yet the unit tests weren’t able to detect that. Only a grander-scale integration test, covering both ends of the spectrum – the saving of the information on the website side and the reading of it from the same repository on the service side – would have caught this kind of change.
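What such a test might have looked like – every name below is a hypothetical stand-in for the client’s code, and the point is that both sides’ transformations run against the same, real repository:

[Test]
public void Information_entered_on_the_website_round_trips_through_the_service()
{
    // One real repository shared by both sides - nothing stubbed out.
    var repository = new InformationRepository(testDatabase);

    // The website side applies its transformation before saving...
    var website = new WebsiteInformationService(repository);
    website.Save(new Information { Text = "what the admin typed" });

    // ...and the WCF service side is supposed to reverse it when reading.
    var service = new ClientFacingInformationService(repository);
    var roundTripped = service.GetLatestInformation();

    // Fails the moment the two transformations drift out of sync -
    // exactly the change the unit tests could not detect.
    Assert.AreEqual("what the admin typed", roundTripped.Text);
}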

To finish, let me share a secret with you – Windsor has very few unit tests. Most of the tests it has (and we have close to 1000 of them) exercise the entire container, with no part stubbed out, in simulations of real-life scenarios. That works exceptionally well for us, and lets us cover a broad range of functionality from the perspective of someone who is actually using the container in a real application.

So that’s it – do write tests, and do remember to have more than just a single hammer in your tool belt.

Unit tests – keep it simple (KISS)!

I have a tendency to overcomplicate things sometimes. Unit tests are one piece of code you want to keep as simple and clean as possible, so that other developers can easily find out how your class-or-method-under-test is supposed to work. There’s a principle that says it very strongly: “Keep it simple, stupid.”

Don’t make it flexible when it does not have to be (just … hard-code it). For example, when I was working on IInterceptorSelector support for Castle DynamicProxy, I had to test proxies without a target, where the return value is set by an interceptor.

My first attempt was to create a ReturnDefaultOfTInterceptor class that, for any given return type, would instantiate it with its parameterless constructor. This solution would provide what I needed, but it was unnecessarily complicated and made the test less readable. I didn’t need a helper class (that’s what the interceptor was in this test) that could handle just any return type of just any method. For the test I was calling one method, with one return type, and that’s all I needed from my helper interceptor.

Once I realized that, I changed the test to something like this:

[Test]
public void SelectorWorksForInterfacesOfInterfaceProxyWithoutTarget()
{
    var options = new ProxyGenerationOptions
    {
        Selector = new TypeInterceptorSelector<ReturnFiveInterceptor>()
    };
    var proxy = generator.CreateInterfaceProxyWithoutTarget(
        typeof(ISimpleInterface),
        new[] { typeof(ISimpleInterfaceWithProperty) },
        options,
        new ReturnFiveInterceptor()) as ISimpleInterfaceWithProperty;
    Assert.IsNotNull(proxy);
    var age = proxy.Age;
    Assert.AreEqual(5, age);
}

The ReturnFiveInterceptor class name clearly says what the class does. Also, the assertion in the last line is more obvious now.
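For completeness, the interceptor is as dumb as its name suggests – presumably something along these lines (a sketch against Castle DynamicProxy’s IInterceptor interface):

public class ReturnFiveInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        // There is no target to proceed to - the interceptor itself supplies the return value.
        invocation.ReturnValue = 5;
    }
}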

So, keep your tests simple and readable, don’t generalize where you don’t really need to, but know your limits, because when you cross the line you’re in fragile-tests territory.
