
On Testing: Why write tests?

This post is the second part of my Back to basics: on testing series.

Developers writing tests

People new to software, or coming from organizations where all testing is done by dedicated people, often find the idea of developers doing testing bizarre.


After all, we’re the highly trained, educated professionals. We get paid to do the hard bits. Clicking around the UI to see if a label is misaligned or an app crashed surely should be delegated to somebody else – outsourced, even.

The truth is, there are various kinds of tests, and while there is value in UI testing (automated or not), we’ll concentrate on something else first.

You write the tests for your future self

The idea of developers writing tests came not from academia but from the field – from people who spent years developing successful software. As you get out there and start working with teams of people, on real-life systems that solve real problems, you realise that some things you believed (or in fact were taught) to be true are not.

We’ll get into the details as we progress with this series, but let’s start with the biggest one:

The code is written, and read, and read, and read and read some more, and modified, and read, and modified again, and read some more again, and then a few months later read yet again and again.

You may notice there’s much more reading than writing there. That is true for most code – you don’t just write it and forget it, as is the case with coding assignments at university. Real-life code is long-lived, often expanded and modified, and commonly the person doing the expanding is not the person who originally wrote the code. In fact, the original author may have already moved on to work elsewhere.

  • How do you know what the code does?
  • How do you know where to make the change?
  • How do you know your change didn’t break any of the assumptions that code interacting with the bit you changed makes about it?
  • How do you know the change you made works the way you wanted it to?
  • How do you know the API you created (in other words, the public methods) is easy to use and will make sense to the other developers using it (which may also be you, a few weeks from now)?

These are the reasons for writing tests. Tests are the most economical way of ensuring that the change you make two months from now fits within the design constraints you set today. Tests are the fastest way for other developers (or, again, future you) to get a feel for how a piece of code works: what it does, and how you’re supposed to use it.
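
To make that last point concrete, consider how much a single well-named test communicates. A minimal sketch, assuming NUnit and a hypothetical Cart type (both are stand-ins of mine, not anything from a real codebase):

[Test]
public void Adding_the_same_item_twice_increases_quantity_not_line_count()
{
    var cart = new Cart();
    cart.Add("book");
    cart.Add("book");

    // the name and body document the rule without making you read the Cart internals
    Assert.AreEqual(1, cart.Lines.Count);
    Assert.AreEqual(2, cart.Lines[0].Quantity);
}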

Don’t just take my word for it

In the next part of the series we’ll start looking at actual code, writing some tests, and explaining the concepts around them.

Until then, if you have any questions, or suggestions as to what to cover in future posts of the series, feel free to leave a comment.

On testing

Over the last few years I’ve been mostly blogging about various random topics. Most often those were related to Windsor – discussing its new and upcoming features, announcing releases, and taking an in-depth look at its underpinning principles.

The other bucket was one-off posts discussing various aspects of software development, mostly out of a larger context, and not really caring whether you, dear reader, had the experience and understanding of the topic or not.

Back to basics

Recently I had an interesting discussion with several people on Twitter about basics – more precisely, about the basics behind writing tests.

It seems to me (and I’m guilty of that myself) that we developers, chasing the latest great thing, forget about the basics. When was the last time you read a blogpost about the importance of testing? When was the last time you read about how to write tests, what to look out for, and how to keep the quality of your tests high and their maintenance cost low?

Several years ago I got a lot of value from reading blogs that taught me what is important in software, and I wouldn’t be the developer I am today if it wasn’t for the people who dedicated their time to share their experience and knowledge.

I may not be subscribing to the right blogs anymore, but I’ve noticed those posts are much less frequent these days. To help fill the void I decided to concentrate a bit more on the fundamentals, starting with what will hopefully evolve into a series of blogposts about tests.

Feedback

Do you think it’s a good idea? Are there any particular topics you would like me to emphasise, or areas to explore? Let me know in the comments. The first post will be coming soon(ish).

Testing framework is not just for writing… tests

Quick question – off the top of your head, without running the code, what is the result of:

var foo = -00.053200000m; 
var result = foo.ToString("##.##");

Or a different one:

var foo = "foo"; 
var bar = "bar"; 
var foobar = "foo" + "bar"; 
var concatenated = new StringBuilder(foo).Append(bar).ToString(); 

var result1 = AreEqual(foobar, concatenated); 
var result2 = Equals(foobar, concatenated);


public static bool AreEqual(object one, object two) 
{ 
    return one == two; 
}

How about this one from NHibernate?

var parent = session.Get<Parent>(1); 

DoSomething(parent.Child.Id); 

var result = NHibernateUtil.IsInitialized(parent.Child);

The point being?

Well, if you can answer all of the above without running the code, we’re hiring. I can’t, and I suspect most people can’t either. That’s fine. The question is – what are you going to do about it? What do you do when some third-party library, or a part of the standard library, exhibits unexpected behaviour? How do you go about learning whether what you think should happen is really what does happen?

Scratchpad

I’ve seen people open up Visual Studio, create ConsoleApplication38, write some code using the API in question (with plenty of Console.WriteLine along the way – and curse whoever decided Client Profile should be the default for console applications, then switch to the full .NET profile), compile, run, and discard the code. And then repeat the whole process with ConsoleApplication39 the next time.

The solution I’m using feels a bit more lightweight, and it has worked well for me over the years. It is very simple – I leverage my existing test framework and test runner, and create an empty test fixture called Scratchpad.

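The fixture itself is trivial – a minimal sketch, assuming NUnit (use whatever framework and runner the rest of the project already uses):

using System;
using NUnit.Framework;

[TestFixture]
public class Scratchpad
{
    [Test]
    public void Experiment()
    {
        // throwaway code goes here – write it, run it, read the output, delete it
        var foo = -00.053200000m;
        Console.WriteLine(foo.ToString("##.##")); // answers the first question from the top of the post
    }
}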

This class gets committed to the VCS repository. That way every member of the team gets their own scratchpad to play with, to validate their theories, ideas and assumptions. However, as the name implies, this is all one-off, throwaway code. After all, you don’t really need to test the BCL – one would hope Microsoft already did a good job at that.

If you’re using git, you can easily tell it not to track changes to the file by running the following command (after you commit the file):

git update-index --assume-unchanged Scratchpad.cs
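
Should you ever want git to start tracking your changes to the file again, the flag is reversible:

git update-index --no-assume-unchanged Scratchpad.cs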


With this simple setup you will have a quick place to validate your assumptions (and answer questions about API behaviour) with very little friction.
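
For example, the string concatenation question from the top of the post becomes a one-minute experiment (a sketch again; ReferenceEquals makes the reference comparison hidden inside AreEqual explicit):

[Test]
public void String_equality_experiment()
{
    var foobar = "foo" + "bar"; // folded to a constant at compile time, so the string is interned
    var concatenated = new StringBuilder("foo").Append("bar").ToString(); // a fresh instance

    Console.WriteLine(ReferenceEquals(foobar, concatenated)); // False – different references
    Console.WriteLine(Equals(foobar, concatenated));          // True – same characters
}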


So there you have it, a new, useful technique in your toolbelt.

Approval testing – value for the money

I am a believer in the value of testing. However, not all tests are equal, and in fact not all tests provide value at all. Raise your hand if you’ve ever seen (unit) tests that exercised every corner case of a trivial piece of code that’s used once in a blue moon in an obscure part of the system. Raise your other hand if that test code was not written by a human but generated.

As with any type of code, test code is a liability. It takes time to write it, and then it takes even more time to read it and maintain it. Considering time is money, rather than blindly unit testing everything we need to constantly ask ourselves how we get the best value for the money – what’s the best way to spend our time writing test code, so that we write the least of it, cover the widest range of possible failures, and keep it maintainable.

Notice we’re optimising quite a few variables here. We don’t want to blindly write plenty of code, we don’t want to write sloppy code, and we want the test code to properly fulfil its role as our safety net, alerting us early when things are about to go belly up.

Testing conventions

What many people seem to find challenging is testing the conventions in their code. When all you have is a hammer (unit testing), it’s hard to hit a nail that not only isn’t really a nail, but isn’t explicitly there to begin with. To make matters worse, the compiler isn’t really going to help you either. How would it know that LoginController not implementing IController is a problem? How would it know that the new dependency you introduced on the controller is not registered in your IoC container? How would it know that the public method on your NHibernate entity needs to be virtual?
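
The first of those questions is actually cheap to answer with an ordinary reflection-based test. A minimal sketch (the assembly anchor and type names are illustrative, NUnit assumed):

[Test]
public void All_controllers_implement_IController()
{
    // find every concrete type following the *Controller naming convention
    var controllers = typeof(HomeController).Assembly.GetExportedTypes()
        .Where(t => t.Name.EndsWith("Controller") && t.IsAbstract == false);
    var offenders = controllers.Where(t => typeof(IController).IsAssignableFrom(t) == false).ToArray();

    Assert.IsEmpty(offenders, string.Join(", ", offenders.Select(t => t.FullName)));
}

Conventions tied to a specific tool need more help than that, though.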


In some cases the tool you’re using will provide some level of validation itself. NHibernate knows the methods ought to be virtual and will give you a quite good exception message when you get it wrong. You can verify that quite easily in a simple test. Not everything is so black and white, however. One of the diagnostics provided by Castle Windsor is called “Potentially misconfigured components”. Notice the vagueness of the first word – they might be misconfigured, but not necessarily: it all depends on how you’re using them, and the tool itself cannot know that. How do you test that efficiently?

Enter approval testing

One possible solution, which we’ve been using quite successfully on my current project, is approval testing. The concept is very simple. You write a test that runs and produces an output. The output is then reviewed by someone and, assuming it’s correct, marked as approved and committed to the VCS repository. On subsequent runs the output is generated again and compared against the approved version. If they differ, the test fails, at which point someone needs to review the change and either mark the new version as approved (when the change is legitimate) or fix the code, if the change is a bug.

If the explanation above seems dry and abstract, let’s go through an example. Windsor 3 introduced a way to programmatically access its diagnostics. We can therefore write a test that looks through the potentially misconfigured components, so that we get notified if anything on the list changes. I’ll be using the ApprovalTests library for that.

[Test] 
public void Approved_potentially_misconfigured_components() 
{ 
    var container = new WindsorContainer(); 
    container.Install(FromAssembly.Containing<HomeController>());

    var handlers = GetPotentiallyMisconfiguredComponents(container); 
    var message = new StringBuilder(); 
    var inspector = new DependencyInspector(message); 
    foreach (IExposeDependencyInfo handler in handlers) 
    { 
        handler.ObtainDependencyDetails(inspector); 
    } 
    Approvals.Approve(message.ToString()); 
}

private static IHandler[] GetPotentiallyMisconfiguredComponents(WindsorContainer container) 
{ 
    var host = container.Kernel.GetSubSystem(SubSystemConstants.DiagnosticsKey) as IDiagnosticsHost; 
    var diagnostic = host.GetDiagnostic<IPotentiallyMisconfiguredComponentsDiagnostic>(); 
    var handlers = diagnostic.Inspect(); 
    return handlers; 
}

What’s important here is that we’re setting up the container, getting the misconfigured components out of it, producing readable output from the list, and passing it down to the approval framework to do the rest of the job.
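
(Under the hood, ApprovalTests writes the freshly generated output to a *.received.txt file and compares it against the *.approved.txt file committed next to the test – approving a change boils down to making the approved file match the received one.)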

Now, if you’ve set up the framework to pop up a diff tool when the approval fails, you will be greeted with something like this:

[screenshot: diff tool comparing the received output against the approved version]

You have all the power of your diff tool to inspect the change. In this case we have one new misconfigured component (HomeController), which has a new parameter, appropriately named missingParameter, that the container doesn’t know how to provide. Now you either slap yourself on the forehead and fix the issue, if it really is an issue, or approve the dependency by copying the diff chunk from the left pane to the right, approved pane. By doing the latter you’re telling the testing framework and your teammates that you know what’s going on, and that it’s not an issue. Coupled with a sensible commit message explaining why you chose to approve the difference, you get a pretty good trail of the exceptions to the rule and the reasons behind them.

It’s an elegant approach to a genuinely hard problem. We’re using it for quite a few things, and it’s been giving us really good value for the little effort it took to write those tests, and to maintain them as we keep developing the app and the approved files change.

So there you have it, a new, useful tool in your toolbelt.

Testing conventions

I already blogged about the topic of validating conventions in the past (here and here). Doing this has been a fantastic way of keeping consistency across codebases I’ve worked on, and several of my colleagues at Readify adopted this approach with great success.

Recently I’ve found myself using this approach even more often, and in scenarios I didn’t think about initially. Take these two small tests I wrote today for Windsor.

[TestFixture]
public class ConventionVerification
{
	[Test]
	public void Obsolete_members_of_kernel_are_in_sync()
	{
		var message = new StringBuilder();
		var kernelMap = typeof(DefaultKernel).GetInterfaceMap(typeof(IKernel));
		for (var i = 0; i < kernelMap.TargetMethods.Length; i++)
		{
			var interfaceMethod = kernelMap.InterfaceMethods[i];
			var classMethod = kernelMap.TargetMethods[i];
			Scan(interfaceMethod, classMethod, message);
		}

		Assert.IsEmpty(message.ToString(), message.ToString());
	}

	[Test]
	public void Obsolete_members_of_windsor_are_in_sync()
	{
		var message = new StringBuilder();
		var containerMap = typeof(WindsorContainer).GetInterfaceMap(typeof(IWindsorContainer));
		for (var i = 0; i < containerMap.TargetMethods.Length; i++)
		{
			var interfaceMethod = containerMap.InterfaceMethods[i];
			var classMethod = containerMap.TargetMethods[i];
			Scan(interfaceMethod, classMethod, message);
		}

		Assert.IsEmpty(message.ToString(), message.ToString());
	}

	private void Scan(MethodInfo interfaceMethod, MethodInfo classMethod, StringBuilder message)
	{
		var obsolete = EnsureBothHave<ObsoleteAttribute>(interfaceMethod, classMethod, message);
		if (obsolete.Item3 == false)
		{
			// not both sides carry [Obsolete] (possibly neither) – any mismatch was already reported
			return;
		}
		if (obsolete.Item1.IsError != obsolete.Item2.IsError)
		{
			message.AppendLine(string.Format("Different error levels for {0}", interfaceMethod));
		}
		if (obsolete.Item1.Message != obsolete.Item2.Message)
		{
			message.AppendLine(string.Format("Different message for {0}", interfaceMethod));
			message.AppendLine(string.Format("\t interface: {0}", obsolete.Item1.Message));
			message.AppendLine(string.Format("\t class    : {0}", obsolete.Item2.Message));
		}

		var browsable = EnsureBothHave<EditorBrowsableAttribute>(interfaceMethod, classMethod, message);
		if (browsable.Item3 == false)
		{
			message.AppendLine(string.Format("EditorBrowsable not applied to {0}", interfaceMethod));
			return;
		}
		if (browsable.Item1.State != browsable.Item2.State || browsable.Item2.State != EditorBrowsableState.Never)
		{
			message.AppendLine(string.Format("Different/wrong browsable states for {0}", interfaceMethod));
		}
	}

	private static Tuple<TAttribute, TAttribute, bool> EnsureBothHave<TAttribute>(MethodInfo interfaceMethod, MethodInfo classMethod, StringBuilder message)
		where TAttribute : Attribute
	{
		var fromInterface = interfaceMethod.GetAttributes<TAttribute>().SingleOrDefault();
		var fromClass = classMethod.GetAttributes<TAttribute>().SingleOrDefault();
		var bothHaveTheAttribute = true;
		if (fromInterface != null)
		{
			if (fromClass == null)
			{
				message.AppendLine(string.Format("Method {0} has {1} on the interface, but not on the class.", interfaceMethod, typeof(TAttribute)));
				bothHaveTheAttribute = false;
			}
		}
		else
		{
			if (fromClass != null)
			{
				message.AppendLine(string.Format("Method {0} has {1}  on the class, but not on the interface.", interfaceMethod, typeof(TAttribute)));
			}
			bothHaveTheAttribute = false;
		}
		return Tuple.Create(fromInterface, fromClass, bothHaveTheAttribute);
	}
}

All they do is ensure that whenever I obsolete a method on the container, I do it consistently between the interface and the class that implements it (setting the same warning message, and the same warning/error flag), and that I hide the obsolete method from IntelliSense for people who have that option enabled in Visual Studio.
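
To illustrate what “in sync” means, this is the shape the tests enforce – a schematic sketch, with the signature and messages abbreviated:

public interface IKernel
{
	[Obsolete("Use Register(...) instead.")]
	[EditorBrowsable(EditorBrowsableState.Never)]
	void AddComponent(string key, Type classType);
}

public class DefaultKernel : IKernel
{
	[Obsolete("Use Register(...) instead.")] // same message and error level as on the interface
	[EditorBrowsable(EditorBrowsableState.Never)] // hidden from IntelliSense on both sides
	public void AddComponent(string key, Type classType) { /* ... */ }
}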

These are the kinds of things that are important, yet cause neither a compiler error nor a compiler warning, and don’t fail at runtime. These are the kinds of things you can validate in a test. They are the small things that make a big difference, and having a comprehensive battery of tests for the conventions in your application can greatly improve the confidence and morale of the team.