Unit tests are overrated

Something is rotten in the state of Development. It seems to me that we developers either ignore testing altogether, leaving it to the QA team when we throw the app over the wall to them, or we concentrate on unit tests only. Let me make one thing clear before we go any further – unit tests are a fantastic and extremely valuable tool in many cases, and I am by no means trying to discourage anyone from writing unit tests. They have their place in the development pipeline (especially if you’re doing TDD).

While I still usually write a fair amount of unit tests, I find the percentage of unit tests in the code I write is shrinking. In many cases where I previously would have written unit tests unconditionally, without really giving it any thought, I just find they are not the best bang for the buck.

Where would that be?

Let’s stop and think for a second about what unit tests are good at. Unit tests exercise single units of functionality, and they’re the most fine-grained tests you write. They test that a method produces the right output. They test that a class responds in the expected manner to the result of invoking another method, potentially on another object. They really shine when those algorithms and small-scale interactions are complex, or are part of an API that is going to be used extensively, where they do a great job of serving as executable documentation.
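To make that concrete, here is the kind of fine-grained test I mean – a single unit of functionality, input in, output out. The `slugify` helper is a made-up example for illustration, not from any particular codebase:

```python
import unittest

def slugify(title):
    """Hypothetical helper: turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    # Each test exercises one unit in isolation and doubles as
    # executable documentation of the expected behaviour.
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Unit Tests Are Overrated"),
                         "unit-tests-are-overrated")

    def test_collapses_repeated_whitespace(self):
        self.assertEqual(slugify("hello   world"), "hello-world")
```

At this scale, where the logic is a small, self-contained transformation, a unit test is exactly the right tool.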

However, for quite a few scenarios unit tests just aren’t the best approach, and that’s my goal with this blog post – to make you stop and think: should I write a unit test, or perhaps an integration test, with real, not stubbed-out, dependencies?

To illustrate the point, let me tell you a (real) story. I was called to a client recently to look at an issue they were having in their system. The system was composed of an administration website and a client application that communicated with the server side via a WCF service. The issue was that some information entered on the administration website wasn’t displayed properly in the client application – yet all the unit tests were passing, for all three elements of the system.

As I dug into the codebase I noticed that some transformation was being applied to the information coming from the web UI on the web-app end before it was saved, and then what was supposed to be the same process in reverse was happening on the WCF service side. Except it wasn’t. The transformation had changed, and had been updated everywhere except the website (this also reinforces the point I made in my previous blog post, that changes in application code should almost always be accompanied by changes to tests). So the unit tests of the two parts of the system were out of sync, as were their implementations, yet the unit tests weren’t able to detect that. Only a larger-scale integration test covering both ends of the spectrum – the saving of the information on the website side, and the reading of it from the same repository on the service side – would have caught this kind of change.
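A sketch of the kind of test that would have caught it: a round trip through both sides’ transformation code in one test. The function names and the trivial reversal transformation are invented stand-ins for the real web-app and WCF-service code:

```python
# Hypothetical stand-ins for the two halves of the real system:
# the web app transforms data before saving it, and the service
# is supposed to apply the inverse transformation when reading.
def transform_for_storage(text):        # web-app side
    return text[::-1]                   # e.g. reverse the string

def transform_from_storage(stored):     # WCF-service side
    return stored[::-1]                 # must remain the exact inverse

def test_round_trip_preserves_information():
    # An integration-style test covering both ends of the spectrum.
    # If either side's transformation changes without the other,
    # this fails, even while each side's own unit tests still pass.
    original = "some admin-entered information"
    assert transform_from_storage(transform_for_storage(original)) == original
```

Each side’s unit tests only check that side against its own expectations; the round trip is the property the system actually has to uphold.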

In closing, let me share a secret with you – Windsor has very few unit tests. Most of the tests it does have (and we have close to 1000 of them) exercise the entire container, with no part stubbed out, in simulations of real-life scenarios. That works exceptionally well for us, and it lets us cover a broad range of functionality from the perspective of someone who is actually using the container in their real application.

So that’s it – do write tests, and do remember to have more than just a single hammer in your tool belt.


Igor Brejc says:

Krzysztof, I don’t think Windsor is a good example to show the benefits of integration tests vs. unit tests. From my experience writing my own IoC container I can agree that for such a scenario integration tests are actually much easier and more helpful to write than unit tests, for one simple reason: most things happen in-memory, without any need to interact with the outside environment.
But once your system has external dependencies (file system, network, database…), integration tests become much more difficult to write, because you need to set up these dependencies to a known state before executing the test (either through mocking or some other means). Since integration tests usually cover a wider scenario, they become bloated and pretty difficult to understand and maintain.
That’s why I still see the benefit of making sure your code is covered by unit tests and then implementing integration tests for important features/scenarios.


I agree, and I think the whole focus on code coverage has distracted from their purpose. There is little point testing 1-2 line methods which just pass through to another layer. I wish there were a way to get code coverage to ignore a method that a dev has deliberately decided not to test – perhaps those with a cyclomatic complexity of 1 and no more than a few statements.

Unit tests are for testing the inputs and outputs of a method, and they shine on complex calculations and data transformations. I think that unless you can think of edge cases to test, there is no point in unit testing.

I’ve also found that most bugs in an application (taking one without unit tests) occur in places that are difficult to unit test (UI/DB layers) in the first place.


The problem is that the distinction between unit and integration tests is vague and not very useful.
I think in terms of tests (and I call them unit tests), but my mind is really on specifications. A specification might be well defined in terms of real dependencies and/or test doubles (anything you like: hand-rolled test doubles, a mocking framework, simple stubs, or any other kind of double). I have some specifications that use both.
So I don’t find the separation between the two useful. More often I find value in the separation between “slow tests” and “fast tests” – for instance, those that need to make a database or WCF call.
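One way to make that slow/fast split concrete – a sketch under assumed conventions, not anything the commenter specifically uses – is to gate the slow suite behind an environment variable so the fast, in-memory tests run on every build:

```python
import os
import unittest

# Assumed convention (invented for this example): an environment
# variable toggles the slow suite on and off.
RUN_SLOW = os.environ.get("RUN_SLOW_TESTS") == "1"

class FastTests(unittest.TestCase):
    def test_discount_calculation(self):
        # Pure in-memory computation: runs in milliseconds.
        self.assertAlmostEqual(100 * 0.9, 90.0)

@unittest.skipUnless(RUN_SLOW, "set RUN_SLOW_TESTS=1 to run slow tests")
class SlowTests(unittest.TestCase):
    def test_saves_order_via_real_database(self):
        # Would talk to a real database or WCF endpoint here.
        pass
```

Running the fast suite constantly and the slow one on the build server keeps the feedback loop tight without abandoning the broader tests.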

Last but not least, I find it more comfortable to do TDD *top-down* in most cases. So almost always my first tests use far fewer test doubles and more real dependencies, because I don’t have the full picture of what I am doing at the beginning.

I agree with Jose, and I think Roy Osherove nailed it in his book when he wrote that the properties of a unit test are:
“It should be automated and repeatable.
It should be easy to implement.
Once it’s written, it should remain for future use.
Anyone should be able to run it.
It should run at the push of a button.
It should run quickly.”

Note that there is nothing there about the scope of the SUT – it need not be a single class. The distinction here is execution speed and working out of the box, with no external configuration needed.

A separate point: in cases where you care more about the interaction between your system and some external dependency whose API is not properly documented, it’s impossible to simulate it realistically. In those cases slow integration tests are the only way to go.