On Testing: Why write tests?

This post is the second part of my Back to basics: on testing series.

Developers writing tests

People new to software, or coming from organizations where all testing is done by dedicated people, often find the idea of developers doing testing bizarre.


After all, we’re the highly trained, educated professionals. We get paid to do the hard bits. Clicking around the UI to see if a label is misaligned or an app crashed should surely be delegated to somebody else, outsourced even.

The truth is, there are various kinds of tests, and while there is value in UI testing (automated or not), we’ll concentrate on something else first.

You write the tests for your future self

The idea of developers writing tests came not from academia but from the field – from people who spent years developing successful software. And as you get out there and start working with teams of people, on real-life, big systems that solve real problems, you realise that some things you believed (or in fact were taught) to be true, are not.

We’ll get into the details as we progress with this series, but let’s start with the biggest one:

The code is written, and read, and read, and read and read some more and modified, and read, and modified again, and read some more again, and then a few months later read yet again and again

You may notice there’s much more reading than writing there. That is true for most code – you don’t just write it and forget it, as is the case with coding assignments at university. Real-life code is long-lived, often expanded and modified, and commonly the person doing the expanding is not the person who originally wrote the code. In fact, the original author may have already moved on to work elsewhere.

  • How do you know what the code does?
  • How do you know where to make the change?
  • How do you know your change didn’t break any of the assumptions that code interacting with the bit you changed makes about it?
  • How do you know the change you made works the way you wanted it to?
  • How do you know the API you created (in other words, the public methods) is easy to use and will make sense to the other developers using it (who may also be you a few weeks from now!)?

These are the reasons for writing tests. Tests are the most economical way of ensuring the change you make two months from now fits within the design constraints you set out today. Tests are the fastest way for other developers (or, again, future you) to get a feel for how a piece of code works: what it does, and how you’re supposed to use it.

Don’t just take my word for it

In the next part of the series we’ll start looking at actual code, writing some tests, and explaining the concepts around them.

Until then, if you have any questions, or suggestions as to what to cover in future posts of the series, feel free to leave a comment.

Using Resharper to ease mocking with NSubstitute

The problem

While the C# compiler provides a decent level of generic parameter type inference, there’s a bunch of scenarios where it does not work, and you have to specify the generic parameters explicitly.

One such case is when you invoke a generic method inline, and the return type is the generic type parameter.

In other words, the following scenario:

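Here’s a minimal sketch of that scenario, using the names that appear later in this post (GenericMethod, UsesIFoo and IFoo):

public interface IFoo { }

public static T GenericMethod<T>()
{
   // obtains and returns a T somehow
   return default(T);
}

public static void UsesIFoo(IFoo foo)
{
   // ...
}

// nothing in the argument list mentions T, so the call below
// will not compile without <IFoo> specified explicitly:
UsesIFoo(GenericMethod<IFoo>());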

In this case the compiler won’t infer the generic parameter of GenericMethod and you have to specify it explicitly.

As helpful as Resharper often is, it currently doesn’t provide any help here either (other than placing the cursor at the appropriate location), and you have to specify the type parameter yourself.

Oh the typing!

I’m quite sensitive to friction in my development work, and I found this limitation quite annoying, especially since the pattern is used quite often.

Specifically, the excellent NSubstitute mocking framework uses it for the creation of mocks via its

Substitute.For<IMyTypeToMock>();

method.

If I wanted to use NSubstitute to provide the argument for my GenericMethod I’d have to do quite a lot of typing:

[screenshot: resharper_step1]

Type Su (or a few more characters) to position completion on the Substitute class.

Press Enter.

[screenshot: resharper_step2]

Type . and, assuming the right overload of For is selected, press Enter again (or press the arrow keys to pick the right overload first).

Finally, type enough of the expected type’s name for completion to select it, and press Enter to finish the statement.

[screenshot: resharper_step3]

It may not look like much, but if you’re writing a lot of tests those things add up. It ends up being just enough repetitive typing to break the flow of thought and make you concentrate on the irrelevant mechanics (how to create a mock for IFoo) rather than on what your test is supposed to be doing (verifying the behavior of the UsesIFoo method when a non-null arg is passed).

Resharper to the rescue

While there’s no easy way to solve the general problem, we can use Resharper to help us solve it in this specific case with NSubstitute (or any other API, I’m using NSubstitute as an example here).

For that, we’ll write a Smart Template.

How to: writing a Smart Template

Go to the Resharper menu, and pick Templates Explorer…

Make sure you’re on the Live Templates tab, and click New Template.

[screenshot: creating_template_1]

For Shortcut, select whatever you want to appear in your code completion for the template, for example NSub.

Select that the template should be allowed in C#, where expression is allowed.

Check the Reformat and Shorten qualified references checkboxes and proceed to specifying the template itself, as follows:

NSubstitute.Substitute.For<$type$>()

The $type$ is a template variable, and the grid on the right hand side allows us to associate a macro with it. That’s where the magic happens.

Pick Change macro… and select Guess type expected at this point.

After you’ve done all of that, make sure you save the template.

[screenshot: creating_template_2]

Result

Now your template will be available in the code completion.

[screenshot: resharper_step1a]

[screenshot: resharper_step2a]

All it takes to get to the end result now is to type ns, Enter, Enter.

The main benefit though is not the keystrokes you save, but the fact that there’s almost no friction to the process, so you don’t get distracted from thinking about the logic of your test.

On testing

Over the last few years I’ve been mostly blogging about various random topics. Most often those were related to Windsor: discussing its new and upcoming features, announcing releases, and taking an in-depth look at its underpinning principles.

Another bucket was one-off posts discussing various aspects of software development, mostly out of a larger context, and not really caring whether you, dear reader, have the experience and understanding of the topic or not.

Back to basics

Recently I had an interesting discussion with several people on Twitter about basics – more precisely, about the basics behind writing tests.

It seems to me (and I’m guilty of that myself) that we developers, chasing the latest great thing, forget about the basics. When was the last time you read a blogpost about the importance of testing? When was the last time you read about how to write tests, what to look out for, and how to keep the quality of your tests high and the maintenance cost low?

Several years ago I got a lot of value myself from reading blogs that taught me what is important in software, and I wouldn’t be the developer I am today if it wasn’t for those people who dedicated their time to share their experience and knowledge.

I may not be subscribing to the right blogs anymore, but I noticed those posts are much less frequent these days. To help fill the void I decided to concentrate a bit more on the fundamentals in the future, starting with what will hopefully evolve into a series of blogposts about tests.

Feedback

Do you think it’s a good idea? Any particular topics you would like me to emphasise, and areas to explore? Let me know in the comments. First post will be coming soon(ish).

IoC container solves a problem you might not have but it’s a nice problem to have

On frameworks and libraries

A logging framework helps you log what’s happening in your application. A UI framework helps you render and animate UIs for the user. A communication library helps connect parts of a distributed system.

All of these tasks and concepts are pretty easy to understand. They are quite down to earth and therefore, at a high level at least, easy to explain. They produce a tangible, additive difference in your application.

Also, the code of your application changes in order to use those frameworks and libraries. You add a logging framework, you write some code against its API, and you end up with a file on your disk, or some text in your console. You add a communication library, inherit some base classes, implement some interfaces to designate the communication endpoints of your application, and you can see two processes exchanging information. I hope you get the picture, and I don’t need to go on describing the tangible, additive and visible results of using a UI framework.

What about IoC containers?

So what about inversion of control containers? There’s a lot of confusion around what they do, and why you should use one at all. Every now and then I meet a developer who says they read all the definitions, introductions and basic examples, and they still don’t get why they would use a container. Other times I’m having discussions that can be summarised as follows (slightly exaggerated for dramatic effect):

I got one of the IoC containers, put it in my application, and then all hell broke loose. We got memory leaks, things weren’t appearing where they should, users were seeing other users’ data, we had to recycle the pool every hour because we were running out of database connections, and the app got very slow. Ah, and it also destroyed the maintainability of our code, because we never knew when something was safe to refactor, since we never knew where and who would try to get things from the container.

Let’s ignore the details for now and concentrate on the wider sentiment.

So? Should I use it or not?

The sentiment is one of confusion, scepticism and frustration. Was it a good idea to use the container? Should we have not used it? Would I use it if I was leading the team?

Truth is, those aren’t necessarily the right questions to ask. You can’t answer them in a vacuum. Should I use my phone to tweet when I’m on the bus? While the answer might be “sure, why not” if you’re a passenger, it would be an unequivocal “don’t even think about reaching for your phone” if you’re the driver.

I have seen applications where introducing a container immediately would only worsen things. I also haven’t worked with any .NET application where, if architected the way I like to do things, the container wouldn’t be beneficial in the long term.

It all boils down to the fact that a container solves certain problems that arise if and when you build your applications a certain way. Unless you do, you’re not going to see the benefits it brings, simply because the problems the container solves will not be the main problems you’re facing, or will not appear in your application at all.

What sort of architecture are we talking about?

A container has certain requirements in order to work smoothly. First and foremost, it requires structure and order. At a mechanical level, the container takes over a certain level of control over the low-level, mundane details of your application’s infrastructure. It is, however, still a tool, and unless the environment where it operates is clean, with a well-defined, limited number of abstractions, it will struggle – and so will you, trying to get it to work.

There’s a lot of content in the sentence above so let’s dig a bit deeper.

The assumption containers make is that you do have abstractions. Getting a little bit ahead of ourselves, one of the main reasons for using a container is to maintain loose coupling in our code. The way we achieve this is by constructing our types in such a way that they do not depend directly on other concrete classes. Whether it’s a high-level type or a low-level type, instead of having them depend on one another directly, the container expects them to depend on abstractions.
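In code, that assumption boils down to something like this (a sketch; the type names are illustrative):

public interface IPaymentGateway
{
   void Charge(decimal amount);
}

public class OrderProcessor
{
   // depends on the abstraction, not on any concrete payment gateway class
   private readonly IPaymentGateway paymentGateway;

   public OrderProcessor(IPaymentGateway paymentGateway)
   {
      this.paymentGateway = paymentGateway;
   }
}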

Another, related (pre)assumption is that you will build your classes to expose abstractions. And not just in any way, but keeping them small and crisp (structure and order, remember). To maintain loose coupling, each abstraction is expected to cover just a single small task, and most classes will provide just a single abstraction.

It is not uncommon for a concrete class to provide more than one abstraction (for example to bridge two parts of the system) but in that case the container assumes you will separate these two responsibilities into two abstractions.

A large number of abstractions in your system will be reusable, that is, a single abstraction will be exposed by more than one type. In cases like that, the container assumes all the implementations will obey the contract of the abstraction, that is, they will all behave in a way that will not be surprising to anyone consuming the abstraction without knowing what concrete implementation is behind it.

The point above is especially important for class-level extensibility. Certain types in your system will depend on a collection of dependencies, and you should not have to go back and change them whenever you add a new type that ends up in that collection. The assumption is that the code using the abstractions, as well as the abstractions themselves, are built in such a way that the type can be extended by swapping (or adding new) dependencies, without the type itself having to change, as in the sketch below.
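A composite is the classic sketch of that kind of extensibility (the names are illustrative):

public interface INotifier
{
   void Notify(string message);
}

public class CompositeNotifier : INotifier
{
   private readonly INotifier[] notifiers;

   public CompositeNotifier(INotifier[] notifiers)
   {
      this.notifiers = notifiers;
   }

   public void Notify(string message)
   {
      // adding, say, a new SmsNotifier to the collection
      // requires no change to this type
      foreach (var notifier in notifiers)
      {
         notifier.Notify(message);
      }
   }
}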

That’s a lot of assumptions, isn’t it?

You may say the container makes quite a number of assumptions (or requirements!) about how your application is structured. You wouldn’t be wrong.

However, if you look up and read the previous section again, forgetting that we are talking about a container – does this feel like an architecture you should be striving for anyway? Does it feel like good, maintainable, object oriented architecture? If it does, then that’s good, because that’s intentional.

A container doesn’t put any artificial or arbitrary cumbersome constraints onto how you should build your application. It doesn’t ask you to compromise any good practices aimed at maximising the testability, maintainability and readability of your code and architecture. On the contrary – it encourages and rewards you for them.

It’s the inversion!

Finally we’re at a point where we can talk about some answers. The container makes a lot of assumptions about how your code is structured for two reasons.

That makes a container different from a logging framework or a communication framework. Neither of those cares about how your code is structured. As long as you implement the correct interfaces or inherit from the correct base classes, and are able to call the right API, they don’t care.

Inversion of control containers are fundamentally different. They do not require your code to implement or inherit anything, and there’s no API to call (outside of a single place called the composition root). Just by looking at the code of your application, you can’t tell whether there is a container involved or not.

That’s the inversion of control at work. This is what confuses so many people when they first start using containers. The difference from most other frameworks and libraries is so fundamental that it may be hard to comprehend and make the switch.

Inversion of control confuses people and makes it harder to see the value a container brings because the value is not tangible and it is not additive.

Contrast that with the examples from the first section. With a logging framework you can point a finger at a piece of code and say “There, here’s the logging framework doing its thing”. You can point a finger at your screen, at an entry in the Windows event log, or at a file on disk and say “There, here’s the result of the logging framework doing its thing”.

An inversion of control container is just “out there”. You can’t see the work it’s doing by inspecting your code. That ephemerality and intangibility is what confuses many new users and makes it harder for them to see the value the container brings to the table.

To make things even slightly more confusing for newcomers, when you’re using a container you don’t end up writing code to use the container (again, except for a little bit of code to wire it up). You end up writing less code as a result of using the container, not more – which again is harder to measure and appreciate than code you can clearly see as a result of using other kinds of frameworks.

What isn’t there…

The second reason why the container makes all of these assumptions about your code is very simple: unless your architecture follows these rules, you simply will not have the problems that containers are built to solve. And even if you do, they will be far from being your main concern.

If you build your application according to the assumptions that the container is built upon (I hope by now you’ve figured out that they are just the SOLID principles of good object oriented C# code), it will be composed of types exhibiting certain characteristics.

Following the Single Responsibility Principle will lead you down the road of having plenty of small classes.

The Open/Closed Principle will lead you down the way of encapsulating the core logic of each type in itself and delegating all other work to its dependencies, causing most of those small classes to have multiple dependencies.

The Interface Segregation Principle will cause you to abstract most of that large number of classes by exposing an interface or an abstract class, further multiplying the number of types in your application.

The Liskov Substitution Principle will force you to clearly shape your abstractions so that they can be used in a uniform way.

Finally, if you follow the Dependency Inversion Principle, your types will not only expose abstractions themselves, but also depend on abstractions rather than concrete types.

Overall you will have a big number of small classes working with a number of abstractions. Each of those classes will be concentrated on its own job and largely ambivalent about (and abstracted from) who is using it, and the details of who it is using.

This leaves you with a problem of putting together a group of those small types (let’s call them Components) in order to support a full, real life, end to end scenario. Not only will you have to put them together, but also, maintain and manage them throughout the lifetime of your application.

Each component will be slightly different. For some of them you will want to have just a single instance that is reused everywhere; for others, you’ll want to provide a new instance each time one is needed, or maybe reuse instances within the scope of a web request, or any other arbitrary scope.

Some of them will have decommission logic, like a Dispose method. Some of them will have ambient steps associated with their lifecycle, like “subscribe me to certain events in the event aggregator when I’m created, and then unsubscribe me when I’m about to be destroyed”.

Also thanks to the power of abstractions and guarantees put in place by following Liskov Substitution Principle you’ll want to compose some of the components using a further set of design patterns. You’ll want to create Decorators, Chains of Responsibility, Composites etc.

Not to mention the fact that in addition to depending on abstractions exposed by other components (let’s call them Services) your components may also have some configuration, or other “primitive” dependencies (like connection strings).

I hope you can see that the complexity of it all can quickly grow immensely. If you want to maintain loose coupling between your components, you will have to write a lot of code to put them together and maintain all the aforementioned qualities. The code to do that can quickly grow as well, both in size and complexity.

Container to save the day

Inversion of control containers exist to replace that code. I know it took me a while to get here, and I did it in a very roundabout way, but there you have it – that’s what IoC containers do. They help you stay sane, and reduce the friction of development when adhering to good Object Oriented Design principles in a language like C#.
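To make that a bit more concrete, here’s a sketch of what all that hand-written wiring collapses to with a container like Windsor (the component names are illustrative):

var container = new WindsorContainer();
container.Register(
   Component.For<IUserService>().ImplementedBy<UserService>().LifestyleTransient(),
   Component.For<ILogger>().ImplementedBy<ConsoleLogger>().LifestyleSingleton(),
   Component.For<IBus>().ImplementedBy<MessageBus>().LifestyleSingleton());

// the composition root – the single place in the application that talks to the container
var userService = container.Resolve<IUserService>();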

I know it’s not a sexy and concise definition, and some of you will probably need to re-read this post for it to “click”. Don’t worry. It didn’t click for me at first either. The value and goal of IoC container is abstract and intangible, and the nomenclature doesn’t help either.

The i-word

Some people feel that a lot of misconceptions and misunderstanding about containers comes from their very name – inversion of control container. If you just hear someone say those words, and leave it to your imagination, you’ll most likely see… blank. Nothing. It’s so abstract, that unless you know what it means, it’s pretty much meaningless.

To alleviate that, some people started referring to containers as Dependency Injection containers. At first this sounds like a better option. We pretty much understand what a dependency is in the context of a relationship between two objects. Injection is also something you can have a lot of associations with, software related or otherwise. Sounds simpler, more down-to-earth, doesn’t it?

It does sound like a better option, but in my experience it ends up being a medicine that’s worse than the disease. While it doesn’t leave you with a blank stare, it projects a wrong image of what a container does. By using the word injection it emphasises the process of constructing the graph of dependencies, ignoring everything else the container does – especially the fact that it manages the whole lifetime of an object, not just its construction. As a result, in my experience, people who think about containers in terms of DI end up misusing the container and facing problems that arise from that.

That’s why I stick with Inversion of Control Container – even though I’m quite aware it’s a poor name that doesn’t help much in understanding these tools, at least it doesn’t end up causing people to misunderstand them.

In closing

So there you have it. I admire your persistence if you made it this far. I hope this post will help people understand what a container does, and why putting it in an application that’s not architected for it may end up causing more problems than benefits.

I hope it also made it clearer what having a container in a well-designed application brings to the table. In fact, I only spoke about the most basic and general usage scenario that every container supports. Beyond that, particular containers have additional, sometimes very powerful features that will add further benefit to your development process.

So even if you feel you don’t have a need for a container right now, remember that having the problems a container is meant to solve is often a sign of a good, object oriented architecture – and those are good problems to have.

To constructor or to property dependency?

We developers are a bit like that comic strip (from the always great xkcd):

Crazy Straws

We can endlessly debate over tabs versus spaces (don’t even get me started), whether to use optional semicolons or not, and other seemingly irrelevant topics. We can have heated, informed debates with a lot of merit, or (much more often) not very constructive exchanges of opinions.

I mention that to explicitly point out that while this post might be perceived as one such discussion, its goal is not to be flamewar bait, but to share some experience.

So having said that, let’s cut to the chase.

Dependencies, mechanics and semantics

Writing software in an object oriented language like C#, following established practices (SOLID etc), we tend to end up with many types, each concentrated on doing one thing and delegating the rest to others.

Let’s take an artificial example of a class that is supposed to handle the scenario where a user moved and we need to update their address.

public class UserService: IUserService
{
   // some other members

   public void UserMoved(UserMovedCommand command)
   {
      var user = session.Get<User>(command.UserId);

      logger.Info("Updating address for user {0} from {1} to {2}", user.Id, user.Address, command.Address);

      user.UpdateAddress(command.Address);

      bus.Publish(new UserAddressUpdated(user.Id, user.Address));
   }
}

There are four lines of code in this method, and three different dependencies are in use: session, logger and bus. As the author of this code, you have a few options for supplying those dependencies. The two we’re going to concentrate on (by far the most popular) are constructor dependencies and property dependencies.

Traditional school of thought

The traditional approach to that problem among C# developers goes something like this:

Use constructor for required dependencies and properties for optional dependencies.

This option is by far the most popular and I used to follow it myself for a long time. Recently however, at first unconsciously, I moved away from it.

While in theory it’s a neat, arbitrary rule that’s easy to follow, in practice I found it is based on a flawed premise. The premise is this:

By looking at the class’ constructor and properties you will be able to easily see the minimal set of required dependencies (those that go into the constructor) and optional set that can be supplied, but the class doesn’t require them (those are exposed as properties).

Following the rule we might build our class like that:

public class UserService: IUserService
{
   // some other members

   public UserService(ISession session, IBus bus)
   {
      //the obvious
   }

   public ILogger Logger {get; set;}
}

This assumes that session and bus are required and logger is not (usually it would be initialised to some sort of NullLogger).

In practice I noticed a few things that make the usefulness of this rule questionable:

  • It ignores constructor overloads. If I have another constructor that takes just a session, does that mean bus is an optional or a mandatory dependency? Even without overloading, in C# 4 or later I can default my constructor parameters to null. Does that make them optional or mandatory?
  • It ignores the fact that in reality I very rarely, if ever, have truly optional dependencies. Notice the first code sample assumes all of its dependencies, including logger, are not null. If logger was truly optional, I should probably protect myself from NullReferenceExceptions, and in the process completely destroy the readability of the method, allowing it to grow in size for no real benefit.
  • It ignores the fact that I will almost never construct instances of those classes myself, delegating this task to my IoC container. Most mature IoC containers are able to handle constructor overloads and defaulted constructor parameters, as well as making properties required, rendering the argument about required versus optional moot.

Another, more practical rule, is this:

Use constructor for unchanging dependencies and properties for ones that can change during the object’s lifetime.

In other words, session and bus would end up being private readonly fields – the C# compiler would enforce that once we set the fields in the constructor (optionally validating the values first), we are guaranteed to be dealing with the same (correct, not null) objects ever after. On the other hand, Logger is up in the air, since technically at any point in time someone can swap it for a different instance, or set it to null. Therefore, what usually logically follows from there is that property dependencies should be avoided and everything should go through the constructor.
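To illustrate the mechanics being described – compiler-enforced readonly fields versus a property that stays up in the air – here’s a sketch:

public class UserService: IUserService
{
   private readonly ISession session;
   private readonly IBus bus;

   public UserService(ISession session, IBus bus)
   {
      this.session = session;
      this.bus = bus;
   }

   // technically this can be swapped, or set to null, at any point
   public ILogger Logger { get; set; }
}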

I was a follower of this rule until quite recently, but it does have its flaws as well.

  • It leads to some nasty code in subclasses where the base class has dependencies. One example I saw recently was a WPF view model base class with a dependency on the dispatcher. Because of that, every single view model inheriting from it (and there were many) needed to declare a constructor that takes the dispatcher and passes it up to the base constructor. Now imagine what happens when you find you need an event aggregator in every view model. You will need to alter every single view model class you have to add that, and that’s a refactoring ReSharper will not aid you with.
  • It trusts the compiler more than the developers. Just because a setter is public doesn’t mean developers will write code setting it to random values all over the place. It is all a matter of the conventions used in your particular team. On the team I’m currently working with, the rule everybody knows and follows is that we do not use properties, even settable ones, to reassign dependencies – we use methods for that. Therefore, based on the assumption that developers can be trusted, when reading the code and seeing a property I know it won’t be used to change state, so readability and immediate understanding of the code do not suffer.

In reality, I don’t think the current project, or the last few I worked on, even had a requirement to swap service dependencies. Generally you will direct your dependency graphs from more to less volatile objects (that is, objects will depend on objects with an equal or longer lifespan). In the few cases where that’s not the case, the dependencies would be pulled from a factory and used only within the scope of a single method, therefore not being available to the outside world. The only scenario I can think of where a long-lived dependency would be swapped is some sort of failover or circuit breaker, but then again, that would likely be dealt with internally, inside the component.

So, looking back at the code I tend to write, all dependencies tend to be mandatory, and all of them tend to not change after the object has been created.

What then?

This robs the aforementioned approaches of their advertised benefits. As such, I tend to draw the division along different lines. It has worked quite well for me so far and, in my mind, offers a few benefits. The rule I follow these days can be explained as follows:

Use constructor for application-level dependencies and properties for infrastructure or integration dependencies.

In other words, the dependencies that are part of the essence of what the type is doing go into the constructor, and dependencies that are part of the “background” go via properties.

Using our example from before, with this approach the session would come via the constructor. It is part of the essence of what the class is doing, being used to obtain the very user whose address information we want to update. Logging and publishing the information onto the bus are somewhat less essential to the task, being more of an infrastructure concern, therefore bus and logger would be exposed as properties.
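Applied to our example, a sketch of that division looks like this:

public class UserService: IUserService
{
   // the essence of what the class does
   private readonly ISession session;

   public UserService(ISession session)
   {
      this.session = session;
   }

   // the “background” – infrastructure and integration concerns
   public IBus Bus { get; set; }
   public ILogger Logger { get; set; }
}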

  • This approach puts a clear distinction between what’s essential business logic and what is bookkeeping. Properties and fields have different naming conventions, and (especially with ReSharper’s ‘Color identifiers’ option) with my code colouring scheme have significantly different colours. This makes it much easier to find what really is important in the code.
  • Given that infrastructure-level dependencies tend to be pushed up to base classes (like in the view model example with the dispatcher and event aggregator), the inheritors end up being leaner and more readable.

In closing

So there you have it. It may be less pure, but it trades purity for readability and reduces the amount of boilerplate code, and that’s a tradeoff I am happy to make.

Modularity is a feature

Stop me if you know this one. You find a library/framework that does something useful to you. You start using it, and then realise it doesn’t work the way you want in a certain scenario, or has a missing feature.

What do you do then?

  • Abandon the library and look for an alternative that is more “feature rich”?
  • Ask the author to support your scenario/submit a pull request with the feature?

Those two, from my experience, are by far the most common approaches people take when faced with this situation.

There is another way

Surprisingly, not many people take the third, most beneficial way – swapping part of the library for a custom one, tailored to your needs.

Now, to be honest, this has a prerequisite: the library you’re using must be designed in a modular fashion, so that there are swappable parts. Most popular open source libraries do a fairly good job at this, however. What this allows you to do is make specific assumptions and optimisations for your specific needs – ones the author of a generic library can never make – giving you a much more robust and targeted solution to your problems. Also, being able to simply extend and/or swap parts of the library/framework means you don’t have to wait for a new version, or waste time looking for and learning a different one, only to discover at some later point that it also doesn’t support some scenario you’ve got.

I did just that today with RavenDB (or more specifically with the Json.NET library it uses internally). The application I’m working on needs to store object graphs from a third party vendor – objects that weren’t designed with NoSQL storage in mind (or any storage in mind, for that matter) – and this was causing Json.NET some trouble. Not to bore you with the details: I was able to resolve the problem by swapping the DefaultContractResolver for my own implementation that catered for the quirks of the model we’re using, and in less than 20 lines of code I achieved something remarkable – I was able to store in RavenDB, with no issues, objects that were never meant to be stored in such a way. And all of that without the authors of RavenDB or Json.NET having to do anything.
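To give you an idea of the shape of such a swap (the override and the property name below are made up – the real resolver catered for the specifics of the vendor’s model), a custom resolver can be as small as this:

public class VendorModelContractResolver : DefaultContractResolver
{
   protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization)
   {
      var property = base.CreateProperty(member, memberSerialization);
      // e.g. skip a self-referencing property the vendor’s model exposes,
      // which would otherwise trip up the serializer
      if (property.PropertyName == "Parent")
      {
         property.Ignored = true;
      }
      return property;
   }
}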

Consider the alternatives

This brings us to the main point of this post – modularity is a feature. It is one of the most important features of any reusable piece of code. Consider the alternatives. If you don’t allow for swapping parts of your code from a generic one-size-fits-most solution to scenario-specific variants, you’re painting yourself into a corner in one of two ways.

Either you are writing a very rigid piece of software that, unless used exactly in the way you anticipated, will be unfit for the task.

Alternatively, as you discover new scenarios, you can try to stretch your default implementation to support them all, adding more and more configuration flags to the API. In the end you will find that for every new scenario you add support for, two new ones get reported that you don’t support, all while the complexity metrics of your code go through the roof and maintainability plummets.

[image: giant Swiss Army knife]

So be smart. Whether you’re creating a library or using one, remember about modularity.

[image: Lego bricks]

IoC concepts: Dependency

As part of preparing for the release of Windsor 3.1 I decided to revisit parts of Windsor’s documentation and try to make it more approachable to someone completely new to IoC. This and a few previous posts are excerpts from that new documentation. As such, I would appreciate any feedback, especially around how clearly the concepts in question are explained for someone who had no prior exposure to them.

Dependency

We’re almost there. To get the full picture we need to talk about dependencies first.

A component working to fulfil its service does not live in a vacuum. Just like a coffee shop depends on services provided by utility companies (electricity) and its suppliers (to get the beans, milk etc), most components will end up delegating non-essential aspects of what they’re doing to others.

Now let me repeat one thing, just to make sure it’s obvious: components depend on services of other components. This allows for nicely decoupled code, where your coffee shop is not burdened with the details of how the milk delivery guy operates.

In addition to depending on other components’ services, your components will also sometimes use things that are not components themselves. Things like connection strings, names of servers, values of timeouts and other configuration parameters are not services (as we discussed previously), yet they are valid (and common) dependencies of a component.

In C# terms, your component will declare the dependencies it requires, usually via constructor arguments or settable properties. In some more advanced scenarios, the dependencies of a component may have nothing to do with the class you used as its implementation (remember, the concept of a component is not the same as the class that might be used as its implementation), for example when you’re applying interceptors. This is advanced stuff however, so you don’t have to concern yourself with it if you’re just starting out.
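In a sketch (ISupplier and shopName are illustrative), that typically looks like this:

public class CoffeeShop: ICoffeeShop
{
   // a dependency on another component’s service
   private readonly ISupplier supplier;

   // a “primitive”, configuration-style dependency – not a service
   private readonly string shopName;

   public CoffeeShop(ISupplier supplier, string shopName)
   {
      this.supplier = supplier;
      this.shopName = shopName;
   }
}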

Putting it all together

So now let’s put it all together. To effectively use a container, we deal with small components, exposing small, well defined, abstract services, depending on services provided by other components, and on some configuration values, to fulfil the contracts of their services.

You will end up having many small, decoupled components, which will allow you to rapidly change and evolve your application, limiting the scope of changes. The downside is that you’ll end up with plenty of small classes with multiple dependencies that someone will have to manage.

That is the job of a container.

IoC concepts: Component

As part of preparing for the release of Windsor 3.1 I decided to revisit parts of Windsor’s documentation and try to make it more approachable to someone completely new to IoC. This and a few following posts are excerpts from that new documentation. As such, I would appreciate any feedback, especially around how clearly the concepts in question are explained for someone who had no prior exposure to them.

Component

Component is related to service. Service is an abstract term, but we’re dealing with the concrete, real world. A coffee shop as a concept won’t make your coffee. For that you need an actual coffee shop that puts that concept into action. In C# terms this usually means a class implementing the service will be involved.

public class Starbucks: ICoffeeShop
{
   public Coffee GetCoffee(CoffeeRequest request)
   {
      // some implementation
   }
}

So far so good. Now, the important thing to remember is that, just as there can be more than one coffee shop in town, there can be multiple components, implemented by multiple classes, in your application (a Starbucks and a CoffeeClub, for example).

It doesn’t end there! If there can be more than one Starbucks in town, there can also be more than one component backed by the same class. If you’ve used NHibernate in an application accessing multiple databases, you probably had two session factories, one for each database. They are both implemented by the same class, and they both expose the same service, yet they are two separate components (they have different connection strings, they map different classes, and potentially one is talking to Oracle while the other talks to SQL Server).

It doesn’t end there (still)! Who said that your local French coffee shop can only sell coffee? How about a pastry or a fresh baguette to go with the coffee? Just like in real life a coffee shop can serve other purposes, a single component can expose multiple services.
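As a sketch (FrenchCafe and IBakery are illustrative), a single class can implement both services, and be registered in Windsor as one component exposing both:

public class FrenchCafe: ICoffeeShop, IBakery
{
   public Coffee GetCoffee(CoffeeRequest request)
   {
      // some implementation
   }

   public Baguette GetBaguette()
   {
      // some implementation
   }
}

var container = new WindsorContainer();
container.Register(
   Component.For<ICoffeeShop, IBakery>()
      .ImplementedBy<FrenchCafe>());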

One more thing before we move forward. While not explicitly stated so far, it’s probably obvious to you by now that a component provides a service (or a few). As such, all the classes in your application that do not really provide any services will not end up as components in your container. Domain model classes and DTOs are just a few examples of things you will not put in a container.

IoC concepts: Service

As part of preparing for the release of Windsor 3.1 I decided to revisit parts of Windsor’s documentation and try to make it more approachable to someone completely new to IoC. This and a few following posts are excerpts from that new documentation. As such, I would appreciate any feedback, especially around how clearly the concepts in question are explained for someone who had no prior exposure to them.

As with every technology, Windsor has certain basic concepts that you need to understand in order to use it properly. Fear not – they may have scary, complicated names and abstract definitions, but they are quite simple to grasp.

Service

The first concept that you’ll see over and over in the documentation and in Windsor’s API is service. The actual definition goes somewhat like this: “a service is an abstract contract describing some cohesive unit of functionality”.

Service in Windsor and WCF service
The term service is extremely overloaded and has become even more so in recent years. Services as used in this documentation are a broader term than for example WCF services.

Now in plain language, let’s imagine you enter a coffee shop you’ve never been to. You talk to the barista, order your coffee, pay, wait and enjoy your cup of perfect Cappuccino. Now, let’s look at the interactions you went through:

  • specify the coffee you want
  • pay
  • get the coffee

They are the same for pretty much every coffee shop on the planet. They are the coffee shop service. Does it start making a bit more sense now? The coffee shop has clearly defined, cohesive functionality it exposes – it makes coffee. The contract is pretty abstract and high level. It doesn’t concern itself with “implementation details”: what sort of coffee machine and beans the coffee shop has, how big it is, what the barista’s name is, or the colour of her shirt. You, as a user, don’t care about those things; you only care about getting your cappuccino, so all the things that don’t directly impact you getting your coffee do not belong as part of the service.

Hopefully you’re getting a better picture of what it’s all about, and what makes a good service. Now, back in .NET land, we might define a coffee shop as an interface (since interfaces are by definition abstract and have no implementation details, you’ll often find that your services are defined as interfaces).

public interface ICoffeeShop
{
   Coffee GetCoffee(CoffeeRequest request);
}

The actual details can obviously vary, but this has all the important aspects. The service defined by our ICoffeeShop is high level. It defines all the aspects required to successfully order a coffee, and yet it doesn’t leak any details about who prepares the coffee, how, or where.

If coffee is not your thing, you can find examples of good contracts in many areas of your codebase. Take IController in ASP.NET MVC: it defines all the details required by the ASP.NET MVC framework to successfully plug your controller into its processing pipeline, yet gives you all the flexibility you need to implement the controller, whether you’re building a social networking site or an e-commerce application.
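For reference, that contract is just a single method (as ASP.NET MVC defines it):

public interface IController
{
   void Execute(RequestContext requestContext);
}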

If that’s all clear and simple now, let’s move on to the next important concept (in the next post).

Using ConventionTests

About conventions

I’m a big fan of using conventions when developing applications. I blogged about it in the past (here, here and later here). I also gave a talk at NDC last week about my experience with this approach and how I currently use it (slides are available here, video is here).


One problem I faced when trying to build convention validation tests was the lack of a simple API that would allow me to build the validation rules for my tests. I built a spike of such a library (called Norman) last year. It was meant to be a full-featured, generic convention-validation engine, built on top of Mono.Cecil, that would be able to validate rules inside method bodies (think rules like “no ViewModel can call data access code”). I quickly abandoned it, not being able to come up with a simple and concise API that would at the same time be generic enough to cover every possible scenario.


About ConventionTests

Having failed with Norman’s approach, on a later project I attacked the problem from a completely different angle. It is based on an observation about the conventions I tend to use: in the majority of cases, they are based on information about types and on the internal models of the frameworks I use. In other words – information that can be easily obtained via plain reflection.

Also, limiting myself to very few scenarios allowed me to come up with a focused, simple and concise API. That’s how the ConventionTests project was born.


Using ConventionTests

ConventionTests is a simple code-only NuGet package that provides a very minimalistic and limited API, enforcing a certain structure when writing convention tests, and integrating with NUnit (more on that below). Installing it will add two .cs files to the project and a few dependencies (NUnit, Windsor and ApprovalTests).


The ConventionTests.NUnit file is where all the relevant code is located, and the __Run file is the file that runs your tests (more on why that is later).

The approach is to create a file per convention and name the files in a descriptive manner, so that you can learn what conventions you have in the project just by looking at the files in your Conventions folder, without having to open them.


Each convention test implements (directly or indirectly) the IConventionTest interface. There’s an abstract implementation of the interface, ConventionTestBase, and a few specialized implementations for common scenarios provided out of the box: a type-based one (ConventionTest) and two for Windsor (WindsorConventionTest, non-generic, and a generic one for diagnostics-based tests).


Type-based convention tests

The most common and most generic group of conventions are those based around types and type information. Conventions like “every controller’s name ends with ‘Controller’”, or “every method on WCF service contracts must have OperationContractAttribute”, are examples of such conventions.

You write them by creating a class inheriting ConventionTest, which forces you to override one method. Here’s a minimal example:

public class Controllers_have_Controller_suffix_in_type_name : ConventionTest
{
    protected override ConventionData SetUp()
    {
        return new ConventionData
            {
                Types = t => t.IsConcrete<IController>(),
                Must = t => t.Name.EndsWith("Controller")
            };
    }
}

Hopefully the code is self-explanatory (it was designed to be). You specify two predicates: one to find the types your convention concerns, and the other specifying what the convention demands. If you now run the tests in your project (via the __Run.cs file) you will see a result similar to this:

[screenshot: failing test output]

In this case I had a single class implementing IController, named Foo, which obviously doesn’t follow the convention. That’s why the test failed, giving me a generic failure message which just lists the types that failed the test.

To make it more readable, I may want to make the test failure message more descriptive and actionable, so that whoever sees the failing test knows why their code is wrong and how to fix it.

protected override ConventionData SetUp()
{
    return new ConventionData
        {
            Types = t => t.IsConcrete<IController>(),
            Must = t => t.Name.EndsWith("Controller"),
            FailDescription = "Routing will not work unless controllers are named {something}Controller",
            FailItemDescription = t => t.ToString() + ". Name should be " + t.Name + "Controller perhaps?"
        };
}

The two additional properties will cause the failure to look like this:

[screenshot: failure message with custom descriptions]

This tells the developer who sees the failing test not only where the problem is, but also why it is a problem, and suggests a solution.

For scenarios where the convention is not all black and white, and there are some legitimate exceptions to the rule, there is built-in integration with the ApprovalTests framework (I blogged about ApprovalTests here). I know the convention I’m using as an example here is pretty black and white, but assuming it wasn’t, changing the code to:

protected override ConventionData SetUp()
{
    return new ConventionData
        {
            Types = t => t.IsConcrete<IController>(),
            Must = t => t.Name.EndsWith("Controller"),
            FailDescription = "Routing will not work unless controllers are named {something}Controller",
            FailItemDescription = t => t.ToString() + ". Name should be " + t.Name + "Controller perhaps?"
        }.WithApprovedExceptions("Let's say Foo has special routing or something");
}

will turn it into an approval test, where the approved file contains the output of the test; in this case:

[screenshot: approved file contents]

Windsor-based convention tests

Another common set of convention tests I tend to write are tests regarding my IoC container (I tend to use Castle Windsor, therefore that’s the one supported out of the box). The structure of the tests and the API are similar, with the difference being that instead of types we’re dealing with Windsor’s component handlers.

public class List_classes_registered_in_Windsor : WindsorConventionTest
{
    protected override WindsorConventionData SetUp()
    {
        return new WindsorConventionData(new WindsorContainer()
                                                .Install(FromAssembly.Containing<AuditedAction>()))
            {
                FailDescription = "All Windsor components",
                FailItemDescription = h => BuildDetailedHandlerDescription(h)+" | "+
                    h.ComponentModel.GetLifestyleDescription(),
            }.WithApprovedExceptions("We just list all of them.");

    }
}

In this case we’re dealing with slightly different types, but we follow the same pattern. We initialize the container and then specify predicates (if necessary) on its handlers. We don’t have to specify any, in which case all handlers from the container will be considered. This is useful for creating an (approval) test like this one, which just lists all the components in the container, along with the services they expose and their lifestyle. With that you get quick insight into what’s in your container just by looking at the approved file. Moreover, every change to the list of components will cause the test to fail, drawing your attention to it, so you can hopefully validate that the changes are indeed what you were expecting.

Windsor diagnostics-based convention tests

Windsor has some built-in diagnostics, and there’s a base class for tests that take advantage of them. For example, the following test will list all potentially misconfigured components.

public class List_misconfigured_components_in_Windsor : 
    WindsorConventionTest<IPotentiallyMisconfiguredComponentsDiagnostic>
{
    protected override WindsorConventionData<IHandler> SetUp()
    {
        return
            new WindsorConventionData<IHandler>(
                new WindsorContainer().Install(FromAssembly.Containing<AuditedAction>()))
                {
                    FailDescription = "The following components come up as potentially unresolvable",
                    FailItemDescription = MisconfiguredComponentDescription
                }.WithApprovedExceptions("it's *potentially* for a reason");
    }
}

In this case, instead of inspecting all handlers, we go through the handlers returned by the diagnostic (there’s another base class, with two generic parameters, for diagnostics that do not deal with handlers). We make it an approval test so we can have an approved list of false positives.


Those are the basic scenarios provided out of the box; you can, however, easily extend it to support more custom scenarios. For example, here’s a case that happened on my current project just this week. We’re using NHibernate and FluentNHibernate (by convention), and the default cascading on collections is set to “cascade all delete orphans”. This is not always a valid choice, and for those cases we use mapping overrides.


Custom convention tests

We wanted to create a convention test (with approval) that lists all our collections where we do cascade deletes, so that when we add a new collection the test will fail, reminding us of the issue and forcing us to pay attention to how we structure relationships in the application. To do this we could create a base NHibernateConventionTest and NHibernateConventionData to get a similar structure, or just build a simple one-class convention like this:

public class List_collection_that_cascade_deletes : ConventionTestBase
{
    public override void Execute()
    {
        // NH Bootstrapper is our custom class to set up NH
        var bootstrapper = new NHibernateBootstrapper();
        var configuration = bootstrapper.BuildConfiguration();

        var message = new StringBuilder("Collections with cascade delete orphan");
        message.AppendLine();
        foreach (var @class in configuration.ClassMappings)
        {
            foreach (var property in @class.PropertyIterator)
            {
                if(property.CascadeStyle.HasOrphanDelete)
                {
                    message.AppendLine(@class.NodeName + "." + property.Name);
                }
            }
        }
        Approve(message.ToString());
    }
}

This is obviously much more imperative and much less readable than the other tests, therefore I suggest that in most cases you make the effort and try to keep your tests as terse, readable and declarative as possible, so that they truly read like good executable documentation.


And if you’re not sure how something works, remember – the whole thing is a code-only NuGet package, so you can just read the code.


Get it here.