Category: Tools

Using Bower and NancyFx together

In .NET land, for package management we’re pretty much all settled on using Nuget. It’s close to ubiquitous, which means that pretty much any .NET library, open source or not, can be found there.

A few of the most popular non-.NET packages are there too. This category consists mostly of JavaScript/CSS libraries, which tend to be used in the front-end of projects using .NET on the backend.

While Nuget is great to help you get your feet wet in the great and broad JavaScript ecosystem, you’ll soon find it suffers from a few drawbacks:

  • The JavaScript packages on Nuget only cover a small subset of the ecosystem
  • The packages on Nuget are usually unofficial, that is, maintained by people not affiliated with the projects, and they can lag behind the official releases

Bower is to web (JavaScript/CSS) packages what Nuget is to .NET.

On a recent project we used NancyFx to build .NET backend services, and an SPA client (based mostly on AngularJS). We used Nuget for our .NET packages and Bower for the client-side ones. This post shows how to set Bower up on Windows, and integrate it into a Nancy project.

Getting started

To get Bower you’ll need to have Node and Git installed. Also make sure both of them are in your PATH. Once this is done, simply open your command prompt and type

npm i -g bower

After this finishes, you can type bower -v to confirm Bower is installed (and see its version).

Once this is done, let’s open Visual Studio and create a new Nancy project (I used one of the Nancy Visual Studio Templates).
Nancy Visual Studio Template
This will give you a simple starting website.
nancy_solution_structure
For static content, like the .css and .js files Bower manages, the convention in Nancy is to put them in the /Content folder (see the doco).

Let’s try using Bower to fetch Angular. Open your command prompt and make sure you’re in your solution directory.

Bower 101

There really are very few Bower commands you’ll need:

bower search angular will find all matching packages (there are quite a few)
Bower search results for angular
bower install angular will install the package (if you’re getting errors make sure Git is in your PATH)
bower install angular
You’ll notice, however, that instead of Content the package is installed into bower_components

the .bowerrc file

We can change the default directory where Bower puts the packages by creating a .bowerrc file in our solution directory, and putting the following in it:

{
  "directory" : "Web/Content"
}

Save the file, remove the bower_components folder and let’s install the package again.

bower install angular again
Notice this time the package ended up where we told it to.
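
As an aside, if you want the rest of the team to be able to restore the packages with a plain bower install, you can also keep a bower.json file in the solution directory (bower init will scaffold one for you, and bower install angular --save will record the dependency in it). A minimal one looks something like this (the name and version here are just examples):

{
  "name": "my-nancy-app",
  "dependencies": {
    "angular": "~1.2.16"
  }
}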

Bower is agnostic to Visual Studio, so it will not add the packages to your solution. You’ll need to select Show All Files in Solution Explorer, click on the angular folder and select Include in Project.
angular include in Solution
The reality is, it took you much longer to read this post than it will take you to do the tasks described.

This is the approach I’ve taken and it seems to be working well for us. Do you have a different workflow? Let me know in the comments.

Using Resharper to ease mocking with NSubstitute

The problem

While the C# compiler provides a decent level of generic parameter type inference, there’s a bunch of scenarios where it does not work, and you have to specify the generic parameters explicitly.

One such case is when you invoke a generic method inline, and the return type is the generic type parameter.

In other words, the following scenario:

resharper_generic_call

In this case the compiler won’t infer the generic parameter of GenericMethod and you have to specify it explicitly.
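
In code, the scenario looks something like this (the names here are made up for illustration):

public interface IFoo
{
    void Bar();
}

public void UsesIFoo(IFoo foo) { /* ... */ }

public T GenericMethod<T>()
{
    // T appears only in the return type, so the call site gives the
    // compiler nothing to infer it from
    return default(T);
}

// invoked inline, the type argument must be spelled out explicitly:
UsesIFoo(GenericMethod<IFoo>());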

As helpful as Resharper often is, it also doesn’t currently provide any help (other than placing the cursor at the appropriate location) and you have to specify the type parameter yourself.

Oh the typing!

I’m quite sensitive to friction in my development work, and I found this limitation quite annoying, especially since the pattern is used quite often.

Specifically, the excellent NSubstitute mocking framework uses it to create mocks, via the

Substitute.For<IMyTypeToMock>();

method.

If I wanted to use NSubstitute to provide the argument for my GenericMethod, I’d have to do quite a lot of typing:

resharper_step1

type Su or a few more characters to position completion on the Substitute class.

Press Enter.

resharper_step2

Type . and, assuming the right overload of For is selected, press Enter again (or use the arrow keys to pick the right overload first).

Finally, type enough of the expected type’s name for completion to select it, and press Enter to finish the statement.

resharper_step3

It may not look like much, but if you’re writing a lot of tests those things add up. It ends up being just enough repetitive typing to break the flow of thought and make you concentrate on the irrelevant mechanics (how to create a mock for IFoo) rather than on what your test is supposed to be doing (verifying the behaviour of the UsesIFoo method when a non-null arg is passed).
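
To make that concrete, the kind of test I have in mind looks more or less like this (the class under test and its members are, again, made up):

[Test]
public void UsesIFoo_calls_Bar_when_arg_is_not_null()
{
    var foo = Substitute.For<IFoo>();
    var subject = new ClassUnderTest();

    subject.UsesIFoo(foo);

    foo.Received().Bar(); // NSubstitute's way of verifying the call happened
}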

Resharper to the rescue

While there’s no easy way to solve the general problem, we can use Resharper to help us solve it in this specific case with NSubstitute (or any other API – I’m merely using NSubstitute as an example here).

For that, we’ll write a Smart Template.

How to: writing a Smart Template

Go to the Resharper menu, and pick Templates Explorer…

Make sure you’re on the Live Templates tab, and click New Template.

creating_template_1

For Shortcut, select whatever you want to appear in your code completion for the template, for example NSub.

Specify that the template should be allowed in C# where expression is allowed.

Check the Reformat and Shorten qualified references checkboxes and proceed to specifying the template itself, as follows:

NSubstitute.Substitute.For<$type$>()

The $type$ is a template variable, and the grid on the right-hand side allows us to associate a macro with it. That’s where the magic happens.

Pick Change macro… and select Guess type expected at this point.

After you’ve done all of that, make sure you save the template.

creating_template_2

Result

Now your template will be available in the code completion.

resharper_step1a

resharper_step2a

All it takes to get to the end result now is to type ns, Enter, Enter.

The main benefit though is not the keystrokes you save, but the fact that there’s almost no friction to the process, so you don’t get distracted from thinking about the logic of your test.

Modularity is a feature

Stop me if you know this one. You find a library/framework that does something useful to you. You start using it and then realise it doesn’t work the way you want in a certain scenario, or has a feature missing.

What do you do then?

  • Abandon the library and look for an alternative that is more “feature rich”?
  • Ask the author to support your scenario/submit a pull request with the feature?

Those two, from my experience, are by far the most common approaches people take when faced with this situation.

There is another way

Surprisingly, not many people take the third, most beneficial way, that is, swapping part of the library for a custom one, tailored to your needs.

Now, to be honest, this has a prerequisite: the library you’re using must be designed in a modular fashion, so that there are swappable parts. Most popular open source libraries, however, do a fairly good job at this. What this allows you to do is make assumptions and optimisations specific to your needs – ones the author of a generic library can never make – and therefore arrive at a much more robust and targeted solution to your problems. Also, being able to simply extend and/or swap parts of the library/framework means you don’t have to wait for a new version, or waste time looking for and learning a different library only to discover at some later point that it, too, doesn’t support some scenario you’ve got.

I did just that today with RavenDB (or more specifically, the Json.NET library it uses internally). The application I’m working on needs to store object graphs from a third party vendor – objects that weren’t designed with NoSQL storage in mind (or any storage in mind, for that matter), and this was causing Json.NET some trouble. Not to bore you with the details, I was able to resolve the problem by swapping the DefaultContractResolver for my own implementation that catered for the quirks of the model we’re using, and in less than 20 lines of code I achieved something remarkable – I was able to store in RavenDB, with no issues, objects that were never meant to be stored in such a way. And all of that without the authors of RavenDB or Json.NET having to do anything.
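
To give you an idea of the scale of the change, here’s a minimal sketch of the kind of swap I mean (this is not the actual code – the vendor type’s quirk is made up, and the hook for plugging the resolver in depends on the version of the RavenDB client you’re using):

using System.Reflection;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

public class VendorModelContractResolver : DefaultContractResolver
{
    protected override JsonProperty CreateProperty(MemberInfo member, MemberSerialization memberSerialization)
    {
        var property = base.CreateProperty(member, memberSerialization);
        // say the vendor's objects expose a self-referencing Root property
        // that trips the serializer up - just skip it
        if (property.PropertyName == "Root")
        {
            property.Ignored = true;
        }
        return property;
    }
}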

Consider the alternatives

This brings us to the main point of this post – modularity is a feature. It is one of the most important features of any reusable piece of code. Consider the alternatives: if you don’t allow for swapping parts of your code from the generic one-size-fits-most solution to scenario-specific variants, you’re painting yourself into a corner, in one of two ways.

You are writing a very rigid piece of software that, unless used exactly in the way you anticipated, will be unfit for the task.

Alternatively, as you discover new scenarios you can try to stretch your default implementation to support them all, adding more and more configuration flags to the API. In the end you will find that for every new scenario you add support for, two new ones get reported that you don’t support, all while your code complexity metrics go through the roof and maintainability plummets.

giant-swiss-army-knife-thumb-400x316

So be smart. Whether you’re creating a library or using one, remember modularity.

lego2

Testing framework is not just for writing… tests

Quick question – off the top of your head, without running the code, what is the result of:

var foo = -00.053200000m; 
var result = foo.ToString("##.##");

Or a different one:

var foo = "foo"; 
var bar = "bar"; 
var foobar = "foo" + "bar"; 
var concatenated = new StringBuilder(foo).Append(bar).ToString(); 

var result1 = AreEqual(foobar, concatenated); 
var result2 = Equals(foobar, concatenated);


public static bool AreEqual(object one, object two) 
{ 
    return one == two; 
}

How about this one from NHibernate?

var parent = session.Get<Parent>(1); 

DoSomething(parent.Child.Id); 

var result = NHibernateUtil.IsInitialized(parent.Child);

The point being?

Well, if you can answer all of the above without running the code, we’re hiring. I can’t, and I suspect most people can’t either. That’s fine. The question is – what are you going to do about it? What do you do when some 3rd party library, or part of the standard library, exhibits unexpected behaviour? How do you go about learning whether what you think should happen is really what does happen?

Scratchpad

I’ve seen people open up Visual Studio, create ConsoleApplication38, write some code using the API in question, including plenty of Console.WriteLine calls along the way (curse whoever decided Client Profile should be the default for console applications; switch to the full .NET profile), compile, run and discard the code. And then repeat the process with ConsoleApplication39 next time.

 

The solution I’m using feels a bit more lightweight, and has worked well for me over the years. It is very simple – I leverage my existing test framework and test runner. I create an empty test fixture called Scratchpad.

scratchpad

scratchpad_fixture
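
If you haven’t seen one before, a minimal version looks something like this (I’m assuming NUnit here):

using System;
using NUnit.Framework;

[TestFixture]
public class Scratchpad
{
    [Test]
    public void Try_things_out()
    {
        // poke at whatever API has you puzzled, e.g. the formatting
        // question from the top of this post
        var foo = -00.053200000m;
        Console.WriteLine(foo.ToString("##.##"));
    }
}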

This class gets committed to the VCS repository. That way every member of the team gets their own scratchpad to play with and validate their theories, ideas and assumptions. However, as the name implies, this is all one-off, throwaway code. After all, you don’t really need to test the BCL. One would hope Microsoft has already done a good job at that.

If you’re using git, you can easily tell it not to track changes to the file by running the following command (after you commit the file):

git update-index --assume-unchanged Scratchpad.cs

scratchpad_git
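
Should you later legitimately need to commit a change to the file, you can tell git to start tracking it again with:

git update-index --no-assume-unchanged Scratchpad.cs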

With this simple setup you will have a quick place to validate your assumptions (and answer questions about API behaviour) with little friction.

scratchpad_test

So there you have it, a new, useful technique in your toolbelt.

Approval testing – value for the money

I am a believer in the value of testing. However, not all tests are equal, and in fact not all tests provide value at all. Raise your hand if you’ve ever seen (unit) tests that test every corner case of a trivial piece of code that’s used once in a blue moon in an obscure part of the system. Raise your other hand if that test code was not written by a human, but generated.

 

As with any type of code, test code is a liability. It takes time to write it, and then it takes even more time to read it and maintain it. Considering time is money, rather than blindly unit testing everything, we need to constantly ask ourselves how we get the best value for the money – what’s the best way to spend our time writing test code, to write the least amount of it, and to best cover the widest range of possible failures in the most maintainable fashion.

Notice we’re optimising quite a few variables here. We don’t want to blindly write plenty of code, we don’t want to write sloppy code, and we want the test code to properly fulfil its role as our safety net, alerting us early when things are about to go belly up.

Testing conventions

What many people seem to find challenging to test is the conventions in their code. When all you have is a hammer (unit testing), it’s hard to hit a nail that not only isn’t really a nail, but isn’t even explicitly there to begin with. To make matters worse, the compiler is not really going to help you either. How would it know that LoginController not implementing IController is a problem? How would it know that the new dependency you introduced on the controller is not registered in your IoC container? How would it know that the public method on your NHibernate entity needs to be virtual?

 

In some cases the tool you’re using will provide some level of validation itself. NHibernate knows the methods ought to be virtual and will give you quite a good exception message when you set it up wrong. You can verify that quite easily in a simple test. Not everything is so black and white, however. One of the diagnostics provided by Castle Windsor is called “Potentially misconfigured components”. Notice the vagueness of the first word. They might be misconfigured, but not necessarily are – it all depends on how you’re using them, and the tool itself cannot know that. How do you test that efficiently?

Enter approval testing

One possible solution, which we’ve been using quite successfully on my current project, is approval testing. The concept is very simple. You write a test that runs, producing an output. Then the output is reviewed by someone and, assuming it’s correct, marked as approved and committed to the VCS repository. On subsequent runs the output is generated again and compared against the approved version. If they differ, the test fails, at which point someone needs to review the change and either mark the new version as approved (when the change is legitimate) or fix the code, if the change is a bug.

 

If the explanation above seems dry and abstract, let’s go through an example. Windsor 3 introduced a way to programmatically access its diagnostics. We can therefore write a test looking through the potentially misconfigured components, so that we get notified if something on the list changes. I’ll be using the ApprovalTests library for that.

[Test]
public void Approved_potentially_misconfigured_components()
{
    var container = new WindsorContainer();
    container.Install(FromAssembly.Containing<HomeController>());

    var handlers = GetPotentiallyMisconfiguredComponents(container);
    var message = new StringBuilder();
    var inspector = new DependencyInspector(message);
    foreach (IExposeDependencyInfo handler in handlers)
    {
        handler.ObtainDependencyDetails(inspector);
    }
    Approvals.Approve(message.ToString());
}

private static IHandler[] GetPotentiallyMisconfiguredComponents(WindsorContainer container)
{
    var host = container.Kernel.GetSubSystem(SubSystemConstants.DiagnosticsKey) as IDiagnosticsHost;
    var diagnostic = host.GetDiagnostic<IPotentiallyMisconfiguredComponentsDiagnostic>();
    var handlers = diagnostic.Inspect();
    return handlers;
}

What’s important here is that we’re setting up the container, getting the misconfigured components out of it, producing readable output from the list and passing it down to the approval framework to do the rest of the job.

Now, if you’ve set up the framework to pop up a diff tool when the approval fails, you will be greeted with something like this:

approval_diff

You have all the power of your diff tool to inspect the change. In this case we have one new misconfigured component (HomeController), which has a new parameter, appropriately named missingParameter, that the container doesn’t know how to provide. Now you either slap yourself on the forehead and fix the issue, if it really is an issue, or approve that dependency by copying the diff chunk from the left pane to the right, approved pane. By doing the latter you’re notifying the testing framework and your teammates that you know what’s going on, and that it’s not an issue given the way things are going to work. Coupled with a sensible commit message explaining why you chose to approve this difference, you get a pretty good trail of exceptions to the rule and the reasons behind them.
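
For the record, the approved output itself is just a text file committed next to the fixture – with ApprovalTests the file name follows, as far as I remember, a pattern along the lines of:

{TestClass}.{TestMethod}.approved.txt

with a corresponding .received.txt file written out whenever the test fails.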

 

That’s quite an elegant approach to quite a hard problem. We’re using it for quite a few things, and it’s been giving us really good value for the little effort it took to write those tests, and to maintain them as we keep developing the app and the approved files change.

 

So there you have it, a new, useful tool in your toolbelt.

Connector: Simple, zero friction Github -> AppHarbor integration

Recently, to play with some new technology, I came up with an idea to build an integration layer between Github and AppHarbor. What that means is it gives you the ability to work with your Github repository, reaping the benefits of all of its VCS-centric features, while automatically, continuously deploying your code to AppHarbor.

The actual scenario I had in mind is being able to use it for deployment of Open Source projects. AppHarbor offers fantastic, no-headache deployment in the cloud, while Github is perfect for keeping and developing your code in the open, in a social way. To have the cake and eat it too, Connector was born.

Connector

I hope you find it useful. It is free, use-at-your-own-risk-and-don’t-sue-me-if-something-breaks software. There’s still some work to be done, feature-wise, and a whole lot of polishing, but I decided to announce it early and get early feedback. If you have any suggestions, ideas or (gulp) bugs, let me know!

link: http://connector.apphb.com/

Hope that helps.

Simple guide to running a Git server on Windows, in a local network (kind of)

Last year I found myself in sudden need to quickly set up a working environment for a team of four, and as I like Git very much, I wanted to use it as our VCS. The problem was, we weren’t allowed to use any third party provider, so GitHub was off the table. As I searched the Internet, there were a few guides to setting up a team Git environment on Windows, but they all seemed very complicated and time consuming. For our modest needs we experimented a little and came up with a solution that was very simple, didn’t require any additional software to be installed anywhere and worked like a charm.

Recently I used it again on my current engagement, and one of my colleagues suggested I should blog it, so here goes.

Ready, steady, go

The guide assumes you already have your local Git set up. For that, there are plenty of resources on the Internet, including my own blogpost about Windows Git tooling.

The entire trick works like this – expose the folder containing your shared Git repository as a Windows network share.

Step one – bare git repository

There are two twists to the entire solution. The first one is that your shared repository needs to be initialized with the --bare flag.

git_bare_repository
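
In other words, something along these lines (the repository name is up to you):

git init --bare TeamProject.git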

Step two – Windows share

The second step is to expose the folder with our newly created repository as a Windows share. You can use the standard Windows mechanisms to control and limit access to the folder (make sure you give the developers write access!).

Step three – Map the share as network drive

This step is perhaps not strictly necessary, but I couldn’t get it to work otherwise, so here comes the second twist. In order for your developers to be able to access the shared folder via Git, they need to map it as a network drive.

sshot-10

Step four – Add remote repository in Git and code away

The last step is the standard Git procedure – every developer on your team needs to add the repository sitting under their newly created network drive as a remote. Notice the use of the “file:///” prefix in front of the mapped drive name.

sshot-11
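
Assuming the share got mapped as drive Z: and the repository folder is called TeamProject.git (both made up for this example), the command would look something like:

git remote add origin file:///z:/TeamProject.git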

Step five

That’s all. I hope you find it useful, and if you know a way to eliminate step three, let me know in the comments.

Tests and issue trackers – how to manage the integration

In highly disciplined teams, when a bug is discovered the following happens:

  • a test (or more likely a set of tests) is written that fails, exposing the bug
  • a ticket in the issue tracking system is created
  • a developer fixes the bug, runs all the tests, including the new ones, and if everything is green, the ticket is resolved

My question is: what if some time later (say, several weeks after) a developer wants to find which issue relates to the tests he’s looking at, or which tests document the bug he’s looking at? How do you manage the link between the tests and the issues in your tracker?

Solution one – file per ticket

A quite common solution to this problem is to have a special folder in your test project, and for each ticket a file named after the ticket’s ID, like this:

Tests solution

That works and is quite easy to discover. However, the downside of this solution is that it introduces fragmentation. If a bug was found in the shipping module, does it really make sense to keep some tests for the shipping module in the ShippingModuleTests class and some others in a CRM_4 class, merely because the latter were discovered by a tester and not the original developer?

Solution two – ticket id in method name

To alleviate that, another solution is often used. The tests end up in ShippingModuleTests, but the ID of the issue is encoded in the name of the test method, like this:

[Test]
public void CRM_4_Gold_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

[Test]
public void CRM_4_Silver_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

That’s a step in the right direction. It makes the link explicit and you can quickly navigate the relation in either direction. However, I don’t like it very much, because most of the time I couldn’t care less about the fact that a test documents a long-fixed bug, yet I am constantly reminded about it every time I run my tests.

tests

Solution three – description

The solution I have found myself using most recently is to leverage the description most testing frameworks let you associate with your tests.

[Test(Description = "CRM_4")]
public void Gold_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

[Test(Description = "CRM_4")]
public void Silver_preferred_customer_should_have_his_bonus_applied_to_net_price()
{
   //some logic here
}

This still makes the association explicit and searchable, but doesn’t constantly remind me of it where I don’t care.

tests

What about you? What approach do you employ to manage that?

Git tooling for .NET developers

Many developers working on Windows stay away from Git. There are many reasons for this, but from my observations and the discussions I’ve had, the most common can be summarized by this tweet from my friend Paul Stovell:

 

can everyone please stop using Git? Mercurial has a better UI (TortoiseHG), I'm sick of Git UI's

I’m not getting into holy wars, and I’m not trying to convince anyone that Git is better than any other VCS. Instead I’ll walk you through the tooling I use to interact with Git on Windows, with Visual Studio.

Git Extensions

The first thing you should get is Git Extensions.

Git Extensions context menu

With that, similar to the TortoiseX family of tools, you get a nice context menu that gives you quick access to the most common operations via a GUI, with no need to memorize command line options if you want to avoid it. You can also launch the Git command line in the selected folder, and then all the power of Git is at your disposal.

Visual Studio

If you’re a .NET developer, you’ll want to work from within Visual Studio. I’m sure you’ll be happy to learn that Git Extensions has really nice Visual Studio integration as well.

git_extensions_visual_studio

You get two things – a menu with all the options the Explorer context menu gives you and more, including the ability to edit the .gitignore file (the tool will also generate a new .gitignore file for you, with Visual Studio specific rules!) and to launch a Git bash. You also get a Git toolbar with the most commonly used commands: commit, browse, push, pull, stash and access to settings.
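
The generated rules cover the usual Visual Studio noise; a hand-written approximation (not the tool’s exact output) would be:

bin/
obj/
*.suo
*.user
_ReSharper*/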

The way I usually work with it is that I use the Git bash for most operations. There’s one exception to that rule though – committing.

git_extensions_commit_window

I think Git Extensions’ Commit window is the best of any VCS tooling I’ve worked with. It clearly separates the files you want to commit (in your index) from the files you’re leaving out for now. It clearly shows you the status of each file (new, deleted, modified) with distinctive, large icons, shows you an on-the-fly diff of what changed in any given file, and it’s blazing fast. The readability benefits are the main reason I stick to the UI for this operation.

Visual Studio Git Source Control Provider

In addition to Git Extensions, I use another tool called Git Source Control Provider, which plugs into the standard Visual Studio VCS provider mechanism to give me some additional functionality (you can get the tool via the Visual Studio extension manager).

git_source_control_provider

There are a few useful capabilities provided by this tool that I tend to rely on quite often (there are more, as you can read on the tool’s page):

  1. Overlay icons showing you the status of each file in Solution Explorer.
  2. It shows you the name of the current branch in the Solution Explorer bar at the top (see “(master)” on the screenshot below) – and you will work a lot with branches in Git.
  3. It gives you some additional options in the context menu.

git_source_control_provider_screenshot

This (plus command line) makes the job very, very simple and quick, and that’s what I stick to on my machine.

There’s one more thing that makes working with Git a pleasure (especially if you’re working on a team that’s not completely co-located).

Github

I love Github. It has a very clean, simple interface that makes going through project history, diffing commits and doing code reviews a simple and frictionless process.

 

Summary

Yes, perhaps those tools lack some of the eye candy that other tools have, but frankly – I don’t care, and neither should you. They are more than enough to let you quickly do whatever you need to do with your code, and they don’t stand in your way. And that’s what a good VCS and the tooling around it should be – something you don’t really have to think about, something you can rely on to keep track of what is happening to your code with confidence. And that’s precisely what Git is – so if you’ve been holding back, go ahead, install those tools and give Git a shot – you won’t look back.

My Visual Studio (with ReSharper) color settings

This is a post I’ve been meaning to write for quite a long time, yet I never got around to actually doing it. Now here it is – my Visual Studio color theme, which looks like this:

vs_scheme

The theme works with Visual Studio 2010 and requires Resharper 5.1 or newer, with the “Color Identifiers” option turned on.

Some things to notice:

Instance classes, static classes and interfaces all have very similar, but slightly different, shades of yellow/green.

Extension methods are slightly lighter than regular methods.

Mutable locals, immutable locals and constants also have similar, but slightly different, colors.

The background is dark, but not completely black.

 

Hope you enjoy it. If you do – you can grab it here.