Category: .NET

Git tooling for .NET developers

Many developers working on Windows stay away from Git. There are many reasons for this, but from my observations and the discussions I've had, the most common can be summarized by this tweet from my friend Paul Stovell:

 

can everyone please stop using Git? Mercurial has a better UI (TortoiseHG), I'm sick of Git UI's

I’m not getting into holy wars, and I’m not trying to convince anyone that Git is better than any other VCS. Instead I’ll walk you through the tooling I use to interact with Git on Windows, with Visual Studio.

Git Extensions

The first thing you should get is Git Extensions.

Git Extensions context menu

With that, similar to the TortoiseX family of tools, you get a nice context menu that gives you quick access to the most common operations via a GUI, with no need to memorize command line options if you want to avoid it. You can also launch the Git command line in the selected folder, and then all the power of Git is at your disposal.

Visual Studio

If you're a .NET developer, you'll want to work from within Visual Studio. I'm sure you'll be happy to learn that Git Extensions has really nice Visual Studio integration as well.

git_extensions_visual_studio

You get two things – a menu with all the options the Explorer context menu gives you and more, including the ability to edit the .gitignore file (the tool will even generate a new .gitignore file for you with Visual Studio specific rules!) and to launch a Git bash. You also get a Git toolbar with the most commonly used commands: commit, browse, push, pull, stash, and access to settings.

The way I usually work is that I use the Git bash for most operations. There's one exception to that rule though – committing.

git_extensions_commit_window

I think Git Extensions' Commit window is the best of any VCS I've worked with. It clearly separates the files you want to commit (in your index) from the files you're leaving out for now. It shows you the status of each file (new, deleted, modified) with distinctive, large icons, it shows you an on-the-fly diff of what changed in any given file, and it's blazing fast. The readability benefits are the main reason I stick to the UI for this operation.

Visual Studio Git Source Control Provider

In addition to Git Extensions I use another tool called Git Source Control Provider, which plugs into the standard Visual Studio VCS provider mechanism to give me some additional functionality (you can get the tool via the Visual Studio extension manager).

git_source_control_provider

There are a few useful capabilities provided by this tool that I tend to rely on quite often (there are more, as you can read on the tool's page):

  1. Overlay icons showing you the status of each file in Solution Explorer.
  2. It shows you the name of the current branch in the Solution Explorer bar at the top (see “(master)” on the screenshot below), and you will work a lot with branches in Git.
  3. It gives you some additional options in the context menu.

git_source_control_provider_screenshot

This (plus the command line) makes the job very, very simple and quick, and that's what I stick to on my machine.

There’s one more thing that makes working with Git a pleasure (especially if you’re working on a team that’s not completely co-located).

GitHub

I love GitHub. It has a very clean, simple interface that makes going through project history, diffing commits and doing code reviews a simple and frictionless process.

 

Summary

Yes, perhaps these tools lack some of the eye candy that other tools have, but frankly – I don't care, and neither should you. They are more than enough to let you quickly do whatever you need to do with your code, and they don't stand in your way. That's what a good VCS and the tooling around it should be – something you don't really have to think about, something you can rely on to keep track of what is happening to your code with confidence. And that's precisely what Git is – so if you've been holding back, go ahead, install these tools and give Git a shot – you won't look back.

Lock-free thread safe initialisation in C#

I used to do quite a lot of concurrent programming early in my career. Recently I've been working again on optimising a large piece of code to behave better when run in a concurrent environment, and I must share a secret with you – I had great fun. It all started with a semi-serious challenge from a friend, and ended up consuming a big part of my Christmas–New Year break.

One of the problems I found was that a group of objects being used on multiple threads had to be initialised in a thread safe manner – that is, the initialisation code had to be run just once, atomically, on a single thread. Afterwards, however, it could safely execute concurrently on multiple threads. This was being accomplished with code similar to the following:

public Foo GetInitializedFoo()
{
   if(this.fooIsInitialized) return foo;
   lock(lockObject)
   {
      if(this.fooIsInitialized) return foo;
      
      InitializeFoo();
      return foo;
   }
}

As you can see, the code usually works lock-free, and only the first time, when foo is not yet initialised, does it have to lock to perform the initialisation. The reason this was suboptimal is that in heavily multi-threaded scenarios it performed poorly when multiple threads were trying to initialise foo at the same time.

Imagine there are four threads. The first of them succeeds in obtaining the lock and goes ahead to perform the initialisation, while the other three go into the waiting queue and sleep there, waiting for the lock to be freed. The first thread finishes the initialisation pretty quickly, but by the time the second thread wakes up, quite a lot of time has already been wasted. Then the second thread obtains the lock, but the remaining two, even though the variable is already initialised and therefore safe to read concurrently, still wait in the lock queue. The three remaining threads could all execute at once now, yet they will be woken up and executed each in turn.

Mad scientist solution

The problem was that using lock proved suboptimal for the task at hand. To try to address this (and to make my inner geek happy) I decided to look at making this code perform better.

While the .NET framework library has lots of synchronisation primitives, none of them really seemed any better suited for the job than good old Monitor. I didn't want to put waiting threads to sleep immediately, as that is quite an expensive operation (and so is waking them back up again), and the initialisation would usually complete quite quickly. I also wanted all the waiting threads to be able to execute concurrently, with no synchronisation, after the first one of them finished the initialisation. To accomplish this I came up with the following solution:

public sealed class ThreadSafeInit
{
	// We are free to use negative values here, as ManagedThreadId is guaranteed to be always > 0
	// http://www.netframeworkdev.com/net-base-class-library/threadmanagedthreadid-18626.shtml
	// the ids can be recycled as mentioned, but that does not affect us since at any given point in time
	// there can be no two threads with the same managed id, and that's all we care about
	private const int Initialized = int.MinValue + 1;
	private const int NotInitialized = int.MinValue;
	private int state = NotInitialized;

	public void EndThreadSafeOnceSection()
	{
		if (state == Initialized)
		{
			return;
		}
		if (state == Thread.CurrentThread.ManagedThreadId)
		{
			state = Initialized;
		}
	}

	public bool ExecuteThreadSafeOnce()
	{
		if (state == Initialized)
		{
			return false;
		}
		var inProgressByThisThread = Thread.CurrentThread.ManagedThreadId;
		var preexistingState = Interlocked.CompareExchange(ref state, inProgressByThisThread, NotInitialized);
		if (preexistingState == NotInitialized)
		{
			return true;
		}
		if (preexistingState == Initialized || preexistingState == inProgressByThisThread)
		{
			return false;
		}

		var spinWait = new SpinWait();
		while (state != Initialized)
		{
			spinWait.SpinOnce();
		}
		return false;
	}
}

The ThreadSafeInit class manages lock-free access to a part of the codebase. It has a state int flag which starts off in the not-initialised state. Then, as the ExecuteThreadSafeOnce method (which marks the beginning of the initialisation section) is called, it uses Interlocked.CompareExchange to set the flag, in a lock-free, thread safe manner, to the id of the current thread. The thread that succeeds returns true and is free to go ahead and do the initialisation. Any other thread waits for the first one to finish, but instead of going to sleep immediately it spins for a while, effectively wasting cycles; then, once the first thread finishes and calls the EndThreadSafeOnceSection method, all the other threads can proceed in parallel.

Usage pattern looks something like this:

public Foo GetInitializedFoo()
{
   if(this.fooIsInitialized) return foo;
   var initialising = false;
   try
   {
      initialising = init.ExecuteThreadSafeOnce();
      if(this.fooIsInitialized) return foo;
      
      InitializeFoo();
      return foo;
   }
   finally
   {
      if(initialising)
      {
         init.EndThreadSafeOnceSection();
      }
   }
}

Using this code I was able to measure an over two-fold improvement over the old solution in this part of the code. Sure, it's a micro-optimisation and in 99.9% of cases lock would be just fine, but as I said, it was a challenge, and I had a blast working on this, and that's all I care about. Well, perhaps not all. If you see any flaws in this code, ways to improve it, or if I just replicated something that someone else did much better a long time ago, let me know.

C# dynamic and generics dispatch

I remember when C# 3 came out, I immediately started using pretty much all of the new features in it, and I loved it. Imagine developing without var, automatic properties, lambdas, or extension methods, including those in the System.Linq namespace.

With C# 4 it's been different. Even though it's been several months since I started using Visual Studio 2010 and .NET 4, except for generic variance and the occasional use of named arguments I hardly use any of the new features. In particular, I couldn't find a good use case for the most hyped new feature – the dynamic keyword.

Until today, that is.

Short introduction to the problem

The application I'm working on right now has forms (you wouldn't expect that, would you?). Each of them collects some data in steps and, based on answers from the previous step, its own state and so on, decides whether it needs to collect more info or "does stuff".

So the C# 3 version…

Each form (questionnaire) implements a common interface as well as various (multiple!) closed versions of the IHandleData<TData> interface, one for each type of data it collects from the user. The solution is very loosely coupled and generic, so the code that wires these things together does not know any details about the types it's working with. All it knows is that it gets two objects – one of them is the data object, and the other implements the generic IHandleData interface closed over the data's type. It then has to somehow invoke the right Handle method (one of many) on the latter, passing the former as the argument.
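To make that structure concrete, here is a minimal sketch of what such types might look like. The IHandleData<TData> name comes from the post itself; IQuestionnaire, StepOneData, StepTwoData and Questionnaire are hypothetical stand-ins for the real types:

public interface IHandleData<TData>
{
    void Handle(TData data);
}

// The common interface all forms implement (details omitted, hypothetical name).
public interface IQuestionnaire { }

// Hypothetical data types - each represents the answers collected in one step.
public class StepOneData { }
public class StepTwoData { }

// A questionnaire implements IHandleData<T> once for every type of data it collects.
public class Questionnaire : IQuestionnaire, IHandleData<StepOneData>, IHandleData<StepTwoData>
{
    public void Handle(StepOneData data) { /* decide what to collect next */ }
    public void Handle(StepTwoData data) { /* ...or "do stuff" */ }
}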

To handle this, we came up with code resembling the following (this code is simplified – the actual solution is a tad more elaborate):

var viewType = data.GetType();
var closedType = typeof(IHandleData<>).MakeGenericType(viewType);
var handle = closedType.GetMethod("Handle");
handle.Invoke(questionnaire, new[] { data });

There's quite a lot of ceremony here to work around the fact that at compile time we don't have enough type information (and that type information is different from invocation to invocation) to satisfy the compiler.

Then again in C# 4…

In situations like this, involving generics and dynamic dispatch, you can leverage the new C# 4 capabilities to make the code much neater and more concise:

dynamic data = GetData();
dynamic questionnaire = GetQuestionnaire();
questionnaire.Handle(data);

The additional benefit, beyond the vastly improved readability of the code, is better performance. A useful thing to keep in the back of your head.

To IntelliSense or not to IntelliSense, is that even a question?

UPDATE (tl;dr version):

It appears people don't read the post before commenting. I'm not questioning IntelliSense's value. I'm questioning why .NET developers get a panic attack when someone says "let's use X" and there's no IntelliSense support for X.

As the ASP.NET MVC team pushes towards the third release of their framework, including a new out-of-the-box view engine, I hear more and more people complaining about its current preview release for one simple reason – it does not have IntelliSense (yet).

This reminds me of the similar approach people had to some other view engines, like NHaml, NVelocity and Brail, both in ASP.NET MVC and MonoRail. This can also be extended to not using Boo, which is an awesome .NET language that should get everyone excited, but doesn't – because there's no good tooling story around it in Visual Studio (yes, I know about the efforts to bring that to some degree).

So what’s the problem?

I ask you. The Ruby community, which seems to be the fastest growing development community at the moment, uses what most .NET or Java developers would call "primitive tools", with hardly even syntax coloring, yet what they seem to emphasize most when they compare Ruby to other platforms is productivity gains.

So is the refusal to use something unless it provides IntelliSense just an inner fear in .NET developers? I would liken it to the experience children have when they've learnt how to ride a bike with training wheels on each side, and now the training wheels are taken away.

Isn't this behavior just as irrational?

 


ConcurrentDictionary in .NET 4, not what you would expect

.NET 4 brings some awesome support for multithreaded programming as part of the Task Parallel Library, concurrent collections and a few other types that now live in the framework.

Not all the behaviour they expose is what you'd expect, though. What would be the result of the following code?

        
private static void ConcurrentDictionary()
{
    var dict = new ConcurrentDictionary<int, string>();
    ThreadPool.QueueUserWorkItem(LongGetOrAdd(dict, 1));
    ThreadPool.QueueUserWorkItem(LongGetOrAdd(dict, 1));
}

private static WaitCallback LongGetOrAdd(ConcurrentDictionary<int, string> dict, int index)
{
    return o => dict.GetOrAdd(index, i =>
    {
        Console.WriteLine("Adding!");
        Thread.SpinWait(1000);

        return i.ToString();
    });
}

We call the GetOrAdd method of ConcurrentDictionary<,> on two threads, trying to concurrently obtain an item from the dictionary, creating it using the provided delegate in case it's not yet present in the collection. What would be the result of running the code?

Surprisingly, both delegates will be invoked, so we'll see the message "Adding!" printed twice, not once. This unfortunately makes the class unusable for a whole range of scenarios where you must ensure that the creation logic is only run once. One scenario where I wished I could use this class was Castle Dynamic Proxy and its proxy type caching. Currently it uses coarse-grained double-checked locking. The code could benefit from a more elegant API like the one exposed by ConcurrentDictionary, and fine-grained locking.

However, it looks like ConcurrentDictionary is not an option, not unless it's coupled with Lazy<>, which, while I suppose it would work, is far less elegant and would not perform as well, since both classes would use their own, separate locking, so we'd end up using two locks instead of one, and going through an additional, unnecessary layer of indirection.
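For illustration, here is a minimal sketch of what the Lazy<> workaround would look like (this is not actual Dynamic Proxy code; it needs System, System.Collections.Concurrent and System.Threading). GetOrAdd may still build more than one Lazy<string> wrapper under contention, but only the one that ends up in the dictionary ever has its Value evaluated, so the creation logic runs exactly once per key:

private static readonly ConcurrentDictionary<int, Lazy<string>> cache =
    new ConcurrentDictionary<int, Lazy<string>>();

private static string GetOrCreate(int key)
{
    // Only the Lazy<string> that wins the race is stored and returned,
    // so its factory - and the "Adding!" message - runs once per key.
    return cache.GetOrAdd(key, i => new Lazy<string>(() =>
    {
        Console.WriteLine("Adding!");
        return i.ToString();
    }, LazyThreadSafetyMode.ExecutionAndPublication)).Value;
}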

So the message of the day is – the new concurrent classes in .NET 4 are awesome, but watch out for issues anyway.

Select is broken? (.NET 4)

Daniel pinged me today that he stumbled upon an odd issue while trying to update Moq to use Castle DynamicProxy 2.2. I investigated a bit more, and it appears to be one of those this-cannot-be-happening-select-is-broken situations.

When DynamicProxy tries to replicate an attribute on one method, its CustomAttributeData contains contradictory information. It happens only when running on .NET 4 (the method in question does not have the attribute in previous versions of the BCL).

Select is broken

That’s the method in question in Reflector:

Reflector

Here's a simplified code sample that reproduces the issue:

var method = typeof(HtmlInputText).GetMethod("GetDesignModeState", BindingFlags.Instance | BindingFlags.NonPublic);

Debug.Assert(method != null, "method != null");

var attributes = CustomAttributeData.GetCustomAttributes(method);

Debug.Assert(attributes.Count == 1, "attributes.Count == 1");

var attribute = attributes[0];

Debug.Assert(attribute.Constructor.GetParameters().Length == attribute.ConstructorArguments.Count, "This fails");

 

Now – what am I missing? Is this really a bug in BCL v4, or am I doing something wrong here?

Validate your conventions

I'm a big proponent of the whole convention over configuration idea. I give up some meticulous control over the plumbing of my code, and by virtue of adhering to conventions the code just finds its way into the right place. With great power comes great responsibility, as someone once said. You may give up direct control over the process, but you should still somehow validate that your code adheres to the conventions, to avoid often subtle bugs.

So what is it about?

Say you have a web MVC application, with controllers, views and such. You say controllers have a layer supertype, a common base namespace and a common suffix in the class name. Great, but what happens if you (or the other guy on your team) fail to conform to these rules? There's no compiler to raise a warning, since the compiler has no idea about the rules you (or someone else) came up with. What happens is, most likely your IoC container if you use one, or perhaps the routing engine of your application, will fail to properly locate the type, and the application will blow up in your face during a demo for a very important customer.
Take another scenario. Say you're using AutoMapper, and you came up with a convention that entities in the namespace Foo.Entities are mapped to DTOs in the namespace Foo.Dtos, so that Foo.Entities.Bar gets its mapping to Foo.Dtos.BarDto registered automatically. Again – it won't pick up the mapping to FizzbuzzDto when you accidentally put the Fizzbuzz entity in Foo.Services. Depending on your test coverage, you will find out either sooner or later.

These are usually hard rules, and both AutoMapper and most IoC frameworks will be able to figure out something is missing and notify you (with an exception). This is a good thing, since you fail fast and quickly get to fix the problem. You won't be that lucky all the time though. Say you're writing an MVVM framework, similar to the one Rob built, and create a convention wiring the enabled state of some controls, bound to a method Execute, to a property CanExecute on your view model (if you have no idea what the heck I'm talking about, go see Rob's presentation from Mix10). When you fail to conform to this rule, no exception will occur, the world will not end. A button will merely be clickable all the time, instead of only when some validity rule is met. It is a bug though, just one that's harder to spot, which means it usually won't be you who finds it – your angry users will.

So I have a problem?

So what to do? You can loosen your conventions. If you had an issue due to a namespace mismatch, perhaps don't require namespaces to match, just type names? Or if you forgot to put the Controller suffix on your controller, just ditch that requirement and it will work just fine, right?

Yes, it will (probably) – at least until you find yourself breaking some other requirement of your convention and feel the need to shed it as well. This however does not fix the problem – it merely replaces it with another: you will have rotten code.

The problem is not that your convention was too rigid (although sometimes that will indeed be the case as well); the real problem is you not adhering to the convention. Conventions are like compiler rules – follow them or don't consider your code 'working'. To get compiler-like behavior you need to take this one step further and, along with finding the code that adheres to your conventions, find the code that does not.

Take the AutoMapper example again. We have a designated namespace for our convention – Foo.Dtos. For each type in this namespace we require that a matching type in the Foo.Entities namespace exists. You should check whether an unmatched DTO type exists and send a notification if there is one. Also, since you require DTOs and only DTOs to live in that namespace, you should express that requirement in code – check for types with the Dto suffix in other namespaces, and types without this suffix in the Foo.Dtos namespace, and in each case send an appropriate notification as well.
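A minimal sketch of such a check is below – it could live in the bootstrapping code or in a test. The Foo.Dtos and Foo.Entities names follow the example above; ConventionValidator and SomeKnownType are hypothetical, and how you report violations (exception, log, build failure) is up to you:

using System;
using System.Linq;

public static class ConventionValidator
{
    public static void AssertDtoConventions()
    {
        // SomeKnownType is a placeholder for any type from the assembly under inspection.
        var assembly = typeof(SomeKnownType).Assembly;
        var allTypes = assembly.GetTypes();

        var violations =
            // every type in Foo.Dtos must carry the Dto suffix...
            allTypes.Where(t => t.Namespace == "Foo.Dtos" && !t.Name.EndsWith("Dto"))
                    .Select(t => t.FullName + ": types in Foo.Dtos must have the Dto suffix")
            // ...and have a matching entity in Foo.Entities...
            .Concat(allTypes.Where(t => t.Namespace == "Foo.Dtos" && t.Name.EndsWith("Dto") &&
                                        assembly.GetType("Foo.Entities." + t.Name.Substring(0, t.Name.Length - 3)) == null)
                            .Select(t => t.FullName + ": no matching entity in Foo.Entities"))
            // ...and no Dto-suffixed type may live anywhere else.
            .Concat(allTypes.Where(t => t.Name.EndsWith("Dto") && t.Namespace != "Foo.Dtos")
                            .Select(t => t.FullName + ": DTOs must live in Foo.Dtos"))
            .ToArray();

        if (violations.Length > 0)
        {
            throw new Exception("Convention violations:" + Environment.NewLine +
                                string.Join(Environment.NewLine, violations));
        }
    }
}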

This is a form of executable specification. When a new guy/gal joins your team and starts working on the code base, you don't have to worry about them breaking the code because they don't know the conventions you follow. The code will tell them when something is wrong, and what to do to make it right. It will also help you keep your code clean and structured, improving its readability and, in the long run, decreasing the possibility of code rot.

What to do?

I still haven't touched upon one important thing – how and where do you implement the convention validation rules? There are three places I would put convention validation:

  • The bootstrapping code itself. It often makes sense to take advantage of the fact that you're already scanning for conventions to perform the bootstrapping, and check for misses as well. Especially when the framework you're working with has no support for conventions built in and you're doing it all manually.
  • Unit tests. Add tests that call AutoMapper's AssertConfigurationIsValid or, in the case of a homegrown framework, reflect over your model and look for deviations from your conventions. It really is very little work upfront compared to the massive return it gives you.
  • NDepend. NDepend's CQL rules lend themselves quite nicely to this task. They work in a similar manner to unit tests, but they are external to your code. Especially with the very nice Visual Studio integration in version 3, it has very low friction and provides immediate feedback. In addition, the rules are pretty easy to read, lending themselves to the role of executable documentation.

Solving a programming puzzle

My fellow devlicio.us blogger Tim Barcz posted an interesting problem to solve. I think it’s fun, so I decided to give it a try. Now, I don’t know enough context to solve it for just about any case (and I don’t think that’s even possible), so I made a few assumptions.

  • The strings we’re gonna be handling are not too long. Otherwise I’d probably use a StringReader/Writer.
  • I’m looking at minimal solution. YAGNI – I could do some optimizations probably, but I want to keep it simple, because the sample problem (just a handful of characters) is simple.
  • The solution is not context-specific. In other words, if you want to swap characters differently based on context, it won't do it. Again, I'm keeping it simple.
  • I'm using a factory here to create the cleaner and feed it with the mapping, but based on the needs and tools available, I'd probably use an IoC container to get the instance and, if needed, get the mapping from an external file, where it's more readable and can be easily changed.

Now, here’s the code:

public string CleanInput( string input )
{
    StringCleaner cleaner = CleanerFactory.BuildCleaner( input );
    return cleaner.Clean( input );
}

The CleanerFactory is not important. All it does is construct an instance of a StringCleaner and feed it with the mapping. However, I’ll post it for completeness:

public static class CleanerFactory
{
    public static StringCleaner BuildCleaner( string input )
    {
 
        var cleaner = new StringCleaner();
        BuildMapping( cleaner );
        return cleaner;
    }
 
    private static void BuildMapping( StringCleaner cleaner )
    {
        /*
         *  Character       Correct ASCII Value    Bad Values
         *  Hyphen          45                     8208, 8211, 8212, 8722, 173, 8209, 8259
         *  Single Quote    39                     96, 8216, 8217, 8242, 769, 768
         *  Double Quote    34                     8220, 8221, 8243, 12291
         *  Space           32                     160, 8195, 8194
         */
        cleaner.AddMapping( (char) 45, (char) 8208, (char) 8211, (char) 8212, (char) 8722, (char) 173, (char) 8209, (char) 8259 );
        cleaner.AddMapping( (char) 39, (char) 96, (char) 8216, (char) 8217, (char) 8242, (char) 769, (char) 768 );
        cleaner.AddMapping( (char) 34, (char) 8220, (char) 8221, (char) 8243, (char) 12291 );
        cleaner.AddMapping( (char) 32, (char) 160, (char) 8195, (char) 8194 );
    }
}

The cleaner itself looks like this:

public class StringCleaner
{
        private IDictionary<char,char> mappings = new Dictionary<char, char>();
 
        public string Clean( string input )
        {
            if( string.IsNullOrEmpty( input ) )
            {
                return string.Empty;
            }
 
            var buffer = new StringBuilder( input );
            for( int i = 0; i < buffer.Length; i++ )
            {
                var character = buffer[i];
                char validCharacter;
                if( this.mappings.TryGetValue( character, out validCharacter ) )
                {
                    buffer[ i ] = validCharacter;
                }
            }
 
            return buffer.ToString();
        }
 
        public void AddMapping( char validValue, params char[] invalidValues)
        {
            if( invalidValues == null )
            {
                throw new ArgumentNullException( "invalidValues" );
            }
 
            foreach( var invalidValue in invalidValues )
            {
                this.mappings[ invalidValue ] = validValue;
            }
        }
}

Since strings are immutable, I'm using a StringBuilder to iterate over all the characters in the string and replace those I want. I'm using a Dictionary for the mapping because it has a readable API and O(1) lookup time.
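For example, with the mapping above in place (the input string here is made up), curly quotes, a curly apostrophe and an en dash all come out as their plain ASCII counterparts:

// Hypothetical usage: 8220/8221 are curly double quotes, 8217 a curly apostrophe,
// 8211 an en dash - all of them listed in the mapping above.
var input = "\u201CIt\u2019s clean\u201D \u2013 he said";
var output = CleanInput(input);
Console.WriteLine(output); // prints: "It's clean" - he said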

Framework Tips XII: Advanced .NET delegates

All .NET delegates inherit (indirectly) from the System.Delegate class (and directly from System.MulticastDelegate), which has a static CreateDelegate method. The method has two powerful characteristics that are not widely known.

All delegate types are implemented by the runtime. What do I mean by that? Let's have a look at what any delegate type looks like at the IL level:

delegateInIL

All the methods are runtime managed, meaning that, in a similar fashion to how you provide implementations for interface methods, the runtime provides implementations for delegate methods. Take a look at the constructor. Regardless of the delegate type, it always has two arguments: an object (on which a method will be invoked via the delegate) and a handle to that method.

While trying (with great help from Kenneth Xu) to find a way of proxying explicitly implemented interface members in Dynamic Proxy, I saw this as a possible way of implementing the feature. The problem in this case is that the proxy is an inherited class that has to call a private method on the base class. So, given that Dynamic Proxy works in IL, I tried to call that constructor in IL, passing the required parameters, but unsurprisingly the runtime does access checks, so what I got instead of a working delegate was this:

MethodAccessException

 

 

Kenneth suggested another approach – using Delegate.CreateDelegate, because as it turns out, you can successfully use that method to bind delegates to private methods. That is the first important characteristic, and it may be a life saver in certain cases.

So, instead of this (which won’t compile):

public override int BarMethod(int num1)
{
    Func<int, int> d = new Func<int, int>(this.DynamicProxy.Program.IFoo.BarMethod);
    return d(num1);
}

You can use this, which will both compile and work:

public override int BarMethod(int num1)
{
    Func<int, int> d = Delegate.CreateDelegate(typeof(Func<int, int>), this, barMethodInfo) as Func<int, int>;
    return d(num1);
}

It has a drawback though – it's an order of magnitude slower than creating the delegate via its constructor. The solution brings us to the second characteristic I wanted to talk about – open instance methods.

Each instance method at the IL level has an additional argument – the implicitly passed this pointer. Armed with this knowledge, let's read what MSDN says about Delegate.CreateDelegate and open instance methods:

If firstArgument is a null reference and method is an instance method, the result depends on the signatures of the delegate type type and of method:

  • If the signature of type explicitly includes the hidden first parameter of method, the delegate is said to represent an open instance method. When the delegate is invoked, the first argument in the argument list is passed to the hidden instance parameter of method.

  • If the signatures of method and type match (that is, all parameter types are compatible), then the delegate is said to be closed over a null reference. Invoking the delegate is like calling an instance method on a null instance, which is not a particularly useful thing to do.

This basically says that we can bind a method to a delegate that passes the 'this' argument explicitly, so that we can reuse the delegate for multiple instances. Here's how that works:

private static Func<Foo, int, int> d;
 
static Foo()
{
    d = Delegate.CreateDelegate(typeof(Func<Foo, int, int>), barMethodInfo) as Func<Foo, int, int>;
}
 
public override int BarMethod(int num1)
{
    return d(this, num1);
}

With this approach we pay the price of delegate creation only once per lifetime of the type.

I was curious how all these approaches would perform, so I created a quick poor-man’s benchmark.

I invoked an instance method 100 times using four approaches (a rough sketch of the setup follows the list):

  1. directly
  2. via a delegate created using its constructor
  3. via a delegate created using CreateDelegate, passing the first argument explicitly (not null)
  4. via a delegate created once using CreateDelegate and then reused for subsequent calls
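Below is a rough sketch of how such a measurement could be set up. It is not the original benchmark (that code is linked at the end of the post), and Foo, BarMethod and barMethodInfo are simplified stand-ins for the earlier snippets:

using System;
using System.Diagnostics;
using System.Reflection;

public class Foo
{
    private static readonly MethodInfo barMethodInfo = typeof(Foo).GetMethod("BarMethod");

    public int BarMethod(int num1)
    {
        return num1 + 1;
    }

    public static void Main()
    {
        var foo = new Foo();
        const int iterations = 100;

        // 1. direct call
        var watch = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++) foo.BarMethod(i);
        Console.WriteLine("direct:          {0} ticks", watch.ElapsedTicks);

        // 2. delegate created via its constructor (a method group conversion compiles down to that)
        watch = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
        {
            Func<int, int> d = foo.BarMethod;
            d(i);
        }
        Console.WriteLine("constructor:     {0} ticks", watch.ElapsedTicks);

        // 3. CreateDelegate closed over the instance, created on every call
        watch = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
        {
            var d = (Func<int, int>)Delegate.CreateDelegate(typeof(Func<int, int>), foo, barMethodInfo);
            d(i);
        }
        Console.WriteLine("CreateDelegate:  {0} ticks", watch.ElapsedTicks);

        // 4. open instance delegate created once and reused for every call
        var open = (Func<Foo, int, int>)Delegate.CreateDelegate(typeof(Func<Foo, int, int>), barMethodInfo);
        watch = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++) open(foo, i);
        Console.WriteLine("open, reused:    {0} ticks", watch.ElapsedTicks);
    }
}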

Here are the results (times are in ticks):

poormansDelegateBenchmark

As you can see, when reusing the delegate, after 100 calls we achieve the same performance as when using the delegate's constructor, which, given that it can be used in a broader spectrum of cases, is pretty amazing.

If you want to run the tests for yourself, the code is here.


Code generation and DRY

I'm generally against code generation, for a variety of reasons I'm not going to talk about here. However, in some cases small, targeted code generation is good, as it helps you avoid repetitive typing in verbose languages like C# without creating an unmaintainable blob of poor-quality code.

In other words, I very much like being able to go from this:

codegen_before

to this:

codegen_after

by just pressing Enter.