Talking Shop Down Under podcast about Windsor

Not too long ago Richard Banks was kind enough to have me on his podcast – Talking Shop Down Under. We chatted a little bit about Windsor, including what we’re working on for the next version. If you have a spare 30-something minutes, head over to the link below to listen to the podcast. And if you’ve never heard of Talking Shop, check out some of the other episodes as well – there are some really great interviews with some super smart people.

 

Episode 46 – Castle Windsor with Krzysztof Kozmic

Rant – Avoid ebooks from Packt Publishing

Since I got my Kindle DX over 6 months ago I’ve been buying lots of ebooks. Some came from the Kindle store, but most of them I bought directly from publishers – mostly O’Reilly and Pragmatic Bookshelf, but a few others as well. Both publishers offer ebooks in a variety of formats, including .mobi, which works on the Kindle (including the Kindle apps for Windows, iOS and Android), and .epub, which works on the iPad and many other devices. The books are nicely formatted, with great formatting of source code, clickable links, etc.

android_kindle_pragProg

Above you can see an example of that, from a PragProg book; the screenshot comes from the awesome Kindle for Android app.

I give this introduction to show that I am used to a high standard of ebooks, and to the fact that most publishers offer books in several formats (.mobi and .epub as a bare minimum, but most of them also offer PDFs that work on mobile devices, and O’Reilly even offers .apk for some books). That’s the quality I’m used to and that I expect. And then I bought a book from another publisher – Packt Publishing.

Please be advised that the following is a rant.

tl;dr version – Packt Publishing absolutely sucks. Don’t buy from them.

Fail one – “yes, we have .epub” (no you don’t)

image

The above comes from their website. I really wanted .mobi so I could read the book comfortably on my Kindle, but I figured the .epub would do. I can read .epub on my phone, and I thought I’d just convert it to .mobi if I needed to. To my surprise, and contrary to what was advertised, the .epub version of the book was nowhere to be found. I obviously learned that after I paid. To be fair, I checked a few weeks ago and the .epub version was there. It’s still a fail, as they didn’t let me know in any way that it had become available.

Fail two – “here’s your epub” (are you kidding me?)

I thought that (finally) getting the .epub version would be the end of my problems. I was wrong – it turns out they screwed up the formatting of the code samples. There’s no indentation at all, which makes reading them a horrible experience (and the book I bought has lots of code samples).

image

The screenshot comes from the fantastic Aldiko app for Android. To make sure the formatting issue isn’t the app’s fault, I checked two other apps, including one on my PC – it’s broken everywhere.

Fail three – “you can read the .pdf on Kindle” (no I can’t)

Back to the purchase day – oh well, I thought, at least I have the .pdf version of the book. I opened it on my PC and it looked all right, so I copied it to my Kindle and to my phone to read on the bus the following day. The following day, though, when I tried to open it, it would show me the cover, but when I tried to move forward to where, you know, the content is, the Kindle would freeze and show me a white screen, and the .pdf viewer app on my Android phone would plainly refuse to operate and crash. So the only option I was left with was to read the PDF on my PC.

Fail four – “Our commitment is to reply to all e-mails within 48 business hours.” (Well, it’s been four months…)

That really pissed me off, so I found their contact info and sent them an email:

image

I’m still waiting for a response.

Fail five – “We won’t spam you anymore. Promise” (oh, btw, here’s our latest offer)

After giving up on them I noticed they had started sending me spam with offers for the new books they publish. I complained and (to my surprise) quickly got an email with apologies and a promise that they wouldn’t spam me again. However, here’s what I found in my inbox today (the new mail is at the top; below it is the earlier email):

image 

Bottom line

I now really appreciate the effort other publishers put into the high-quality ebooks they produce. I’m even happier to pay for them, knowing how horrible the experience with an ebook publisher can be.

Windsor’s longest exception message just got longer

exception

Just one of the nice improvements we’re cooking up for the next version (codename “Wawel”).

The other blog

I started another blog. The topic range is quite different from what’s usually discussed here, so instead of polluting this blog and alienating my audience I decided to start a dedicated blog/Posterous for it.

This blog will stay as my programming/design/software development blog and is still going to be maintained. The other one is going to be about things I find on the web: useful apps, interesting and innovative startups (mostly non-US based ones, which tend to get less publicity in mainstream media like TechCrunch, Engadget, or Mashable), and everything related to that. Those who follow me on Twitter know I tweet that stuff pretty often. Now I want to take it to the next level, not be constrained by the 140-character limit, and make it less volatile.

If you feel that’s something that might interest you hop on and subscribe.

address: http://foundontheweb.posterous.com/

Git tooling for .NET developers

Many developers working on Windows stay away from Git. There are many reasons for this, but from my observations and the discussions I’ve had, the most common one can be summarized by this tweet from my friend Paul Stovell:

 

can everyone please stop using Git? Mercurial has a better UI (TortoiseHG), I'm sick of Git UI's

I’m not getting into holy wars, and I’m not trying to convince anyone that Git is better than any other VCS. Instead I’ll walk you through the tooling I use to interact with Git on Windows, with Visual Studio.

Git Extensions

The first thing you should get is Git Extensions.

Git Extensions context menu

With that, similar to the Tortoise family of tools, you get a nice context menu that gives you quick access to the most common operations via a GUI, with no need to memorize command-line options if you want to avoid them. You can also launch a Git command line in the selected folder, and then all the power of Git is at your disposal.

Visual Studio

If you’re a .NET developer, you’ll want to work from within Visual Studio. I’m sure you’ll be happy to learn that Git Extensions has a really nice Visual Studio integration as well.

git_extensions_visual_studio

You get two things – a menu with all the options the Explorer context menu gives you and more, including the ability to edit the .gitignore file (the tool will even generate a new .gitignore file for you with Visual Studio-specific rules!) and to launch a Git bash. You also get a Git toolbar with the most commonly used commands: commit, browse, push, pull, stash, and access to settings.

The way I usually work is to use the Git bash for most operations. There’s one exception to that rule though – committing.

git_extensions_commit_window

I think Git Extensions’ Commit window is the best of any VCS I’ve worked with. It clearly separates the files you want to commit (in your index) from the files you’re leaving out for now. It clearly shows the status of each file (new, deleted, modified) with distinctive, large icons, shows an on-the-fly diff of what changed in any given file, and it’s blazing fast. Those readability benefits are mostly why I stick to the UI for this operation.

Visual Studio Git Source Control Provider

In addition to Git Extensions I use another tool called Git Source Control Provider, which plugs into the standard Visual Studio VCS provider mechanism to give me some additional functionality (you can get the tool via the Visual Studio Extension Manager).

git_source_control_provider

There are a few useful capabilities provided by this tool that I tend to rely on quite often (there are more, as you can read on the tool’s page):

  1. Overlay icons showing the status of each file in Solution Explorer.
  2. It shows the name of the current branch in the Solution Explorer bar at the top (see “(master)” in the screenshot below) – and you will work a lot with branches in Git.
  3. It adds some additional options to the context menu.

git_source_control_provider_screenshot

This (plus the command line) makes the job very, very simple and quick, and that’s the setup I stick to on my machine.

There’s one more thing that makes working with Git a pleasure (especially if you’re working on a team that’s not completely co-located).

Github

I love GitHub. It has a very clean, simple interface that makes going through project history, diffing commits, and doing code reviews a simple, frictionless process.

 

Summary

Yes, perhaps these tools lack some of the eye candy that other tools have, but frankly – I don’t care, and neither should you. They are more than enough to let you quickly do whatever you need to do with your code, and they don’t stand in your way. And that’s what a good VCS and the tooling around it should be – something you don’t really have to think about, something you can rely on to keep track of what’s happening with your code with confidence. And that’s precisely what Git is – so if you’ve been holding back, go ahead, install those tools and give Git a shot – you won’t look back.

Lock-free thread safe initialisation in C#

I used to do quite a lot of concurrent programming early in my career. Recently I’ve been working again on optimising a large piece of code to behave better when run in a concurrent environment, and I must share a secret with you – I had great fun. It all started with a semi-serious challenge from a friend, and ended up consuming a big part of my Christmas/New Year break.

One of the problems I found was that a group of objects being used on multiple threads had to be initialised in a thread-safe manner – that is, the initialisation code had to be run just once, atomically, on a single thread. Afterwards, however, it could be safely executed concurrently on multiple threads. This was being accomplished with code similar to the following:

public Foo GetInitializedFoo()
{
   if(this.fooIsInitialized) return foo;
   lock(lockObject)
   {
      if(this.fooIsInitialized) return foo;
      
      InitializeFoo();
      return foo;
   }
}

As you can see, the code usually works lock-free; only the first time, when foo is not yet initialised, does it have to take a lock to perform the initialisation. The reason this was suboptimal is that in heavily multi-threaded scenarios it performed poorly when multiple threads were trying to initialise foo at once.

Imagine there are four threads. The first of them succeeds in obtaining the lock and goes ahead to perform the initialisation, while the other three go into the waiting queue and sleep there, waiting for the lock to be freed. The first thread finishes the initialisation pretty quickly, but by the time the second thread wakes up, quite a lot of time has already been wasted. Then the second thread obtains the lock, but the remaining two, even though the variable is already initialised and therefore safe to read concurrently, still wait in the lock queue. The three waiting threads could all execute at once now, yet they will be woken up and executed each in turn.

Mad scientist solution

The problem was that a lock proved to be suboptimal for the problem at hand. To address this (and to make my inner geek happy) I decided to look into making this code perform better.

While the .NET framework library has lots of synchronisation primitives, none of them really seemed any better suited for the job than good old Monitor. I didn’t want to put waiting threads to sleep immediately, as that is quite an expensive operation (and so is waking them back up), and the initialisation would usually be performed quite quickly. I also wanted all the waiting threads to be able to execute concurrently, with no synchronisation, after the first one of them finished the initialisation. To accomplish this I came up with the following solution:

public sealed class ThreadSafeInit
{
	// We are free to use negative values here, as ManagedThreadId is guaranteed to always be > 0
	// http://www.netframeworkdev.com/net-base-class-library/threadmanagedthreadid-18626.shtml
	// the ids can be recycled as mentioned, but that does not affect us since at any given point in time
	// there can be no two threads with the same managed id, and that's all we care about
	private const int Initialized = int.MinValue + 1;
	private const int NotInitialized = int.MinValue;
	private int state = NotInitialized;

	public void EndThreadSafeOnceSection()
	{
		if (state == Initialized)
		{
			return;
		}
		if (state == Thread.CurrentThread.ManagedThreadId)
		{
			state = Initialized;
		}
	}

	public bool ExecuteThreadSafeOnce()
	{
		if (state == Initialized)
		{
			return false;
		}
		var inProgressByThisThread = Thread.CurrentThread.ManagedThreadId;
		var preexistingState = Interlocked.CompareExchange(ref state, inProgressByThisThread, NotInitialized);
		if (preexistingState == NotInitialized)
		{
			return true;
		}
		if (preexistingState == Initialized || preexistingState == inProgressByThisThread)
		{
			return false;
		}

		var spinWait = new SpinWait();
		while (state != Initialized)
		{
			spinWait.SpinOnce();
		}
		return false;
	}
}

The ThreadSafeInit class manages lock-free access to a part of the codebase. It has an int state flag which starts off in a non-initialised state. Then, when the ExecuteThreadSafeOnce method, which marks the beginning of the initialisation section, is called, it uses Interlocked.CompareExchange to set the flag, in a lock-free thread-safe manner, to the id of the current thread. The thread that succeeds returns true and is free to go ahead and do the initialisation. Any other thread waits for the first one to finish, but instead of going to sleep immediately it spins for a while, effectively wasting cycles; then, once the first thread finishes and calls the EndThreadSafeOnceSection method, all the other threads can proceed in parallel.

The usage pattern looks something like this:

public Foo GetInitializedFoo()
{
   if(this.fooIsInitialized) return foo;
   var initializing = false;
   try
   {
      // init is an instance of ThreadSafeInit held by this class
      initializing = init.ExecuteThreadSafeOnce();
      if(this.fooIsInitialized) return foo;

      InitializeFoo();
      return foo;
   }
   finally
   {
      if(initializing)
      {
         init.EndThreadSafeOnceSection();
      }
   }
}

Using this code I was able to measure an over two-fold improvement over the old solution in this part of the code. Sure, it’s a micro-optimisation, and in 99.9% of cases a lock would be just fine, but as I said, it was a challenge, I had a blast working on it, and that’s all I care about. Well, not all, perhaps. If you see any flaws in this code, ways to improve it, or if I’ve just replicated something someone else did much better a long time ago, let me know.

C# dynamic and generics dispatch

I remember when C# 3 came out – I immediately started using pretty much all of its new features, and I loved it. Imagine developing now without var, automatic properties, lambdas, and extension methods, including those in the System.Linq namespace.

With C# 4 it’s been different. Even though it’s been several months since I started using Visual Studio 2010 and .NET 4, except for generic variance and the occasional use of named arguments, I hardly use any of the new features. In particular, I couldn’t find a good use case for the most hyped new feature – the dynamic keyword.

Until today, that is.

Short introduction to the problem

The application I’m working on right now has forms (you wouldn’t expect that, would you?). Each of them collects some data in steps and, based on the answers from the previous step, its own state, etc., decides whether it needs to collect more info or just “does stuff”.

So the C# 3 version…

Each form (questionnaire) implements a common interface plus various (multiple!) closed versions of the IHandleData&lt;TData&gt; interface – one for each type of data it collects from the user. The solution is very loosely coupled and generic, so the code that wires those things together does not know any details about the types it’s working with. All it knows is that it gets two objects – one of them is the data, and the other implements generic IHandleData closed over the data’s type. It then has to somehow invoke the right one (of many) Handle methods on the questionnaire, passing the data as the argument.
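
To give a feel for the shape of the types involved, here’s a minimal sketch. The names (IQuestionnaire, NewCustomerQuestionnaire, PersonalDetails, PaymentDetails) are made up for illustration; the real application’s types looked different:

// Hypothetical types, roughly matching the description above.
public interface IQuestionnaire
{
    // the common, non-generic members live here
}

public interface IHandleData<TData>
{
    void Handle(TData data);
}

// A single questionnaire implements IHandleData<T> once for each kind of data it collects.
public class NewCustomerQuestionnaire : IQuestionnaire,
                                        IHandleData<PersonalDetails>,
                                        IHandleData<PaymentDetails>
{
    public void Handle(PersonalDetails data) { /* decide on the next step */ }

    public void Handle(PaymentDetails data) { /* decide on the next step */ }
}

public class PersonalDetails { }

public class PaymentDetails { }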

To handle this, we came up with code resembling the following (this code is simplified – the actual solution is a tad more elaborate):

var viewType = data.GetType();
var closedType = typeof(IHandleData<>).MakeGenericType(viewType);
var handle = closedType.GetMethod("Handle");
handle.Invoke(questionnaire, new[] { data });

There’s quite a lot of ceremony here to work around the fact that at compile time we don’t have enough type information (and that the type information differs from invocation to invocation) to satisfy the compiler.

Then again in C# 4…

In situations like this, involving generics and dynamic dispatch, you can leverage the new C# 4 capabilities, to make the code much neater and much more concise:

dynamic data = GetData();
dynamic questionnaire = GetQuestionnaire();
questionnaire.Handle(data);

An additional benefit, on top of the vastly improved readability of the code, is better performance. A useful thing to keep in the back of your head.
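
If you want to get a feel for that difference yourself, a rough (and admittedly unscientific) comparison of the two dispatch techniques could look something like the sketch below. The IHandleData<> interface is the hypothetical one from the sketch above, and the iteration count is arbitrary:

using System;
using System.Diagnostics;

public static class DispatchComparison
{
    public static void Run(object questionnaire, object data)
    {
        const int iterations = 1000000;

        // Reflection-based dispatch: close the generic interface and invoke via MethodInfo.
        var closedType = typeof(IHandleData<>).MakeGenericType(data.GetType());
        var handle = closedType.GetMethod("Handle");
        var reflectionTimer = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
        {
            handle.Invoke(questionnaire, new[] { data });
        }
        reflectionTimer.Stop();

        // dynamic dispatch: after the first call the DLR caches the resolved call site.
        dynamic dynamicQuestionnaire = questionnaire;
        dynamic dynamicData = data;
        var dynamicTimer = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
        {
            dynamicQuestionnaire.Handle(dynamicData);
        }
        dynamicTimer.Stop();

        Console.WriteLine("reflection: {0}, dynamic: {1}", reflectionTimer.Elapsed, dynamicTimer.Elapsed);
    }
}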

How to make sharing code between .NET and Silverlight (a little) less painful

Windsor ships for both .NET and Silverlight. However, working with the codebase in such a way that we can target both runtimes at the same time with the least amount of friction possible has proved to be quite a challenge.

Problem

We’re using a single .sln and single .csproj files to target both runtimes (actually 5 of them, as we ship for 3 versions of .NET and 2 versions of Silverlight, but I digress) with some heavy (and I mean really heavy) Jedi trickery in MSBuild by Roelof.

We’re using conditional compilation to cater for incompatibilities between frameworks. For example, SerializableAttribute and NonSerializedAttribute are not present in Silverlight, so many of our classes look like this:

#if !SILVERLIGHT
   [Serializable]
#endif
   public class ParameterModel
   {
      // some code here
   }

This clutters the code quite a bit, making it harder to read. It adds friction to the process, since I have to remember each time to exclude the attribute from the Silverlight build. Even more annoying is the fact that ReSharper is not very fond of pre-processor directives in code, and certain features, like code clean-up and reordering, don’t really work the way they should.

Solution

I figured out a way to remove the need for conditional compilation in this case, so that I can write the code like this:

[Serializable]
public class ParameterModel
{
   // some code here
}

and still have it compile properly under Silverlight, producing exactly the same output as the first sample. How, you ask? By using a not very well known feature of the .NET BCL – ConditionalAttribute.

The entire trick works like this: I replicate the attributes conditionally for Silverlight builds, applying ConditionalAttribute on top of them with a fake condition that will never be true. That way the compiler will not complain when I compile the code, because the attribute types will be defined and visible; however, thanks to ConditionalAttribute they won’t be emitted onto any of my types. To keep them invisible to the outside world, I make them internal. Here’s what it looks like:

#if SILVERLIGHT
namespace System
{
    using System.Diagnostics;

    [Conditional("THIS_IS_NEVER_TRUE")]
    internal class SerializableAttribute : Attribute
    {
    }

    [Conditional("THIS_IS_NEVER_TRUE")]
    internal class NonSerializedAttribute : Attribute
    {
    }
}
#endif

I am on a horse.

Castle Windsor/Core 2.5.2

New Windsor logo

Today we released the second bugfix release for Windsor 2.5 and for Core 2.5 (including DynamicProxy, which is probably what’s most interesting to people). This time there are no new features, only fixes for issues identified since the last release (you can see the entire list in changes.txt). We have, however, updated to NLog 2, and since it works in Silverlight, this release also has the Logging Facility for Silverlight. This is very likely the last release in the 2.5 branch, and all development now moves to vNext.

One thing you may notice is that while the file version was bumped to 2.5.2, the assembly version still shows 2.5.1. This was requested by several users who didn’t want to have to recompile all their third-party dependencies built against Windsor 2.5.1 just to use the new version.
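
If you’re wondering how the two numbers can differ, it comes down to the two separate version attributes a .NET assembly carries. Roughly speaking (this is just an illustration, not Windsor’s actual AssemblyInfo.cs), it looks like this:

using System.Reflection;

// AssemblyVersion is what the CLR uses when binding to strongly-named assemblies,
// so keeping it at 2.5.1 means existing references continue to resolve.
[assembly: AssemblyVersion("2.5.1.0")]

// AssemblyFileVersion is only informational (it shows up in the file's properties),
// so bumping it to 2.5.2 doesn't break existing references.
[assembly: AssemblyFileVersion("2.5.2.0")]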

If you’re in such a situation, you can simply replace the older assemblies with the new ones, and in 99% of cases you should be just fine. However, in order to fix some issues there were a few breaking changes, so if the other projects you’re using depend on the changed code, you will have to upgrade them to a version compatible with Windsor 2.5.2 anyway.

New logo

One more noteworthy development in Castle land, as you may have noticed if you’ve visited the Castle Project website recently, is that we have a new logo (or rather – a set of logos) created by my very talented wife.

Now back to me – the downloads are here (Windsor and Core) and here (just Core) – enjoy. I am on a horse.

To IntelliSense or not to IntelliSense, is that even a question?

UPDATE (tl;dr version):

It appears people don’t read the post before commenting. I’m not questioning IntelliSense’s value. I’m questioning why .NET developers get a panic attack when someone says “let’s use X” and there’s no IntelliSense support for X.

As the ASP.NET MVC team pushes towards the third release of their framework, including a new out-of-the-box view engine, I hear more and more people complaining about the current preview release for one simple reason – it does not have IntelliSense (yet).

This reminds me of the similar approach people had to some other view engines like NHaml, NVelocity and Brail, both in ASP.NET MVC and MonoRail. It also extends to not using Boo, which is an awesome .NET language that should get everyone excited, but doesn’t – because there’s no good tooling story around it in Visual Studio (yes, I know about the efforts to bring that to some degree).

So what’s the problem?

I ask you. The Ruby community, which seems to be the fastest-growing development community at the moment, uses what most .NET or Java developers would call “primitive tools”, with hardly even syntax coloring, yet what they seem to emphasize the most when they compare Ruby to other platforms is productivity gains.

So is the refusal to use something unless it provides IntelliSense just an inner fear in .NET developers? I would liken it to the experience children have when they’ve learnt to ride a bike with training wheels on each side, and now the training wheels have been taken away.

Isn’t this behavior just as irrational?

 
