Patrick Smacchia wrote a post where he compares execution time using different patterns to iterate over a collection, namely List<int> and int[]. Since he provided the code, I decided to give it a go as well. The only thing I did was add [MethodImpl(MethodImplOptions.NoInlining)] to each method, since they were all very simple and could easily be inlined.
That said, here are my results (release build, run without a debugger, outside of Visual Studio):
If you compare those to Patrick’s results, you may notice a few things:
- Patrick has a faster PC (which only reminds me I really should start looking for a new one)
- When you turn off inlining, the results are quite different. First, foreach is as efficient on arrays as on lists (which differs from Patrick’s results; my guess is that since the CLR has more intimate knowledge of arrays, it somehow optimized them when inlining was turned on).
- Iterating with a for loop is faster when you do the trick Patrick called “count optimization”. This is quite the contrary to what Eric Gunnerson said, and I’m puzzled by it.
- DO NOT all go and change your foreach code to for loops! The numbers you see are the numbers of ticks when iterating over a collection of 100 000 000 elements! That means that in each and every case iteration is fast as hell, and it’s NOT your bottleneck. To help you visualize that, here’s a picture I borrowed from K. Scott Allen.
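For reference, here’s a minimal sketch of the kind of micro-benchmark being compared (method and variable names are my own; the original code is in Patrick’s post), with [MethodImpl(MethodImplOptions.NoInlining)] applied as described above:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Runtime.CompilerServices;

static class IterationBench
{
    // NoInlining keeps the JIT from folding these tiny methods into the
    // caller, which is the change I made to Patrick's original code.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static int SumForeach(List<int> list)
    {
        int sum = 0;
        foreach (int x in list) sum += x;
        return sum;
    }

    [MethodImpl(MethodImplOptions.NoInlining)]
    static int SumFor(int[] array)
    {
        int sum = 0;
        for (int i = 0; i < array.Length; i++) sum += array[i];
        return sum;
    }

    static void Main()
    {
        var list = new List<int>(new int[1000]);
        var array = new int[1000];
        var sw = Stopwatch.StartNew();
        SumForeach(list);
        SumFor(array);
        sw.Stop();
        Console.WriteLine(sw.ElapsedTicks);
    }
}
```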
>That means that in each and every case, iteration is fast as hell, and it’s NOT your bottleneck.
I don’t agree. For some range of algo where a nested loop consumes a lot of the CPU, it can save you a lot of perf (and it actually does, several times in my code). The big penalty seems to be in retrieving the particular element at position i.
*for some range of algo*
In 99% of cases this only obfuscates the code and gains nothing.
You don’t reference a page on my blog, so I’m not sure *exactly* what you’re referring to; this is just a guess.
Arrays can never change in length, and the JIT knows that, so doing the “count optimization” doesn’t buy you anything.
Lists can change in length, so:
int count = list.Count;
for (int i = 0; i < count; i++)
is different semantically from:
for (int i = 0; i < list.Count; i++)
In the second case the generated code must re-check list.Count on every iteration, which is why it’s slower.
If that’s not clear, consider what happens if the body of the loop removes elements from the list.
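To illustrate the semantic difference Eric describes, here’s a small sketch (my own example; the RemoveAt body is an assumption about what the loop might contain) where the two loop forms behave differently because the body shrinks the list:

```csharp
using System;
using System.Collections.Generic;

static class CountSemantics
{
    // Re-reads list.Count every iteration: the loop notices the list shrinking.
    static int VisitLive(List<int> list)
    {
        int visited = 0;
        for (int i = 0; i < list.Count; i++)
        {
            visited++;
            list.RemoveAt(list.Count - 1); // shrink the list inside the loop
        }
        return visited;
    }

    // Snapshots Count up front (the "count optimization"): the loop keeps
    // going even though the list is shrinking underneath it.
    static int VisitSnapshot(List<int> list)
    {
        int count = list.Count;
        int visited = 0;
        for (int i = 0; i < count; i++)
        {
            visited++;
            list.RemoveAt(list.Count - 1);
            // If this body also indexed list[i], the snapshot version would
            // eventually throw ArgumentOutOfRangeException.
        }
        return visited;
    }

    static void Main()
    {
        Console.WriteLine(VisitLive(new List<int> { 1, 2, 3, 4 }));     // prints 2
        Console.WriteLine(VisitSnapshot(new List<int> { 1, 2, 3, 4 })); // prints 4
    }
}
```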
Actually I *do* reference a specific page on your blog: in the sentence ‘Eric Gunnerson said’ your name is linked to the main page of your blog, and the word ‘said’ is linked to the following post: blogs.msdn.com/…/115566.aspx
Sorry if it wasn’t clear. I’ll try to be more explicit about such link chains in the future.
What you said in your comment and in the post makes total sense; that’s why I was surprised that the code using what Patrick called the count optimization works faster on arrays as well.
I read about another optimization method that removes the check for the end of the list/array: instead, the loop is put inside a try-catch statement that handles the exception thrown when an element past the end is accessed.
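For reference, a sketch of the trick being described (my own code): the loop header has no end-of-array test, and the stray access past the end is caught instead. Note that in .NET this is generally a pessimization: the JIT already eliminates bounds checks in simple for loops over arrays, and throwing an exception is expensive.

```csharp
using System;

static class TryCatchLoop
{
    // No end-of-array test in the loop; the out-of-range access itself
    // terminates the iteration.
    static long SumUntilException(int[] array)
    {
        long sum = 0;
        int i = 0;
        try
        {
            while (true)
            {
                sum += array[i]; // eventually throws IndexOutOfRangeException
                i++;
            }
        }
        catch (IndexOutOfRangeException)
        {
            // Expected: we walked one past the last element.
        }
        return sum;
    }

    static void Main()
    {
        Console.WriteLine(SumUntilException(new[] { 1, 2, 3 })); // prints 6
    }
}
```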