Category Archives: Parallelization

Revisiting randomness

Almost every Stack Overflow question which includes the words "random" and "repeated" has the same basic answer. It’s one of the most common "gotchas" in .NET, Java, and no doubt other platforms: if you create a new random number generator without specifying a seed, it will be seeded from the current instant of time. The current time as measured by the computer doesn’t change very often compared with how often you can create and use a random number generator – so code which repeatedly creates a new instance of Random and uses it once will end up showing a lot of repetition.
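For example, something like this (a deliberately minimal illustration of my own, not taken from any particular question) will usually print the same "roll" five times:

for (int i = 0; i < 5; i++)
{
    Random rng = new Random();          // seeded from the current time
    Console.WriteLine(rng.Next(1, 7));  // very likely to be the same value on every iteration
}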

One common solution is to use a static field to store a single instance of Random and reuse it. That’s okay in Java (where Random is thread-safe) but it’s not so good in .NET – if you use the same instance from multiple threads at the same time, you can corrupt its internal data structures.

A long time ago, I created a StaticRandom class in MiscUtil – essentially, it was just a bunch of static methods (to mirror the instance methods found in Random) wrapping a single instance of Random and locking appropriately. This allows you to just call StaticRandom.Next(1, 7) to roll a die, for example. However, it has a couple of problems:

  • It doesn’t scale well in a multi-threaded environment. When I originally wrote it, I benchmarked an alternative approach using [ThreadStatic] and at the time, locking won (at least on my computer, which may well have only had a single core).
  • It doesn’t provide any way of getting at an instance of Random, other than by using new Random(StaticRandom.Next()).

The latter point is mostly a problem because it encourages a style of coding where you just use StaticRandom.Next(…) any time you want a random number. This is undoubtedly convenient in some situations, but it goes against the idea of treating a source of randomness as a service or dependency. It makes it harder to get repeatability and to see what needs that dependency.

I could have just added a method generating a new instance into the existing class, but I decided to play with a different approach – going back to per-thread instances, but this time using the ThreadLocal<T> class introduced in .NET 4.0. Here’s the resulting code:

using System;
using System.Threading;

namespace RandomDemo
{
    /// <summary>
    /// Convenience class for dealing with randomness.
    /// </summary>
    public static class ThreadLocalRandom
    {
        /// <summary>
        /// Random number generator used to generate seeds,
        /// which are then used to create new random number
        /// generators on a per-thread basis.
        /// </summary>
        private static readonly Random globalRandom = new Random();
        private static readonly object globalLock = new object();

        /// <summary>
        /// Random number generator
        /// </summary>
        private static readonly ThreadLocal<Random> threadRandom = new ThreadLocal<Random>(NewRandom);

        /// <summary>
        /// Creates a new instance of Random. The seed is derived
        /// from a global (static) instance of Random, rather
        /// than time.
        /// </summary>
        public static Random NewRandom()
        {
            lock (globalLock)
            {
                return new Random(globalRandom.Next());
            }
        }

        /// <summary>
        /// Returns an instance of Random which can be used freely
        /// within the current thread.
        /// </summary>
        public static Random Instance { get { return threadRandom.Value; } }

        /// <summary>See <see cref="Random.Next()" /></summary>
        public static int Next()
        {
            return Instance.Next();
        }

        /// <summary>See <see cref="Random.Next(int)" /></summary>
        public static int Next(int maxValue)
        {
            return Instance.Next(maxValue);
        }

        /// <summary>See <see cref="Random.Next(int, int)" /></summary>
        public static int Next(int minValue, int maxValue)
        {
            return Instance.Next(minValue, maxValue);
        }

        /// <summary>See <see cref="Random.NextDouble()" /></summary>
        public static double NextDouble()
        {
            return Instance.NextDouble();
        }

        /// <summary>See <see cref="Random.NextBytes(byte[])" /></summary>
        public static void NextBytes(byte[] buffer)
        {
            Instance.NextBytes(buffer);
        }
    }
}

The user can still call the static Next(…) methods if they want, but they can also get at the thread-local instance of Random by calling ThreadLocalRandom.Instance – or easily create a new instance with ThreadLocalRandom.NewRandom(). (The fact that NewRandom uses the global instance rather than the thread-local one is an implementation detail really; it happens to be convenient from the point of view of the ThreadLocal<T> constructor. It wouldn’t be terribly hard to change this.)

Now it’s easy to write a method which needs randomness (e.g. to shuffle a deck of cards) and give it a Random parameter, then call it using the thread-local instance:

public void Shuffle(Random rng)
{
    …
}

deck.Shuffle(ThreadLocalRandom.Instance);

The Shuffle method is then easier to test and debug, and expresses its dependency explicitly.

Performance

I tested the "old" and "new" implementations in a very simple way – for varying numbers of threads, I called Next() a fixed number of times (from each thread) and timed how long it took for all the threads to finish. I also tried a .NET 3.5-compatible version using [ThreadStatic] instead of ThreadLocal<T>, and ran both the original code and the ThreadStatic version on .NET 3.5 as well. (A sketch of the ThreadStatic variant appears after the results.)

Threads | StaticRandom (4.0b2) | ThreadLocalRandom (4.0b2) | ThreadStaticRandom (4.0b2) | StaticRandom (3.5) | ThreadStaticRandom (3.5)
1       | 11582                | 6016                      | 10150                      | 10373              | 16049
2       | 24667                | 7214                      | 9043                       | 25062              | 17257
3       | 38095                | 10295                     | 14771                      | 36827              | 25625
4       | 49402                | 13435                     | 19116                      | 47882              | 34415

A few things to take away from this:

  • The numbers were slightly erratic; somehow it was quicker to do twice the work with ThreadStaticRandom on .NET 4.0b2! This isn’t the perfect benchmarking machine; we’re interested in trends rather than absolute figures.
  • Locking hasn’t changed much in performance between framework versions
  • ThreadStatic has improved massively between .NET 3.5 and 4.0
  • Even on 3.5, ThreadStatic wins over a global lock as soon as there’s contention

The only downside to the ThreadLocal<T> version is that it requires .NET 4.0 – but the ThreadStatic version works pretty well too.

It’s worth bearing in mind that of course this is testing the highly unusual situation where there’s a lot of contention in the global lock version. The performance difference in the single-threaded version where the lock is always uncontended is still present, but very small.
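For reference, here’s roughly what the ThreadStatic variant mentioned above looks like. This is a reconstruction rather than the exact benchmark code, but it shows the idea: a [ThreadStatic] field can’t have a useful initializer (it would only run for one thread), so the per-thread instance has to be created lazily.

using System;

public static class ThreadStaticRandom
{
    private static readonly Random globalRandom = new Random();
    private static readonly object globalLock = new object();

    [ThreadStatic]
    private static Random threadRandom;

    /// <summary>
    /// Returns an instance of Random specific to the current thread,
    /// creating it (with a seed from the global instance) on first use.
    /// </summary>
    public static Random Instance
    {
        get
        {
            if (threadRandom == null)
            {
                lock (globalLock)
                {
                    threadRandom = new Random(globalRandom.Next());
                }
            }
            return threadRandom;
        }
    }

    /// <summary>See <see cref="Random.Next()" /></summary>
    public static int Next()
    {
        return Instance.Next();
    }

    // Other convenience methods would delegate to Instance in the same way.
}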

Update

After reading the comments and thinking further, I would indeed get rid of the static convenience methods. Also, for the purposes of dependency injection, I agree that it’s a good idea to have a factory interface where that’s not overkill. The factory implementation could use either the ThreadLocal or ThreadStatic implementations, or effectively use the global lock version (by having its own instance of Random and a lock). In many cases I’d regard that as overkill, however.

One other interesting option would be to create a thread-safe instance of Random to start with, which delegated to thread-local "normal" implementations. That would be very useful from a DI standpoint.
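As a very rough sketch (my own, and untested), that could just be a Random subclass whose overrides delegate to the thread-local instances, so a single reference can be passed around and shared safely:

using System;

public sealed class ThreadSafeRandom : Random
{
    // Each override delegates to the Random belonging to the calling thread.
    public override int Next() { return ThreadLocalRandom.Instance.Next(); }
    public override int Next(int maxValue) { return ThreadLocalRandom.Instance.Next(maxValue); }
    public override int Next(int minValue, int maxValue) { return ThreadLocalRandom.Instance.Next(minValue, maxValue); }
    public override double NextDouble() { return ThreadLocalRandom.Instance.NextDouble(); }
    public override void NextBytes(byte[] buffer) { ThreadLocalRandom.Instance.NextBytes(buffer); }
}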

Iterating atomically

The IEnumerable<T> and IEnumerator<T> interfaces in .NET are interesting. They crop up an awful lot, but hardly anyone ever calls them directly – you almost always use a foreach loop to iterate over the collection. That hides all the calls to GetEnumerator(), MoveNext() and Current. Likewise iterator blocks hide the details when you want to implement the interfaces. However, sometimes details matter – such as for this recent Stack Overflow question.

The question asks how to create a thread-safe iterator – one that can be called from multiple threads. This is not about iterating over a collection n times independently on n different threads – it’s about iterating over a collection exactly once, without skipping or duplicating any items. Imagine it’s some set of jobs that we have to complete.

We assume that the iterator itself is thread-safe to the extent that calls from different threads at different times, with intervening locks, will be handled reasonably. That’s a reasonable assumption – basically, so long as it isn’t going out of its way to be thread-hostile, we should be okay. We also assume that no-one is trying to write to the collection at the same time.

Sounds easy, right? Well, no… because the IEnumerator<T> interface has two members which we effectively want to call atomically. In particular, we don’t want the collection { “a”, “b” } to be iterated like this:

Thread 1   | Thread 2
MoveNext() |
           | MoveNext()
Current    |
           | Current

That way we’ll end up not processing the first item at all, and the second item twice.

There are two ways of approaching this problem. In both cases I’ve started with IEnumerable<T> for consistency, but in fact it’s IEnumerator<T> which is the interesting bit. In particular, we’re not going to be able to use foreach over our result anyway, as each thread needs to use the same IEnumerator<T> – which it won’t if each of them uses foreach (which calls GetEnumerator() to start with).

Fix the interface

First we’ll try to fix the interface to look how it should have looked to start with, at least from the point of view of atomicity. Here are the new interfaces:

public interface IAtomicEnumerable<T>
{
    IAtomicEnumerator<T> GetEnumerator();
}

public interface IAtomicEnumerator<T>
{
    bool TryMoveNext(out T nextValue);
}

One thing you may notice is that we’re not implementing IDisposable. That’s basically because it’s a pain to do so when you think about a multi-threaded environment. Indeed, it’s possibly one of the biggest arguments against something of this nature. At what point do you dispose? Just because one thread finished doesn’t mean that the rest of them have… don’t forget that “finish” might mean “an exception was thrown while processing the job, I’m bailing out”. You’d need some sort of co-ordinator to make sure that everyone is finished before you actually do any clean-up. Anyway, the nice thing about this being a blog post is we can ignore that little thorny issue :)

The important point is that we now have a single method in IAtomicEnumerator<T> – TryMoveNext, which works the way you’d expect it to. It atomically attempts to move to the next item, returns whether or not it succeeded, and sets an out parameter with the next value if it did succeed. Now there’s no chance of two threads using the method and stomping on each other’s values (unless they’re silly and use the same variable for the out parameter).

It’s reasonably easy to wrap the standard interfaces in order to implement this interface:

using System.Collections.Generic;

/// <summary>
/// Wraps a normal IEnumerable[T] up to implement IAtomicEnumerable[T].
/// </summary>
public sealed class AtomicEnumerable<T> : IAtomicEnumerable<T>
{
    private readonly IEnumerable<T> original;

    public AtomicEnumerable(IEnumerable<T> original)
    {
        this.original = original;
    }

    public IAtomicEnumerator<T> GetEnumerator()
    {
        return new AtomicEnumerator(original.GetEnumerator());
    }

    /// <summary>
    /// Implementation of IAtomicEnumerator[T] to wrap IEnumerator[T].
    /// </summary>
    private sealed class AtomicEnumerator : IAtomicEnumerator<T>
    {
        private readonly IEnumerator<T> original;
        private readonly object padlock = new object();

        internal AtomicEnumerator(IEnumerator<T> original)
        {
            this.original = original;
        }

        public bool TryMoveNext(out T value)
        {
            lock (padlock)
            {
                bool hadNext = original.MoveNext();
                value = hadNext ? original.Current : default(T);
                return hadNext;
            }
        }
    }
}

Just ignore the fact that I never dispose of the original IEnumerator<T> :)

We use a simple lock to make sure that MoveNext() and Current always happen together – that nothing else is going to call MoveNext() between our TryMoveNext() calling it, and it fetching the current value.

Obviously you’d need to write your own code to actually use this sort of iterator, but it would be quite simple:

T value;
while (iterator.TryMoveNext(out value))
{
    // Use value
}

However, you may already have code which wants to use an IEnumerator<T>. Let’s see what else we can do.

Using thread local variables to fake it

.NET 4.0 has a very useful type called ThreadLocal<T>. It does basically what you’d expect it to, with nice features such as being able to supply a delegate to be executed on each thread to provide the initial value. We can use a thread local to make sure that so long as we call both MoveNext() and Current atomically when we’re asked to move to the next element, we can get back the right value for Current later on. It has to be thread local because we’re sharing a single IEnumerator<T> across multiple threads – each needs its own separate storage.

This is also the approach we’d use if we wanted to wrap an IAtomicEnumerator<T> in an IEnumerator<T>, by the way. Here’s the code to do it:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Threading;

public class ThreadSafeEnumerable<T> : IEnumerable<T>
{
    private readonly IEnumerable<T> original;

    public ThreadSafeEnumerable(IEnumerable<T> original)
    {
        this.original = original;
    }

    public IEnumerator<T> GetEnumerator()
    {
        return new ThreadSafeEnumerator(original.GetEnumerator());
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    private sealed class ThreadSafeEnumerator : IEnumerator<T>
    {
        private readonly IEnumerator<T> original;
        private readonly object padlock = new object();
        private readonly ThreadLocal<T> current = new ThreadLocal<T>();

        internal ThreadSafeEnumerator(IEnumerator<T> original)
        {
            this.original = original;
        }

        public bool MoveNext()
        {
            lock (padlock)
            {
                bool ret = original.MoveNext();
                if (ret)
                {
                    current.Value = original.Current;
                }
                return ret;
            }
        }

        public T Current
        {
            get { return current.Value; }
        }

        public void Dispose()
        {
            original.Dispose();
            current.Dispose();
        }

        object IEnumerator.Current
        {
            get { return Current; }
        }

        public void Reset()
        {
            throw new NotSupportedException();
        }
    }
}

I’m going to say it one last time – we’re broken when it comes to disposal. There’s no way of safely disposing of the original iterator at “just the right time” when everyone’s finished with it. Oh well.

Other than that, it’s quite simple. This code has the serendipitous property of actually implementing IEnumerator<T> slightly better than C#-compiler-generated implementations from iterator blocks – if you call the Current property without having called MoveNext(), this will throw an InvalidOperationException, just as the documentation says it should. (It doesn’t do the same at the end, admittedly, but that’s fixable if we really wanted to be pedantic.)

Conclusion

I found this an intriguing little problem. I think there are better ways of solving the bigger picture – a co-ordinator which takes care of disposing exactly once, and which possibly mediates the original iterator etc is probably the way forward… but I enjoyed thinking about the nitty gritty.

Generally speaking, I prefer the first of these approaches. Thread local variables always feel like a bit of a grotty hack to me – they can be useful, but it’s better to avoid them if you can. It’s interesting to see how an interface can be inherently thread-friendly or not.

One last word of warning – this code is completely untested. It builds, and I can’t immediately see why it wouldn’t work, but I’m making no guarantees…

Recent activities

It’s been a little while since I’ve blogged, and quite a lot has been going on. In fact, there are a few things I’d have blogged about already if it weren’t for “things” getting in the way.

Rather than writing a whole series of very short blog posts, I thought I’d wrap them all up here…

C# in Depth: next MEAP drop available soon – Code Contracts

Thanks to everyone who gave feedback on my writing dilemma. For the moment, the plan is to have a whole chapter about Code Contracts, but not include a chapter about Parallel Extensions. My argument for making this decision is that Code Contracts really change the feel of the code, making it almost like a language feature – and its applicability is almost ubiquitous, unlike PFX.

I may write a PFX chapter as a separate download, but I’m sensitive to those who (like me) appreciate slim books. I don’t want to “bulk out” the book with extra topics.

The Code Contracts chapter is in the final stages before becoming available to MEAP subscribers. (It’s been “nearly ready” for a couple of weeks, but I’ve been on holiday, amongst other things.) After that, I’m going back to the existing chapters and revising them.

Talking in Dublin – C# 4 and Parallel Extensions

Last week I gave two talks in Dublin at Epicenter. One was on C# 4, and the other on Code Contracts and Parallel Extensions. Both are now available in a slightly odd form on the Talks page of the C# in Depth web site. I no longer write “formal” PowerPoint slides, so the downloads are for simple bullet points of text, along with silly hand-drawn slides. No code yet – I want to tidy it up a bit before including it.

Podcasting with The Connected Show

I recently recorded a podcast episode with The Connected Show. I’m “on” for roughly the last two thirds of the show – about an hour of me blathering on about the new features of C# 4. If you can understand generic variance just by listening to me talking about it, you’re a smart cookie ;)

(Oh, and if you like it, please express your amusement on Digg / DZone / Shout / Kicks.)

Finishing up with Functional Programming for the Real World

Well, this hasn’t been taking much of my time recently (I bowed out of all the indexing etc!) but Functional Programming for the Real World is nearly ready to go. Hard copy should be available in the next couple of months… it’ll be really nice to see how it fares. Much kudos to Tomas for all his hard work – I’ve really just been helping out a little.

Starting on Groovy in Action, 2nd edition

No sooner does one book finish than another one starts. The second edition of Groovy in Action is in the works, which should prove interesting. To be honest, I haven’t played with Groovy much since the first edition of the book was finished, so it’ll be interesting to see what’s happened to the language in the meantime. I’ll be applying the same sort of spit and polish that I did in the first edition, and asking appropriately ignorant questions of the other authors.

Tech Reviewing C# 4.0 in a Nutshell

I liked C# 3.0 in a Nutshell, and I feel honoured that Joe asked me to be a tech reviewer for the next edition, which promises to be even better. There’s not a lot more I can say about it at the moment, other than it’ll be out in 2010 – and I still feel that C# in Depth is a good companion book.

MoreLINQ now at 1.0 beta

A while ago I started the MoreLINQ project, and it gained some developers with more time than I’ve got available :) Basically the idea is to add some more useful LINQ extension methods to LINQ to Objects. Thanks to Atif Aziz, the first beta version has been released. This doesn’t mean we’re “done” though – just that we think we’ve got something useful. Any suggestions for other operators would be welcome.

Manning Pop Quiz and discounts

While I’m plugging books etc, it’s worth mentioning the Manning Pop Quiz – multiple choice questions on a wide variety of topics. Fabulous prizes available, as well as one-day discounts:

  • Monday, Sept 7th: 50% off all print books (code: pop0907)
  • Monday, Sept 14: 50% off all ebooks  (code: pop0914)
  • Thursday, Sept 17: $25 for C# in Depth, 2nd Edition MEAP print version (code: pop0917) + C# Pop Quiz question
  • Monday, Sept 21: 50% off all books  (code: pop0921)
  • Thursday, Sept 24: $12 for C# in Depth, 2nd Edition MEAP ebook (code: pop0924) + another C# Pop Quiz question

Future speaking engagements

On September 16th I’m going to be speaking to Edge UG (formerly Vista Squad) in London about Code Contracts and Parallel Extensions. I’m already very much looking forward to the Stack Overflow DevDays London conference on October 28th, at which I’ll be talking about how humanity has screwed up computing.

Future potential blog posts

Some day I may get round to writing about:

  • Revisiting StaticRandom with ThreadLocal<T>
  • Volatile doesn’t mean what I thought it did

There’s a lot more writing than coding in that list… I’d like to spend some more time on MiniBench at some point, but you know what deadlines are like.

Anyway, that’s what I’ve been up to and what I’ll be doing for a little while…

Buffering vs streaming benchmark: first results

My poor laptop’s had a busy weekend. It’s run 72 tests, rebooting between each test. Most of these tests have kept both the CPU and disk busy for a lot of the time. I expect to update this blog post with more numbers – and possibly more strategies – as time goes on, but I wanted to get the numbers out as quickly as possible. Graphs should be coming when I’ve got a decent network to experiment with Google graphing.

Source code and setup

While I don’t really expect anyone else to sacrifice their computer for the best part of 24 hours, it doesn’t take very much effort to get the tests running. This zip file contains the source code for three programs, a batch file to generate the test data, and a command file.

The CreateFiles executable creates input files based on its command line arguments. (This is run multiple times by the batch file.) EncryptFiles “encrypts” (I use the term loosely) all the files in a given directory, writing the results to another directory. The class design of EncryptFiles is a little bit funny, but it’s really tailored for these tests. It should be easy to write other strategies in the same way too.

ExecuteAndReboot is used to automate the testing. Basically it reads a command file (from a hard-coded directory; simplicity trumps flexibility here). If the file isn’t empty, it waits five minutes (so that after a reboot the computer has time to settle down after loading all its services) and runs the command specified in the first line of the file, appending the output to a results file. When the command has finished executing, ExecuteAndReboot rewrites the command file, removing the first line, and schedules a reboot. The idea is to drop ExecuteAndReboot into a Startup folder, set Windows to log in automatically, reboot once and then just let it go. If you want to use the computer, just wait until it’s in the “waiting five minutes” bit at the start of the cycle and kill ExecuteAndReboot. Next time you reboot it will start testing again.

That’s the general principle – if anyone does want to run the tests on their machine and needs a bit of help setting it up, just let me know.

Test environment and workload

I expect the details of the execution environment to affect the outcome significantly. For the sake of the record then, my laptop specs are:

Dell Inspiron 1720 (unlikely to be relevant, but…)
Intel Core 2 Duo T7250 2.0GHz
Vista Home Premium 32 bit
3 GB main memory, 32K L1 cache, 2048K L2 cache
2 disks, both 120GB (Fujitsu MHY2120BH, 5400RPM, 8MB cache)

Even though there are two drives in the system, the tests only ever read from and wrote to a single disk.

I created four sets of input data; I’ve presented the results for each one separately below, and included the description of the files along with the results. Basically in each case there’s a set of files where each file is the same size, consisting of a fixed number of fixed-length lines.

The “encryption” that is performed in each case (on the encoded version of the line) is just to XOR each byte with a value. This is repeated a specified number of times, and the XOR value on each pass is the current index, i.e. it first XORs with 0, then 1, then 2 etc. I’ve used work levels of 0 (the loop never runs), 100 and 1000. These were picked somewhat arbitrarily, but they give interesting results. At 0 the process is clearly IO-bound. At 1000 it’s clearly CPU-bound. At 100 it’s not pegging the CPU, but there’s significant activity. I may try 200 as well at some point.
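In code terms, the work loop is roughly this (my own sketch – the method and parameter names aren’t taken from the downloadable source):

static void EncryptInPlace(byte[] data, int workLevel)
{
    // The XOR value on each pass is the pass index: 0, then 1, then 2, etc.
    for (int pass = 0; pass < workLevel; pass++)
    {
        byte key = (byte) pass;
        for (int i = 0; i < data.Length; i++)
        {
            data[i] ^= key;
        }
    }
}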

For each scenario I tried using 1, 2 and 3 threads (and 4 for the big tests; I’ll go back and repeat earlier ones with 4 threads soon). The two strategies used are “streaming” (read a line, encrypt it, write the line, repeat) and “buffering” (read a whole file, encrypt it all, write it all out). After a thread has started, it needs no shared data at all – it’s given the complete list of files to encrypt at the very start.
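Per thread, the two strategies look roughly like this (a sketch under my own naming – EncryptLine stands in for the per-line transformation described above):

using System.IO;

static void EncryptStreaming(string inputFile, string outputFile, int workLevel)
{
    // Streaming: read a line, encrypt it, write it, repeat until the input is exhausted.
    using (TextReader reader = File.OpenText(inputFile))
    using (TextWriter writer = File.CreateText(outputFile))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            writer.WriteLine(EncryptLine(line, workLevel));
        }
    }
}

static void EncryptBuffering(string inputFile, string outputFile, int workLevel)
{
    // Buffering: read the whole file, encrypt it all, then write it all out.
    string[] lines = File.ReadAllLines(inputFile);
    for (int i = 0; i < lines.Length; i++)
    {
        lines[i] = EncryptLine(lines[i], workLevel);
    }
    File.WriteAllLines(outputFile, lines);
}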

Results

For every table, the number of threads is shown by the leftmost column, and the work level is shown by the topmost row. The cells are of the form “x/y” where “x” is the time taken for the streaming strategy, and “y” is the time taken for the buffering strategy. All times are in seconds. For each column, I’ve highlighted the optimal test.

Small tests

There are 10000 test files. Each is 100 lines long, with 100 characters per line.

Threads/Work |    0    |   100   |   1000
1            |  47/90  |  94/79  | 163/158
2            |  95/112 | 113/157 | 129/146
3            | 179/128 | 130/144 | 152/160
4            | 144/139 | 163/193 | 156/204

Medium tests (short lines)

There are 100 test files. Each is 500,000 lines long, with 20 characters per line.

Threads/Work |    0    |   100   |   1000
1            | 138/157 | 219/267 | 1125/1205
2            | 196/238 | 259/263 |  604/691
3            | 292/250 | 312/236 |  624/676
4            | 302/291 | 321/281 |  602/666

Medium tests (long lines)

There are 100 test files. Each is 100 lines long, with 100,000 characters per line.

Threads/Work |    0    |   100   |   1000
1            | 203/213 | 249/286 | 1027/1107
2            | 283/264 | 311/296 |  602/696
3            | 319/284 | 314/278 |  608/618
4            | 360/300 | 336/297 |  607/598

Big tests

There are 20 files. Each file is 100,000 lines long, with 1000 characters per line.

Threads/Work |    0    |   100   |   1000
1            | 190/246 | 349/441 | 2047/2165
2            | 365/375 | 392/473 | 1097/1387
3            | 456/379 | 484/399 | 1161/1296
4            | 517/442 | 521/431 | 1113/1214

Conclusions

I’m loath to draw too many conclusions so far. After all, we’ve only got data for a single test environment. It would be interesting to see what would happen on a quad core machine. However, I think a few things can be said with reasonable confidence:

  • Sometimes the streaming strategy wins, sometimes the buffering strategy wins. The difference can be quite significant either way. For a given input and workload, the streaming solution almost always wins on my machine.
  • In an ideal scenario, the bottlenecked resource should be busy for as much of the time as possible. For example, in a high-CPU task it makes little sense to have idle cores when you’ve already loaded data. Likewise in a disk-bound task we should always be fetching (or writing) data even if all our cores are momentarily busy. Personally I think this is the job of the operating system’s pre-fetch system. We could do it ourselves, but it’s nicer if we don’t have to.
  • If we’re disk-bound, it makes sense to read (or write) one file at a time, to avoid seeking. This is obvious from the results – for work levels of 0 and 100, a single-thread solution is consistently best (and streaming usually works better than buffering). Based on this assumption, I suspect a cleaner solution would be to have a single thread reading largish chunks (but not the whole file) and feeding the other threads – or to use async IO. However, this all gets very complicated. One nice aspect of both of the current solutions is that they’re really simple to implement safely. Reading line-by-line in an async manner is a pain. I’d quite like to write an async “execute for every line” helper at some time, but not just yet.
  • If we’re CPU bound, the buffering solution’s sweet spot is 3 threads (remember we’re using 2 cores) and the streaming solution works better with just 2 threads – introducing more threads will make both the IO and context switching less efficient.
  • The streaming strategy always uses less application memory than the buffering strategy. When running the buffering strategy against the “big” data set with 4 threads, the memory varied between 800MB and 1.8GB (as measured by the “Total bytes in all heaps” performance counter), vs around 165KB with the streaming strategy. (Obviously the process itself took more than 165KB, but that was the memory used in “normal” managed heap memory.) Using more memory to improve performance is a valid technique, but I’d say it’s only worth it when the improvement is very significant. Additionally, the streaming technique instantly scales to huge data files. While they’re not in the current requirements, it doesn’t hurt to get that ability for free.

Next steps…

I’ve included quite a few “I’m going to try to […]” points in this post. Just for reference, my plans are:

  • Turn off my virus checker! If this makes a significant difference, I’ll rerun all the current tests
  • Try to visualise the results with Google graphs – although I think it may get cluttered
  • Rerun big/buffering/1000 test – odd results
  • Rerun tests with a work level of 200 and keep an eye on CPU usage – I’d like to spot where we tip between CPU and IO being the bottleneck
  • Try to optimize the Stream/StreamReader being used – we should be able to skip the FileStream buffer completely, and specifying FileOptions.SequentialScan should give more hints to Windows about how we’re intending to read the data. I’ll try a few different buffer sizes, probably 4K, 8K and 64K, just to see what happens.

This is a really interesting little example problem, because it sounds so simple (and it is simple in terms of the implementation) but it has a lot of environment variables. Of course it would be nicer if we didn’t have to reboot the machine between test runs in order to get a fair result, but never mind…

Benchmarking IO: buffering vs streaming

I mentioned in my recent book review that I was concerned about a recommendation to load all of the data from an input file before processing all of it. This seems to me to be a bad idea in an age where Windows prefetch will anticipate what data you need next, etc – allowing you to process efficiently in a streaming fashion.

However, without any benchmarks I’m just guessing. I’d like to set up a benchmark to test this – it’s an interesting problem which I suspect has lots of nuances. This isn’t about trying to prove the book wrong – it’s about investigating a problem which sounds relatively simple, but could well not be. I wouldn’t be at all surprised to see that in some cases the streaming solution is faster, and in other cases the buffered solution is faster.

The Task

The situation presented is like this:

  • We have a bunch of input files, either locally or on the network (I’m probably just going to test locally for now)
  • Each file is less than 100MB
  • We need to encrypt each line of text in each input file, writing it to a corresponding output file

The method suggested in the book is for each thread to:

  1. Load a file into a List<string>
  2. Encrypt every line (replacing it in the list)
  3. Save to a new file

My alternative option is:

  1. Open a TextReader and a TextWriter for the input/output
  2. Repeatedly read a line, encrypt, write the encrypted line until we’ve exhausted the input file
  3. Close both the reader and the writer

These are the two implementations I want to test. I strongly suspect that the optimal solution would involve async IO, but doing an async version of ReadLine is a real pain for various reasons. I’m going to keep it simple – using plain threading, no TPL etc.

I haven’t written any code yet. This is where you come in – not to write the code for me, but to make sure I test in a useful way.

Environmental variations

My plan of attack is to first write a small program to generate the input files. These will just be random text files, and the program will have a few command line parameters:

  • Directory to put files under (one per test variation, basically)
  • Number of files to create
  • Number of lines per file
  • Number of characters per line

I’ll probably test a high and a low number for each of the last three parameters, possibly omitting a few variations for practical reasons.
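A sketch of what that generator might look like (my own guess at the implementation, not the final code):

using System;
using System.IO;

static void CreateFiles(string directory, int fileCount, int linesPerFile, int charsPerLine)
{
    Random rng = new Random();
    Directory.CreateDirectory(directory);
    for (int file = 0; file < fileCount; file++)
    {
        string path = Path.Combine(directory, "input" + file + ".txt");
        using (TextWriter writer = File.CreateText(path))
        {
            for (int line = 0; line < linesPerFile; line++)
            {
                char[] chars = new char[charsPerLine];
                for (int i = 0; i < charsPerLine; i++)
                {
                    chars[i] = (char) rng.Next('a', 'z' + 1); // random lower-case ASCII
                }
                writer.WriteLine(chars);
            }
        }
    }
}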

In an ideal world I’d test on several different computers, locally and networked, but that just isn’t practical. In particular I’d be interested to see how much difference an SSD (low seek time) makes to this test. I’ll be using my normal laptop, which is a dual core Core Duo with two normal laptop disks. I may well try using different drives for reading and writing to see how much difference that makes.

Benchmarking

The benchmark program will also have a few command line parameters:

  • Directory to read files from
  • Directory to write files to
  • Number of threads to use (in some cases I suspect that more threads than cores will be useful, to avoid cores idling while data is read for a blocking thread)
  • Strategy to use (buffered or streaming)
  • Encryption work level

The first three parameters here are pretty self-explanatory, but the encryption work level isn’t. Basically I want to be able to vary the difficulty of the task, which will vary whether it ends up being CPU-bound or IO-bound (I expect). So, for a particular line I will:

  • Convert to binary (using Encoding.ASCII – I’ll generate just ASCII files)
  • Encrypt the binary data
  • Encrypt the encrypted binary data
  • Encrypt the encrypted encrypted […] etc until we’ve hit the number given by the encryption work level
  • Base64 encode the result – this will be the output line

So with an encryption work level of 1 I’ll just encrypt once. With a work level of 2 I’ll encrypt twice, etc. This is purely for the sake of giving the computer something to do. I’ll use AES unless anyone has a better suggestion. (Another option would be to just use an XOR or something else incredibly simple.) The key/IV will be fixed for all tests, just in case that has a bearing on anything.
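To make that concrete, the per-line transformation might look something like this (a sketch only – the names, and the choice to create a fresh encryptor per pass, are my own assumptions rather than the final code):

using System;
using System.Security.Cryptography;
using System.Text;

static string EncryptLine(string line, int workLevel, byte[] key, byte[] iv)
{
    byte[] data = Encoding.ASCII.GetBytes(line);
    using (Aes aes = Aes.Create())
    {
        aes.Key = key;
        aes.IV = iv;
        // Encrypt the result of the previous pass, workLevel times in total.
        for (int i = 0; i < workLevel; i++)
        {
            using (ICryptoTransform encryptor = aes.CreateEncryptor())
            {
                data = encryptor.TransformFinalBlock(data, 0, data.Length);
            }
        }
    }
    return Convert.ToBase64String(data);
}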

The benchmarking program is going to be as simple as I can possibly make it:

  • Start a stopwatch
  • Read the names of all the files in the directory
  • Create a list of files for each thread to encrypt
  • Create and start the threads
  • Use Thread.Join on all the threads
  • Stop the stopwatch and report the time taken

No rendezvous required at all, which certainly simplifies things. By creating the work lists before the threads, I don’t need to worry about memory model issues. It should all just be fine.
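In other words, something like this (again just a sketch of my own – EncryptFiles stands in for whichever strategy is being tested, and the round-robin split of files across threads is a guess):

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Threading;

static void RunBenchmark(string inputDirectory, int threadCount)
{
    Stopwatch stopwatch = Stopwatch.StartNew();

    string[] files = Directory.GetFiles(inputDirectory);

    // Build each thread's work list up-front, so no shared state is needed later.
    List<string>[] workLists = new List<string>[threadCount];
    for (int i = 0; i < threadCount; i++)
    {
        workLists[i] = new List<string>();
    }
    for (int i = 0; i < files.Length; i++)
    {
        workLists[i % threadCount].Add(files[i]);
    }

    Thread[] threads = new Thread[threadCount];
    for (int i = 0; i < threadCount; i++)
    {
        List<string> work = workLists[i];
        threads[i] = new Thread(() => EncryptFiles(work));
        threads[i].Start();
    }
    foreach (Thread thread in threads)
    {
        thread.Join();
    }

    stopwatch.Stop();
    Console.WriteLine("Total time: {0}ms", stopwatch.ElapsedMilliseconds);
}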

In the absence of a better way of emptying all the file read caches (at the Windows and disk levels) I plan to reboot my computer between test runs (which makes it pretty expensive in terms of time spent – hence omitting some variations). I wasn’t planning on shutting services etc down: I really hope that Vista won’t do anything silly like trying to index the disk while I’ve got a heavy load going. Obviously I won’t run any other applications at the same time.

If anyone has any suggested changes, I’d be very glad to hear them. Have I missed anything? Should I run a test where the file sizes vary? Is there a better way of flushing all caches than rebooting?

I don’t know exactly when I’m going to find time to do all of this, but I’ll get there eventually :)

Book Review: C# 2008 and 2005 Threaded Programming: Beginner’s Guide

Note: The author of this book has requested that I remove their name from this blog post. I have done so in accordance with their wishes, editing comments as well.

Update (19th March 2009)

Debate around this review is getting heated. I stand by all the points I make about the text, but I’d like to clarify a few things:

  • If there are any ad hominem comments in the review against the author, please ignore them. I’m going to try to weed out any that I find, but if you spot one, please let me know and then ignore it. I feel very strongly that a review should be about the text of a book, not about its author. The text is what will inform the reader, not the author’s other work. I’m aware that the author has written many other books, and is generally well-regarded (as far as I can tell, anyway). That neither helps nor hinders the text. The same goes for me and the review, of course. Whether you know me or not, whether you’ve read anything else I’ve written or not, the review should stand on its own merits. This is not a popularity contest – it’s a discussion about a technical book.
  • The impression I’ve given in the review is almost entirely negative. This is because that’s the impression I received as a reader interested in accuracy and best practices. That does not mean that the book is entirely inaccurate – far from it. There are plenty of aspects where I have no particular issues with the accuracy. (The code style is more uniformly disagreeable to me, but that’s a subjective matter.) However, there is enough inaccuracy (and bad practice, in my view – somewhat subjective, but less so than the code style) to make that the dominant impression left with me, alongside my surprise that there’s no proper discussion of locking. As an analogy, imagine you go to a choral concert. Suppose the sopranos, altos and tenors are all perfectly in tune, but the basses are out of key the whole time. In some senses the concert would be 75% accurate – but the 25% inaccuracy would be enough to ruin it. So it is with technical books (not just this one) – it only takes a relatively small degree of inaccuracy to make the difference between a good book and a bad one. The bottom line is: even I don’t think everything or even most of what’s written in the book is wrong; there are enough problems to make me dislike it though.
  • I’ve made a few minor edits to the review just now, to address a few comments made so far. If some of the comments appear to be odd, that may be why!

Resources

  • Publisher’s page (Packt) – this is the cheapest way to buy the book as far as I can see
  • Sample code (49MB download! Mostly because it contains bin/obj directories for all solutions…)
  • Amazon or Barnes and Noble links if you don’t want to buy it directly
  • John Mueller’s review – a much more positive review than this one, which may prove an interesting counterbalance for readers. (Thanks to Erik for pointing out John’s review in the comments.)

Disclaimer

This book doesn’t really compete with C# in Depth, but obviously the very fact that it’s another book about C# at all means I’m probably not entirely unbiased. Arguably it also “competes” with my own (somewhat out of date now) threading article, although that’s not a monetary venture for me. I should also point out that my copy was sent to me for free, specifically for review, by Packt Publishing.

Audience and content

The book claims that “Whether you are a beginner to working with threads or an old hand who is looking for a reference, this book should be on your desk.” In practice, I don’t think it’s really suitable as a reference. The kind of information you really want as a reference is hard to find amidst the bulk of the book, which is on-going examples. For the rest of this review I’ll regard the intended audience as just beginners.

The first chapter (out of 12; at 388 pages one of the nice things about this book is that it’s relatively slim) is introductory material about threads and processes, and why concurrency is important in the first place. After this one code-free chapter, the rest of the book is all example-based. The pattern goes something like this:

  • Give rough idea of what we’re building
  • Create first version of the code
  • Explain what it does and why it may not be ideal
  • Improve it
  • Explain how the improvements work
  • Move to next example or add new major feature

That sounds all very well, but I’ll get to my issues with it in a minute. Although the examples are constantly evolving, they essentially break down into these applications:

  • “Code cracking” (brute-forcing a 4 character code)
  • “Encrypting” SMS messages (not real encryption – no key – but a general CPU-intensive transformation)
  • Image processing to find and highlight “old stars” in NASA images
  • “Encrypting” several files
  • More image processing – adjusting the brightness of a large image and thumbnailing it

These are all Windows Forms applications, and are frankly pretty similar, all basically dealing with simple, embarrassingly parallel tasks. That’s not to say the author doesn’t get a fair set of different techniques and lessons out of them:

  • Keeping the UI thread free (and seeing what happens when you don’t)
  • Tips for debugging multi-threaded apps in Visual Studio
  • Showing the performance for individual processes using Task Manager and Windows Explorer
  • Using BackgroundWorker to update the UI
  • Queuing tasks in the system thread pool
  • Creating new threads explicitly
  • Using Control.Invoke/BeginInvoke to update the UI (although this comes very late in the book – chapter 10 out of 12)
  • Keeping tasks independent
  • Noting that sharing data between threads is difficult – but coming to the wrong conclusions (more later)
  • Using the Timer component (just the WinForms timer; not System.Threading.Timer, System.Web.UI.Timer or System.Timers.Timer) – although later on he uses a BackgroundWorker for a task much more suitable for a Timer.
  • A bit of OO design, although in a pretty botched way – the idea of having a general-purpose “parallel algorithm” class and a “parallel algorithm piece” class is reasonable, but it isn’t handled nearly as well as it might be
  • Fairly disastrous advice (IMO) about both I/O and the GC
  • Exception “handling” (where “swallowing exceptions and just reporting them with Debug.Print” counts as “handling” apparently)
  • Parallel Extensions from .NET 4.0, with both PLINQ and TPL

Unfortunately, this misses out some of the most important concepts in parallelism on .NET. The author frequently mentions locking, but only ever in a “we’re avoiding doing it” way. I find it absolutely incredible that a book on multi-threading in C# doesn’t even mention the “lock” keyword. Okay, it’s nice to be able to split tasks up completely independently where possible, but in the real world you sometimes have to use shared mutable state (or at least, it’s often the simplest approach).

When I first got the book, I looked up several entries in the index to see how they’d be handled. I was shocked to find that none of these have an index entry:

  • lock
  • volatile
  • memory model
  • Monitor
  • Wait or Pulse
  • BeginInvoke or Invoke
  • double-checked locking
  • mutable or immutable

The concept of accessing state from multiple threads is glossed over for the entirety of the book. Basically whenever multiple threads want to make their results available, they put them in different elements of an array or list. There’s an assumption that if you read from that array/list in a different thread, it’s all okay. Likewise there’s an assumption that it’s appropriate to read integer variables written to in one thread from another thread without any locking, volatility or use of the Interlocked class. I’ll come back to this topic when I tackle accuracy later on.
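For the record, the kind of thing the book never shows is as simple as this (a minimal sketch of my own, not taken from the book):

private readonly object padlock = new object();
private int completedJobs;

// Called from worker threads: the lock makes the increment safe and ensures
// the write is visible to other threads which acquire the same lock.
public void JobFinished()
{
    lock (padlock)
    {
        completedJobs++;
    }
}

public int CompletedJobs
{
    get
    {
        lock (padlock)
        {
            return completedJobs;
        }
    }
}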

Style

This is a very informal book: something I have no problem with. English clearly isn’t the author’s first language, and although I don’t blame him for some of the clumsy wording in the book (e.g. “We will not leave behind the necessary pragmatism in order to improve performance within a reasonable developing time”) I do wish the book’s editorial team had done a better job in that respect. It’s tricky with technical books: non-technical editors have good reason to be wary of going too far, as small changes in wording can make a large difference semantically, but it does make a big difference to a book’s readability when the language is clear and idiomatic. (As a side note, I feel incredibly fortunate to have English as my native tongue. I’m not fluent in any foreign languages, and I’m often amazed at how well others manage.)

There are other elements of the style of the book which I have much more of a problem with. The first is the way that the examples are handled. A very large proportion of the book is just lists of instructions: “add some using directives: <code>; add these variables: <code>; add this procedure: <code>; add another procedure <code>; add an event handler <code>” with just a sentence or two of explanation for each one as you go. There’s much more explanation after all the code has been added, but the way that the code is given makes it very hard to see what’s going on. We almost never get to see a whole class in one listing – it’s always broken up into using directives, variables, individual methods etc. This may not be too bad if you’re following along with the book at every single point, but it makes it very hard to just read. As a friend has commented, this content might work a lot better as a screencast, rather than as a book.

One detailed gripe: nearly every time a property is introduced, the author uses the phrase “we want to create a compact and reliable class” as a justification. There’s no explanation, and quite often the properties are mutable for no good reason (when a genuinely reliable class in a multi-threaded setting would be immutable). After a while it made me want to grind my teeth every time I saw it.

The feeling is very much that of a Head First book, but one which doesn’t work. For all my misgivings about Head First C# (which I believe is now very much better now that a large number of errors have been removed) the general style was very well handled. It’s not my preferred style to start with (particularly focusing on large GUIs instead of short, complete console apps) but I rarely felt particularly lost in the listings – there was usually enough context to hold onto. Here, I feel there’s very little context at all. If you accidentally miss out a step, you’ll have a really hard time working out which one it is or what’s wrong.

On top of this, there’s the bizarre storyline “explaining” all the listings. Apparently you (the reader) originally started out cracking a code, then got hired by some other crackers, then the FBI, then NASA. We are told of FBI agents getting us cappuccinos, the NASA CIO wanting you to use the Parallel Extensions CTP so that they can get free licences for Visual Studio 2010 and all kinds of other oddities. We are constantly bombarded with plaudits about our threading capabilities – by the last chapter we’re regularly being called “experts” and “threading gurus” despite the fact that we wouldn’t have a clue what was going on if someone presented us with some code using a “lock” statement. This is all patronising in the extreme – and again, Head First C# (and I suspect the rest of the Head First series) handles the “keep it informal but drive the topic forward” aspect a lot more successfully.

Finally, on the topic of style, I’d like to rant a bit about the coding style. It’s awful. Really awful. I realise that coding standards are to some extent a personal thing, but I object to code like this:

  • Pseudo-Hungarian (the type which uses “o” as a prefix for almost any object; not the type Peter likes) and the nature of every variable (local, parameter or instance variable) makes for horrendous variable names such as “prloOutputCharLabels”. It’s not even consistent – variables added by the designer only get a type designation prefix (lbl, but, pic) but no nature prefix. Aargh.
  • Methods are frequently camel-cased instead of Pascal-cased, e.g. “showFishes” and “checkCodeChar”. It’s possible that this is only true for private methods – a very quick flick through doesn’t reveal any public ones like this – but if so it’s inconsistently applied as there are certainly Pascal-cased private methods too. Some public properties combine both annoyances so far, with names such as “poThread” and “piBegin”.
  • Most (but not all) of the time the author declares all of a method’s variables at the top, even if they’re not used for a long time. This includes declarations of variables for use in loops. This took me right back to the 80s, writing ANSI C again. I believe that the ability to declare variables at the point of first use gives a significant improvement in readability. It’s easier to see where a variable will be used if its scope is limited, for example.
  • Using directives aren’t applied nearly thoroughly enough, leaving lots of explicit use of System.Diagnostics, System.Drawing, System.ComponentModel etc. Given the line length limitations in printed books, this is a real killer in terms of providing compact, readable code.
  • Speaking of line length limitations, it would be really useful to actually acknowledge them – if a comment is going to span two printed lines, starting just the first one with “//” and leaving the second indented but not really a comment isn’t a good idea.

So, we’ve got code broken up into chunks which breaks the flow of the code, and I don’t even like the style of the code. Still, I could live with that if it’s good quality code…

Accuracy and best practices

I’ve already indicated one of the significant problems I have with the book in terms of content: its complete absence of discussion about shared data and locking. Yes, this is a beginner’s book, and I wasn’t expecting the level of detail on the memory model which is present in Joe Duffy’s book (which I promise I’ll review soon) – but I’d certainly prefer to err on the side of safety. The book regularly just accesses data on one thread having written it on another, with no locking, volatility or use of Interlocked. This isn’t the sole bad practice, however, and it’s not limited to stylistic choices either. In the course of the book, we are told all of the items below (and more). In each bullet, the book’s claim comes first, followed by my response. These aren’t verbatim quotes, but paraphrase:

  • Forcing garbage collection before starting a multi-threaded operation is a good idea. This is given as a sort of response to a screenshot of Process Explorer showing ugly memory usage. In fact, I can’t reproduce the kind of nasty graph that’s shown in the book, even with the code downloaded from the web site, but if I did see that there are definitely better ways of addressing it than forcing garbage collection. Disposing of Bitmaps appropriately would be a good start… as it is, each bitmap is going to hang around for at least one garbage collection cycle longer than it should, because we’ve got to wait for its finalizer to be executed. Making sure you dispose of objects appropriately is always a good idea – explicitly forcing the garbage collector is almost always a bad one. (Not absolutely always, but usually.)
  • WaitHandle.WaitAll has to run on an MTA thread – so let’s just change the [STAThread] line above the Main method to [MTAThread], with no warning that it’s a really bad idea to do that for Windows Forms. (Side-note: when trying to check that there really isn’t a warning, I had to spend a long time finding the section. The index doesn’t contain entries for MTAThread, STAThread, apartment, WaitHandle or WaitAll. In general the index could do with a lot of work. I’m painfully aware that indexing is a horrible task, but it’s important.)
  • Application.DoEvents() is a way of letting the UI process events. This is true – it’s also another really bad idea unless you absolutely have to use it. Re-entrancy is hard to debug – and not mentioned at all in the book, as far as I can tell.
  • Data streaming is wasteful, because two threads might both want to do I/O at the same time – it’s a better idea for each thread to load all the data it needs to and then start processing it. This is stated in a context where streaming is ideal – each thread just needs to process every line in a file. (Each thread is asked to process a different file.) There’s no dependency between the lines of the file. It’s an absolute gift – the buffering and pre-fetch techniques of Windows would guess we needed the next block of data before we ask for it, so the disk would be seeking while we’re encrypting, on each thread. At least, I strongly suspect it would – and I would profile the thing instead of just claiming that we’ve managed to avoid an I/O bottleneck by loading files in their entirety up-front. No mention is made of the fact that as soon as a bunch of big files are queued for encryption, you’ll have a bunch of threads all trying to load everything before they bother starting to do anything. Avoiding I/O contention is a tricky topic, and it deserves better than a couple of misleading paragraphs with no attempt at explaining what the benefits of streaming the data would be.
  • The thread pool is used to queue threads with work to do. If there are already lots of threads busy, the new threads will wait until the old ones have finished. Note the use of “threads” here – not tasks to run on a pool of existing threads, but threads. This would make the thread pool pointless – what is never explained in the book is that creating threads is a relatively expensive business, and you don’t want to do it repeatedly for short-lived tasks when you could instead create a pool of threads and reuse them to run several tasks. Once this purpose is clear, the notion of queuing threads becomes obviously wrong. (There’s a short sketch of what queuing work items really looks like after this list.)
  • We can pass some state into the delegate used for work item queuing (or a ParameterizedThreadStart) and use that to give us some context. We need to cast that state to the relevant type before we can use it, because it’s just typed as System.Object. So far so good – except most of the time, the author ends up passing into the work item the same reference which would be available as just “this” within the method itself. So we have code such as:

    loPiece.poThread = new Thread(new ParameterizedThreadStart(loPiece.ThreadMethod));
    loPiece.poThread.Start(loPiece);

    The ThreadMethod method then duly casts its parameter to its own type and uses it. All of this is pointless, as the method doesn’t need any parameters – it can just use “this” inside the method.
  • It’s very important to initialize lists with the right capacity. Again, this isn’t too bad as far as it goes – except that this micro-optimisation goes awry when he reads the TextBox.Lines property twice: once to work out the appropriate capacity and once to fill the list with initial data. Unfortunately the TextBox.Lines property has to take the existing text in the TextBox, split it (creating a bunch of substrings) and get the result into an array. This in turn means doing all the normal shenanigans associated with creating buffers which are bigger than you need, filling them, copying to a new buffer etc – exactly what we’re trying to avoid! This “optimization” will usually cost time instead of saving it. It could be easily fixed by just fetching the array in one statement, then using the same array for both the count and the list population. In fact, if you just pass the array into the List<T> constructor, it will perform the optimization for you – it can detect that it’s an ICollection<T> and use the Count property directly. Writing the simplest code actually ends up being optimal.
  • The above bullet point isn’t going to dominate the performance of that example though – there’s a potentially far worse effect due to the way the resulting “encrypted” string is broken up each time: using string concatenation in a loop. I guess we’d better hope there are no really long lines. If an author is going to give optimisation “tips” they need to be a lot more rigorous than this. Using string concatenation in a loop is probably the single best-known performance no-no in .NET. I was really shocked to see this in a book which is supposedly about making your code perform better. Now, it could be that string concatenation was used deliberately to slow things down – but in that case, why not highlight it? Drawing attention to intended optimizations gives the impression that the rest of the code is either optimized or has at least been written reasonably. If “bad” code is to be used for a specific purpose, that should be called out so that the reader won’t go onto use the same kind of code in their own production apps (which really shouldn’t be deliberately slowed down).
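As promised above, here’s what I mean by queuing work items rather than threads – a tiny sketch of my own, not from the book: the pool’s existing threads are reused to run each queued task.

for (int i = 0; i < 100; i++)
{
    int jobNumber = i; // capture a copy for the closure
    ThreadPool.QueueUserWorkItem(state =>
    {
        // This runs on one of the pool's existing threads; no new thread is created per job.
        Console.WriteLine("Job {0} on pool thread {1}",
                          jobNumber, Thread.CurrentThread.ManagedThreadId);
    });
}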

These aren’t the only issues I have with the code. Unicode is abused by “encrypting” text with no discussion of whether the strings he produces are valid or not (as opposed to the normal practice of only encrypting data after first converting it into binary; the encrypted binary might then be converted to text using base64 if you need to transmit the encrypted data as text). We could easily end up with strings containing surrogate high or low code points without the corresponding half in the appropriate place. When analyzing a bitmap he uses GetPixel and SetPixel for each pixel, rather than calling LockBits once and then accessing the image data in a much faster manner. (The code given does scale, but it’s not as fast as it could be. Using LockBits it would still scale, but the “per thread” work would be faster.) There are other, similar issues lurking in the text, but I’m sure you get the gist of the problem.
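
For reference, here’s roughly what the LockBits approach looks like. This is just a sketch of the general technique rather than a fix for the book’s example; it assumes a 32bpp ARGB bitmap and needs the System.Drawing.Imaging and System.Runtime.InteropServices namespaces:

static void ProcessPixels(Bitmap bitmap)
{
    Rectangle bounds = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
    BitmapData data = bitmap.LockBits(bounds, ImageLockMode.ReadWrite,
                                      PixelFormat.Format32bppArgb);
    try
    {
        // Copy the whole image out in one go instead of a GetPixel call per pixel.
        int byteCount = Math.Abs(data.Stride) * data.Height;
        byte[] pixels = new byte[byteCount];
        Marshal.Copy(data.Scan0, pixels, 0, byteCount);

        // Each pixel is four bytes (B, G, R, A); the per-pixel work goes here,
        // and it splits across threads just as easily as the GetPixel version.
        for (int i = 0; i < byteCount; i += 4)
        {
            pixels[i] = (byte)(255 - pixels[i]); // e.g. invert the blue channel
        }

        // Copy the modified data back in a single call.
        Marshal.Copy(pixels, data.Scan0, 0, byteCount);
    }
    finally
    {
        bitmap.UnlockBits(data);
    }
}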

Conclusion

Believe it or not, there are things about this book that I actually like. It’s relatively thin, which has very tangible advantages when you’re carrying it around a lot. The sections explaining how to use Process Explorer and Task Manager to best effect are useful, and the ideas behind the examples are good – even though they basically cover the same ground several times. Unfortunately the bad points outweigh the good far too heavily. To summarise them:

  • What I consider some of the absolute core elements of .NET multithreading (locking and monitors in particular) aren’t covered at all
  • Only the simple situation of an embarrassingly parallel algorithm is covered. In the real world developers will have to face real challenges where tasks don’t always split themselves up nicely into totally independent chunks. A reader who finishes this book assured that they are now a threading guru is in for a nasty shock.
  • Server-side threading isn’t given much coverage at all, despite this being arguably the most likely environment for developers to encounter multithreading
  • The “story” element of the prose style is childish and patronising
  • The coding style, while a personal choice, makes me wince – and is particularly verbose for a book, where space is important
  • Many bad practices are encouraged, and there are plenty of important misunderstandings to trip up readers
  • The index has failed me (even when I’ve known that the subject is in the book) more times than it’s helped me

It’s a real pity. I was hoping this would be a book I could recommend to people as a precursor to reading Joe Duffy’s excellent Concurrent Programming on Windows. Instead, my current best advice is to read Joe Albahari’s threading tutorial. (I previously had a link to my own threading tutorial as well, but apparently this made people think I was fishing for more readers of that.) I’m sure there are good introductory threading books out there, but I’m afraid this isn’t one of them.

DotNetRocks interview

Last Monday evening I had a chat with the guys from DotNetRocks, and today the show has gone live.

I wouldn’t claim to have said anything particularly earth-shattering, and regular readers will probably be familiar with many of the themes anyway, but I thoroughly enjoyed it and hope you will too. Amongst other things, we talked about:

  • Protocol buffers
  • Implicit typing and anonymous types
  • Why it doesn’t bother me that Office hasn’t been ported to .NET
  • C# 4
  • My wishlist for C#
  • Threading and Parallel Extensions
  • Working for Google
  • How to learn LINQ
  • C# in Depth

Feedback welcome. And yes, I know I sound somewhat like a stereotypical upper-class idiot at times. Unfortunately there’s not a lot I can do about that. Only the “idiot” part is accurate :)

Parallel Extensions June CTP

Either my timing is great or it’s lousy – you decide. Yesterday I posted about parallelising Conway’s Life – and today the new CTP for the Parallel Extensions library comes out! The bad news is that it meant I had to run all the tests again… the good news is that it means we can see whether or not the team’s work over the last 6 months has paid off.

Breaking change

The only breaking change I’ve seen is that AsParallel() no longer takes a ParallelQueryOptions parameter – instead, you call AsOrdered() on the value returned from AsParallel(). It was an easy change to make, and worked first time. There may well be plenty of other breaking API changes which are more significant, of course – I’m hardly using any of it.
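
In code, the change looks something like this, using the row-by-row query from the earlier Mandelbrot benchmark as the example:

// December CTP: ordering was requested via an argument to AsParallel
var oldStyle = Enumerable.Range(0, Height)
                         .AsParallel(ParallelQueryOptions.PreserveOrdering)
                         .Select(row => ComputeMandelbrotRow(row));

// June CTP: AsParallel takes no options; call AsOrdered on the result instead
var newStyle = Enumerable.Range(0, Height)
                         .AsParallel()
                         .AsOrdered()
                         .Select(row => ComputeMandelbrotRow(row));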

Benchmark comparisons

One of the nice things about having the previous blog entries is that I can easily compare the results of how things were then with how they are now. Here are the test results for the areas of the previous blog posts which used Parallel Extensions. For the Game of Life, I haven’t included the results with the rendering for the fast version, as they’re still bound by rendering (unsurprisingly).

Mandelbrot set

(Original post)

Results are in milliseconds taken to plot the whole image, so less is better. (x;y) values mean “Width=x, MaxIterations=y.”

Description December CTP June CTP
ParallelForLoop (1200;200) 376 380
ParallelForLoop (3000;200) 2361 2394
ParallelForLoop (1200;800) 1292 1297
ParallelLinqRowByRowInPlace (1200;200) 378 393
ParallelLinqRowByRowInPlace (3000;200) 2347 2440
ParallelLinqRowByRowInPlace (1200;800) 1295 1939
ParallelLinqRowByRowWithCopy (1200;200) 382 411
ParallelLinqRowByRowWithCopy (3000;200) 2376 2484
ParallelLinqRowByRowWithCopy (1200;800) 1288 1401
ParallelLinqWithGenerator (1200;200) 4782 4868
ParallelLinqWithGenerator (3000;200) 29752 31366
ParallelLinqWithGenerator (1200;800) 16626 16855
ParallelLinqWithSequenceOfPoints (1200;200) 549 533
ParallelLinqWithSequenceOfPoints (3000;200) 3413 3290
ParallelLinqWithSequenceOfPoints (1200;800) 1462 1460
UnorderedParalleLinqInPlace (1200;200) 422 440
UnorderedParalleLinqInPlace (3000;200) 2586 2775
UnorderedParalleLinqInPlace (1200;800) 1317 1475
UnorderedParallelLinqInPlaceWithDelegate (1200;200) 509 514
UnorderedParallelLinqInPlaceWithDelegate (3000;200) 3093 3134
UnorderedParallelLinqInPlaceWithDelegate (1200;800) 1392 1571
UnorderedParallelLinqInPlaceWithGenerator (1200;200) 5046 5511
UnorderedParallelLinqInPlaceWithGenerator (3000;200) 31657 30258
UnorderedParallelLinqInPlaceWithGenerator (1200;800) 17026 19517
UnorderedParallelLinqSimple (1200;200) 556 595
UnorderedParallelLinqSimple (3000;200) 3449 3700
UnorderedParallelLinqSimple (1200;800) 1448 1506
UnorderedParalelLinqWithStruct (1200;200) 511 534
UnorderedParalelLinqWithStruct (3000;200) 3227 3154
UnorderedParalelLinqWithStruct (1200;800) 1427 1445

A mixed bag, but overall it looks to me like the June CTP was slightly worse than the older one. Of course, that’s assuming that everything else on my computer is the same as it was a couple of weeks ago, etc. I’m not going to claim it’s a perfect benchmark by any means. Anyway, can it do any better with the Game of Life?

Game of Life

(Original post)

Results are in frames per second, so more is better.

Description December CTP June CTP
ParallelFastInteriorBoard (rendered) 23 22
ParallelFastInteriorBoard (unrendered) 29 28
ParallelFastInteriorBoard 508 592

Yes, ParallelFastInteriorBoard really did get that much of a speed bump, apparently. I have no idea why… which leads me to the slightly disturbing conclusion of this post:

Conclusion

The numbers above don’t mean much. At least, they’re not particularly useful because I don’t understand them. Why would a few tests become “slightly significantly worse” and one particular test get markedly better? Do I have serious benchmarking issues in terms of my test rig? (I’m sure it could be a lot “cleaner” – I didn’t think it would make very much difference.)

I’ve always been aware that off-the-cuff benchmarking like this was of slightly dubious value, at least in terms of the details. The only facts which are really useful here are the big jumps due to algorithm etc. The rest is noise which may interest Joe and the team (and hey, if it proves to be useful, that’s great) but which is beyond my ability to reason about. Modern machines are complicated beasts, with complicated operating systems and complicated platforms above them.

So ignore the numbers. Maybe the new CTP is faster for your app, maybe it’s not. It is important to know it’s out there, and that if you’re interested in parallelisation but haven’t played with it yet, this is a good time to pick it up.

More parallelisation fun: Conway’s Game of Life

Okay, so I’ve probably exhausted the Mandelbrot set as a source of interest, at least for the moment. However, every so often someone mentions something or other which sounds like a fun exercise in parallelisation. The latest one is Conway’s Game of Life. Like the Mandelbrot set, this is something I used to play with when I was a teenager – and like the Mandelbrot set, it’s an embarrassingly parallel problem.

As before, I’ve written a few implementations but not worked excessively hard to optimise. All my tests were performed with a game board of 1000×500 cells, displaying at one pixel per cell (partly because that was the easiest way to draw it). I found that with the slower implementations the rendering side was irrelevant to the results, but the rendering became a bottleneck with the fast algorithms, so I’ve included results both with and without rendering.

The “benchmark” code (I use the word very lightly – normal caveats around methodology apply; this was only a weekend-evenings project, after all) records the “current” number of frames per second roughly once per second, and also an average over the whole run. (The fast algorithms jump around in fps pretty wildly depending on what’s going on – the more naïve ones don’t really change much between a busy board and a dead one.) The starting pattern in all cases was the R-pentomino (or F-pentomino, depending on whose terminology you use) in the middle of the board.

A base class (BytePerPixelBoard) is used by all the implementations – this represents the board with a single byte per pixel, which is really easy to work with but obviously very wasteful in terms of memory (and therefore cache). I haven’t investigated any other representations, but I wanted to get this post out fairly quickly as I have a lot of other stuff to do at the moment. (I also have a post on Java closures which I should tidy up soon, but hey…)

Implementation smells

All the source code is available for download, of course. There are no fancy options for verifying the results or even turning off rendering – just comment out the line in Program.BackgroundLoop which calls Render.

It would be fairly easy to make ParallelFastInteriorBoard derive from SimpleFastInteriorBoard, and ParallelActiveBlockBoard derive from ActiveBlockBoard, but I’ve kept the implementations completely separate for the moment. That means a huge amount of duplication, of course, including a nested class in each of the “active block” implementations – the nested classes are exactly the same. Oh, and they use internal fields instead of properties. I wanted the fields to be read-only, so I couldn’t use automatic properties – but I didn’t want to go to the hassle of having explicitly declared fields and separate properties. Trust me, I wouldn’t do this in production code. (There were some other internal fields, but they’ve mercifully been converted into properties now.)

SimpleBoard

This is about as simple as you can get. It fetches each cell’s current value “carefully” (i.e. coping with wrapping) regardless of where it is on the board. It works, but it’s almost painfully slow.
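
The “careful” fetch amounts to something like this. It’s a sketch rather than the actual code, with guessed member names, and it assumes live cells are stored as 1 and dead cells as 0:

// Wraps both coordinates so the board behaves as a torus.
private byte GetValueWrapped(int row, int col)
{
    row = (row + Height) % Height;
    col = (col + Width) % Width;
    return Data[row * Width + col];
}

private int CountLiveNeighbours(int row, int col)
{
    int count = 0;
    for (int rowDelta = -1; rowDelta <= 1; rowDelta++)
    {
        for (int colDelta = -1; colDelta <= 1; colDelta++)
        {
            if (rowDelta != 0 || colDelta != 0)
            {
                count += GetValueWrapped(row + rowDelta, col + colDelta);
            }
        }
    }
    return count;
}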

SimpleFastInteriorBoard

This is just an optimisation of SimpleBoard. We fetch the borders of the board “slowly” as per SimpleBoard, but once we’re on the interior of the board (any cell that’s not on an edge) we know we won’t need to worry about wrapping, so we can just use a fixed set of array offsets relative to the offset of the cell that’s being calculated.

This ends up being significantly faster, although it could still be optimised further. In the current code each cell is tested for whether it’s an interior cell or not. There’s a slight optimisation by remembering the results for “am I looking at the top or the bottom row” for a whole row, but by calculating just the edges and then the interior (or vice versa) a lot of tests could be avoided.
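
For interior cells the neighbour count boils down to something like this (again a sketch with guessed names): eight fixed offsets relative to the cell’s own index, with no wrapping checks at all:

private int CountLiveNeighboursInterior(int index)
{
    int width = Width;
    return Data[index - width - 1] + Data[index - width] + Data[index - width + 1]
         + Data[index - 1]                               + Data[index + 1]
         + Data[index + width - 1] + Data[index + width] + Data[index + width + 1];
}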

ParallelFastInteriorBoard

Behold the awesome power of Parallel.For. Parallelising SimpleFastInteriorBoard literally took about a minute. I haven’t seen any other toolkit rival this in terms of ease of use. Admittedly I haven’t done much embarrassingly parallel work in many other toolkits, but even so it’s impressive. It’ll be interesting to see how easy it is to use for complex problems as well as simple ones.
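
The change is essentially just this (a sketch; ProcessRow is a stand-in for the real per-row work):

// Single-threaded version: one row at a time
for (int row = 0; row < Height; row++)
{
    ProcessRow(row);
}

// Parallelised version: the loop body becomes a delegate handed to Parallel.For
Parallel.For(0, Height, row => ProcessRow(row));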

If you look at the results, you’ll see this is the first point at which there’s a difference between rendering and not. As the speed of the calculation improves the constant time taken for rendering becomes more significant.

ActiveBlockBoard

This is a departure in terms of implementation. We still use the same backing store (one big array of a byte per cell) but the board is logically broken up into blocks of 25×25 cells. (The size is hard-coded, but only as two constants – they can easily be changed. I haven’t investigated the effect of different block sizes thoroughly, but a few ad hoc tests suggest this is a reasonable sweet spot.) When computing a block in generation n, the changes (if any) from generation n-1 are examined. If the block itself changed, it may change again. If the relevant borders of any of its neighbours changed (including corners of diagonally neighbouring blocks), the block may change. No other changes in the board can affect the block, so if none of these conditions are met the contents of generation n-1 can be copied to generation n for that block. This is obviously a massive saving when there are large areas of stable board, either dead or with stable patterns (such as 2×2 live blocks). It doesn’t help with blinkers (3×1 blocks which oscillate between vertical and horizontal alignments) or anything similar, but it’s still a big optimisation. It does mean a bit more house-keeping in terms of remembering what changed (anywhere in the block, its four sides, and its four corners) each generation, but that pales into insignificance when you can remove the work from a lot of blocks.
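
The heart of the house-keeping is a per-block decision along these lines. The names here are purely illustrative, not the ones in the real code:

private bool NeedsRecalculating(Block block)
{
    // The block itself changed last generation, so it may change again.
    if (block.AnyCellChanged)
    {
        return true;
    }
    // A neighbouring block changed along the border (or corner) it shares with this one.
    foreach (Block neighbour in block.Neighbours)
    {
        if (neighbour.ChangedOnBorderWith(block))
        {
            return true;
        }
    }
    // Nothing relevant changed: this block can just be copied from generation n-1.
    return false;
}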

Again, my implementation is relatively simple – but it still took a lot of work to get right! The code is much more complicated than SimpleFastInteriorBoard, and indeed pretty much all the code from SFIB is used in ActiveBlockBoard. Again, I’m sure there are optimisations that could be made to it, and alternative strategies for implementing it. For instance, I did toy with some code which didn’t keep track of what had changed in the block, but instead pushed changes to relevant neighbouring blocks, explicitly saying, “You need to recalculate next time!” There wasn’t much difference in speed, however, and it needed an extra setup stage in order to make the code understandable, so I’ve removed it.

The performance improvement of this is staggering – it makes it over 20 times faster than the single-threaded SimpleFastInteriorBoard. At first the improvement was much smaller – until I realised that actually I’d moved the bottleneck to the rendering, and re-benchmarked with rendering turned off.

ParallelActiveBlockBoard

Exactly the same implementation as ActiveBlockBoard, but using a Parallel.ForEach loop. Again, blissfully easy to implement, and very effective. It didn’t show the same near-doubling that going from SimpleFastInteriorBoard to ParallelFastInteriorBoard did (with rendering turned off), but I suspect that’s because the amount of housekeeping work required to parallelise things starts to matter more at the rates shown in the results. It’s still impressive stuff though.
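
Once again the parallel step itself is tiny; something along these lines, with hypothetical helper names:

Parallel.ForEach(blocks, block =>
{
    if (NeedsRecalculating(block))
    {
        RecalculateBlock(block);
    }
    else
    {
        CopyFromPreviousGeneration(block);
    }
});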

Results

Implementation FPS with rendering FPS without rendering
SimpleBoard 4 4
SimpleFastInteriorBoard 15 15
ParallelFastInteriorBoard 23 29
ActiveBlockBoard 78 326
ParallelActiveBlockBoard 72 508

Conclusions

  • The Parallel Extensions library rocks my world. Massive, massive kudos to Joe Duffy and the team.
  • I have no idea why ParallelActiveBlockBoard is slower than ActiveBlockBoard when rendering is enabled. This was repeatable (with slightly different results each time, but the same trend).
  • Always be aware that reducing one bottleneck may mean something else becomes your next bottleneck – at first I thought that ActiveBlockBoard was “only” 5 times faster than SimpleFastInteriorBoard; it was only when parallelisation showed no improvement (just the opposite) that I started turning rendering off.
  • Changing the approach can have more pronounced effects than adding cores – moving to the “active block” model gave a massive improvement over brute force.
  • Micro-optimisation has its place, when highly targeted and backed up with data – arguably the “fast interior” is a micro-optimisation, but again the improvement is quite dramatic.
  • Benchmarks are fun, but shouldn’t be taken to be indicative of anything else.
  • Did I mention that Parallel Extensions rocks?

Mandelbrot revisited – benchmark edition

I’ve had fun with the Mandelbrot set in this blog before, using it as an example of an embarrassingly parallelisable problem and demonstrating Parallel LINQ with it.

This morning, over breakfast, I described the problem to Christian Brunschen, a colleague of mine who has some parallelisation experience through implementing OpenMP in a portable manner. He immediately suggested a few possible changes to the way that I’ve approached things. Given the number of different attempts I’ve now had, I thought it would make sense to write a little benchmarking app which could easily be expanded as further implementations were considered. The benchmark is available for download so you can play around with it yourself. (As is often the case with this kind of thing, it’s not the nicest code in the world for various reasons – the duplication of the sequence generation method, for example… Please don’t use it as a tutorial on actual coding and organisation.)

Benchmark implementation details

The benchmark allows you to vary the size of the generated image and the maximum number of iterations per point. The images can be displayed after the test run, but only the time taken to populate a byte array is recorded. The byte arrays are all allocated before any tests are run, and the garbage collector is invoked (as far as it can be) between tests. The images themselves are only generated after the tests have all completed. Each implementation is given a tiny “dummy run” (creating an image 10 pixels across, with a maximum of 2 iterations per point) first to hopefully remove JITting from the benchmarking times. I won’t pretend that this puts everything on a completely level playing field (benchmarking is hard) but hopefully it’s a good start. We check the similarity between the results of each test and the first one – in some cases they could be “nearly all the same” without there being a bug, due to the subtleties of floating point operations and inlining.
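
In outline, each implementation goes through something like this. It’s a sketch of the process described above rather than the exact benchmark code, and the type and method names are invented for illustration (Stopwatch lives in System.Diagnostics):

foreach (MandelbrotGenerator generator in generators)
{
    // Tiny dummy run (10 pixels wide, maximum of 2 iterations per point)
    // so that JIT compilation doesn't count against the real timing.
    generator.DummyRun();

    // Encourage the GC to give each test as clean a slate as possible.
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    Stopwatch stopwatch = Stopwatch.StartNew();
    generator.Generate();   // only the time to populate the byte array is measured
    stopwatch.Stop();
    Console.WriteLine("{0}: {1}ms", generator.GetType().Name, stopwatch.ElapsedMilliseconds);
}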

The base class that all the generators use takes care of working out the height from the width, remembering the various configuration options, allocating the array, and also providing a simple imperative method to obtain the value of a single point, without using anything fancy.

I originally wanted to put the whole thing in a nice UI, but after wasting nearly an hour trying to get WPF to behave, I decided to go for a simple console app. One day I really must learn how to write GUIs quickly…

Okay, enough introduction – let’s look at the implementations we’re testing, and then the results. The idea is to get a look at how a number of areas influence the results, as well as seeing some nifty ways of expressing things functionally.

SingleThreadImperativeSimple

This is the most trivial code you could write, basically. As the base class provides the calculation method, we just go through each row and column, work out the value, and store it in the array. The only attempt at optimisation is keeping the “current index” into the array rather than calculating it for every point.

 

public override void Generate()
{
    int index = 0;
    for (int row = 0; row < Height; row++)
    {
        for (int col = 0; col < Width; col++)
        {
            Data[index++] = ComputeMandelbrotIndex(row, col);
        }
    }
}

 

where ComputeMandelbrotIndex is in the base class:

 

protected byte ComputeMandelbrotIndex(int row, int col)
{
    double x = (col * SampleWidth) / Width + OffsetX;
    double y = (row * SampleHeight) / Height + OffsetY;

    double y0 = y;
    double x0 = x;

    for (int i = 0; i < MaxIterations; i++)
    {
        if (x * x + y * y >= 4)
        {
            return (byte)((i % 255) + 1);
        }
        double xtemp = x * x - y * y + x0;
        y = 2 * x * y + y0;
        x = xtemp;
    }
    return 0;
}

 

SingleThreadImperativeInline

This is largely the same code as SingleThreadImperativeSimple, but with everything inlined. Within the main body, there’s no access to anything other than local variables, and no method calls. One further optimisation is available: points which will have a value of 0 aren’t stored (we assume we’ll always start with a cleared array).

 

public override void Generate()
{
    int width = Width;
    int height = Height;
    int maxIterations = MaxIterations;
    int index = 0;
    byte[] data = Data;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            double x = (col * SampleWidth) / width + OffsetX;
            double y = (row * SampleHeight) / height + OffsetY;

            double y0 = y;
            double x0 = x;

            for (int i = 0; i < maxIterations; i++)
            {
                if (x * x + y * y >= 4)
                {
                    data[index] = (byte)((i % 255) + 1);
                    break;
                }
                double xtemp = x * x - y * y + x0;
                y = 2 * x * y + y0;
                x = xtemp;
            }
            // Leave data[index] = 0 by default
            index++;
        }
    }
}

 

MultiThreadUpFrontSplitImperative

This should be pretty efficient – work out how many cores we’ve got, split the work equally between them (as chunks of rows, which helps in terms of locality and just incrementing an index in the byte array each time), and then run. Wait until all the threads have finished, and we’re done. Shouldn’t be a lot of context switching required normally, and no synchronization is required. However, if some cores are busy with another process, we’ll end up context switching for no gain.

 

public override void Generate()
{
    int cores = Environment.ProcessorCount;

    int rowsPerCore = Height / cores;

    List<Thread> threads = new List<Thread>();
    for (int i = 0; i < cores; i++)
    {
        int firstRow = rowsPerCore * i;
        int rowsToCompute = rowsPerCore;
        if (i == cores - 1)
        {
            rowsToCompute = Height-(rowsPerCore*(cores-1));
        }
        Thread thread = new Thread(() =>
            {
                int index = firstRow * Width;
                for (int row = firstRow; row < firstRow + rowsToCompute; row++)
                {
                    for (int col = 0; col < Width; col++)
                    {
                        Data[index++] = ComputeMandelbrotIndex(row, col);
                    }
                }
            });
        thread.Start();
        threads.Add(thread);
    }

    threads.ForEach(thread => thread.Join());
}

 

MultiThreadRowFetching

Again, we use a fixed number of threads – but this time we start off with a queue of work – one task per row. The queue has to be synchronized every time we use it, and we also have to check whether or not it’s empty. Plenty of scope for lock contention here, particularly as the number of cores increases.

 

public override void Generate()
{
    Queue<int> rowsLeft = new Queue<int>(Enumerable.Range(0, Height));

    List<Thread> threads = new List<Thread>();
    for (int i = 0; i < Environment.ProcessorCount; i++)
    {
        Thread thread = new Thread(() =>
            {
                while (true)
                {
                    int row;
                    lock (rowsLeft)
                    {
                        if (rowsLeft.Count == 0)
                        {
                            break;
                        }
                        row = rowsLeft.Dequeue();
                    }
                    int index = row * Width;
                    for (int col = 0; col < Width; col++)
                    {
                        Data[index++] = ComputeMandelbrotIndex(row, col);
                    }
                }
            });
        thread.Start();
        threads.Add(thread);
    }

    threads.ForEach(thread => thread.Join());
}

SingleThreadLinqSimple

This is the first version of Mandelbrot generation I wrote since thinking of using LINQ, tweaked a little to dump the output into an existing array instead of creating a new one. It’s essentially the same as SingleThreadImperativeSimple but with the for loops replaced with a query expression.

 

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height)
                from col in Enumerable.Range(0, Width)
                select ComputeMandelbrotIndex(row, col);

    int index=0;
    foreach (byte b in query)
    {
        Data[index++] = b;
    }
}

 

ParallelLinqRowByRowWithCopy

My first attempt using Parallel LINQ was somewhat disastrous due to the nature of PLINQ’s ordering. To combat this effect, I initially computed a row at a time, then copied each row into place afterwards:

 

private byte[] ComputeMandelbrotRow(int row)
{
    byte[] ret = new byte[Width];
    for (int col = 0; col < Width; col++)
    {
        ret[col] = ComputeMandelbrotIndex(row, col);
    }
    return ret;
}

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height).AsParallel(ParallelQueryOptions.PreserveOrdering)
                select ComputeMandelbrotRow(row);

    int rowStart = 0;
    foreach (byte[] row in query)
    {
        Array.Copy(row, 0, Data, rowStart, Width);
        rowStart += Width;
    }
}

 

ParallelLinqRowByRowInPlace

An obvious potential improvement is to write the data to the eventual target array as we go, to avoid the extra array creation and copying. At this point we have less of a purely functional solution, but it’s interesting anyway. Note that to use this idea in a query expression we have to return something even though it’s not useful. Likewise we have to make the query execute fully – so I use Max() to ensure that all the results will be computed. (I originally tried Count() but that didn’t work – presumably because the count of the results is known before the actual values are.) As we’re writing the data to the right place on each iteration, we no longer need to preserve ordering.

 

private int ComputeMandelbrotRow(int row)
{
    int index = row * Width;
    for (int col = 0; col < Width; col++)
    {
        Data[index++] = ComputeMandelbrotIndex(row, col);
    }
    return 0;
}

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height).AsParallel()
                select ComputeMandelbrotRow(row);

    query.Max();
}

 

ParallelLinqWithSequenceOfPoints

After this initial foray into Parallel LINQ, Nicholas Palladinos suggested that I could adapt the original idea in a more elegant manner by treating the sequence of points as a single input sequence, and asking Parallel LINQ to preserve the order of that whole sequence.

 

public override void Generate()
{
    var points = from row in Enumerable.Range(0, Height)
                 from col in Enumerable.Range(0, Width)
                 select new { row, col };

    var query = from point in points.AsParallel(ParallelQueryOptions.PreserveOrdering)
                select ComputeMandelbrotIndex(point.row, point.col);

    int index=0;
    foreach (byte b in query)
    {
        Data[index++] = b;
    }
}

 

SingleThreadLinqWithGenerator

My next step was to try to put the whole guts of the algorithm into the query expression, using a Complex value type and a generator to create a sequence of complex numbers generated from the starting point. This is quite a natural step, as the value is computed by just considering this infinite sequence and seeing how quickly it escapes from the circle of radius 2 on the complex plane. Aside from anything else, I think it’s a nice example of deferred execution and streaming – the sequence really would go on forever if we didn’t limit it with the Take and TakeWhile calls, but the generator itself doesn’t know it’s being limited. This version uses a sequence of points as per ParallelLinqWithSequenceOfPoints to make it easier to parallelise later.

It’s not exactly an efficient way of doing things though – there’s a huge amount of calling going on from one iterator to another. I’m actually quite surprised that the final results aren’t worse than they are.

 

public override void Generate()
{
    var points = from row in Enumerable.Range(0, Height)
                 from col in Enumerable.Range(0, Width)
                 select new { row, col };

    var query = from point in points
                // Work out the initial complex value from the row and column
                let c = new Complex((point.col * SampleWidth) / Width + OffsetX,
                                    (point.row * SampleHeight) / Height + OffsetY)
                // Work out the number of iterations
                select GenerateSequence(c, x => x * x + c).TakeWhile(x => x.SquareLength < 4)
                                                          .Take(MaxIterations)
                                                          .Count() into count
                // Map that to an appropriate byte value
                select (byte)(count == MaxIterations ? 0 : (count % 255) + 1);

    int index = 0;
    foreach (byte b in query)
    {
        Data[index++] = b;
    }
}

public static IEnumerable<T> GenerateSequence<T>(T start, Func<T, T> step)
{
    T value = start;
    while (true)
    {
        yield return value;
        value = step(value);
    }
}

 

ParallelLinqWithGenerator

From the previous implementation, parallelisation is simple.

 

public override void Generate()
{
    var points = from row in Enumerable.Range(0, Height)
                 from col in Enumerable.Range(0, Width)
                 select new { row, col };

    var query = from point in points.AsParallel(ParallelQueryOptions.PreserveOrdering)
                // Work out the initial complex value from the row and column
                let c = new Complex((point.col * SampleWidth) / Width + OffsetX,
                                    (point.row * SampleHeight) / Height + OffsetY)
                // Work out the number of iterations
                select GenerateSequence(c, x => x * x + c).TakeWhile(x => x.SquareLength < 4)
                                                          .Take(MaxIterations)
                                                          .Count() into count
                // Map that to an appropriate byte value
                select (byte)(count == MaxIterations ? 0 : (count % 255) + 1);

    int index = 0;
    foreach (byte b in query)
    {
        Data[index++] = b;
    }
}

 

SingleThreadImperativeWithComplex

Having already seen how slow the generator version was in the past, I thought it would be worth checking whether or not this was due to the use of the Complex struct – so this is a version which uses that, but is otherwise just like the very first implementation (SingleThreadImperativeSimple).

 

public override void Generate()
{
    int index = 0;
    for (int row = 0; row < Height; row++)
    {
        for (int col = 0; col < Width; col++)
        {
            Data[index++] = ComputeMandelbrotIndexWithComplex(row, col);
        }
    }
}

byte ComputeMandelbrotIndexWithComplex(int row, int col)
{
    double x = (col * SampleWidth) / Width + OffsetX;
    double y = (row * SampleHeight) / Height + OffsetY;

    Complex start = new Complex(x, y);
    Complex current = start;

    for (int i = 0; i < MaxIterations; i++)
    {
        if (current.SquareLength >= 4)
        {
            return (byte)((i % 255) + 1);
        }
        current = current * current + start;
    }
    return 0;
}

 

UnorderedParallelLinqSimple

This is where my colleague, Christian, came in. He suggested that instead of ordering the results, we just return the point as well as its value. We can then populate the array directly, regardless of the order of the results. The first version just uses an anonymous type to represent the combination:

 

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height).AsParallel()
                from col in Enumerable.Range(0, Width)
                select new { row, col, value = ComputeMandelbrotIndex(row, col) };

    foreach (var x in query)
    {
        Data[x.row * Width + x.col] = x.value;
    }
}

 

UnorderedParallelLinqWithStruct

Creating a new garbage-collected object on the heap for each point isn’t the world’s greatest idea. A very simple transformation is to use a struct instead. Without knowing the PLINQ implementation, it seems likely that the values will still end up on the heap somehow (how else could they be communicated between threads?) but I’d still expect a certain saving from this simple step.

 

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height).AsParallel()
                from col in Enumerable.Range(0, Width)
                select new Result (row, col, ComputeMandelbrotIndex(row, col));

    foreach (var x in query)
    {
        Data[x.Row * Width + x.Column] = x.Value;
    }
}

struct Result
{
    internal int row, col;
    internal byte value;

    internal int Row { get { return row; } }
    internal int Column { get { return col; } }
    internal byte Value { get { return value; } }

    internal Result(int row, int col, byte value)
    {
        this.row = row;
        this.col = col;
        this.value = value;
    }
}

 

UnorderedParallelLinqInPlace

Why bother returning the data at all? We can just write the data in place, like we did earlier with ParallelLinqRowByRowInPlace but in a more aggressive fashion (with the disadvantage of computing the index for each point). Again, we have to return a dummy value and iterate through those dummy results to force all the computations to go through.

 

public override void Generate()
{

    var query = from row in Enumerable.Range(0, Height).AsParallel()
                from col in Enumerable.Range(0, Width)
                // Note side-effect!
                select Data[row*Width + col] = ComputeMandelbrotIndex(row, col);

    // Force iteration through all results
    query.Max();
}

 

UnorderedParallelLinqInPlaceWithDelegate

UnorderedParallelLinqInPlace feels slightly nasty – we’ve clearly got a side-effect within the loop, it’s just that we know the side-effects are all orthogonal. We can make ourselves feel slightly cleaner by separating the side-effect from the main algorithm, by way of a delegate. The loop can hold its hands up and say, “I’m side-effect free if you are.” It’s not hugely satisfactory, but it’ll be interesting to see the penalty we pay for this attempt to be purer.

 

IEnumerable<T> Generate<T>(Func<int, int, T> transformation)
{
    var query = from row in Enumerable.Range(0, Height).AsParallel()
                from col in Enumerable.Range(0, Width)
                // Side-effect only if transformation contains one…
                select transformation(row, col);

    return query;
}

public override void Generate()
{
    // Transformation with side-effect
    Func<int,int,byte> transformation = (row, col) => Data[row * Width + col] = ComputeMandelbrotIndex(row, col);

    Generate(transformation).Max();
}

 

UnorderedParallelLinqInPlaceWithGenerator

We can easily combine the earlier generator code with any of these new ways of processing the results. Here we use our “algorithm in the query” approach but process the results as we go to avoid ordering issues.

 

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height).AsParallel()
                from col in Enumerable.Range(0, Width)
                // Work out the initial complex value from the row and column
                let c = new Complex((col * SampleWidth) / Width + OffsetX,
                                    (row * SampleHeight) / Height + OffsetY)
                // Work out the number of iterations
                let count = GenerateSequence(c, x => x * x + c).TakeWhile(x => x.SquareLength < 4)
                                                               .Take(MaxIterations)
                                                               .Count()
                // Map that to an appropriate byte value – and write it in place
                select Data[row * Width + col]  = (byte)(count == MaxIterations ? 0 : (count % 255) + 1);

    // Force execution
    query.Max();
}

 

ParallelForLoop

The Parallel Extensions library doesn’t just contain Parallel LINQ. It also has various other building blocks for parallelism, including a parallel for loop. This allows very simple parallelism of our very first generator – we just turn the outside loop into a parallel for loop, turning the previous inner loop into a delegate. We need to move the index variable into the outer loop so there’ll be one per row (otherwise they’d trample on each other) but that’s about it:

 

public override void Generate()
{
    Parallel.For(0, Height, row =>
    {
        int index = row * Width;
        for (int col = 0; col < Width; col++)
        {
            Data[index++] = ComputeMandelbrotIndex(row, col);
        }
    });
}

 

Results

A benchmark is nothing without results, so here they are on my dual core laptop, from three test runs. The first is the “default” settings I used to develop the benchmark – nothing hugely strenuous, but enough to see differences. I then tried a larger image with the same maximum number of iterations, then the original size with a larger number of iterations. The results are in alphabetical order because that’s how the test prints them :) Times are in milliseconds.

Implementation Width=1200; MaxIterations=200 Width=3000; MaxIterations=200 Width=1200; MaxIterations=800
MultiThreadRowFetching 380 2479 1311
MultiThreadUpFrontSplitImperative 384 2545 2088
ParallelForLoop 376 2361 1292
ParallelLinqRowByRowInPlace 378 2347 1295
ParallelLinqRowByRowWithCopy 382 2376 1288
ParallelLinqWithGenerator 4782 29752 16626
ParallelLinqWithSequenceOfPoints 549 3413 1462
SingleThreadImperativeInline 684 4352 2455
SingleThreadImperativeSimple 704 4353 2372
SingleThreadImperativeWithComplex 2795 16720 9800
SingleThreadLinqSimple 726 4522 2438
SingleThreadLinqWithGenerator 8902 52586 30075
UnorderedParalleLinqInPlace 422 2586 1317
UnorderedParallelLinqInPlaceWithDelegate 509 3093 1392
UnorderedParallelLinqInPlaceWithGenerator 5046 31657 17026
UnorderedParallelLinqSimple 556 3449 1448
UnorderedParalelLinqWithStruct 511 3227 1427

Conclusions

So, what have we learned? Well… bearing in mind that benchmarks like this are often misleading compared with real applications, it’s still interesting that:

  • Parallel Extensions rocks. If I were trying to include a Mandelbrot generation implementation in a production setting, I’d definitely go for the parallel for loop – it’s simple, but works just as well as anything else.
  • The micro-optimisation of SingleThreadImperativeInline really doesn’t help much, but makes the code harder to understand – just like so many micro-optimisations.
  • The “generator” form of LINQ really doesn’t perform well at all. It does parallelise pretty well, as you’d expect, but it’s just plain slow.
  • Part of the slowness is almost certainly due to the use of the Complex struct, given the results of SingleThreadImperativeWithComplex. Not sure what’s going on there.
  • The extra abstraction step from UnorderedParallelLinqInPlace to UnorderedParallelLinqInPlaceWithDelegate does have a significant impact, at least when the maximum number of iterations per point isn’t the dominant force. That doesn’t mean it would be a bad idea in production, of course – just something to consider when running into performance issues.

I suspect I’ve missed some notable points though – comments?