Category Archives: Parallelization

Parallel Extensions June CTP

Either my timing is great or it’s lousy – you decide. Yesterday I posted about parallelising Conway’s Life – and today the new CTP for the Parallel Extensions library comes out! The bad news is that it meant I had to run all the tests again… the good news is that it means we can see whether or not the team’s work over the last 6 months has paid off.

Breaking change

The only breaking change I’ve seen is that AsParallel() no longer takes a ParallelQueryOptions parameter – instead, you call AsOrdered() on the value returned from AsParallel(). It was an easy change to make, and worked first time. There may well be plenty of other breaking API changes which are more significant, of course – I’m hardly using any of it.
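To make the change concrete, here's a minimal before-and-after sketch using the row-by-row Mandelbrot query from the benchmark post further down this page (ComputeMandelbrotRow is shown there):

// December CTP: ordering is requested via a parameter to AsParallel
var oldQuery = from row in Enumerable.Range(0, Height).AsParallel(ParallelQueryOptions.PreserveOrdering)
               select ComputeMandelbrotRow(row);

// June CTP: AsParallel takes no options; call AsOrdered on its result instead
var newQuery = from row in Enumerable.Range(0, Height).AsParallel().AsOrdered()
               select ComputeMandelbrotRow(row);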

Benchmark comparisons

One of the nice things about having the previous blog entries is that I can easily compare the results from then with the results now. Here are updated figures for the tests from the previous posts which used Parallel Extensions. For the Game of Life, I haven’t included the rendered results for the fast version, as they’re still bound by rendering (unsurprisingly).

Mandelbrot set

(Original post)

Results are in milliseconds taken to plot the whole image, so less is better. (x;y) values mean “Width=x, MaxIterations=y.”

Description December CTP June CTP
ParallelForLoop (1200;200) 376 380
ParallelForLoop (3000;200) 2361 2394
ParallelForLoop (1200;800) 1292 1297
ParallelLinqRowByRowInPlace (1200;200) 378 393
ParallelLinqRowByRowInPlace (3000;200) 2347 2440
ParallelLinqRowByRowInPlace (1200;800) 1295 1939
ParallelLinqRowByRowWithCopy (1200;200) 382 411
ParallelLinqRowByRowWithCopy (3000;200) 2376 2484
ParallelLinqRowByRowWithCopy (1200;800) 1288 1401
ParallelLinqWithGenerator (1200;200) 4782 4868
ParallelLinqWithGenerator (3000;200) 29752 31366
ParallelLinqWithGenerator (1200;800) 16626 16855
ParallelLinqWithSequenceOfPoints (1200;200) 549 533
ParallelLinqWithSequenceOfPoints (3000;200) 3413 3290
ParallelLinqWithSequenceOfPoints (1200;800) 1462 1460
UnorderedParallelLinqInPlace (1200;200) 422 440
UnorderedParallelLinqInPlace (3000;200) 2586 2775
UnorderedParallelLinqInPlace (1200;800) 1317 1475
UnorderedParallelLinqInPlaceWithDelegate (1200;200) 509 514
UnorderedParallelLinqInPlaceWithDelegate (3000;200) 3093 3134
UnorderedParallelLinqInPlaceWithDelegate (1200;800) 1392 1571
UnorderedParallelLinqInPlaceWithGenerator (1200;200) 5046 5511
UnorderedParallelLinqInPlaceWithGenerator (3000;200) 31657 30258
UnorderedParallelLinqInPlaceWithGenerator (1200;800) 17026 19517
UnorderedParallelLinqSimple (1200;200) 556 595
UnorderedParallelLinqSimple (3000;200) 3449 3700
UnorderedParallelLinqSimple (1200;800) 1448 1506
UnorderedParallelLinqWithStruct (1200;200) 511 534
UnorderedParallelLinqWithStruct (3000;200) 3227 3154
UnorderedParallelLinqWithStruct (1200;800) 1427 1445

A mixed bag, but overall it looks to me like the June CTP was slightly worse than the older one. Of course, that’s assuming that everything else on my computer is the same as it was a couple of weeks ago, etc. I’m not going to claim it’s a perfect benchmark by any means. Anyway, can it do any better with the Game of Life?

Game of Life

(Original post)

Results are in frames per second, so more is better.

Description December CTP June CTP
ParallelFastInteriorBoard (rendered) 23 22
ParallelFastInteriorBoard (unrendered) 29 28
ParallelFastInteriorBoard 508 592

Yes, ParallelFastInteriorBoard really did get that much of a speed boost, apparently. I have no idea why… which leads me to the slightly disturbing conclusion of this post:

Conclusion

The numbers above don’t mean much. At least, they’re not particularly useful because I don’t understand them. Why would a few tests become slightly but noticeably worse, while one particular test gets markedly better? Do I have serious benchmarking issues in terms of my test rig? (I’m sure it could be a lot “cleaner” – I didn’t think it would make very much difference.)

I’ve always been aware that off-the-cuff benchmarking like this was of slightly dubious value, at least in terms of the details. The only facts which are really useful here are the big jumps due to algorithm etc. The rest is noise which may interest Joe and the team (and hey, if it proves to be useful, that’s great) but which is beyond my ability to reason about. Modern machines are complicated beasts, with complicated operating systems and complicated platforms above them.

So ignore the numbers. Maybe the new CTP is faster for your app, maybe it’s not. It is important to know it’s out there, and that if you’re interested in parallelisation but haven’t played with it yet, this is a good time to pick it up.

More parallelisation fun: Conway’s Game of Life

Okay, so I’ve probably exhausted the Mandelbrot set as a source of interest, at least for the moment. However, every so often someone mentions something or other which sounds like a fun exercise in parallelisation. The latest one is Conway’s Game of Life. Like the Mandelbrot set, this is something I used to play with when I was a teenager – and like the Mandelbrot set, it’s an embarrassingly parallel problem.

As before, I’ve written a few implementations but not worked excessively hard to optimise. All my tests were performed with a game board of 1000×500 cells, displaying at one pixel per cell (partly because that was the easiest way to draw it). I found that with the slower implementations the rendering side was irrelevant to the results, but the rendering became a bottleneck with the fast algorithms, so I’ve included results both with and without rendering.

The “benchmark” code (I use the word very lightly – normal caveats around methodology apply; this was only a weekend-evenings project, after all) records the “current” number of frames per second roughly once per second, and also an average over the whole run. (The fast algorithms jump around in fps pretty wildly depending on what’s going on – the more naïve ones don’t really change much between a busy board and a dead one.) The starting pattern in all cases was the R-pentomino (or F-pentomino, depending on whose terminology you use) in the middle of the board.

A base class (BytePerPixelBoard) is used by all the implementations – this represents the board with a single byte per pixel, which is really easy to work with but obviously very wasteful in terms of memory (and therefore cache). I haven’t investigated any other representations, but I wanted to get this post out fairly quickly as I have a lot of other stuff to do at the moment. (I also have a post on Java closures which I should tidy up soon, but hey…)
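The post doesn’t show the base class, so here’s a rough sketch of the idea – just enough to make the later snippets hang together. Everything beyond “one byte per cell, two buffers” is an assumption rather than the real code:

public abstract class BytePerPixelBoard
{
    protected readonly int Width;
    protected readonly int Height;

    // One byte per cell: 0 = dead, 1 = alive. Easy to index, wasteful in memory.
    protected byte[] Current;
    protected byte[] Next;

    protected BytePerPixelBoard(int width, int height)
    {
        Width = width;
        Height = height;
        Current = new byte[width * height];
        Next = new byte[width * height];
    }

    // Each implementation computes generation n+1 from Current into Next...
    public abstract void Step();

    // ...then the buffers are swapped ready for the next generation
    protected void SwapBuffers()
    {
        byte[] tmp = Current;
        Current = Next;
        Next = tmp;
    }
}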

Implementation smells

All the source code is available for download, of course. There are no fancy options for verifying the results or even turning off rendering – just comment out the line in Program.BackgroundLoop which calls Render.

It would be fairly easy to make ParallelFastInteriorBoard derive from SimpleFastInteriorBoard, and ParallelActiveBlockBoard derive from ActiveBlockBoard, but I’ve kept the implementations completely separate for the moment. That means a huge amount of duplication, of course, including a nested class in each of the “active block” implementations – the nested classes are exactly the same. Oh, and they use internal fields instead of properties. I wanted the fields to be read-only, so I couldn’t use automatic properties – but I didn’t want to go to the hassle of having explicitly declared fields and separate properties. Trust me, I wouldn’t do this in production code. (There were some other internal fields, but they’ve mercifully been converted into properties now.)

SimpleBoard

This is about as simple as you can get. It fetches each cell’s current value “carefully” (i.e. coping with wrapping) regardless of where it is on the board. It works, but it’s almost painfully slow.
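To illustrate what “carefully” means, a neighbour count in this style might look something like the sketch below (written against the hypothetical base class above, not the downloadable code) – the wrap-around arithmetic on every single access is what makes it so slow:

private int CountNeighboursWithWrapping(int row, int col)
{
    int count = 0;
    for (int dr = -1; dr <= 1; dr++)
    {
        for (int dc = -1; dc <= 1; dc++)
        {
            if (dr == 0 && dc == 0)
            {
                continue; // skip the cell itself
            }
            int r = (row + dr + Height) % Height; // wrap top/bottom
            int c = (col + dc + Width) % Width;   // wrap left/right
            count += Current[r * Width + c];      // 0 = dead, 1 = alive
        }
    }
    return count;
}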

SimpleFastInteriorBoard

This is just an optimisation of SimpleBoard. We fetch the borders of the board “slowly” as per SimpleBoard, but once we’re on the interior of the board (any cell that’s not on an edge) we know we won’t need to worry about wrapping, so we can just use a fixed set of array offsets relative to the offset of the cell that’s being calculated.

This ends up being significantly faster, although it could still be optimised further. In the current code each cell is tested for whether it’s an interior cell or not. There’s a slight optimisation by remembering the results for “am I looking at the top or the bottom row” for a whole row, but by calculating just the edges and then the interior (or vice versa) a lot of tests could be avoided.
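For interior cells the same count collapses into eight fixed offsets from the cell’s own index, with no wrapping checks at all – something like this sketch (again assuming the hypothetical layout above):

private int CountInteriorNeighbours(int index)
{
    int w = Width;
    // The eight neighbours of an interior cell sit at fixed offsets
    return Current[index - w - 1] + Current[index - w] + Current[index - w + 1]
         + Current[index - 1]                          + Current[index + 1]
         + Current[index + w - 1] + Current[index + w] + Current[index + w + 1];
}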

ParallelFastInteriorBoard

Behold the awesome power of Parallel.For. Parallelising SimpleFastInteriorBoard literally took about a minute. I haven’t seen any other toolkit rival this in terms of ease of use. Admittedly I haven’t done much embarrassingly parallel work in many other toolkits, but even so it’s impressive. It’ll be interesting to see how easy it is to use for complex problems as well as simple ones.
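That minute of work amounts to something like the sketch below, with ComputeRow standing in for whatever per-row logic the sequential version already has:

public override void Step()
{
    // Each row only reads the previous generation and writes its own slice
    // of the next one, so the outer loop parallelises with no locking
    Parallel.For(0, Height, row => ComputeRow(row));
    SwapBuffers();
}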

If you look at the results, you’ll see this is the first point at which there’s a difference between rendering and not. As the speed of the calculation improves the constant time taken for rendering becomes more significant.

ActiveBlockBoard

This is a departure in terms of implementation. We still use the same backing store (one big array of a byte per cell) but the board is logically broken up into blocks of 25×25 cells. (The size is hard-coded, but only as two constants – they can easily be changed. I haven’t investigated the effect of different block sizes thoroughly, but a few ad hoc tests suggest this is a reasonable sweet spot.) When considering a block in generation n, the changes (if any) in generation n-1 are considered. If the block itself changed, it may change again. If the relevant borders of any of its neighbours changed (including corners of diagonally neighbouring blocks), the block may change. No other changes in the board can affect the block, so if none of these conditions are met the contents of generation n-1 can be copied to generation n for that block. This is obviously a massive saving when there are large areas of stable board, either dead or with stable patterns (such as 2×2 live blocks). It doesn’t help with blinkers (3×1 blocks which oscillate between vertical and horizontal alignments) or anything similar, but it’s still a big optimisation. It does mean a bit more house-keeping in terms of remembering what changed (anywhere in the block, the four sides, and the four corners) each generation, but that pales into insignificance when you can remove the work from a lot of blocks.
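In sketch form, the per-block decision looks something like this – the type and member names are illustrative, not the real fields:

void StepBlock(Block block) // Block and its members are hypothetical here
{
    // Did anything that can affect this block change in generation n-1?
    bool mayChange = block.AnyCellChanged
        || block.Left.RightEdgeChanged   || block.Right.LeftEdgeChanged
        || block.Above.BottomEdgeChanged || block.Below.TopEdgeChanged
        || block.AboveLeft.BottomRightCornerChanged
        || block.AboveRight.BottomLeftCornerChanged
        || block.BelowLeft.TopRightCornerChanged
        || block.BelowRight.TopLeftCornerChanged;

    if (mayChange)
    {
        RecalculateBlock(block);   // full generation step for this 25x25 block
    }
    else
    {
        CopyBlockUnchanged(block); // generation n = generation n-1 here
    }
}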

Again, my implementation is relatively simple – but it still took a lot of work to get right! The code is much more complicated than SimpleFastInteriorBoard, and indeed pretty much all the code from SFIB is used in ActiveBlockBoard. Again, I’m sure there are optimisations that could be made to it, and alternative strategies for implementing it. For instance, I did toy with some code which didn’t keep track of what had changed in the block, but instead pushed changes to relevant neighbouring blocks, explicitly saying, “You need to recalculate next time!” There wasn’t much difference in speed, however, and it needed an extra setup stage in order to make the code understandable, so I’ve removed it.

The performance improvement of this is staggering – it makes it over 20 times faster than the single-threaded SimpleFastInteriorBoard. At first the improvement was much smaller – until I realised that actually I’d moved the bottleneck to the rendering, and re-benchmarked with rendering turned off.

ParallelActiveBlockBoard

Exactly the same implementation as ActiveBlockBoard, but using a Parallel.ForEach loop. Again, blissfully easy to implement, and very effective. It didn’t give the same near-doubling that the move from SimpleFastInteriorBoard to ParallelFastInteriorBoard did (with rendering turned off), but I suspect that’s because the housekeeping required to parallelise things starts to matter more at the rates shown in the results. It’s still impressive stuff though.
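Again the change is tiny in sketch form, reusing the hypothetical StepBlock from above (“blocks” stands in for whatever collection the sequential loop already iterates over):

public override void Step()
{
    // Blocks are independent within a generation, so the sequential
    // "foreach (var block in blocks)" becomes a drop-in Parallel.ForEach
    Parallel.ForEach(blocks, block => StepBlock(block));
    SwapBuffers();
}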

Results

Implementation FPS with rendering FPS without rendering
SimpleBoard 4 4
SimpleFastInteriorBoard 15 15
ParallelFastInteriorBoard 23 29
ActiveBlockBoard 78 326
ParallelActiveBlockBoard 72 508

Conclusions

  • The Parallel Extensions library rocks my world. Massive, massive kudos to Joe Duffy and the team.
  • I have no idea why ParallelActiveBlockBoard is slower than ActiveBlockBoard when rendering is enabled. This was repeatable (with slightly different results each time, but the same trend).
  • Always be aware that reducing one bottleneck may mean something else becomes your next bottleneck – at first I thought that ActiveBlockBoard was “only” 5 times faster than SimpleFastInteriorBoard; it was only when parallelisation showed no improvement (just the opposite) that I started turning rendering off.
  • Changing the approach can have more pronounced effects than adding cores – moving to the “active block” model gave a massive improvement over brute force.
  • Micro-optimisation has its place, when highly targeted and backed up with data – arguably the “fast interior” is a micro-optimisation, but again the improvement is quite dramatic.
  • Benchmarks are fun, but shouldn’t be taken to be indicative of anything else.
  • Did I mention that Parallel Extensions rocks?

Mandelbrot revisited – benchmark edition

I’ve had fun with the Mandelbrot set in this blog before, using it as an example of an embarrassingly parallelisable problem and demonstrating Parallel LINQ with it.

This morning, over breakfast, I described the problem to Christian Brunschen, a colleague of mine who has some parallelisation experience through implementing OpenMP in a portable manner. He immediately suggested a few possible changes to the way that I’ve approached things. Given the number of different attempts I’ve now had, I thought it would make sense to write a little benchmarking app which could easily be expanded as further implementations were considered. The benchmark is available for download so you can play around with it yourself. (As is often the case with this kind of thing, it’s not the nicest code in the world for various reasons – the duplication of the sequence generation method, for example… Please don’t use it as a tutorial on actual coding and organisation.)

Benchmark implementation details

The benchmark allows you to vary the size of the generated image and the maximum number of iterations per point. The images can be displayed after the test run, but only the time taken to populate a byte array is recorded. The byte arrays are all allocated before any tests are run, and the garbage collector is invoked (as far as it can be) between tests. The images themselves are only generated after the tests have all completed. Each implementation is given a tiny “dummy run” (creating an image 10 pixels across, with a maximum of 2 iterations per point) first to hopefully remove JITting from the benchmarking times. I won’t pretend that this puts everything on a completely level playing field (benchmarking is hard) but hopefully it’s a good start. We check the similarity between the results of each test and the first one – in some cases they could be “nearly all the same” without there being a bug, due to the subtleties of floating point operations and inlining.
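The per-test sequence described above boils down to something like this sketch (the names are illustrative – the real harness is in the download):

static long TimeGenerator(MandelbrotGenerator generator)
{
    // Try to level the playing field between tests
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();

    Stopwatch stopwatch = Stopwatch.StartNew();
    generator.Generate(); // fills the pre-allocated byte array
    stopwatch.Stop();
    return stopwatch.ElapsedMilliseconds;
}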

The base class that all the generators use takes care of working out the height from the width, remembering the various configuration options, allocating the array, and also providing a simple imperative method to obtain the value of a single point, without using anything fancy.
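The post doesn’t show the base class itself, so here’s a rough sketch of its shape. The member names match the snippets below, but the constant values and the exact structure are assumptions:

public abstract class MandelbrotGenerator
{
    // The region of the complex plane being sampled - placeholder values,
    // not necessarily the ones used in the real benchmark
    protected const double SampleWidth = 3.2;
    protected const double SampleHeight = 2.5;
    protected const double OffsetX = -2.5;
    protected const double OffsetY = -1.25;

    protected int Width { get; private set; }
    protected int Height { get; private set; }      // derived from Width
    protected int MaxIterations { get; private set; }
    protected byte[] Data { get; private set; }     // one byte per pixel

    protected MandelbrotGenerator(int width, int maxIterations)
    {
        Width = width;
        Height = (int)(width * SampleHeight / SampleWidth); // keep aspect ratio
        MaxIterations = maxIterations;
        Data = new byte[Width * Height];
    }

    public abstract void Generate();

    // ComputeMandelbrotIndex(row, col) also lives here; it's shown in full
    // under SingleThreadImperativeSimple below
}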

I originally wanted to put the whole thing in a nice UI, but after wasting nearly an hour trying to get WPF to behave, I decided to go for a simple console app. One day I really must learn how to write GUIs quickly…

Okay, enough introduction – let’s look at the implementations we’re testing, and then the results. The idea is to get a look at how a number of areas influence the results, as well as seeing some nifty ways of expressing things functionally.

SingleThreadImperativeSimple

This is the most trivial code you could write, basically. As the base class provides the calculation method, we just go through each row and column, work out the value, and store it in the array. The only attempt at optimisation is keeping the “current index” into the array rather than calculating it for every point.

 

public override void Generate()
{
    int index = 0;
    for (int row = 0; row < Height; row++)
    {
        for (int col = 0; col < Width; col++)
        {
            Data[index++] = ComputeMandelbrotIndex(row, col);
        }
    }
}

 

where ComputeMandelbrotIndex is in the base class:

 

protected byte ComputeMandelbrotIndex(int row, int col)
{
    double x = (col * SampleWidth) / Width + OffsetX;
    double y = (row * SampleHeight) / Height + OffsetY;

    double y0 = y;
    double x0 = x;

    for (int i = 0; i < MaxIterations; i++)
    {
        if (x * x + y * y >= 4)
        {
            return (byte)((i % 255) + 1);
        }
        double xtemp = x * x - y * y + x0;
        y = 2 * x * y + y0;
        x = xtemp;
    }
    return 0;
}

 

SingleThreadImperativeInline

This is largely the same code as SingleThreadImperativeSimple, but with everything inlined. Within the main body, there’s no access to anything other than local variables, and no method calls. One further optimisation is available: points which will have a value of 0 aren’t stored (we assume we’ll always start with a cleared array).

 

public override void Generate()
{
    int width = Width;
    int height = Height;
    int maxIterations = MaxIterations;
    int index = 0;
    byte[] data = Data;

    for (int row = 0; row < height; row++)
    {
        for (int col = 0; col < width; col++)
        {
            double x = (col * SampleWidth) / width + OffsetX;
            double y = (row * SampleHeight) / height + OffsetY;

            double y0 = y;
            double x0 = x;

            for (int i = 0; i < maxIterations; i++)
            {
                if (x * x + y * y >= 4)
                {
                    data[index] = (byte)((i % 255) + 1);
                    break;
                }
                double xtemp = x * x - y * y + x0;
                y = 2 * x * y + y0;
                x = xtemp;
            }
            // Leave data[index] = 0 by default
            index++;
        }
    }
}

 

MultiThreadUpFrontSplitImperative

This should be pretty efficient – work out how many cores we’ve got, split the work equally between them (as chunks of rows, which helps in terms of locality and just incrementing an index in the byte array each time), and then run. Wait until all the threads have finished, and we’re done. Shouldn’t be a lot of context switching required normally, and no synchronization is required. However, if some cores are busy with another process, we’ll end up context switching for no gain.

 

public override void Generate()
{
    int cores = Environment.ProcessorCount;

    int rowsPerCore = Height / cores;

    List<Thread> threads = new List<Thread>();
    for (int i = 0; i < cores; i++)
    {
        int firstRow = rowsPerCore * i;
        int rowsToCompute = rowsPerCore;
        if (i == cores - 1)
        {
            rowsToCompute = Height - (rowsPerCore * (cores - 1));
        }
        Thread thread = new Thread(() =>
            {
                int index = firstRow * Width;
                for (int row = firstRow; row < firstRow + rowsToCompute; row++)
                {
                    for (int col = 0; col < Width; col++)
                    {
                        Data[index++] = ComputeMandelbrotIndex(row, col);
                    }
                }
            });
        thread.Start();
        threads.Add(thread);
    }

    threads.ForEach(thread => thread.Join());
}

 

MultiThreadRowFetching

Again, we use a fixed number of threads – but this time we start off with a queue of work – one task per row. The queue has to be synchronized every time we use it, and we also have to check whether or not it’s empty. Plenty of scope for lock contention here, particularly as the number of cores increases.

 

public override void Generate()
{
    Queue<int> rowsLeft = new Queue<int>(Enumerable.Range(0, Height));

    List<Thread> threads = new List<Thread>();
    for (int i = 0; i < Environment.ProcessorCount; i++)
    {
        Thread thread = new Thread(() =>
            {
                while (true)
                {
                    int row;
                    lock (rowsLeft)
                    {
                        if (rowsLeft.Count == 0)
                        {
                            break;
                        }
                        row = rowsLeft.Dequeue();
                    }
                    int index = row * Width;
                    for (int col = 0; col < Width; col++)
                    {
                        Data[index++] = ComputeMandelbrotIndex(row, col);
                    }
                }
            });
        thread.Start();
        threads.Add(thread);
    }

    threads.ForEach(thread => thread.Join());
}

SingleThreadLinqSimple

This is the first version of Mandelbrot generation I wrote since thinking of using LINQ, tweaked a little to dump the output into an existing array instead of creating a new one. It’s essentially the same as SingleThreadImperativeSimple but with the for loops replaced with a query expression.

 

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height)
                from col in Enumerable.Range(0, Width)
                select ComputeMandelbrotIndex(row, col);

    int index=0;
    foreach (byte b in query)
    {
        Data[index++] = b;
    }
}

 

ParallelLinqRowByRowWithCopy

My first attempt using Parallel LINQ was somewhat disastrous due to the nature of PLINQ’s ordering. To combat this effect, I initially computed a row at a time, then copied each row into place afterwards:

 

private byte[] ComputeMandelbrotRow(int row)
{
    byte[] ret = new byte[Width];
    for (int col = 0; col < Width; col++)
    {
        ret[col] = ComputeMandelbrotIndex(row, col);
    }
    return ret;
}

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height).AsParallel(ParallelQueryOptions.PreserveOrdering)
                select ComputeMandelbrotRow(row);

    int rowStart = 0;
    foreach (byte[] row in query)
    {
        Array.Copy(row, 0, Data, rowStart, Width);
        rowStart += Width;
    }
}

 

ParallelLinqRowByRowInPlace

An obvious potential improvement is to write the data to the eventual target array as we go, to avoid the extra array creation and copying. At this point we have less of a purely functional solution, but it’s interesting anyway. Note that to use this idea in a query expression we have to return something even though it’s not useful. Likewise we have to make the query execute fully – so I use Max() to ensure that all the results will be computed. (I originally tried Count() but that didn’t work – presumably because the count of the results is known before the actual values are.) As we’re writing the data to the right place on each iteration, we no longer need to preserve ordering.

 

private int ComputeMandelbrotRow(int row)
{
    int index = row * Width;
    for (int col = 0; col < Width; col++)
    {
        Data[index++] = ComputeMandelbrotIndex(row, col);
    }
    return 0;
}

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height).AsParallel()
                select ComputeMandelbrotRow(row);

    query.Max();
}

 

ParallelLinqWithSequenceOfPoints

After this initial foray into Parallel LINQ, Nicholas Palladinos suggested that I could adapt the original idea in a more elegant manner by treating the sequence of points as a single input sequence, and asking Parallel LINQ to preserve the order of that whole sequence.

 

public override void Generate()
{
    var points = from row in Enumerable.Range(0, Height)
                 from col in Enumerable.Range(0, Width)
                 select new { row, col };

    var query = from point in points.AsParallel(ParallelQueryOptions.PreserveOrdering)
                select ComputeMandelbrotIndex(point.row, point.col);

    int index=0;
    foreach (byte b in query)
    {
        Data[index++] = b;
    }
}

 

SingleThreadLinqWithGenerator

My next step was to try to put the whole guts of the algorithm into the query expression, using a Complex value type and a generator to create a sequence of complex numbers generated from the starting point. This is quite a natural step, as the value is computed by just considering this infinite sequence and seeing how quickly it escapes from the circle of radius 2 on the complex plane. Aside from anything else, I think it’s a nice example of deferred execution and streaming – the sequence really would go on forever if we didn’t limit it with the Take and TakeWhile calls, but the generator itself doesn’t know it’s being limited. This version uses a sequence of points as per ParallelLinqWithSequenceOfPoints to make it easier to parallelise later.

It’s not exactly an efficient way of doing things though – there’s a huge amount of calling going on from one iterator to another. I’m actually quite surprised that the final results aren’t worse than they are.

 

public override void Generate()
{
    var points = from row in Enumerable.Range(0, Height)
                 from col in Enumerable.Range(0, Width)
                 select new { row, col };

    var query = from point in points
                // Work out the initial complex value from the row and column
                let c = new Complex((point.col * SampleWidth) / Width + OffsetX,
                                    (point.row * SampleHeight) / Height + OffsetY)
                // Work out the number of iterations
                select GenerateSequence(c, x => x * x + c).TakeWhile(x => x.SquareLength < 4)
                                                          .Take(MaxIterations)
                                                          .Count() into count
                // Map that to an appropriate byte value
                select (byte)(count == MaxIterations ? 0 : (count % 255) + 1);

    int index = 0;
    foreach (byte b in query)
    {
        Data[index++] = b;
    }
}

public static IEnumerable<T> GenerateSequence<T>(T start, Func<T, T> step)
{
    T value = start;
    while (true)
    {
        yield return value;
        value = step(value);
    }
}

 

ParallelLinqWithGenerator

From the previous implementation, parallelisation is simple.

 

public override void Generate()
{
    var points = from row in Enumerable.Range(0, Height)
                 from col in Enumerable.Range(0, Width)
                 select new { row, col };

    var query = from point in points.AsParallel(ParallelQueryOptions.PreserveOrdering)
                // Work out the initial complex value from the row and column
                let c = new Complex((point.col * SampleWidth) / Width + OffsetX,
                                    (point.row * SampleHeight) / Height + OffsetY)
                // Work out the number of iterations
                select GenerateSequence(c, x => x * x + c).TakeWhile(x => x.SquareLength < 4)
                                                          .Take(MaxIterations)
                                                          .Count() into count
                // Map that to an appropriate byte value
                select (byte)(count == MaxIterations ? 0 : (count % 255) + 1);

    int index = 0;
    foreach (byte b in query)
    {
        Data[index++] = b;
    }
}

 

SingleThreadImperativeWithComplex

Having already seen how slow the generator version was in the past, I thought it would be worth checking whether or not this was due to the use of the Complex struct – so this is a version which uses that, but is otherwise just like the very first implementation (SingleThreadImperativeSimple).

 

public override void Generate()
{
    int index = 0;
    for (int row = 0; row < Height; row++)
    {
        for (int col = 0; col < Width; col++)
        {
            Data[index++] = ComputeMandelbrotIndexWithComplex(row, col);
        }
    }
}

byte ComputeMandelbrotIndexWithComplex(int row, int col)
{
    double x = (col * SampleWidth) / Width + OffsetX;
    double y = (row * SampleHeight) / Height + OffsetY;

    Complex start = new Complex(x, y);
    Complex current = start;

    for (int i = 0; i < MaxIterations; i++)
    {
        if (current.SquareLength >= 4)
        {
            return (byte)((i % 255) + 1);
        }
        current = current * current + start;
    }
    return 0;
}

 

UnorderedParallelLinqSimple

This is where my colleague, Christian, came in. He suggested that instead of ordering the results, we just return the point as well as its value. We can then populate the array directly, regardless of the order of the results. The first version just uses an anonymous type to represent the combination:

 

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height).AsParallel()
                from col in Enumerable.Range(0, Width)
                select new { row, col, value = ComputeMandelbrotIndex(row, col) };

    foreach (var x in query)
    {
        Data[x.row * Width + x.col] = x.value;
    }
}

 

UnorderedParallelLinqWithStruct

Creating a new garbage-collected object on the heap for each point isn’t the world’s greatest idea. A very simple transformation is to use a struct instead. Without knowing the PLINQ implementation, it seems likely that the values will still end up on the heap somehow (how else could they be communicated between threads?) but I’d still expect a certain saving from this simple step.

 

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height).AsParallel()
                from col in Enumerable.Range(0, Width)
                select new Result (row, col, ComputeMandelbrotIndex(row, col));

    foreach (var x in query)
    {
        Data[x.Row * Width + x.Column] = x.Value;
    }
}

struct Result
{
    internal int row, col;
    internal byte value;

    internal int Row { get { return row; } }
    internal int Column { get { return col; } }
    internal byte Value { get { return value; } }

    internal Result(int row, int col, byte value)
    {
        this.row = row;
        this.col = col;
        this.value = value;
    }
}

 

UnorderedParallelLinqInPlace

Why bother returning the data at all? We can just write the data in place, like we did earlier with ParallelLinqRowByRowInPlace but in a more aggressive fashion (with the disadvantage of computing the index for each point). Again, we have to return a dummy value and iterate through those dummy results to force all the computations to go through.

 

public override void Generate()
{

    var query = from row in Enumerable.Range(0, Height).AsParallel()
                from col in Enumerable.Range(0, Width)
                // Note side-effect!
                select Data[row*Width + col] = ComputeMandelbrotIndex(row, col);

    // Force iteration through all results
    query.Max();
}

 

UnorderedParallelLinqInPlaceWithDelegate

UnorderedParallelLinqInPlace feels slightly nasty – we’ve clearly got a side-effect within the loop, it’s just that we know the side-effects are all orthogonal. We can make ourselves feel slightly cleaner by separating the side-effect from the main algorithm, by way of a delegate. The loop can hold its hands up and say, “I’m side-effect free if you are.” It’s not hugely satisfactory, but it’ll be interesting to see the penalty we pay for this attempt to be purer.

 

IEnumerable<T> Generate<T>(Func<int, int, T> transformation)
{
    var query = from row in Enumerable.Range(0, Height).AsParallel()
                from col in Enumerable.Range(0, Width)
                // Side-effect only if transformation contains one…
                select transformation(row, col);

    return query;
}

public override void Generate()
{
    // Transformation with side-effect
    Func<int,int,byte> transformation = (row, col) => Data[row * Width + col] = ComputeMandelbrotIndex(row, col);

    Generate(transformation).Max();
}

 

UnorderedParallelLinqInPlaceWithGenerator

We can easily combine the earlier generator code with any of these new ways of processing the results. Here we use our “algorithm in the query” approach but process the results as we go to avoid ordering issues.

 

public override void Generate()
{
    var query = from row in Enumerable.Range(0, Height).AsParallel()
                from col in Enumerable.Range(0, Width)
                // Work out the initial complex value from the row and column
                let c = new Complex((col * SampleWidth) / Width + OffsetX,
                                    (row * SampleHeight) / Height + OffsetY)
                // Work out the number of iterations
                let count = GenerateSequence(c, x => x * x + c).TakeWhile(x => x.SquareLength < 4)
                                                               .Take(MaxIterations)
                                                               .Count()
                // Map that to an appropriate byte value – and write it in place
                select Data[row * Width + col]  = (byte)(count == MaxIterations ? 0 : (count % 255) + 1);

    // Force execution
    query.Max();
}

 

ParallelForLoop

The Parallel Extensions library doesn’t just contain Parallel LINQ. It also has various other building blocks for parallelism, including a parallel for loop. This allows very simple parallelism of our very first generator – we just turn the outside loop into a parallel for loop, turning the previous inner loop into a delegate. We need to move the index variable into the outer loop so there’ll be one per row (otherwise they’d trample on each other) but that’s about it:

 

public override void Generate()
{
    Parallel.For(0, Height, row =>
    {
        int index = row * Width;
        for (int col = 0; col < Width; col++)
        {
            Data[index++] = ComputeMandelbrotIndex(row, col);
        }
    });
}

 

Results

A benchmark is nothing without results, so here they are on my dual core laptop, from three test runs. The first is the “default” settings I used to develop the benchmark – nothing hugely strenuous, but enough to see differences. I then tried a larger image with the same maximum number of iterations, then the original size with a larger number of iterations. The results are in alphabetical order because that’s how the test prints them :) Times are in milliseconds.

Implementation Width=1200; MaxIterations=200 Width=3000; MaxIterations=200 Width=1200; MaxIterations=800
MultiThreadRowFetching 380 2479 1311
MultiThreadUpFrontSplitImperative 384 2545 2088
ParallelForLoop 376 2361 1292
ParallelLinqRowByRowInPlace 378 2347 1295
ParallelLinqRowByRowWithCopy 382 2376 1288
ParallelLinqWithGenerator 4782 29752 16626
ParallelLinqWithSequenceOfPoints 549 3413 1462
SingleThreadImperativeInline 684 4352 2455
SingleThreadImperativeSimple 704 4353 2372
SingleThreadImperativeWithComplex 2795 16720 9800
SingleThreadLinqSimple 726 4522 2438
SingleThreadLinqWithGenerator 8902 52586 30075
UnorderedParallelLinqInPlace 422 2586 1317
UnorderedParallelLinqInPlaceWithDelegate 509 3093 1392
UnorderedParallelLinqInPlaceWithGenerator 5046 31657 17026
UnorderedParallelLinqSimple 556 3449 1448
UnorderedParallelLinqWithStruct 511 3227 1427

Conclusions

So, what have we learned? Well… bearing in mind that benchmarks like this are often misleading compared with real applications, etc., it’s still interesting that:

  • Parallel Extensions rocks. If I were trying to include a Mandelbrot generation implementation in a production setting, I’d definitely go for the parallel for loop – it’s simple, but works just as well as anything else.
  • The micro-optimisation of SingleThreadImperativeInline really doesn’t help much, but makes the code harder to understand – just like so many micro-optimisations.
  • The “generator” form of LINQ really doesn’t perform well at all. It does parallelise pretty well, as you’d expect, but it’s just plain slow.
  • Part of the slowness is almost certainly due to the use of the Complex struct, given the results of SingleThreadImperativeWithComplex. Not sure what’s going on there.
  • The extra abstraction step from UnorderedParallelLinqInPlace to UnorderedParallelLinqInPlaceWithDelegate does have a significant impact, at least when the maximum number of iterations per point isn’t the dominant force. That doesn’t mean it would be a bad idea in production, of course – just somewhere to consider when running against performance issues.

I suspect I’ve missed some notable points though – comments?

Visualising the Mandelbrot set with LINQ – yet again

I’ve been thinking about ranges again, particularly after catching a book error just in time, and looking at Marc’s generic complex type. It struck me that my previous attempts were all very well, and demonstrated parallelisation quite neatly, but weren’t very LINQy. In particular, they certainly didn’t use LINQ for the tricky part in the same way that Luke Hoban’s ray tracer does.

The thing is, working out the “value” of a particular point in the Mandelbrot set visualisation is actually quite well suited to LINQ:

  1. Start with a complex value
  2. Apply a transformation to it (square the current value, add the original value from step 1).
  3. Does the result have a modulus > 2? If so, stop.
  4. Have we performed as many iterations as we’re willing to? If so, stop.
  5. Take our new value, and go back to 2.

We only need two “extra” bits of code to implement this: a Complex type, and a way of applying the same transformation repeatedly.

Well, here’s the Complex type – it contains nothing beyond what we need for this demo, but it’ll do. Obviously Marc’s generic implementation is rather more complete.

public struct Complex
{
    readonly double real;
    readonly double imaginary;

    public Complex(double real, double imaginary)
    {
        this.real = real;
        this.imaginary = imaginary;
    }

    public static Complex operator +(Complex c1, Complex c2)
    {
        return new Complex(c1.real + c2.real, c1.imaginary + c2.imaginary);
    }

    public static Complex operator *(Complex c1, Complex c2)
    {
        return new Complex(c1.real * c2.real - c1.imaginary * c2.imaginary,
                           c1.real * c2.imaginary + c2.real * c1.imaginary);
    }

    public double SquareLength
    {
        get { return real * real + imaginary * imaginary; }
    }
}

Simple stuff, assuming you know anything about complex numbers.

The other new piece of code is even simpler. It’s just a generator. It’s a static method which takes an initial value, and a delegate to apply to one value to get the next. It then lazily returns the generated sequence – forever.

public static IEnumerable<T> Generate<T>(T start, Func<T, T> step)
{
    T value = start;
    while (true)
    {
        yield return value;
        value = step(value);
    }
}

Just as an example of use, remember Enumerable.Range which starts at a particular integer, then adds one repeatedly until it’s yielded as many results as you’ve asked for? Well, here’s a possible implementation, given the Generate method:

public static IEnumerable<int> Range(int start, int count)
{
    return Generate(start, x => x + 1).Take(count);
}

These are all the building blocks we require to build our Mandelbrot visualisation. We want a query which will return a sequence of bytes, one per pixel, where each byte represents the colour to use. Anything which goes beyond our maximum number of iterations ends up black (colour 0) and other values will cycle round the colours. I won’t show the drawing code, but the query is now more self-contained:

var query = from row in Enumerable.Range(0, ImageHeight)
            from col in Enumerable.Range(0, ImageWidth)
            // Work out the initial complex value from the row and column
            let c = new Complex((col * SampleWidth) / ImageWidth + OffsetX,
                                (row * SampleHeight) / ImageHeight + OffsetY)
            // Work out the number of iterations
            select Generate(c, x => x * x + c).TakeWhile(x => x.SquareLength < 4)
                                              .Take(MaxIterations)
                                              .Count() into count
            // Map that to an appropriate byte value
            select (byte)(count == MaxIterations ? 0 : (count % 255) + 1);

(The various constants used in the expression are defined elsewhere.) This works, and puts the Mandelbrot logic directly into the query. However, I have to admit that it’s much slower than my earlier versions. Heck, I’m still proud of it though.

As ever, full source code is available for download, should you so wish.

A cautionary parallel tale: ordering isn’t simple

A little while ago, I wrote about my silly project to test Parallel LINQ – a Mandelbrot generator. In the last week, two things have happened to make this more of a reality. Firstly, the December CTP of Parallel FX has been released. Secondly, my old laptop died, “forcing” me to get a new laptop, which just happens to have a dual core processor.

So, it should just be a case of running it, right? Well, not quite. First let’s have a look at the query expression again, in its serial form:

 

var query = from row in Enumerable.Range(0, ImageHeight)
            from col in Enumerable.Range(0, ImageWidth)                                             
            select ComputeMandelbrotIndex(row, col);

 

And here’s what should be generated.

That’s the result of running without any parallelism, in other words. Now, I realised from the start that we would need ordering, but my first experiment was just to call AsParallel() to see what would happen:

 

var query = from row in Enumerable.Range(0, ImageHeight).AsParallel()
            from col in Enumerable.Range(0, ImageWidth)                                             
            select ComputeMandelbrotIndex(row, col);

 

As expected, that didn’t produce quite the result we wanted:

Well, that’s okay. I wanted to prove that ordering was necessary, and indeed that’s fairly obvious from the result. There are horizontal blocks, returned out of order. Easily fixable, right? After all, I posted what I thought would be the solution with the original post. We just need to give the appropriate option as a method parameter:

 

var query = from row in Enumerable.Range(0, ImageHeight).AsParallel(ParallelQueryOptions.PreserveOrdering)
            from col in Enumerable.Range(0, ImageWidth)                                             
            select ComputeMandelbrotIndex(row, col);

 

Not so fast. It certainly changed things, but not quite as hoped:

I haven’t yet analysed quite why we now have the rows in order but the columns out of order. However, I haven’t quite managed to fix it with the code in its original form. I have managed to fix it by reducing it from two (implicit) loops to one:

 

var query = from row in Enumerable.Range(0, ImageHeight).AsParallel(ParallelQueryOptions.PreserveOrdering)                                             
            select ComputeMandelbrotRow(row);

byte[] data = new byte[ImageHeight * ImageWidth];

int rowStart = 0;
foreach (byte[] row in query)
{
    Array.Copy(row, 0, data, rowStart, ImageWidth);
    rowStart += ImageWidth;
}

 

Instead of getting all the results in one go (with a call to ToArray()) we now have to reassemble all the data into a block. Still, it achieves the desired result. I should point out that PFX has better ways of doing this, Parallel.For being an obvious starting point from the little I know of it. At some point I’ll get round to reading about them, but at the moment life’s too short. I should also say that I don’t expect that any of the pictures indicate a bug in Parallel LINQ, merely my understanding of it.

Update: Explanation and a workaround

Having asked about this on the Connect forum for PFX, Igor Ostrovsky has explained that this is by design – currently only outer ordering is preserved when requested. The issue is still open, however – it’s possible that it will change before release.

In the meantime, Nicholas Palladinos has come up with an alternative solution which I rather like. I’ve refactored it a bit, but the basic idea is to turn the two sequences into one before parallelisation:

 

var points = from row in Enumerable.Range(0, ImageHeight)
             from col in Enumerable.Range(0, ImageWidth)
             select new { row, col };

var query = from point in points.AsParallel(ParallelQueryOptions.PreserveOrdering)                                              
            select ComputeMandelbrotIndex(point.row, point.col);

 

That works really well – in fact, more than twice as fast as the serial version, on my 2-core box!

LINQ to Silliness: Generating a Mandelbrot with parallel potential

I’ve been writing about LINQ recently, and in particular I’ve written a small amount about Parallel LINQ. (Don’t get excited – it’s only about a page, just to mention it as a sort of “meta-provider” for LINQ.) I was wondering what to use to demonstrate it – what general task can we perform which could take a lot of CPU?

Well, I used to be quite into fractals, and I’ve written Mandelbrot set generators in various languages. I hadn’t done it in C# before now, however. Calculating the colour of each pixel is completely independent of all the other pixels – it’s an “embarrassingly parallelizable” task. So, a great task for PLINQ. Here’s the “normal LINQ” code:

 

var query = from row in Enumerable.Range(0, ImageHeight)
            from col in Enumerable.Range(0, ImageWidth)
            select ComputeMandelbrotIndex(row, col);

byte[] data = query.ToArray();

Changing this into a parallel query is really simple – although we do need to preserve the ordering of the results:

var query = from row in Enumerable.Range(0, ImageHeight).AsParallel(ParallelQueryOptions.PreserveOrdering)
            from col in Enumerable.Range(0, ImageWidth)
            select ComputeMandelbrotIndex(row, col);

byte[] data = query.ToArray();

Without being able to actually use PLINQ yet, I can’t tell how awful the order preservation is – Joe warns that it’s costly, but we’ll see. This is on a pretty giant sequence of data, of course… An alternative would be to parallelize a row at a time, but that loses some of the purity of the solution. This is a very, very silly way of parallelizing the task, but it’s got a certain quirky appeal.

Of course, there’s then the code for ComputeMandelbrotIndex and displaying a bitmap from it – the full code is available for download (it’s a single C# file – just compile and run). Enjoy.

Update!

This blog post has been picked up by Nick Palladinos, who has written his own Parallel LINQ provider (much kudos for that – unfortunately for me the blog is in Greek, which I don’t understand). Apparently on a dual core processor the parallelised version of the Mandelbrot generator is indeed about twice as fast – it works! Unfortunately I can’t tell as my laptop only has a single core… it’s very exciting though :)