Reimplementing LINQ to Objects: Part 33 – Cast and OfType

More design decisions around optimization today, but possibly less controversial ones…

What are they?

Cast and OfType are somewhat unusual LINQ operators. They are extension methods, but they work on the non-generic IEnumerable type instead of the generic IEnumerable<T> type:

public static IEnumerable<TResult> Cast<TResult>(this IEnumerable source)
        
public static IEnumerable<TResult> OfType<TResult>(this IEnumerable source)

It’s worth mentioning what Cast and OfType are used for to start with. There are two main purposes:

  • Using a non-generic collection (such as a DataTable or an ArrayList) within a LINQ query (DataTable has the AsEnumerable extension method too)
  • Changing the type of a generic collection, usually to use a more specific type (e.g. you have a List<Person> but you’re confident they’re all actually Employee instances – or you only want to query against the Employee instances)

I can’t say that I use either operator terribly often, but if you’re starting off from a non-generic collection for whatever reason, these two are your only easy way to get "into" the LINQ world.
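
For example (a hypothetical snippet, not from the original post), Cast is all it takes to get from an ArrayList into the strongly typed world:

ArrayList list = new ArrayList { 1, 5, 3 };
// Cast<int> gives us an IEnumerable<int>, so the rest of LINQ is available
IEnumerable<int> asInts = list.Cast<int>();
int max = asInts.Max();   // 5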

Here’s a quick rundown of the behaviour they have in common:

  • The source parameter must not be null, and this is validated eagerly
  • It uses deferred execution: the input sequence is not read until the output sequence is
  • It streams its data – you can use it on arbitrarily-long sequences and the extra memory required will be constant (and small :)

Both operators effectively try to convert each element of the input sequence to the result type (TResult). When they’re successful, the results are equivalent (ignoring optimizations, which I’ll come to later). The operators differ in how they handle elements which aren’t of the result type.

Cast simply tries to cast each element to the result type. If the cast fails, it will throw an InvalidCastException in the normal way. OfType, however, sees whether each element is a value of the result type first – and ignores it if it’s not.

There’s one important case to consider where Cast will successfully return a value and OfType will ignore it: null references (with a nullable return type). In normal code, you can cast a null reference to any nullable type (whether that’s a reference type or a nullable value type). However, if you use the "is" C# operator with a null value, it will always return false. Cast and OfType follow the same rules, basically.
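
Here’s a quick example of the difference, in the style of the tests (and using the same AssertSequenceEqual helper they use); this snippet isn’t from the original post:

object[] source = { "first", null, "second" };
// Cast is happy to yield the null reference...
source.Cast<string>().AssertSequenceEqual("first", null, "second");
// ... but OfType skips it, because null "is" not a string
source.OfType<string>().AssertSequenceEqual("first", "second");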

It’s worth noting that (as of .NET 3.5 SP1) Cast and OfType only perform reference and unboxing conversions. They won’t convert a boxed int to a long, or execute user-defined conversions. Basically they follow the same rules as converting from object to a generic type parameter. (That’s very convenient for the implementation!) In the original implementation of .NET 3.5, I believe some other conversions were supported (in particular, I believe that the boxed int to long conversion would have worked). I haven’t even attempted to replicate the pre-SP1 behaviour. You can read more details in Ed Maurer’s blog post from 2008.

There’s one final aspect to discuss: optimization. If "source" already implements IEnumerable<TResult>, the Cast operator just returns the parameter directly, within the original method call. (In other words, this behaviour isn’t deferred.) Basically we know that every cast will succeed, so there’s no harm in returning the input sequence. This means you shouldn’t use Cast as an "isolation" call to protect your original data source, in the same way as we sometimes use Select with an identity projection. See Eric Lippert’s blog post on degenerate queries for more about protecting the original source of a query.
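
Here’s a quick hypothetical demonstration of why Cast doesn’t isolate the source: because the optimized path returns the original reference, a caller can simply cast back to the underlying collection and mutate it.

List<string> original = new List<string> { "x" };
IEnumerable<string> notReallyIsolated = original.Cast<string>();
// Cast just returned the list itself, so we can get back to it and change it...
((List<string>)notReallyIsolated).Add("sneaky");
// original is now { "x", "sneaky" }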

In the LINQ to Objects implementation, OfType never returns the source directly. It always uses an iterator. Most of the time, it’s probably right to do so. Just because something implements IEnumerable<string> doesn’t mean everything within it should be returned by OfType… because some elements may be null. The same is true of an IEnumerable<int?> – but not an IEnumerable<int>. For a non-nullable value type T, if source implements IEnumerable<T> then source.OfType<T>() will always contain the exact same sequence of elements as source. It does no more harm to return source from OfType() here than it does from Cast().

What are we going to test?

There are "obvious" tests for deferred execution and eager argument validation. Beyond that, I effectively have two types of test: ones which focus on whether the call returns the original argument, and ones which test the behaviour of iterating over the results (including whether or not an exception is thrown).

The iteration tests are generally not that interesting – in particular, they’re similar to tests we’ve got everywhere else. The "identity" tests are more interesting, because they show some differences between conversions that are allowed by the CLR and those allowed by C#. It’s obvious that an array of strings is going to be convertible to IEnumerable<string>, but a test like this might give you more pause for thought:

[Test]
public void OriginalSourceReturnedForInt32ArrayToUInt32SequenceConversion()
{
    IEnumerable enums = new int[10];
    Assert.AreSame(enums, enums.Cast<uint>());
}

That’s trying to "cast" an int[] to an IEnumerable<uint>. If you try the same in normal C# code, it will fail – although if you cast it to "object" first (to distract the compiler, as it were) it’s fine at both compile time and execution time:

int[] ints = new int[10];
// Fails with CS0030
IEnumerable<uint> uints = (IEnumerable<uint>) ints;
        
// Succeeds at execution time
IEnumerable<uint> uints = (IEnumerable<uint>)(object) ints;

We can have a bit more fun at the compiler’s expense, and note its arrogance:

int[] ints = new int[10];
        
if (ints is IEnumerable<uint>)
{
    Console.WriteLine("This won’t be printed");
}
if (((object) ints) is IEnumerable<uint>)
{
    Console.WriteLine("This will be printed");
}

This generates a warning for the first block "The given expression is never of the provided (…) type" and the compiler has the cheek to remove the block entirely… despite the fact that it would have worked if only it had been emitted as code.

Now, I’m not really trying to have a dig at the C# team here – the compiler is actually acting entirely reasonably within the rules of C#. It’s just that the CLR has subtly different rules around conversions – so when the compiler makes a prediction about what would happen with a particular cast or "is" test, it can be wrong. I don’t think this has ever bitten me as an issue, but it’s quite fun to watch. As well as this signed/unsigned difference, there are similar conversions between arrays of enums and their underlying types.

There’s another type of conversion which is interesting:

[Test]
public void OriginalSourceReturnedDueToGenericCovariance()
{
    IEnumerable strings = new List<string>();
    Assert.AreSame(strings, strings.Cast<object>());
}

This takes advantage of the generic variance introduced in .NET 4 – sort of. There is now a reference conversion from List<string> to IEnumerable<object> which wouldn’t have worked in .NET 3.5. However, this isn’t due to the fact that C# 4 now knows about variance; the compiler isn’t verifying the conversion here, after all. It isn’t due to a new feature in the CLRv4 – generic variance for interfaces and delegates has been present since generics were introduced in CLRv2. It’s only due to the change in the IEnumerable<T> type, which has become IEnumerable<out T> in .NET 4. If you could make the same change to the standard library used in .NET 3.5, I believe the test above would pass. (It’s possible that the precise CLR rules for variance changed between CLRv2 and CLRv4 – I don’t think this variance was widely used before .NET 4, so the risk of it being a problematically-breaking change would have been slim.)

In addition to all these functional tests, I’ve included a couple of tests to show that the compiler uses Cast in query expressions if you give a range variable an explicit type. This works for both "from" and "join":

[Test]
public void CastWithFrom()
{
    IEnumerable strings = new[] { "first", "second", "third" };
    var query = from string x in strings
                select x;
    query.AssertSequenceEqual("first", "second", "third");
}

[Test]
public void CastWithJoin()
{
    var ints = Enumerable.Range(0, 10);
    IEnumerable strings = new[] { "first", "second", "third" };
    var query = from x in ints
                join string y in strings on x equals y.Length
                select x + ":" + y;
    query.AssertSequenceEqual("5:first", "5:third", "6:second");
}

Note how the compile-time type of "strings" is just IEnumerable in both cases. We couldn’t use this in a query expression normally, because LINQ requires generic sequences – but by giving the range variables explicit types, the compiler has inserted a call to Cast which makes the rest of the translation work.
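
For the sake of concreteness, here’s a hand-translated sketch of what those queries are equivalent to in method syntax. (This is my translation rather than the exact compiler output – in particular the compiler may or may not emit a degenerate Select call for the first query – but the results are the same.)

// from string x in strings select x
var fromQuery = strings.Cast<string>();

// join string y in strings on x equals y.Length select x + ":" + y
var joinQuery = ints.Join(strings.Cast<string>(),
                          x => x,
                          y => y.Length,
                          (x, y) => x + ":" + y);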

Let’s implement them!

The "eager argument validation, deferred sequence reading" mode of Cast and OfType means we’ll use the familiar approach of a non-iterator-block public method which finally calls an iterator block if it gets that far. This time, however, the optimization occurs within the public method. Here’s Cast, to start with:

public static IEnumerable<TResult> Cast<TResult>(this IEnumerable source)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    IEnumerable<TResult> existingSequence = source as IEnumerable<TResult>;
    if (existingSequence != null)
    {
        return existingSequence;
    }
    return CastImpl<TResult>(source);
}

private static IEnumerable<TResult> CastImpl<TResult>(IEnumerable source)
{
    foreach (object item in source)
    {
        yield return (TResult) item;
    }
}

We’re using the normal as/null-test to check whether we can just return the source directly, and in the loop we’re casting. We could have made the iterator block very slightly shorter here, using the behaviour of foreach to our advantage:

foreach (TResult item in source)
{
    yield return item;
}

Yikes! Where’s the cast gone? How can this possibly work? Well, the cast is still there – it’s just been inserted automatically by the compiler. It’s the invisible cast that was present in almost every foreach loop in C# 1. The fact that it is invisible is the reason I’ve chosen the previous version. The point of the method is to cast each element – so it’s pretty important to make the cast as obvious as possible.
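
Just to make the hidden work explicit, here’s roughly how the compiler expands that shorter foreach statement. (This is a simplified sketch – the real expansion also disposes of the enumerator, via a finally block, if it implements IDisposable.)

IEnumerator iterator = source.GetEnumerator();
while (iterator.MoveNext())
{
    // Here's the "invisible" cast the compiler inserts for us
    TResult item = (TResult) iterator.Current;
    yield return item;
}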

So that’s Cast. Now for OfType. First let’s look at the public entry point:

public static IEnumerable<TResult> OfType<TResult>(this IEnumerable source)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    if (default(TResult) != null)
    {
        IEnumerable<TResult> existingSequence = source as IEnumerable<TResult>;
        if (existingSequence != null)
        {
            return existingSequence;
        }
    }
    return OfTypeImpl<TResult>(source);
}

This is almost the same as Cast, but with the additional test of "default(TResult) != null" before we check whether the input sequence is an IEnumerable<TResult>. That’s a simple way of asking, "Is TResult a non-nullable value type?" I don’t know for sure, but I’d hope that when the JIT compiler looks at this method, it can wipe out the test entirely – either removing the body of the if statement completely for nullable value types and reference types, or executing the body unconditionally for non-nullable value types. It really doesn’t matter if the JIT doesn’t do this, but one day I may get up the courage to tackle this with cordbg and find out for sure… but not tonight.

Once we’ve decided we’ve got to iterate over the results ourselves, the iterator block method is quite simple:

private static IEnumerable<TResult> OfTypeImpl<TResult>(IEnumerable source)
{
    foreach (object item in source)
    {
        if (item is TResult)
        {
            yield return (TResult) item;
        }
    }
}

Note that we can’t use the "as and check for null" test here, because we don’t know that TResult is a nullable type. I was tempted to try to write two versions of this code – one for reference types and one for value types. (I’ve found before that using "as and check for null" is really slow for nullable value types. That may change, of course.) However, that would be quite tricky and I’m not convinced it would have much impact. I did a quick test yesterday testing whether an "object" was actually a "string", and the is+cast approach seemed just as good. I suspect that may be because string is a sealed class, however… testing for an interface or a non-sealed class may be more expensive. Either way, it would be premature to write a complicated optimization without testing first.
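
For the curious, here’s a sketch of what the reference-type-only version might have looked like if I had split the implementation by constraint. (This is purely hypothetical – it isn’t in Edulinq.)

private static IEnumerable<TResult> OfTypeReferenceImpl<TResult>(IEnumerable source)
    where TResult : class
{
    foreach (object item in source)
    {
        // "as" plus a null check - only valid because of the class constraint
        TResult result = item as TResult;
        if (result != null)
        {
            yield return result;
        }
    }
}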

Conclusion

It’s not clear to me why Microsoft optimizes Cast but not OfType. There’s a possibility that I’ve missed a reason why OfType shouldn’t be optimized even for a sequence of non-nullable value type values – if you can think of one, please point it out in the comments. My immediate objection would be that it "reveals" the source of the query… but as we’ve seen, Cast already does that sometimes, so I don’t think that theory holds.

Other than that decision, the rest of the implementation of these operators has been pretty plain sailing. It did give us a quick glimpse into the difference between the conversions that the CLR allows and the ones that the C# specification allows though, and that’s always fun.

Next up – SequenceEqual.

Reimplementing LINQ to Objects: Part 32 – Contains

After the dubious optimizations of ElementAt/ElementAtOrDefault yesterday, we meet an operator which is remarkably good at defying optimization. Sort of. Depending on how you feel it should behave.

What is it?

Contains has two overloads, which only differ by whether or not they take an equality comparer – just like Distinct, Intersect and the like:

public static bool Contains<TSource>(
    this IEnumerable<TSource> source,
    TSource value)

public static bool Contains<TSource>(
    this IEnumerable<TSource> source,
    TSource value,
    IEqualityComparer<TSource> comparer)

The operator simply returns a Boolean indicating whether or not "value" was found in "source". The salient points of its behaviour should be predictable now:

  • It uses immediate execution (as it’s returning a simple value instead of a sequence)
  • The source parameter cannot be null, and is validated immediately
  • The value parameter can be null: it’s valid to search for a null value within a sequence
  • The comparer parameter can be null, which is equivalent to passing in EqualityComparer<TSource>.Default.
  • The overload without a comparer uses the default equality comparer too.
  • If a match is found, the method returns immediately without reading the rest of the input sequence.
  • There’s a documented optimization for ICollection<T> – but there are significant issues with it…

So far, so good.

What are we going to test?

Aside from argument validation, I have tests for the value being present in the source, and it not being present in the source… for the three options of "no comparer", "null comparer" and "specific comparer".

I then have one final test to validate that we return as soon as we’ve found a match, by giving a query which will blow up when the element after the match is computed.

Frankly none of the tests are earth-shattering, but in the spirit of giving you an idea of what they’re like, here’s one with a custom comparer – we use the same source and value for a "default comparer" test which doesn’t find the value as the case differs:

[Test]
public void MatchWithCustomComparer()
{
    // Default equality comparer is ordinal
    string[] source = { "foo", "bar", "baz" };
    Assert.IsTrue(source.Contains("BAR", StringComparer.OrdinalIgnoreCase));
}

Currently I don’t have a test for the optimization mentioned in the bullet points above, as I believe it’s broken. More later.

Let’s implement it!

To start with, let’s dispense with the overload without a comparer parameter: that just delegates to the other one by specifying EqualityComparer<TSource>.Default. Trivial. (Or so we might think. There’s more to this than meets the eye.)
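
As a sketch (ignoring, for now, the ICollection<T> wrinkle discussed below), the comparer-less overload can be as simple as this:

public static bool Contains<TSource>(
    this IEnumerable<TSource> source,
    TSource value)
{
    return source.Contains(value, EqualityComparer<TSource>.Default);
}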

I’ve got three implementations, but we’ll start with just two of them. Which one you pick would depend on whether you’re happy to use one operator to implement another. If you think that’s okay, it’s really simple:

public static bool Contains<TSource>(
    this IEnumerable<TSource> source,
    TSource value,
    IEqualityComparer<TSource> comparer)
{
    comparer = comparer ?? EqualityComparer<TSource>.Default;
    return source.Any(item => comparer.Equals(value, item));
}

"Any" has exactly the traits we want, including validation of the non-nullity of "source". It’s hardly complicated code if we don’t use Any though:

public static bool Contains<TSource>(
    this IEnumerable<TSource> source,
    TSource value,
    IEqualityComparer<TSource> comparer)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    comparer = comparer ?? EqualityComparer<TSource>.Default;
    foreach (TSource item in source)
    {
        if (comparer.Equals(value, item))
        {
            return true;
        }
    }
    return false;
}

Obviously there’s a slight penalty in using Any just because of executing a delegate on each iteration – and the extra memory requirement of building an object to capture the comparer. I haven’t measured the performance impact of this – again, it’s a candidate for benchmarking.

Can’t we optimize? (And why does LINQ to Objects think it can?)

The implementations above are all very well, but they feel ever so simplistic. With ElementAt, we were able to take advantage of the fact that an IList<T> allows us random access by index. Surely we’ve got similar collections which allow us to test for containment cheaply?

Well, yes and no. We’ve got IDictionary<TKey, TValue> which allows you to check for the presence of a particular key – but even it would be hard to test whether the sequence we’re looking at is the key sequence for some IDictionary<TSource, TValue>, and somehow get back to the dictionary.

ICollection<T> has a Contains method, but that doesn’t necessarily do the right thing. This is particularly troubling, as the MSDN documentation for the comparer-less overload has contradictory information:

(Summary)

Determines whether a sequence contains a specified element by using the default equality comparer.

(Remarks)

If the type of source implements ICollection<T>, the Contains method in that implementation is invoked to obtain the result. Otherwise, this method determines whether source contains the specified element.

Enumeration is terminated as soon as a matching element is found.

Elements are compared to the specified value by using the default equality comparer, Default.

Why is this troubling? Well, let’s look at a test:

[Test]
public void SetWithDifferentComparer()
{
    HashSet<string> sourceAsSet = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        { "foo", "bar", "baz" };
    IEnumerable<string> sourceAsSequence = sourceAsSet;
    Assert.IsTrue(sourceAsSet.Contains("BAR"));
    Assert.IsFalse(sourceAsSequence.Contains("BAR"));
    Assert.IsFalse(sourceAsSequence.Contains("BAR", StringComparer.Ordinal));
}

(This exact code won’t build in the Edulinq project configuration, as that doesn’t have a reference to the System.Core assembly which contains HashSet<T>. I’ve got a hack which allows me to run effectively this code though. See the source for details.)

Now this test looks correct to me: while we’re regarding the set as a set, it should use the set’s comparer and find "BAR" with a case-insensitive match. However, when we use it as a sequence in LINQ, it should obey the rules of Enumerable.Contains – which means that the middle call should use the default equality comparer for string. Under that equality comparer, "BAR" isn’t present.

It doesn’t: the above test fails on that middle call in LINQ to Objects, because HashSet<T> implements ICollection<T>. To fit in with the implementation, the documentation summary should actually be worded as something like:

"Determines whether a sequence contains a specified element by using the default equality comparer if the sequence doesn’t implement ICollection<T>, or whatever equality comparison the collection uses if it does implement ICollection<T>."

Now you may be saying to yourself that this is only like relying on IList<T> to fetch an item by index in a fashion consistent with iterating over with it – but I’d argue that any IList<T> implementation which didn’t do that was simply broken… whereas ICollection<T>.Contains is specifically documented to allow custom comparisons:

Implementations can vary in how they determine equality of objects; for example, List<T> uses Comparer<T>.Default, whereas Dictionary<TKey, TValue> allows the user to specify the IComparer<T> implementation to use for comparing keys.

Let’s leave aside the fact that those "Comparer<T>" and "IComparer<T>" should be "EqualityComparer<T>" and "IEqualityComparer<T>" respectively for the minute, and just note that it’s entirely reasonable for an implementation not to use the default equality comparer. That makes sense – but I believe it also makes sense for source.Contains(value) to be more predictable in terms of the equality comparer it uses.

Now I would certainly agree that having a method call which changes semantics based on whether the compile-time type of the source is IEnumerable<T> or ICollection<T> is undesirable too… but I’m not sure there is any particularly nice solution. The options are:

  • The current LINQ to Objects implementation where the comparer used is hard to predict.
  • The Edulinq implementation where the type’s default comparer is always used… if the compile-time type means that Enumerable.Contains is used in the first place.
  • Remove the comparer-less overload entirely, and force people to specify one. This is lousy for convenience.

Note that you might expect the overload which takes a comparer to work the same way if you pass in null as the comparer – but it doesn’t. That overload never delegates to ICollection<T>.Contains.

So: convenience, predictability, consistency. Pick any two. Isn’t API design fun? This isn’t even thinking about performance, of course…

It’s worth bearing in mind that even the current behaviour which is presumably meant to encourage consistency doesn’t work. One might expect that the following would always be equivalent for any sensible collection:

var first = source.Contains(value);
var second = source.Select(x => x).Contains(value);

… but of course the second line will always use EqualityComparer<T>.Default whereas the first may or may not.

(Just for fun, think about Dictionary<TKey, TValue> which implements ICollection<KeyValuePair<TKey, TValue>>; its explicitly-implemented ICollection<T>.Contains method will use its own equality comparer for the key, but the default equality comparer for the value part of the pair. Yay!)
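
Here’s a hypothetical illustration of that last point, just to show how odd it is:

var dictionary = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase)
{
    { "foo", 10 }
};
ICollection<KeyValuePair<string, int>> pairs = dictionary;
// The key is compared with the dictionary's own (case-insensitive) comparer...
bool found = pairs.Contains(new KeyValuePair<string, int>("FOO", 10));      // true
// ... but the value is compared with the default comparer
bool notFound = pairs.Contains(new KeyValuePair<string, int>("FOO", 20));   // false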

So can we really not optimize?

I can think of exactly one situation which we could legitimately optimize without making the behaviour hard to predict. Basically we’re fine to ask the collection to do our work for us if we can guarantee it will use the right comparer. Ironically, List<T>.Contains has an overload which allows us to specify the equality comparer, so we could delegate to that – but it’s not going to be significantly faster than doing it ourselves. It’s still got to look through everything.

ISet<T> in .NET 4 doesn’t help us much – its API doesn’t talk about equality comparers. (This makes a certain amount of sense – consider SortedSet<T> which uses IComparer<T> instead of IEqualityComparer<T>. It wouldn’t make sense to ask a SortedSet<T> for an equality comparer – it couldn’t give you one, as it wouldn’t know how to produce a hash code.)

However, HashSet<T> does give us something to work with. You can ask a HashSet<T> which equality comparer it uses, so we could delegate to its implementation if and only if it would use the one we’re interested in. We can bolt that into our existing implementation pretty easily, after we’ve worked out the comparer to use:

HashSet<TSource> hashSet = source as HashSet<TSource>;
if (hashSet != null && comparer.Equals(hashSet.Comparer))
{
    return hashSet.Contains(value);
}

So is this worth including or not?

Pros:

  • It covers one of the biggest use cases for optimizing Contains; I suspect this is used more often than the LINQ implementation of Contains working over a dictionary.
  • So long as the comparer doesn’t override Equals in a bizarre way, it should be a true optimization with no difference in behaviour.
  • The optimization is applied for both overloads of Enumerable.Contains, not just the comparer-less one.

Cons:

  • It’s specific to HashSet<T> rather than an interface type. That makes it feel a little too specific to be a good target of optimization.
  • We’ve still got the issue of consistency in terms of sourceAsSet.Contains(value) vs sourceAsSequence.Contains(value)
  • There’s a tiny bit of overhead if the source isn’t a hash set, and a further overhead if it is a hash set but with the wrong comparer. I’m not too bothered about this.

It’s not the default implementation in Edulinq at the moment, but I could possibly be persuaded to include it. Likewise I have a conditionally-compiled version of Contains which is compatible with LINQ to Objects, with the "broken" optimization for the comparer-less overload; this is turned off by default too.

Conclusion

Gosh! I hadn’t expected Contains to be nearly this interesting. I’d worked out that optimization would be a pain, but I hadn’t expected it to be such a weird design choice.

This is the first time I’ve deliberately gone against the LINQ to Objects behaviour, other than the MS bug around descending orderings using "extreme" comparers. The option for compatibility is there, but I feel fairly strongly that this was a bad design decision on Microsoft’s part. A bad decision out of some fairly unpleasant alternatives, I grant you. I’m willing to be persuaded of its virtues, of course – and in particular I’d welcome discussion with the LINQ team around this. In particular, it’s always fun to hear about the history of design decisions.

Next up, Cast and OfType.

Reimplementing LINQ to Objects: Part 31 – ElementAt / ElementAtOrDefault

A nice easy pair of operators tonight. I should possibly have covered them at the same time as First/Last/Single and the OrDefault variants, but never mind…

What are they?

ElementAt and ElementAtOrDefault have a single overload each:

public static TSource ElementAt<TSource>(
    this IEnumerable<TSource> source,
    int index)

public static TSource ElementAtOrDefault<TSource>(
    this IEnumerable<TSource> source,
    int index)

Isn’t that blissfully simple after the overload storm of the past few days?

The two operators work in very similar ways:

  • They use immediate execution.
  • The source parameter must not be null, and this is validated immediately.
  • They return the element at the specified zero-based index, if it’s in the range 0 <= index < count.

The methods only differ in their handling of an index which falls outside the given bound. ElementAt will throw an ArgumentOutOfRangeException; ElementAtOrDefault will return the default value for TSource (e.g. 0, null, false). This is true even if index is negative. You might have expected some way to specify the default value to return if the index is out of bounds, but there isn’t one. (This is consistent with FirstOrDefault() and so on, but not with Nullable<T>.GetValueOrDefault())
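
A quick hypothetical illustration of the difference:

int[] source = { 10, 20, 30 };
int a = source.ElementAt(1);           // 20
int b = source.ElementAtOrDefault(1);  // 20
int c = source.ElementAtOrDefault(5);  // 0, i.e. default(int)
int d = source.ElementAtOrDefault(-1); // 0 again - a negative index doesn't throw
int e = source.ElementAt(5);           // throws ArgumentOutOfRangeException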

This behaviour leaves us some room for common code – for once I haven’t used cut and paste for the implementation. Anyway, I’m getting ahead of myself.

What are we going to test?

As you can imagine, my tests for the two operators are identical except for the expected result in the case of the index being out of range. I’ve tested the following cases:

  • Null source
  • A negative index
  • An index which is too big on a NonEnumerableCollection
  • An index which is too big on a NonEnumerableList
  • An index which is too big on a lazy sequence (using Enumerable.Range)
  • A valid index in a NonEnumerableList
  • A valid index in a lazy sequence

The "non-enumerable" list and collection are to test that the optimizations we’re going to perform are working. In fact, the NonEnumerableCollection test fails on LINQ to Objects – it’s only optimized for IList<T>. You’ll see what I mean in a minute… and why that might not be a bad thing.

None of the tests are very interesting, to be honest.

Let’s implement it!

As I mentioned earlier, I’ve used some common code for once (although I admit the first implementation used cut and paste). As the only difference between the two methods is the handling of a particular kind of failure, I’ve used the TryXXX pattern which exists elsewhere in the framework. There’s a common method which tries to retrieve the right element as an out parameter, and indicates whether or not it succeeded via the return value. Not every kind of failure is just returned, of course – we want to throw an ArgumentNullException if source is null in either case.

That leaves our public methods looking quite straightforward:

public static TSource ElementAt<TSource>(
    this IEnumerable<TSource> source,
    int index)
{
    TSource ret;
    if (!TryElementAt(source, index, out ret))
    {
        throw new ArgumentOutOfRangeException("index");
    }
    return ret;
}

public static TSource ElementAtOrDefault<TSource>(
    this IEnumerable<TSource> source,
    int index)
{
    TSource ret;
    // We don’t care about the return value – ret will be default(TSource) if it’s false
    TryElementAt(source, index, out ret);
    return ret;
}

TryElementAt will only return false if the index is out of bounds, so the exception is always appropriate. However, there is a disadvantage to this approach: we can’t easily indicate in the exception message whether index was too large or negative. We could have specified a message which included the value of index itself, of course. I think it’s a minor matter either way, to be honest.

The main body of the code is in TryElementAt, obviously. It would actually be very simple – just looping and counting up to index, checking as we went – except there are two potential optimizations.

The most obvious – and most profitable – optimization is if the collection implements IList<T>. If it does, we can efficiently obtain the count using the ICollection<T>.Count property (don’t forget that IList<T> extends ICollection<T>), check that it’s not too big, and then use the indexer from IList<T> to get straight to the right element. Brilliant! That’s a clear win.

The less clear optimization is if the collection implements ICollection<T> but not IList<T>, or if it only implements the nongeneric ICollection. In those cases we can still get at the count – but we can’t then get directly to the right element. In other words, we can optimize the failure case (possibly hugely), but at a very slight cost – the cost of checking whether the sequence implements either interface – for the success case, where the check won’t do us any good.

This is the sort of optimization which is impossible to judge without real data. How often are these operators called with an invalid index? How often does that happen on a collection which implements ICollection<T> but not IList<T> (or implements ICollection)? How large are those collections (so how long would it take to have found our error the normal way)? What’s the cost of performing the type check? I don’t have the answers to any of these questions. I don’t even have strong suspicions. I know that Microsoft doesn’t use the same optimization, but I don’t know whether that was due to hard data or a gut feeling.

For the moment, I’ve kept all the optimizations. Here’s the code:

private static bool TryElementAt<TSource>(
    IEnumerable<TSource> source,
    int index,
    out TSource element)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    element = default(TSource);
    if (index < 0)
    {
        return false;
    }
    ICollection<TSource> collection = source as ICollection<TSource>;
    if (collection != null)
    {
        int count = collection.Count;
        if (index >= count)
        {
            return false;
        }
        // If it’s a list, we know we’re okay now – just return directly…
        IList<TSource> list = source as IList<TSource>;
        if (list != null)
        {
            element = list[index];
            return true;
        }
    }

    ICollection nonGenericCollection = source as ICollection;
    if (nonGenericCollection != null)
    {
        int count = nonGenericCollection.Count;
        if (index >= count)
        {
            return false;
        }
    }
    // We don’t need to fetch the current value each time – get to the right
    // place first.
    using (IEnumerator<TSource> iterator = source.GetEnumerator())
    {
        // Note use of -1 so that we start off by moving onto element 0.
        // Don’t want to use i <= index in case index == int.MaxValue!
        for (int i = -1; i < index; i++)
        {
            if (!iterator.MoveNext())
            {
                return false;
            }
        }
        element = iterator.Current;
        return true;
    }
}

As you can see, the optimized cases actually form the bulk of the code – part of me thinks it would be worth removing the non-IList<T> optimizations just for clarity and brevity.

It’s worth looking at the "slow" case where we actually iterate. The for loop looks odd, until you think that to get at element 0, you have to call MoveNext() once. We don’t want to just add one to index or use a less-than-or-equal condition: both of those would fail in the case where index is int.MaxValue; we’d either not loop at all (by incrementing index and it overflowing either causing an exception or becoming negative) or we’d loop forever, as every int is less than or equal to int.MaxValue.

Another way to look at it is that the loop counter ("i") is the "current index" within the iterator: the iterator starts before the first element, so it’s reasonable to start at -1.

The reason I’m drawing attention to this is that I got all of this wrong first time… and was very grateful for unit tests to catch me out.

Conclusion

For me, the most interesting part of ElementAt is the decision about optimization. I’m sure I’m not the only one who optimizes without data at times – but it’s a dangerous thing to do. The problem is that this isn’t the normal micro-optimization quandary of "it’s always a tiny bit better, but it’s probably insignificant and makes the code harder to read". For the cases where this is faster, it could make an enormous difference – asking for element one million of a linked list which doesn’t quite have enough elements could be very painful. But do failure cases need to be fast? How common are they? As you can tell, I’m dithering. I think it’s at least worth thinking about what optimizations might make a difference – even if we later remove them.

Next time, I think I’ll tackle Contains – an operator which you might expect to be really fast on a HashSet<T>, but which has some interesting problems of its own…

Reimplementing LINQ to Objects: Part 30 – Average

This is the final aggregation operator, after which I suspect we won’t need to worry about floating point difficulties any more. Between this and the unexpected behaviour of Comparer<string>.Default, I’ve covered two of my "big three" pain points. It’s hard to see how I could get dates and times into Edulinq naturally; it’s even harder to see how time zones could cause problems. I’ve still got a few operators to go though, so you never know…

What is it?

Average has 20 overloads, all like the following but for long, decimal, float and double as well as int:

public static double Average(this IEnumerable<int> source)

public static double Average<TSource>(
    this IEnumerable<TSource> source,
    Func<TSource, int> selector)

public static double? Average(this IEnumerable<int?> source)

public static double? Average<TSource>(
    this IEnumerable<TSource> source,
    Func<TSource, int?> selector)

The operators acting on float sequences return float, and likewise the operators acting on decimal sequences return decimal, with the same equivalent nullable types for the nullable sequences.

As before (for Min/Max/Sum), the overloads which take a selector are equivalent to just applying that selector to each element in the sequence.

General behaviour – pretty much as you’d expect, I suspect:

  • Each operator calculates the arithmetic mean of a sequence of values.
  • source and selector can’t be null, and are validated immediately.
  • The operators all use immediate execution.
  • The operators all iterate over the entire input sequence, unless an exception is thrown (e.g. due to overflow).
  • The operators with a non-nullable return type throw InvalidOperationException if the input sequence is empty.
  • The operators with a nullable return type ignore any null input values, and return null if the input sequence is empty or contains only null values. If non-null values are present, the return value will be non-null.

It all sounds pretty simple, doesn’t it? We just sum the numbers, and divide by the count. It’s not too complicated, but we have a couple of things to consider:

  • How should we count items – which data type should we use? Do we need to cope with more than int.MaxValue elements?
  • How should we sum items? Should we be able to find the average of { int.MaxValue, int.MaxValue } even though the sum clearly overflows the bounds of int?

Given the behaviour of my tests, I believe I’ve made the same decisions as LINQ to Objects. I use a long for the counter, always. I use a long total for the int/long overloads, a double total for the float/double overloads, and a decimal total for the decimal overloads. These aren’t particularly tricky decisions once you’ve realised that you need to make them, but it would be very easy to implement the operators in a simplistic way without thinking about such things. (I’d probably have done so if it weren’t for the comments around Sum this morning.)
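
As a quick hypothetical example of the overflow question above, the long accumulator is what lets this return the obviously-correct answer instead of overflowing:

int[] values = { int.MaxValue, int.MaxValue };
// The running total (4294967294) doesn't fit in an int, but it fits in a long
double average = values.Average();   // 2147483647.0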

What are we going to test?

I’ve only got in-depth tests for the int overloads, covering:

  • Argument validation
  • Empty sequences for nullable and non-nullable types
  • Sequences with only null values
  • Sequences of perfectly normal values :)
  • Projections for all the above
  • A sequence with just over int.MaxValue elements, to test we can count properly

Then I have a few extra tests for interesting situations. First I check the overflow behaviour of each type, using a common pattern of averaging a sequence of (max, max, -max, -max) where "max" is the maximum value for the sequence type. The results are:

  • For int we get the correct result of 0 because we’re accumulating over longs
  • For long we get an OverflowException when it tries to add the first two values together
  • For float we get the correct result of 0 because we’re accumulating over doubles
  • For double we get PositiveInfinity because that’s the result of the first addition
  • For decimal we get an OverflowException when it tries to add the first two values together

Additionally, I have a couple of floating-point-specific tests: namely further proof that we use a double accumulator when averaging floats, and the behaviour of Average in the presence of NaN values:

[Test]
public void SingleUsesDoubleAccumulator()
{
    // All the values in the array are exactly representable as floats,
    // as is the correct average… but intermediate totals aren’t.
    float[] array = { 20000000f, 1f, 1f, 2f };
    Assert.AreEqual(5000001f, array.Average());
}

[Test]
public void SequenceContainingNan()
{
    double[] array = { 1, 2, 3, double.NaN, 4, 5, 6 };
    Assert.IsNaN(array.Average());
}

I’m sure someone can think of some other interesting scenarios I should be considering :)

Let’s implement it!

This is another cut-and-paste job, but with more editing required – for each method, I needed to make sure I was using the right accumulator type, and I occasionally removed redundant casts. Still, the code follows pretty much the same pattern for all types. Here’s the int implementation:

public static double Average(this IEnumerable<int> source)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    checked
    {
        long count = 0;
        long total = 0;
        foreach (int item in source)
        {
            total += item;
            count++;
        }
        if (count == 0)
        {
            throw new InvalidOperationException("Sequence was empty");
        }
        return (double)total / (double)count;
    }
}

public static double Average<TSource>(
    this IEnumerable<TSource> source,
    Func<TSource, int> selector)
{
    return source.Select(selector).Average();
}

public static double? Average(this IEnumerable<int?> source)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    checked
    {
        long count = 0;
        long total = 0;
        foreach (int? item in source)
        {
            if (item != null)
            {
                count++;
                total += item.Value;
            }
        }
        return count == 0 ? (double?)null : (double)total / (double)count;
    }
}

public static double? Average<TSource>(
    this IEnumerable<TSource> source,
    Func<TSource, int?> selector)
{
    return source.Select(selector).Average();
}

Salient points:

  • Again I’m using Select to make the implementation of the overloads with selectors trivial
  • I’ve cast both operands of the division when calculating the average, just for clarity. We could get away with either of them.
  • In the case of the conditional operator, I could actually just cast one of the division operands to "double?" and then remove both of the other casts… again, I feel this version is clearer. (I could change my mind tomorrow, mind you…)
  • I’ve explicitly used checked blocks for int and long. For float and double we won’t get overflow anyway, and for decimal the checked/unchecked context is irrelevant.

There’s one optimization we can perform here. Consider this loop, for the nullable sequence:

long count = 0;
long total = 0;
foreach (int? item in source)
{
    if (item != null)
    {
        count++;
        total += item.Value; // This line can be optimized…
    }
}

The line I’ve highlighted seems perfectly reasonable, right? We’re trying to add the "real" non-null value wrapped inside the nullable value, and we know that there is a real value, because we’ve already checked that it isn’t the null value.

Now think about what the Value property actually does… it checks whether or not it’s the null value, and then returns the real value or throws an exception. But we know it won’t throw an exception, because we’ve checked it. We just want to get at the value – don’t bother with any more checks. That’s exactly what GetValueOrDefault() does. In the case where the value is non-null, GetValueOrDefault() and the Value property obviously do the same thing – but intuition tells me that GetValueOrDefault() can do it quicker, because it doesn’t actually need to check anything. It can just return the value of the underlying field – which will be the default value of the underlying type for a null wrapper value anyway.

I’ve benchmarked this, and on my laptop it’s about 5% faster than using Value. But… it’s such a grotty hack. I would feel dirty putting it in. Surely Value is the more readable code here – it just happens to be slower. As always, I’m undecided. There’s no behavioural difference, just a slight speed boost. Thoughts, folks?
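
For reference, here’s what the "grotty hack" version of the loop would look like – behaviourally identical, just avoiding the redundant check. (It’s not currently in Edulinq.)

long count = 0;
long total = 0;
foreach (int? item in source)
{
    if (item != null)
    {
        count++;
        // GetValueOrDefault() skips the null check that Value performs
        total += item.GetValueOrDefault();
    }
}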

Conclusion

I’m quite pleased to be shot of the Aggregate Operators Of Overload Doom. I’ve felt for a while that they’ve been hanging over me – I knew they’d be annoying in terms of cut and paste, but there’s been more interesting situations to consider than I’d expected.

There’s not a lot left now. According to my previous list, I’ve got:

  • Cast and OfType
  • ElementAt and ElementAtOrDefault
  • SequenceEqual
  • Zip (from .NET 4)
  • Contains

However, that doesn’t include AsEnumerable and AsQueryable. I’m unsure at the moment what I’m doing with those… AsEnumerable is trivial, and probably worth doing… AsQueryable could prove interesting in terms of testing, as it requires expression trees (which are in System.Core; a library I’m not referencing from tests when testing the Edulinq implementation). I’ll play around and see what happens :)

Not sure what I’ll implement next, to be honest… we’ll see tomorrow!

Reimplementing LINQ to Objects: Part 29 – Min/Max

The second and third AOOOD operators today… if I’m brave enough to tackle Average tomorrow, I’ll have done them all. More surprises here today, this time in terms of documentation…

What are they?

Min and Max are both extension methods with 22 overloads each. Min looks like this:

public static int Min(this IEnumerable<int> source)

public static int Min<TSource>(
    this IEnumerable<TSource> source,
    Func<TSource, int> selector)

public static int? Min(this IEnumerable<int?> source)

public static int? Min<TSource>(
    this IEnumerable<TSource> source,
    Func<TSource, int?> selector)

// Repeat the above four overloads for long, float, double and decimal,
// then add two more generic ones:

public static TSource Min<TSource>(this IEnumerable<TSource> source)

public static TResult Min<TSource, TResult>(
    this IEnumerable<TSource> source,
    Func<TSource, TResult> selector
)

(Max is exactly the same as Min; just replace the name.)

The more obvious aspects of the behaviour are as follows:

  • source and selector mustn’t be null
  • All overloads use immediate execution
  • The minimum or maximum value within the sequence is returned
  • If a selector is present, it is applied to each value within source, and the minimum or maximum of the projected values is returned. (Note how the return type of these methods is TResult, not TSource.)

Some less obvious aspects – in all cases referring to the result type (as the source type is somewhat incidental when a selector is present; it doesn’t affect the behaviour):

  • The type’s IComparable<T> implementation is used when available, otherwise IComparable is used. An ArgumentException is thrown if values can’t be compared. Fortunately, this is exactly the behaviour of Comparer<T>.Default.
  • For any nullable type (whether it’s a reference type or a nullable value type), nulls within the sequence are ignored, and an empty sequence (or one which contains only null values) will cause a null value to be returned. If there are any non-null values in the sequence, the return value will be non-null. (Note that this is different from Sum, which will return the non-null zero value for empty sequences over nullable types.)
  • For any non-nullable value type, an empty sequence will cause InvalidOperationException to be thrown.

The first point is particularly interesting when you consider the double and float types, and their "NaN" (not-a-number) values. For example, Math.Max regards NaN as greater than positive infinity, but Enumerable.Max regards positive infinity as being the greater of the two. Math.Min and Enumerable.Min agree, however, that NaN is less than negative infinity. (It would actually make sense to me for NaN to be treated as the numeric equivalent of null here, but that would be strange in other ways…) Basically, NaN behaves oddly in all kinds of ways. I believe that IEEE-754-2008 actually specifies behaviour with NaNs which encourages the results we’re getting here, but I haven’t verified that yet. (I can’t find a free version of the standard online, which is troubling in itself. Ah well.)
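
A hypothetical snippet illustrating those claims:

double nan = double.NaN;
double max1 = Math.Max(nan, double.PositiveInfinity);        // NaN
double max2 = new[] { nan, double.PositiveInfinity }.Max();  // PositiveInfinity
double min1 = Math.Min(nan, double.NegativeInfinity);        // NaN
double min2 = new[] { nan, double.NegativeInfinity }.Min();  // NaN - agreement this time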

The behaviour of the nullable and non-nullable types is well documented for the type-specific overloads using int, Nullable<int> etc. However, the generic overloads (the ones using TSource) are poorly documented:

  • InvalidOperationException isn’t in the list of possibly-thrown exceptions for any of the overloads
  • The methods using selectors from TSource to TResult don’t mention the possibility of nullity at all
  • The methods without selectors describe the behaviour of null values for reference types, but don’t mention the possibility of empty sequences for non-nullable value types, or consider nullable value types at all.

(I should point out that ArgumentException isn’t actually mentioned either for the case where values are incomparable, but that feels like a slightly less important offence for some reason. Possibly just because it didn’t trip me up.)

If I remember, I’ll open a Connect issue against this hole in the documentation when I find time. Unlike the optimizations and set ordering (where it’s reasonably forgivable to deliberately omit implementation details from the contract) you simply can’t predict the behaviour in a useful way from the documentation here. And yes, I’m going on about this because it bit me. I had to resort to writing tests and running them against LINQ to Objects to see if they were correct or not. (They were incorrect in various places.)

If you look at the behaviour of the non-generic methods, the generic ones are entirely consistent of course.

There are a couple of things which you might consider "missing" in terms of Max and Min:

  • The ability to find the element of a sequence with the minimum/maximum projected value. For example, consider a sequence of people. We may wish to find the oldest person in the sequence, in which case we’d like to be able to write something like:
    var oldest = people.MaxBy(person => person.Age);

    We can find the maximum age itself easily enough – but then we’d need a second pass to find the first person with that age. I’ve addressed this in MoreLINQ with the MaxBy and MinBy operators. The System.Interactive assembly in Reactive Extensions has the same methods too. (There’s a rough sketch of MaxBy after this list.)

  • The ability to specify a custom IComparer<T> implementation, as we can in most of the operators using IEqualityComparer<T>. For example, we can’t find the "maximum" string in a sequence, using a case-insensitive ordinal comparison.
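
As promised, here’s a rough sketch of what a MaxBy operator could look like – hypothetical code to illustrate the idea, not MoreLINQ’s actual implementation:

public static TSource MaxBy<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    if (keySelector == null)
    {
        throw new ArgumentNullException("keySelector");
    }
    IComparer<TKey> comparer = Comparer<TKey>.Default;
    using (IEnumerator<TSource> iterator = source.GetEnumerator())
    {
        if (!iterator.MoveNext())
        {
            throw new InvalidOperationException("Sequence was empty");
        }
        TSource maxElement = iterator.Current;
        TKey maxKey = keySelector(maxElement);
        while (iterator.MoveNext())
        {
            TSource candidate = iterator.Current;
            TKey candidateKey = keySelector(candidate);
            if (comparer.Compare(candidateKey, maxKey) > 0)
            {
                maxElement = candidate;
                maxKey = candidateKey;
            }
        }
        return maxElement;
    }
}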

Still, at least that means there’s less to test…

What are we going to test?

I decided I really couldn’t find the energy to replicate all the tests for every type involved here. Instead, I have a bunch of tests for int and Nullable<int>, a few tests exploring the oddness of doubles, and a bunch of tests around the generic methods. In particular, I know that I’ve implemented decimal, float etc by calling the same methods that the int overloads use.

The tests cover:

  • Argument validation
  • Empty sequences
  • Sequences of null values where applicable
  • Projections of the above
  • Generic tests for nullable and non-nullable value types, and reference types (with empty sequences etc)
  • Incomparable values

Let’s implement them!

Okay, let’s start off with the simplest detail: the order of implementation:

  • Max(int)
  • Max(generic)
  • Cut and paste Max implementations for other numeric types (replace the type name, basically)
  • Cut and paste the entirety of Max to Min:
    • Replace "Max" with "Min" everywhere
    • Replace " < " with " > " everywhere (only 4 occurrences; basically the results of calling Compare or CompareTo and comparing with 0)

Just as with Sum, I could have used templating – but I don’t think it would actually have saved me significant time.

This time, I thought I’d use Select internally for the overloads with selectors (unlike my approach for Sum which used identity projections). There’s no particular reason for this – I just thought it would be interesting to try both approaches. Overall, I think I prefer this one, but I haven’t done any benchmarking to find out the relative performance penalties.

Each set of numeric overloads calls into a single pair of generic "implementation" methods. These aren’t the public general-purpose ones: they require that the types in use implement IComparable<T>, and I’ve added a "struct" constraint just for kicks. This is just one approach. Other options:

  • I could have implemented the code separately for each numeric type. That may well be faster than calling IComparable<T>.Compare (at least for most types) as the IL would have contained the appropriate operator directly. However, it would have meant more code and explicitly dealing with the headache of NaNs for double/float. If I ever write benchmarks, I’ll investigate the difference that this can make.
  • I could have used the public generic overloads, which eventually call into Comparer<T>.Default. Again, the penalty for this (if any) is unknown to me at this point. Can the JIT inline deeply enough to make this as fast as a "native" implementation? I wouldn’t like to guess without tests.

I’ve separated out the nullable implementations from the non-nullable ones, as the behaviour differs significantly between the two.

Here’s the public code for int:

public static int Max(this IEnumerable<int> source)
{
    return PrimitiveMax(source);
}

public static int Max<TSource>(
    this IEnumerable<TSource> source,
    Func<TSource, int> selector)
{
    // Select will validate the arguments
    return PrimitiveMax(source.Select(selector));
}

public static int? Max(this IEnumerable<int?> source)
{
    return NullablePrimitiveMax(source);
}

public static int? Max<TSource>(
    this IEnumerable<TSource> source,
    Func<TSource, int?> selector)
{
    // Select will validate the arguments
    return NullablePrimitiveMax(source.Select(selector));
}

All the methods consider argument validation to be somebody else’s problem – either Select or the generic method we’re calling to find the maximum value. Part of me thinks this is lazy; part of me likes it in terms of not repeating code. All of me would prefer the ability to specify non-nullable parameters declaratively…

Here are the "primitive" methods called into above:

// These are used by all the overloads which use a known numeric type.
// The term "primitive" isn’t truly accurate here as decimal is not a primitive
// type, but it captures the aim reasonably well.
// The constraint of being a value type isn’t really required, because we don’t rely on
// it within the method and only code which already knows it’s a comparable value type
// will call these methods anyway.
        
private static T PrimitiveMax<T>(IEnumerable<T> source) where T : struct, IComparable<T>
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    using (IEnumerator<T> iterator = source.GetEnumerator())
    {
        if (!iterator.MoveNext())
        {
            throw new InvalidOperationException("Sequence was empty");
        }
        T max = iterator.Current;
        while (iterator.MoveNext())
        {
            T item = iterator.Current;
            if (max.CompareTo(item) < 0)
            {
                max = item;
            }
        }
        return max;
    }
}

private static T? NullablePrimitiveMax<T>(IEnumerable<T?> source) where T : struct, IComparable<T>
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    T? max = null;
    foreach (T? item in source)
    {
        if (item != null &&
            (max == null || max.Value.CompareTo(item.Value) < 0))
        {
            max = item;
        }
    }
    return max;
}

The first method is interesting in its approach to the first element: if it isn’t present, we throw an exception; otherwise we use it as the initial candidate for the maximum.

The second method needs to consider nullity twice on each iteration:

  • Is the item from the sequence null? If so, we can ignore it.
  • Is our "current maximum" null? If so, we can replace it with the item from the sequence without performing a comparison.

Now there’s one case which is ambiguous here: when both values are null. At that point we can choose to replace our "current maximum" with the item, or not… it doesn’t matter as the values are the same anyway. It is important that we don’t try to perform a comparison unless both values are non-null though… the short-circuiting && and || operators keep us safe here.

Having implemented the code above, all the interesting work lies in the generic forms. Here we don’t have different public methods to determine which kind of behaviour we’ll use: but I wrote two private methods instead, and just delegated to the right one from the public one. This seemed cleaner than putting the code all in one method:

public static TSource Max<TSource>(
    this IEnumerable<TSource> source)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    // This condition will be true for reference types and nullable value types, and false for
    // non-nullable value types.
    return default(TSource) == null ? NullableGenericMax(source) : NonNullableGenericMax(source);
}

public static TResult Max<TSource, TResult>(
    this IEnumerable<TSource> source,
    Func<TSource, TResult> selector)
{
    return Max(source.Select(selector));
}

/// <summary>
/// Implements the generic behaviour for non-nullable value types.
/// </summary>
/// <remarks>
/// Empty sequences will cause an InvalidOperationException to be thrown.
/// Note that there’s no *compile-time* validation in the caller that the type
/// is a non-nullable value type, hence the lack of a constraint on T.
/// </remarks>
private static T NonNullableGenericMax<T>(IEnumerable<T> source)
{
    Comparer<T> comparer = Comparer<T>.Default;

    using (IEnumerator<T> iterator = source.GetEnumerator())
    {
        if (!iterator.MoveNext())
        {
            throw new InvalidOperationException("Sequence was empty");
        }
        T max = iterator.Current;
        while (iterator.MoveNext())
        {
            T item = iterator.Current;
            if (comparer.Compare(max, item) < 0)
            {
                max = item;
            }
        }
        return max;
    }
}

/// <summary>
/// Implements the generic behaviour for nullable types – both reference types and nullable
/// value types.
/// </summary>
/// <remarks>
/// Empty sequences and sequences comprising only of null values will cause the null value
/// to be returned. Any sequence containing non-null values will return a non-null value.
/// </remarks>
private static T NullableGenericMax<T>(IEnumerable<T> source)
{
    Comparer<T> comparer = Comparer<T>.Default;

    T max = default(T);
    foreach (T item in source)
    {
        if (item != null &&
            (max == null || comparer.Compare(max, item) < 0))
        {
            max = item;
        }
    }
    return max;
}

As you can tell, there’s a significant similarity between the "PrimitiveMax" and "NonNullableGenericMax" methods, and likewise between "NullablePrimitiveMax" and "NullableGenericMax". This should come as no surprise. Fundamentally the difference is just between using an IComparable<T> implementation, and using Comparer<T>.Default. (The argument validation occurs in a different place too, as we’ll be going through a public entry point for the non-primitive code.)

Once I’d discovered the correct behaviour, this was reasonably simple. Of course, the above code wasn’t my first implementation, where I’d completely forgotten about null values, and hadn’t thought about how the nullability of the source type might affect the behaviour of empty sequences…
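To spell out that last point, the nullability of the element type changes what an empty sequence does – something like this (again, illustrative rather than the real tests):

// Non-nullable element type: there’s no value to return, so it throws
Assert.Throws<InvalidOperationException>(() => new int[0].Max());

// Nullable element types (reference type or nullable value type): null is returned
Assert.IsNull(new string[0].Max());
Assert.IsNull(new int?[0].Max());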

Conclusion

If you’re ever in a company which rewards you for checking in lots of lines of code, offer to implement Sum/Min/Max. This weekend I’ve checked in about 2,500 lines of code (split between production and test) and none of it’s been terribly hard. Of course, if you’re ever in such a company you should also consider looking for another job. (Have I mentioned that Google’s hiring? Email me if you’re interested. I’m serious.)

As you can tell, I was slightly irritated by the lack of clarity around the documentation in some places – but I find it interesting that even a simple-sounding function like "find the maximum value from a sequence" should need the kind of documentation that’s missing here. I’m not saying it’s a failure of the design – more just musing how a complete specification is almost always going to be longer than you might think at first glance. And if you think I was diligent here, think again: I didn’t bother specifying which maximum or minimum value would be returned if there were two. For example, if a sequence consists of references to two equal but distinct strings, which reference should be returned? I have neither stated what my implementation (or the LINQ to Objects implementation) will do, nor tested for it.

Next up is Average – a single method with a mere 20 overloads. There are various corner cases to consider… but that’s a post for another day.

Reimplementing LINQ to Objects: Part 28 – Sum

Okay, I’ve bitten the bullet. The first of the four Aggregation Operators Of Overload Doom (AOOOD) that I’ve implemented is Sum. It was far from difficult to implement – just tedious.

What is it?

Sum has 20 overloads – a set of 4 for each of the types that it covers (int, long, float, double, decimal). Here are the overloads for int:

public static int Sum(this IEnumerable<int> source)

public static int? Sum(this IEnumerable<int?> source)

public static int Sum<T>(
    this IEnumerable<T> source,
    Func<T, int> selector)

public static int? Sum<T>(
    this IEnumerable<T> source,
    Func<T, int?> selector)

As you can see, there are basically two variations:

  • A source of the numeric type itself, or a source of an arbitrary type with a projection to the numeric type
  • The numeric type can be nullable or non-nullable

The behaviour is as follows:

  • All overloads use immediate execution: it will immediately iterate over the source sequence to compute the sum, which is obviously the return value.
  • source and selector must both be non-null
  • Where there’s a selector, the operator is equivalent to source.Select(selector).Sum() – or you can think of the versions without a selector as using an identity selector
  • Where the numeric type is nullable, null values are ignored
  • The sum of an empty sequence is 0 (even for nullable numeric types)

The last point is interesting – because the overloads with nullable numeric types never return a null value. Initially I missed the fact that the return type was even nullable. I think it’s somewhat misleading to be nullable but never null – you might have at least expected that the return value for an empty sequence (or one consisting only of null values) would be null.
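A quick illustration of that point (not one of the real tests, just the shape of the behaviour):

int?[] empty = { };
int?[] onlyNulls = { null, null };

// Both results are (non-null) zero, despite the int? return type
Assert.AreEqual(0, empty.Sum());
Assert.AreEqual(0, onlyNulls.Sum());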

For int, long and decimal, overflow within the sum will throw OverflowException. For single and double, the result will be positive or negative infinity. If the sequence contains a "not-a-number" value (NaN), the result will be NaN too.
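And the overflow side of things, sketched out:

int[] bigInts = { int.MaxValue, int.MaxValue };
Assert.Throws<OverflowException>(() => bigInts.Sum());

double[] bigDoubles = { double.MaxValue, double.MaxValue };
Assert.AreEqual(double.PositiveInfinity, bigDoubles.Sum());

double[] withNaN = { 1.0, double.NaN, 2.0 };
Assert.IsNaN(withNaN.Sum());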

What are we going to test?

A lot!

In total, I have 123 tests across the five types. The tests are mostly the same for each type, with the exception of overflow behaviour and not-a-number behaviour for single and double. Each overload is tested reasonably thoroughly:

  • Argument validation
  • Summing a simple sequence
  • Summing an empty sequence
  • (Nullable) Summing a simple sequence containing null values
  • (Nullable) Summing a sequence containing only null values
  • Positive overflow (either to an exception or infinity)
  • Negative overflow (only one test per type, rather than for each overload)
  • (Single/Double) Sequences containing NaN values
  • Projections resulting in all of the above

Most of this was done using cut and paste, leading to a 916-line source file. On Twitter, followers have suggested a couple of alternatives – templating (possibly for both the tests and the implementation), or using more advanced features of unit test frameworks. There’s nothing wrong with these suggestions – but I’m always concerned about the balance within an elegant-but-complex solution to repetition. If it takes longer to get to the "neat" solution, and then each individual test is harder to read, is it worth it? It certainly makes it easier to add one test which is then applicable over several types, or to modify all "copies" of an existing test – but equally it makes it harder to make variations (such as overflow) fit within the pattern. I have no quarrel with the idea of using more advanced techniques here, but I’ve stuck to a primitive approach for the moment.

Let’s implement it!

Again, I’ll only demonstrate the "int" implementations – but talk about single/double later on. There are plenty of ways I could have implemented this:

  • Delegate everything to the simplest overload using "Where" for the nullable sequences and "Select" for the projection sequences
  • Delegate everything to the most complex overload using identity projections
  • Implement each method independently
  • Somewhere in-between :)

In the end I’ve implemented each non-projecting overload by delegating to the corresponding projection-based one with an identity projection. I’ve implemented the non-nullable and nullable versions separately though. Here’s the complete implementation:

public static int Sum(this IEnumerable<int> source)
{
    return Sum(source, x => x);
}

public static int? Sum(this IEnumerable<int?> source)
{
    return Sum(source, x => x);
}

public static int Sum<T>(
    this IEnumerable<T> source,
    Func<T, int> selector)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    if (selector == null)
    {
        throw new ArgumentNullException("selector");
    }
    checked
    {
        int sum = 0;
        foreach (T item in source)
        {
            sum += selector(item);
        }
        return sum;
    }
}

public static int? Sum<T>(
    this IEnumerable<T> source,
    Func<T, int?> selector)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    if (selector == null)
    {
        throw new ArgumentNullException("selector");
    }
    checked
    {
        int sum = 0;
        foreach (T item in source)
        {
            sum += selector(item).GetValueOrDefault();
        }
        return sum;
    }
}

Note the use of Nullable<T>.GetValueOrDefault() to "ignore" null values – it felt easier to add zero than to use an "if" block here. I suspect it’s also more efficient, as there’s no need for any conditionality here: I’d expect the implementation of GetValueOrDefault() to just return the underlying "value" field within the Nullable<T>, without performing the check for HasValue which the Value property normally would.
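To show what I mean – and this is just my mental model of Nullable<T>, not the actual BCL source; the field names are assumptions – the two members presumably look something like this:

// Purely illustrative sketch of how I imagine Nullable<T> behaves internally.
public struct NullableSketch<T> where T : struct
{
    private readonly bool hasValue;
    private readonly T value;

    public NullableSketch(T value)
    {
        this.hasValue = true;
        this.value = value;
    }

    public T GetValueOrDefault()
    {
        // No conditional check: if hasValue is false, "value" is already default(T)
        return value;
    }

    public T Value
    {
        get
        {
            if (!hasValue)
            {
                throw new InvalidOperationException("Nullable object must have a value.");
            }
            return value;
        }
    }
}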

Of course, if I were really bothered by performance I’d implement each operation separately, instead of using the identity projection.

Note the use of the "checked" block to make sure that overflow is handled appropriately. As I’ve mentioned before, it would quite possibly be a good idea to turn overflow checking on for the whole assembly, but here I feel it’s worth making it explicit to show that we consider overflow as an important part of the behaviour of this operator. The single/double overloads don’t use checked blocks, as their overflow behaviour isn’t affected by the checked context.
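Just in case the effect of the checked context isn’t familiar, here’s a tiny standalone snippet – nothing Edulinq-specific about it:

int max = int.MaxValue;

unchecked
{
    Console.WriteLine(max + 1); // Wraps round to int.MinValue (-2147483648)
}
checked
{
    Console.WriteLine(max + 1); // Throws OverflowException at execution time
}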

Conclusion

One down, three to go! I suspect Min and Max will use even more cutting and pasting (with judiciously applied changes, of course). There are 22 overloads for each of those operators, due to the possibility of using an arbitrary type – but I may well be able to use the most generic form to implement all the numeric versions. I may measure the impact this has on performance before deciding for sure. Anyway, that’s a topic for the next post…

Addendum

As has been pointed out in the comments, my original implementation used a float to accumulate values when summing a sequence of floats. This causes problems, as these two new tests demonstrate:

[Test]
public void NonOverflowOfComputableSumSingle()
{
    float[] source = { float.MaxValue, float.MaxValue,
                      -float.MaxValue, -float.MaxValue };
    // In a world where we summed using a float accumulator, the
    // result would be infinity.
    Assert.AreEqual(0f, source.Sum());
}

[Test]
public void AccumulatorAccuracyForSingle()
{
    // 20000000 and 20000004 are both exactly representable as
    // float values, but 20000001 is not. Therefore if we use
    // a float accumulator, we’ll end up with 20000000. However,
    // if we use a double accumulator, we’ll get the right value.
    float[] array = { 20000000f, 1f, 1f, 1f, 1f };
    Assert.AreEqual(20000004f, array.Sum());
}

The second of these tests is specific to floating point arithmetic – there’s no equivalent in the integer domain. Hopefully the comment makes the test clear. We could still do better if we used the Kahan summation algorithm, but I haven’t implemented that yet, and don’t currently intend to. Worth noting as a potential follow-on project though.
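For the record, here’s roughly the shape Kahan summation would take if I ever did go down that route – very much a sketch rather than anything in Edulinq:

// Sketch of compensated (Kahan) summation on top of the double accumulator -
// not part of Edulinq, just the shape the algorithm would take.
private static float KahanSum(IEnumerable<float> source)
{
    double sum = 0d;
    double compensation = 0d; // running compensation for lost low-order bits
    foreach (float item in source)
    {
        double y = item - compensation;
        double t = sum + y;
        compensation = (t - sum) - y;
        sum = t;
    }
    return (float) sum;
}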

Back to the first test though: this certainly can be represented in integers. If we try to sum { int.MaxValue, int.MaxValue, -int.MaxValue, -int.MaxValue } there are two options: we can overflow (throwing an exception) or we can return 0. If we use a long accumulator, we’ll return 0. If we use an int accumulator, we’ll overflow. I genuinely didn’t know what the result for LINQ to Objects would be until I tried it – and found that it overflows. I’ve added a test to document this behaviour:

[Test]
public void OverflowOfComputableSumInt32()
{
    int[] source = { int.MaxValue, 1, -1, -int.MaxValue };
    // In a world where we summed using a long accumulator, the
    // result would be 0.
    Assert.Throws<OverflowException>(() => source.Sum());
}

Of course, I could have gone my own way and made Edulinq more capable than LINQ to Objects here, but in this case I’ve gone with the existing behaviour.

Reimplementing LINQ to Objects: Part 27 – Reverse

Time for a change of pace after the deep dive into sorting. Reversing is pretty simple… which is not to say there’s nothing to discuss, of course.

What is it?

Reverse only has a single, simple signature:

public static IEnumerable<TSource> Reverse<TSource>(
    this IEnumerable<TSource> source)

The behaviour is pretty simple to describe:

  • source cannot be null; this is validated eagerly.
  • The operator uses deferred execution: until you start reading from the result, it won’t read anything from the input sequence
  • As soon as you start reading from the result sequence, the input sequence is read in its entirety
  • The result sequence contains all the elements of the input sequence, in the opposite order. (So the first element of the result sequence is the last element of the input sequence.)

The third point is the most interesting. It sounds like an obvious requirement just to get it to work at all – until you think of possible optimizations. Imagine if you implemented Reverse with an optimization for arrays: we know the array won’t change size, and we can find out that size easily enough – so we could just use the indexer on each iteration, starting off with an index of "length - 1" and decrementing until we’d yielded every value.

LINQ to Objects doesn’t behave this way – and that’s observable because if you change the value of the array after you start iterating over the result sequence, you don’t see those changes. Deferred execution means that you will see changes made to the array after the call to Reverse but before you start iterating over the results, however.
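Just to show what I mean, a hypothetical (and non-conforming) array optimization might look like this:

// NOT how LINQ to Objects (or Edulinq) behaves: this reads the live array on
// each iteration, so changes made to it during iteration would show up in the
// results. It's here purely to show why the buffering behaviour is observable.
private static IEnumerable<TSource> NaughtyReverseImpl<TSource>(TSource[] source)
{
    for (int i = source.Length - 1; i >= 0; i--)
    {
        yield return source[i];
    }
}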

Note that the buffering nature of this operator means that you can’t use it on infinite sequences – which makes sense when you think about it. What’s the last element of an infinite sequence?

What are we going to test?

Most of the tests are pretty obvious, but I have one test to demonstrate how the timing of changes to the contents of the input sequence affect the result sequence:

[Test]
public void ArraysAreBuffered()
{
    // A sneaky implementation may try to optimize for the case where the collection
    // implements IList or (even more "reliable") is an array: it mustn’t do this,
    // as otherwise the results can be tainted by side-effects within iteration
    int[] source = { 0, 1, 2, 3 };

    var query = source.Reverse();
    source[1] = 99; // This change *will* be seen due to deferred execution
    using (var iterator = query.GetEnumerator())
    {
        iterator.MoveNext();
        Assert.AreEqual(3, iterator.Current);

        source[2] = 100; // This change *won’t* be seen               
        iterator.MoveNext();
        Assert.AreEqual(2, iterator.Current);

        iterator.MoveNext();
        Assert.AreEqual(99, iterator.Current);

        iterator.MoveNext();
        Assert.AreEqual(0, iterator.Current);
    }
}

If you can think of any potentially-surprising tests, I’d be happy to implement them – there wasn’t much I could think of in terms of corner cases.

Let’s implement it!

Eager validation of source combined with deferred execution suggests the normal implementation of splitting the operator into two methods – I won’t bother showing the public part, as it only does exactly what you’d expect it to. However, to make up for the fact that Reverse is so simple, I’ll present three implementations of the "Impl" method.

First, let’s use a collection which performs the reversing for us automatically: a stack. The iterator returned by Stack<T> returns items in the order in which they would be seen by multiple calls to Pop – i.e. the reverse of the order in which they were added. This makes the implementation trivial:

private static IEnumerable<TSource> ReverseImpl<TSource>(IEnumerable<TSource> source)
{
    Stack<TSource> stack = new Stack<TSource>(source);
    foreach (TSource item in stack)
    {
        yield return item;
    }
}

Again, with "yield foreach" we could have done this in a single statement.

Next up, a linked list. In some ways, using a linked list is very natural – you never need to resize an array, or anything like that. On the other hand, we have an extra node object for every single element, which is a massive overhead. It’s not what I’d choose to use for production in this case, but it’s worth showing:

private static IEnumerable<TSource> ReverseImpl<TSource>(IEnumerable<TSource> source)
{
    LinkedList<TSource> list = new LinkedList<TSource>(source);
    LinkedListNode<TSource> node = list.Last; // Property, not method!
    while (node != null)
    {
        yield return node.Value;
        node = node.Previous;
    }
}

Finally, a more "close to the metal" approach using our existing ToBuffer method:

private static IEnumerable<TSource> ReverseImpl<TSource>(IEnumerable<TSource> source)
{
    int count;
    TSource[] array = source.ToBuffer(out count);
    for (int i = count - 1; i >= 0; i--)
    {
        yield return array[i];
    }
}

This is probably not significantly more efficient than the version using Stack<T> – I expect Stack<T> has a similar implementation to ToBuffer when it’s constructed with an input sequence. However, as it’s so easy to count down from the end of the array to the start, we don’t really need to take advantage of any of the features of Stack – so we might as well just use the array directly.

Note that this relies on the fact that ToBuffer will create a copy of whatever it’s given, including an array. That’s okay though – we’re relying on that all over the place :)

Conclusion

It’s hard to see how this could really be optimized any further, other than by improving ToBuffer based on usage data. Overall, a lovely simple operator.

It’s probably about time I tackled some of the arithmetic aggregation operators… so next time I’ll probably implement Sum.

Reimplementing LINQ to Objects: Part 26d – Fixing the key selectors, and yielding early

I feel I need a voice over. "Previously, on reimplementing LINQ to Objects…" Well, we’d got as far as a working implementation of OrderedEnumerable which didn’t have terrible performance – unless you had an expensive key selector. Oh, and it didn’t make use of the fact that we may only want the first few results.

Executing key selectors only once

Our first problem is to do with the key selectors. For various reasons (mentioned in part 26b) life is better if we execute the key selector once per input element. While we can do that with lazy evaluation, it makes more sense in my opinion to do it up-front. That means we need to separate out the key selector from the key comparer – in other words, we need to get rid of the handy ProjectionComparer we used to simplify the arguments to OrderBy/ThenBy/etc.

If we’re going to keep the key selectors in a strongly typed way, that means our OrderedEnumerable (or at least some type involved in the whole business) needs to become generic in the key type. Let’s bite the bullet and make it OrderedEnumerable. Now we have a slight problem right away in the fact that the "CreateOrderedEnumerable" method is generic, introducing a new type parameter TKey… so we shouldn’t use TKey as the name of the new type parameter for OrderedEnumerable. We could rename the type parameter in the generic method implementation, but I’m becoming a big believer in leaving the signatures of methods alone when I implement an interface. For type parameters it’s not too bad, but for normal parameters it can be awful if you mess around with the names – particularly for those using named arguments.

Thinking ahead, our single "key" type parameter in OrderedEnumerable could well end up being a composite key. After all, if we have OrderBy(…).ThenBy(…).ThenBy(…) we’re going to have to have some way of representing the key formed by the three selectors. It makes sense to use a "nested" key type, where the key type of OrderedEnumerable is always the "composite key so far". Thus I named the type parameter TCompositeKey, and introduced an appropriate field. Here’s the skeleton of the new class:

internal class OrderedEnumerable<TElement, TCompositeKey> : IOrderedEnumerable<TElement>
{
    private readonly IEnumerable<TElement> source;
    private readonly Func<TElement, TCompositeKey> compositeSelector;
    private readonly IComparer<TCompositeKey> compositeComparer;

    internal OrderedEnumerable(IEnumerable<TElement> source,
        Func<TElement, TCompositeKey> compositeSelector,
        IComparer<TCompositeKey> compositeComparer)
    {
        this.source = source;
        this.compositeSelector = compositeSelector;
        this.compositeComparer = compositeComparer;
    }

    // Interface implementations here
}

(I’m aware this is very "stream of consciousness" – I’m assuming that presenting the decisions in the order in which I addressed them is a good way of explaining the necessary changes. Apologies if the style doesn’t work for you.)

ThenBy and ThenByDescending don’t have to change at all – they were already just using the interface. OrderBy and OrderByDescending become a little simpler, as we don’t need to build the projection comparer. Here’s the new version of OrderBy:

public static IOrderedEnumerable<TSource> OrderBy<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IComparer<TKey> comparer)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    if (keySelector == null)
    {
        throw new ArgumentNullException("keySelector");
    }
    return new OrderedEnumerable<TSource, TKey>
        (source, keySelector, comparer ?? Comparer<TKey>.Default);
}

Lovely – we just call a constructor, basically.

So far, so good. Now what about the implementation of IOrderedEnumerable? We should expect this to get messy, because there are three types of key involved:

  • The current key type
  • The secondary key type
  • The composite key type

Currently we don’t even have a type which can represent the composite key. We could use something like KeyValuePair<TKey, TValue>, but that doesn’t really give the right impression. Instead, let’s create our own simple type:

internal struct CompositeKey<TPrimary, TSecondary>
{
    private readonly TPrimary primary;
    private readonly TSecondary secondary;

    internal TPrimary Primary { get { return primary; } }
    internal TSecondary Secondary { get { return secondary; } }

    internal CompositeKey(TPrimary primary, TSecondary secondary)
    {
        this.primary = primary;
        this.secondary = secondary;
    }
}

Now we can easily create a projection from two key selectors to a new one which selects a composite key. However, we’ll need to do the same thing for a comparer. We could use the CompoundComparer class we created before, but that will end up with quite a bit of indirection. Instead, it would be nice to have a type to work directly with CompositeKey – something which knew it was dealing with comparers of different types, one for each part of the key.

We could create a completely separate top-level type for that… but specifying the type parameters again seems a bit daft when we can reuse them by simply creating a nested class within CompositeKey:

internal struct CompositeKey<TPrimary, TSecondary>
{
    // Other members as shown above

    internal sealed class Comparer : IComparer<CompositeKey<TPrimary, TSecondary>>
    {
        private readonly IComparer<TPrimary> primaryComparer;
        private readonly IComparer<TSecondary> secondaryComparer;

        internal Comparer(IComparer<TPrimary> primaryComparer,
                          IComparer<TSecondary> secondaryComparer)
        {
            this.primaryComparer = primaryComparer;
            this.secondaryComparer = secondaryComparer;
        }

        public int Compare(CompositeKey<TPrimary, TSecondary> x,
                           CompositeKey<TPrimary, TSecondary> y)
        {
            int primaryResult = primaryComparer.Compare(x.Primary, y.Primary);
            if (primaryResult != 0)
            {
                return primaryResult;
            }
            return secondaryComparer.Compare(x.Secondary, y.Secondary);
        }
    }
}

This may look a little odd to begin with, but the two types really are quite deeply connected.

Now that we can compose keys in terms of both selection and comparison, we can implement CreateOrderedEnumerable:

public IOrderedEnumerable<TElement> CreateOrderedEnumerable<TKey>(
    Func<TElement, TKey> keySelector,
    IComparer<TKey> comparer,
    bool descending)
{
    if (keySelector == null)
    {
        throw new ArgumentNullException("keySelector");
    }
    comparer = comparer ?? Comparer<TKey>.Default;
    if (descending)
    {
        comparer = new ReverseComparer<TKey>(comparer);
    }

    // Copy to a local variable so we don’t need to capture "this"
    Func<TElement, TCompositeKey> primarySelector = compositeSelector;
    Func<TElement, CompositeKey<TCompositeKey, TKey>> newKeySelector = 
        element => new CompositeKey<TCompositeKey, TKey>(primarySelector(element), keySelector(element));

    IComparer<CompositeKey<TCompositeKey, TKey>> newKeyComparer =
        new CompositeKey<TCompositeKey, TKey>.Comparer(compositeComparer, comparer);

    return new OrderedEnumerable<TElement, CompositeKey<TCompositeKey, TKey>>
        (source, newKeySelector, newKeyComparer);
}

I’m not going to pretend that the second half of the method is anything other than ghastly. I’m not sure I’ve ever written code which is so dense in type arguments. IComparer<CompositeKey<TCompositeKey, TKey>> is a particularly "fine" type. Ick.

However, it works – and once you’ve got your head round what each of the type parameters actually means at any one time, it’s not really complicated code – it’s just verbose and clunky.

The only bit which might require a bit of explanation is the primarySelector variable. I could certainly have just used compositeSelector within the lambda expression used to create the new key selector – it’s not like it’s going to change, after all. The memory benefits of not having a reference to "this" (where the intermediate OrderedEnumerable is likely to be eligible for GC collection immediately, in a typical OrderBy(…).ThenBy(…) call) are almost certainly not worth it. It just feels right to have both the primary and secondary key selectors in the same type, which is what will happen with the current code. They’re both local variables, they’ll be captured together, all will be well.

I hope you can see the parallel between the old code and the new code. Previously we composed a new (element-based) comparer based on the existing comparer, and a projection comparer from the method parameters. Now we’re composing a new key selector and a new key comparer. It’s all the same idea, just maintaining the split between key selection and key comparison.
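One way of keeping your head round the nesting is to look at the concrete types a typical query builds up. Assuming a Person class with the obvious properties (purely for illustration), this is what happens:

var query = people.OrderBy(p => p.LastName)
                  .ThenBy(p => p.FirstName)
                  .ThenBy(p => p.DateOfBirth);

// Types constructed behind the scenes:
// OrderBy       -> OrderedEnumerable<Person, string>
// first ThenBy  -> OrderedEnumerable<Person, CompositeKey<string, string>>
// second ThenBy -> OrderedEnumerable<Person,
//                      CompositeKey<CompositeKey<string, string>, DateTime>>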

Now let’s sort…

So far, we haven’t implemented GetEnumerator – and that’s all. As soon as we’ve done that to our satisfaction, we’re finished with ordering.

There are several approaches to how we could sort. Here are a few of them:

  • Project each element to its key, and create a KeyValuePair for each item. Merge sort in the existing way to achieve stability. This will involve copying a lot of data around – particularly if the element and key types end up being large value types.
  • Project each element to a { key, index } pair, and create another composite comparer which uses the index as a tie-breaker to achieve stability. This still involves copying keys around, but it means we could easily use a built-in sort (such as List<T>.Sort).
  • Project each element to a key, and separately create an array of indexes (0, 1, 2, 3…). Sort the indexes by accessing the relevant key at any point, using indexes as tie-breakers. This requires a more fiddly sort, as we need to keep indexing into the indexes array.
  • Build up "chunks" of sorted data as we read it in, keeping some number of chunks and merging them appropriately when we want to. We can then yield the results without ever performing a full sort, by effectively performing the "merge" operation of merge sort, just yielding values instead of copying them to temporary storage. (Obviously this is trivial with 2 chunks, but can be extended to more.)
  • Do something involving a self-balancing binary tree :)

I decided to pick the middle option, using quicksort as the sorting algorithm. This comes with the normal problems of possibly picking bad pivots, but it’s usually a reasonable choice. I believe there are cunning ways of improving the worst-case performance, but I haven’t implemented any of those.

Here’s the non-quicksort part of the code, just to set the scene.

public IEnumerator<TElement> GetEnumerator()
{
    // First copy the elements into an array: don’t bother with a list, as we
    // want to use arrays for all the swapping around.
    int count;
    TElement[] data = source.ToBuffer(out count);

    int[] indexes = new int[count];
    for (int i = 0; i < indexes.Length; i++)
    {
        indexes[i] = i;
    }

    TCompositeKey[] keys = new TCompositeKey[count];
    for (int i = 0; i < keys.Length; i++)
    {
        keys[i] = compositeSelector(data[i]);
    }

    QuickSort(indexes, keys, 0, count - 1);

    for (int i = 0; i < indexes.Length; i++)
    {
        yield return data[indexes[i]];
    }
}

I could certainly have combined the first two loops – I just liked the separation provided in this code. One tiny micro-optimization point to note is that for each loop I’m using the Length property of the array rather than "count" as the upper bound, as I believe that will reduce the amount of array boundary checking the JIT will generate. I very much doubt that it’s relevant, admittedly :) I’ve left the code here as it is in source control – but looking at it now, I could certainly have used a foreach loop on the final yield part. We wouldn’t be able to later, admittedly… but I’ll come to that all in good time.

The actual quicksort part is reasonably standard except for the fact that I pass in both the arrays for both indexes and keys – usually there’s just the one array which is being sorted. Here’s the code for both the recursive call and the partition part:

private void QuickSort(int[] indexes, TCompositeKey[] keys, int left, int right)
{
    if (right > left)
    {
        int pivot = left + (right - left) / 2;
        int pivotPosition = Partition(indexes, keys, left, right, pivot);
        QuickSort(indexes, keys, left, pivotPosition - 1);
        QuickSort(indexes, keys, pivotPosition + 1, right);
    }
}

private int Partition(int[] indexes, TCompositeKey[] keys, int left, int right, int pivot)
{
    // Remember the current index (into the keys/elements arrays) of the pivot location
    int pivotIndex = indexes[pivot];
    TCompositeKey pivotKey = keys[pivotIndex];

    // Swap the pivot value to the end
    indexes[pivot] = indexes[right];
    indexes[right] = pivotIndex;
    int storeIndex = left;
    for (int i = left; i < right; i++)
    {
        int candidateIndex = indexes[i];
        TCompositeKey candidateKey = keys[candidateIndex];
        int comparison = compositeComparer.Compare(candidateKey, pivotKey);
        if (comparison < 0 || (comparison == 0 && candidateIndex < pivotIndex))
        {
            // Swap storeIndex with the current location
            indexes[i] = indexes[storeIndex];
            indexes[storeIndex] = candidateIndex;
            storeIndex++;
        }
    }
    // Move the pivot to its final place
    int tmp = indexes[storeIndex];
    indexes[storeIndex] = indexes[right];
    indexes[right] = tmp;
    return storeIndex;
}

It’s interesting to observe how similar the quicksort and merge sort recursive parts are – both picking a midpoint, recursing on the left of it, recursing on the right of it, and performing some operation on the whole sublist. Of course the "some operation" is very different between partition and merge, and it occurs at a different time – but it’s an interesting parallel nonetheless.

One significant difference between merge sort and quicksort is the use of the pivot. Once Partition has returned where the pivot element ended up, quicksort doesn’t touch that element itself (we already know it will be in the right place). It recurses on the sublist entirely to the left of the pivot and the sublist entirely to the right of the pivot. Compare this with merge sort, which recurses on two sublists which together comprise the whole list for that call.

The overloading of the word "index" here is unfortunate, but that is unfortunately life. Both sorts of "index" here really are indexes… you just need to keep an eye on which is which.

The final point to note is how we’re using the indexes in the comparison, as a tie-break to keep stability. It’s an ugly expression, but it does the job.

(As a small matter of language, I wasn’t sure whether to use indexes or indices. I far prefer the former, so I used it. Having just checked in the dictionary, it appears both are correct. This reminds me of when I was writing C# in Depth – I could never decide between appendixes and appendices. Blech.)

Now, do you want to hear the biggest surprise I received last night? After I’d fixed up the compile-time errors to arrive at the code above, it worked first time. I’m not kidding. I’m not quite sure how I pulled that off (merge sort didn’t take long either, but it did at least have a few tweaks to fix up) but it shocked the heck out of me. So, are we done? Well, not quite.

Yielding early

Just as a reminder, one of my aims was to be able to use iterator blocks to return some values to anyone iterating over the result stream without having to do all the sorting work. This means that in the case of calling OrderBy(…).Take(5) on a large collection, we can end up saving a lot of work… I hope!

This is currently fairly normal quicksort code, leaving the "dual arrays" aspect aside… but it’s not quite amenable to early yielding. We’re definitely computing the earliest results first, due to the order of the recursion – but we can’t yield from the recursive method – iterator blocks just don’t do that.

So, we’ll have to fake the recursion. Fortunately, quicksort is only directly recursive – we don’t need to worry about mutually recursive routines: A calling B which might call C or it might call back to A, etc. Instead, we can just keep a Stack<T> of "calls" to quicksort that we want to make, and execute the appropriate code within our GetEnumerator() method, so we can yield at the right point. Now in the original code, quicksort has four parameters, so you might expect our Stack<T> to have those four values within T too… but no! Two of those values are just the keys and indexes… and we already have those in two local variables. We only need to keep track of "right" and "left". Again, for the sake of clarity I decided to implement this using a custom struct – nested within OrderedEnumerable as there’s no need for it to exist anywhere else:

private struct LeftRight
{
    internal int left, right;
    internal LeftRight(int left, int right)
    {
        this.left = left;
        this.right = right;
    }
}

Purists amongst you may curse at the use of internal fields rather than properties. I’m not bothered – this is a private struct, and we’re basically using this as a tuple. Heck, I would have used anonymous types if it weren’t for two issues:

  • I wanted to use Stack<T>, and there’s no way of creating one of those for an anonymous type (without introducing more generic methods to use type inference)
  • I wanted to use a struct – we’ll end up creating a lot of these values, and there’s simply no sense in them being individual objects on the heap. Anonymous types are always classes.

So, as a first step we can transform our code to use this "fake recursion" but still yield at the very end:

var stack = new Stack<LeftRight>();
stack.Push(new LeftRight(0, count - 1));
while (stack.Count > 0)
{
    LeftRight leftRight = stack.Pop();
    int left = leftRight.left;
    int right = leftRight.right;
    if (right > left)
    {
        int pivot = left + (right - left) / 2;
        int pivotPosition = Partition(indexes, keys, left, right, pivot);
        stack.Push(new LeftRight(pivotPosition + 1, right));
        stack.Push(new LeftRight(left, pivotPosition - 1));
    }
}

for (int i = 0; i < indexes.Length; i++)
{
    yield return data[indexes[i]];
}

We initially push a value of (0, count - 1) to simulate the call to QuickSort(0, count - 1) which started it all before. The code within the loop is very similar to the original QuickSort method, with three changes:

  • We have to grab the next value of LeftRight from the stack, and then separate it into left and right values
  • Instead of calls to QuickSort, we have calls to stack.Push
  • We’ve reversed the order of the recursive calls: in order to sort the left sublist first, we have to push it onto the stack last.

Happy so far? We’re getting very close now. All we need to do is work out when to yield. This is the bit which caused me the most headaches, until I worked out that the "if (right > left)" condition really meant "if we’ve got work to do"… and we’re interested in the exact opposite scenario – when we don’t have any work to do, as that means everything up to and including "right" is already sorted. There are two situations here: either right == left, i.e. we’re sorting one element, or right == left - 1, which will occur if we picked a pivot which was the maximum or minimum value in the list at the previous recursive step.

It’s taken me a little bit of thinking (and just running the code) to persuade me that we will always naturally reach a situation where we end up seeing right == count - 1 and right <= left, i.e. a place where we know we’re completely done. But it’s okay – it does happen.

It’s not just a case of yielding the values between left and right though – because otherwise we’d never yield a pivot. Remember how I pointed out that quick sort missed out the pivot when specifying the sublists to recurse into? Well, that’s relevant here. Fortunately, it’s really easy to work out what to do. Knowing that everything up to and including "right" has been sorted means we just need to keep a cursor representing the next index to yield, and then just move that cursor up until it’s positioned beyond "right". The code is probably easier to understand than the description:

int nextYield = 0;

var stack = new Stack<LeftRight>();
stack.Push(new LeftRight(0, count - 1));
while (stack.Count > 0)
{
    LeftRight leftRight = stack.Pop();
    int left = leftRight.left;
    int right = leftRight.right;
    if (right > left)
    {
        int pivot = left + (right - left) / 2;
        int pivotPosition = Partition(indexes, keys, left, right, pivot);
        // Push the right sublist first, so that we *pop* the
        // left sublist first
        stack.Push(new LeftRight(pivotPosition + 1, right));
        stack.Push(new LeftRight(left, pivotPosition - 1));
    }
    else
    {
        while (nextYield <= right)
        {
            yield return data[indexes[nextYield]];
            nextYield++;
        }
    }
}

Tada! It works (at least according to my tests).

I have tried optimizing this a little further, to deal with the case when right == left + 1, i.e. we’re only sorting two elements. It feels like that ought to be cheaper to do explicitly than via pivoting and adding two pointless entries to the stack… but the code gets a lot more complicated (to the point where I had to fiddle significantly to get it working) and from what I’ve seen, it doesn’t make much performance difference. Odd. If this were a production-quality library to be used in performance-critical situations I’d go further in the testing, but as it is, I’m happy to declare victory at this point.

Performance

So, how well does it perform? I’ve only performed crude tests, and they perplex me somewhat. I’m sure that last night, when I was running the "yield at the end" code, my tests were running twice as slowly in Edulinq as in LINQ to Objects. Fair enough – this is just a hobby, Microsoft have no doubt put a lot of performance testing effort into this. (That hasn’t stopped them from messing up "descending" comparers, admittedly, as I found out last night to my amusement.) That was on my "meaty" laptop (which is 64-bit with a quad core i7). On my netbook this morning, the same Edulinq code seemed to be running slightly faster than LINQ to Objects. Odd.

This evening, having pulled the "early out" code from the source repository, the Edulinq implementation is running faster than the LINQ to Objects implementation even when the "early out" isn’t actually doing much good. That’s just plain weird. I blame my benchmarking methodology, which is far from rigorous. I’ve tweaked the parameters of my tests quite a bit, but I haven’t tried all kinds of different key and element types, etc. The basic results are very roughly:

  • When evaluating the whole ordered list, Edulinq appears to run about 10% faster than LINQ to Objects
  • When evaluating only the top 5 of a large ordered list, Edulinq can be much faster. How much faster depends on the size of the list of course, and it still has to perform the initial complete partitioning step – but on 100,000 items it’s regularly about 10x faster than LINQ to Objects.

That makes me happy :) Of course, the code is all open source, so if Microsoft wish to include the Edulinq implementation in .NET 5, they’re quite at liberty to do so, as long as they abide by the terms of the licence. I’m not holding my breath ;)

More seriously, I fully expect there are a bunch of scenarios where my knocked-up-in-an-evening code performs slower than that in the framework. Maybe my approach takes a lot more memory. Maybe it has worse locality of reference in some scenarios. There are all kinds of possibilities here. Full performance analysis was never meant to be the goal of Edulinq. I’m doing this in the spirit of learning more about LINQ – but it’s fun to try to optimize just a little bit. I’m going to delete the increasingly-inaccurately-named MergeSortTest project now – I may institute a few more benchmarks later on though. I’m also removing CompoundComparer and ProjectionComparer, which are no longer used. They’ll live on in part 26a though…

Conclusions

Well that was fun, wasn’t it? I’m pretty pleased with the result. The final code has some nasty generic complexity in it, but it’s not too bad if you keep all the types clear in your mind.

None of the remaining operators will be nearly as complex as this, unless I choose to implement AsQueryable (which I wasn’t planning on doing). On the other hand, as I’ve mentioned before, Max/Sum/etc have oodles of overloads. While I’ll certainly implement all of them, I’m sure I’ll only present the code for selected interesting overloads.

As a bit of light relief, I think I’ll tackle Reverse. That’s about as simple as it gets – although it could still present some interesting options.

Addendum

An earlier version of this post (and the merge sort implementation) had a flawed piece of code for choosing the pivot. Here’s both the old and the new code:

// Old code
int pivot = (left + right) / 2;

// New code
int pivot = left + (right - left) / 2;

The difference is whether or not the code can overflow when left and right are very large. Josh Bloch wrote about it back in 2006. A colleague alerted me to this problem shortly after posting, but it’s taken until now to correct it. (I fixed the source repository almost immediately, but deferred writing this addendum.) Why was I not too worried? Because .NET restricts each object to be less than 2GB in size, even in .NET 4.0, even on a 64-bit CLR. As we’ve created an array of integers, one per entry, that means we can only have just under (int.MaxValue / 4) elements. Within those limits, there’s no problem in the original pivot code. However, it’s still worth fixing of course – one never knows when the restriction will be lifted. The CLR team blogged about the issue back in 2005 (when the 64-bit CLR was new) – I haven’t seen any mentions of plans to remove the limitation, but I would imagine it’s discussed periodically.

One oddity about this is that the Array class itself has some API support for large arrays, such as the LongLength property. To be honest, I can’t see large arrays ever being particularly pleasant to work with – what would they return for the normal Length property, for example, or their implementation of IList<T> etc? I suspect we may see support for larger objects before we see support for arrays with more than int.MaxValue elements, but that’s a complete guess.

Reimplementing LINQ to Objects: Part 26c – Optimizing OrderedEnumerable

Part 26b left us with a working implementation of the ordering operators, with two caveats:

  • The sort algorithm used was awful
  • We were performing the key selection on every comparison, instead of once to start with

Today’s post is just going to fix the first bullet – although I’m pretty sure that fixing the second will require changing it again completely.

Choosing a sort algorithm

There are lots of sort algorithms available. In our case, we need the eventual algorithm to:

  • Work on arbitrary pair-based comparisons
  • Be stable
  • Go like the clappers :)
  • (Ideally) allow the first results to be yielded without performing all the sorting work, and without affecting the performance in cases where we do need all the results.

The final bullet is an interesting one to me: it’s far from unheard of to want to get the "top 3" results from an ordered query. In LINQ to Objects we can’t easily tell the Take operator about the OrderBy operator so that it could pass on the information, but we can potentially yield the first results before we’ve sorted everything. (In fact, we could add an extra interface specifically to enable this scenario, but it’s not part of normal LINQ to Objects, and could introduce horrible performance effects with innocent-looking query changes.)
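In other words, the kind of query I have in mind is something like this (names purely illustrative):

// Only three results are ever consumed; ideally we shouldn't have to pay for
// a complete sort of the whole collection just to satisfy this query.
var topThree = people.OrderBy(p => p.Score)
                     .Take(3);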

If we decide to implement sorting in terms of a naturally stable algorithm, that limits the choices significantly. I was rather interested in timsort, and may one day set about implementing it – but it looked far too complicated to introduce just for the sake of Edulinq.

The best bet seemed to be merge sort, which is reasonably easy to implement and has reasonable efficiency too. It requires extra memory and a fair amount of copying, but we can probably cope with that.

We don’t have to use a stable sort, of course. We could easily regard our "key" as the user-specified key plus the original index, and use that index as a final tie-breaker when comparing elements. That gives a stable result while allowing us to use any sorting algorithm we want. This may well be the approach I take eventually – especially as quicksort would allow us to start yielding results early in a fairly simple fashion. For the moment though, I’ll stick with merge sort.
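A sketch of that "key plus index" idea – not what I’m implementing in this post, but quite possibly where I’ll end up – might look something like this:

// Illustrative only: wraps an existing key comparer, using each element's
// original index as a final tie-breaker so that any sort becomes stable.
internal sealed class StableKeyComparer<TKey> : IComparer<KeyValuePair<TKey, int>>
{
    private readonly IComparer<TKey> keyComparer;

    internal StableKeyComparer(IComparer<TKey> keyComparer)
    {
        this.keyComparer = keyComparer;
    }

    public int Compare(KeyValuePair<TKey, int> x, KeyValuePair<TKey, int> y)
    {
        int primaryResult = keyComparer.Compare(x.Key, y.Key);
        return primaryResult != 0 ? primaryResult : x.Value.CompareTo(y.Value);
    }
}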

Preparing for merge sort

Just looking from the algorithm for merge sort, it’s obvious that there will be a good deal of shuffling data around. As we want to make the implementation as fast as possible, that means it makes sense to use arrays to store the data. We don’t need dynamic space allocation (after we’ve read all the data in, anyway) or any of the other features associated with higher-level collections. I’m aware that arrays are considered (somewhat) harmful, but purely for the internals of an algorithm which does so much data access, I believe they’re the most appropriate solution.

We don’t even need our arrays to be the right size – assuming we need to read in all the data before we start processing it (which will be true for this implementation of merge sort, but not for some other algorithms I may consider in the future) it’s fine to use an oversized array as temporary storage – it’s never going to be seen by the users, after all.

We’ve already got code which reads in all the data into a possibly-oversized array though – in the optimized ToArray code. So my first step was to extract out that functionality into a new internal extension method. This has to return a buffer containing all the data and give us an indication of the size. In .NET 4 I could use Tuple to return both pieces of data, but we can also just use an out parameter – I’ve gone for the latter approach at the moment. Here’s the ToBuffer extension method:

internal static TSource[] ToBuffer<TSource>(this IEnumerable<TSource> source, out int count)
{
    // Optimize for ICollection<T>
    ICollection<TSource> collection = source as ICollection<TSource>;
    if (collection != null)
    {
        count = collection.Count;
        TSource[] tmp = new TSource[count];
        collection.CopyTo(tmp, 0);
        return tmp;
    }

    // We’ll have to loop through, creating and copying arrays as we go
    TSource[] ret = new TSource[16];
    int tmpCount = 0;
    foreach (TSource item in source)
    {
        // Need to expand…
        if (tmpCount == ret.Length)
        {
            Array.Resize(ref ret, ret.Length * 2);
        }
        ret[tmpCount++] = item;
    }
    count = tmpCount;
    return ret;
}

Note that I’ve used a local variable to keep track of the count in the loop near the end, only copying it into the output variable just before returning. This is due to a possibly-unfounded performance concern: we don’t know where the variable will actually "live" in storage – and I’d rather not cause some arbitrary page of heap memory to be required all the way through the loop. This is a gross case of micro-optimization without evidence, and I’m tempted to remove it… but I thought I’d at least share my thinking.

This is only an internal API, so I’m trusting callers not to pass me a null "source" reference. It’s possible that it would be a useful operator to expose at some point, but not just now. (If it were public, I would definitely use a local variable in the loop – otherwise callers could get weird effects by passing in a variable which could be changed elsewhere – such as due to side-effects within the loop. That’s a totally avoidable problem, simply by using a local variable. For an internal API, I just need to make sure that I don’t do anything so silly.)

Now ToArray needs to be changed to call ToBuffer, which is straightforward:

public static TSource[] ToArray<TSource>(this IEnumerable<TSource> source)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    int count;
    TSource[] ret = source.ToBuffer(out count);
    // Now create another copy if we have to, in order to get an array of the
    // right size
    if (count != ret.Length)
    {
        Array.Resize(ref ret, count);
    }
    return ret;
}

then we can prepare our OrderedEnumerable.GetEnumerator method for merging:

public IEnumerator<TElement> GetEnumerator()
{
    // First copy the elements into an array: don’t bother with a list, as we
    // want to use arrays for all the swapping around.
    int count;
    TElement[] data = source.ToBuffer(out count);
    TElement[] tmp = new TElement[count];
            
    MergeSort(data, tmp, 0, count - 1);
    for (int i = 0; i < count; i++)
    {
        yield return data[i];
    }
}

The "tmp" array is for use when merging – while there is an in-place merge sort, it’s more complex than the version where the "merge" step merges two sorted lists into a combined sorted list in temporary storage, then copies it back into the original list.

The arguments of 0 and count - 1 indicate that we want to sort the whole list – the parameters to my MergeSort method take the "left" and "right" boundaries of the sublist to sort – both of which are inclusive. Most of the time I’m more used to using exclusive upper bounds, but all the algorithm descriptions I found used inclusive upper bounds – so it made it easier to stick with that than try to "fix" the algorithm to use exclusive upper bounds everywhere. I think it highly unlikely that I’d get it all right without any off-by-one errors :)

Now all we’ve got to do is write an appropriate MergeSort method, and we’re done.

Implementing MergeSort

I won’t go through the details of how a merge sort works – read the wikipedia article for a pretty good description. In brief though, the MergeSort method guarantees that it will leave the specified portion of the input data sorted. It does this by splitting that section in half, and recursively merge sorting each half. It then merges the two halves by walking along two cursors (one from the start of each subsection) finding the smallest element out of the two at each point, copying that element into the temporary array and advancing just that cursor. When it’s finished, the temporary storage will contain the sorted section, and it’s copied back to the "main" array. The recursion has to stop at some point, of course – and in my implementation it stops if the section has fewer than three elements.

Here’s the MergeSort method itself first:

// Note: right is *inclusive*
private void MergeSort(TElement[] data, TElement[] tmp, int left, int right)
{
    if (right > left)
    {
        if (right == left + 1)
        {
            TElement leftElement = data[left];
            TElement rightElement = data[right];
            if (currentComparer.Compare(leftElement, rightElement) > 0)
            {
                data[left] = rightElement;
                data[right] = leftElement;
            }
        }
        else
        {
            int mid = left + (right - left) / 2;
            MergeSort(data, tmp, left, mid);
            MergeSort(data, tmp, mid + 1, right);
            Merge(data, tmp, left, mid + 1, right);
        }
    }
}

The test for "right > left" is part of a vanilla merge sort (if the section either has one element or none, we don’t need to take any action), but I’ve optimized the common case of only two elements. All we need to do is swap the elements – and even then we only need to do so if they’re currently in the wrong order. There’s no point in setting up all the guff of the two cursors – or even have the slight overhead of a method call – for that situation.

Other than that one twist, this is a pretty standard merge sort. Now for the Merge method, which is slightly more complicated (although still reasonably straightforward):

private void Merge(TElement[] data, TElement[] tmp, int left, int mid, int right)
{
    int leftCursor = left;
    int rightCursor = mid;
    int tmpCursor = left;
    TElement leftElement = data[leftCursor];
    TElement rightElement = data[rightCursor];
    // By never merging empty lists, we know we’ll always have valid starting points
    while (true)
    {
        // When equal, use the left element to achieve stability
        if (currentComparer.Compare(leftElement, rightElement) <= 0)
        {
            tmp[tmpCursor++] = leftElement;
            leftCursor++;
            if (leftCursor < mid)
            {
                leftElement = data[leftCursor];
            }
            else
            {
                // Only the right list is still active. Therefore tmpCursor must equal rightCursor,
                // so there’s no point in copying the right list to tmp and back again. Just copy
                // the already-sorted bits back into data.
                Array.Copy(tmp, left, data, left, tmpCursor - left);
                return;
            }
        }
        else
        {
            tmp[tmpCursor++] = rightElement;
            rightCursor++;
            if (rightCursor <= right)
            {
                rightElement = data[rightCursor];
            }
            else
            {
                // Only the left list is still active. Therefore we can copy the remainder of
                // the left list directly to the appropriate place in data, and then copy the
                // appropriate portion of tmp back.
                Array.Copy(data, leftCursor, data, tmpCursor, mid - leftCursor);
                Array.Copy(tmp, left, data, left, tmpCursor - left);
                return;
            }
        }
    }
}

Here, "mid" is the exclusive upper bound of the left subsection, and the inclusive lower bound of the right subsection… whereas "right" is the inclusive upper bound of the right subsection. Again, it’s possible that this is worth tidying up at some point to be more consistent, but it’s not too bad.

This time there’s a little bit more special-casing. We take the approach that whichever sequence runs out first (which we can detect as soon as the "currently advancing" cursor hits its boundary), we can optimize what still has to be copied. If the "left" sequence runs out first, then we know the remainder of the "right" sequence must already be in the correct place – so all we have to do is copy as far as we’ve written with tmpCursor back from the temporary array to the main array.

If the "right" sequence runs out first, then we can copy the rest of the "left" sequence directly into the right place (at the end of the section) and then again copy just what’s needed from the temporary array back to the main array.

This is as fast as I’ve managed to get it so far (without delving into too many of the more complicated optimizations available) – and I’m reasonably pleased with it. I have no doubt that it could be improved significantly, but I didn’t want to spend too much effort on it when I knew I’d be adapting everything for the key projection difficulty anyway.

Testing

I confess I don’t know the best way to test sorting algorithms. I have two sets of tests here:

  • A new project (MergeSortTest) where I actually implemented the sort before integrating it into OrderedEnumerable
  • All my existing OrderBy (etc) tests

The new project also acts as a sort of benchmark – although it’s pretty unscientific, and the key projection issue means the .NET implementation isn’t really comparable with the Edulinq one at the moment. Still, it’s a good indication of very roughly how well the implementation is doing. (It varies, interestingly enough… on my main laptop, it’s about 80% slower than LINQ to Objects; on my netbook it’s only about 5% slower. Odd, eh?) The new project sorts a range of sizes of input data, against a range of domain sizes (so with a small domain but a large size you’re bound to get equal elements – this helps to verify stability). The values which get sorted are actually doubles, but we only sort based on the integer part – so if the input sequence is 1.3, 3.5, 6.3, 3.1 then we should get an output sequence of 1.3, 3.5, 3.1, 6.3 – the 3.5 and 3.1 are in that order due to stability, as they compare equal under the custom comparer. (I’m performing the "integer only" part using a custom comparer, but we could equally have used OrderBy(x => (int) x)).
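
For reference, the "integer part only" comparer can be as simple as the sketch below – although the comparer in the actual MergeSortTest project may differ in the details:

class IntegerPartComparer : IComparer<double>
{
    // Compares doubles by their truncated integer parts only, so 3.5 and 3.1
    // compare as equal - exactly what we want for checking stability.
    public int Compare(double x, double y)
    {
        return ((int) x).CompareTo((int) y);
    }
}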

Conclusion

One problem (temporarily) down, one to go. I’m afraid that the code in part 26d is likely to end up being pretty messy in terms of generics – and even then I’m likely to talk about rather more options than I actually get round to coding.

Still, our simplistic model of OrderedEnumerable has served us well for the time being. Hopefully it’s proved more useful educationally this way – I suspect that if I’d dived into the final code right from the start, we’d all end up with a big headache.

Reimplementing LINQ to Objects: Part 26b – OrderBy{,Descending}/ThenBy{,Descending}

Last time we looked at IOrderedEnumerable<TElement> and I gave an implementation we could use in order to implement the public extension methods within LINQ. I’m still going to do that in this post, but it’s worth mentioning something else that’s coming up in another part (26d) – I’m going to revisit my OrderedEnumerable implementation.

There may be trouble ahead…

A comment on the previous post mentioned how my comparer executes the keySelector on each element every time it makes a comparison. I didn’t think of that as a particularly awful problem, until I thought of this sample query to rank people’s favourite colours:

var query = people.GroupBy(p => p.FavouriteColour)
                  .OrderByDescending(g => g.Count())
                  .Select(g => g.Key);

Eek. Now every time we compare two elements, we have to count everything in a group. Ironically, I believe that counting the items in a group is fast using the LINQ to Objects implementation, but not in mine – something I may fix later on. But with LINQ to Objects, this wouldn’t cause a problem in the first place!

There are ways to make this use an efficient key selector, of course – a simple Select before the OrderByDescending call would do fine… but it would be nicer if it wasn’t a problem in the first place. Basically we want to extract the keys for each element once, and then compare them repeatedly when we need to. This would also allow us to shuffle a sequence using code such as this:

Random rng = new Random(); // Or get it from elsewhere…
var shuffled = collection.OrderBy(x => rng.NextDouble());

I’m not advocating that way of shuffling, admittedly – but it would be nice if it didn’t cause significant problems, which it currently would: the key selector is non-deterministic, so calling it afresh on every comparison gives the comparer inconsistent answers.
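
For completeness, here’s roughly what the "project first" workaround for the favourite-colours query might look like – each group is counted exactly once, before any ordering happens:

var query = people.GroupBy(p => p.FavouriteColour)
                  .Select(g => new { g.Key, Count = g.Count() })
                  .OrderByDescending(x => x.Count)
                  .Select(x => x.Key);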

The interesting thing is that when I’ve finished today’s post, I believe the code will obey all the documented behaviour of LINQ to Objects: there’s nothing in the documentation about how often the key selector will be called. That doesn’t mean it’s a good idea to ignore this problem though, which is why I’ll revisit OrderedEnumerable later. However, that’s going to complicate the code somewhat… so while we’re still getting to grips with how everything hangs together, I’m going to stick to my inefficient implementation.

Meanwhile, back to the actual LINQ operators for the day…

What are they?

OrderBy, OrderByDescending, ThenBy and ThenByDescending all have very similar overloads:

public static IOrderedEnumerable<TSource> OrderBy<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector)

public static IOrderedEnumerable<TSource> OrderBy<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IComparer<TKey> comparer)

public static IOrderedEnumerable<TSource> OrderByDescending<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector)

public static IOrderedEnumerable<TSource> OrderByDescending<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IComparer<TKey> comparer)

public static IOrderedEnumerable<TSource> ThenBy<TSource, TKey>(
    this IOrderedEnumerable<TSource> source,
    Func<TSource, TKey> keySelector)

public static IOrderedEnumerable<TSource> ThenBy<TSource, TKey>(
    this IOrderedEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IComparer<TKey> comparer)

public static IOrderedEnumerable<TSource> ThenByDescending<TSource, TKey>(
    this IOrderedEnumerable<TSource> source,
    Func<TSource, TKey> keySelector)

public static IOrderedEnumerable<TSource> ThenByDescending<TSource, TKey>(
    this IOrderedEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IComparer<TKey> comparer)

They’re all extension methods, but ThenBy/ThenByDescending are extension methods on IOrderedEnumerable<T> instead of IEnumerable<T>.

We’ve already talked about what they do to some extent – each of them returns a sequence which is ordered according to the specified key. However, in terms of details:

  • The source and keySelector parameters can’t be null, and are validated eagerly.
  • The comparer parameter (where provided) can be null, in which case the default comparer for the key type is used.
  • They use deferred execution – the input sequence isn’t read until it has to be.
  • They read and buffer the entire input sequence when the result is iterated. Or rather, they buffer the original input sequence – as I mentioned last time, when a compound ordered sequence (source.OrderBy(…).ThenBy(…).ThenBy(…)) is evaluated, the final query will go straight to the source used for OrderBy, rather than sorting separately for each key.
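
To make that last point concrete, here’s a hypothetical compound ordering (Person, LastName and FirstName are just illustrative names):

// When the query is iterated, "people" is buffered once and sorted using a
// single compound comparison (last name, then first name as a tie-break),
// rather than being sorted by last name and then re-sorted by first name.
var query = people.OrderBy(p => p.LastName)
                  .ThenBy(p => p.FirstName);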

What are we going to test?

I have tests for the following:

  • Deferred execution (using ThrowingEnumerable)
  • Argument validation
  • Ordering stability
  • Simple comparisons
  • Custom comparers
  • Null comparers
  • Ordering of null keys

In all of the tests which don’t go bang, I’m using an anonymous type as the source, with integer "Value" and "Key" properties. I’m ordering using the key, and then selecting the value – like this:

[Test]
public void OrderingIsStable()
{
    var source = new[]
    {
        new { Value = 1, Key = 10 },
        new { Value = 2, Key = 11 },
        new { Value = 3, Key = 11 },
        new { Value = 4, Key = 10 },
    };
    var query = source.OrderBy(x => x.Key)
                      .Select(x => x.Value);
    query.AssertSequenceEqual(1, 4, 2, 3);
}

For ThenBy/ThenByDescending I have multiple key properties so I can test the interaction between the primary and secondary orderings. For custom key comparer tests, I have an AbsoluteValueComparer which simply compares the absolute values of the integers provided.
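
The AbsoluteValueComparer is about as simple as a custom comparer gets – something along these lines (a sketch; the real test class may differ slightly):

private class AbsoluteValueComparer : IComparer<int>
{
    // Orders integers by absolute value, ignoring the sign: -3 and 3 compare as equal.
    public int Compare(int x, int y)
    {
        return Math.Abs(x).CompareTo(Math.Abs(y));
    }
}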

The "Value" property is always presented in ascending order (from 1) to make it easier to keep track of, and the "Key" properties are always significantly larger so we can’t get confused between the two. I originally used strings for the keys in all tests, but then I found out that the default string comparer was culture-sensitive and didn’t behave how I expected it to. (The default string equality comparer uses ordinal comparisons, which are rather less brittle…) I still use strings for the keys in nullity tests, but there I’m specifying the ordinal comparer.

I wouldn’t claim the tests are exhaustive – by the time you’ve considered multiple orderings with possibly equal keys, different comparers and so on, the possibilities are overwhelming. I’m reasonably confident though (particularly after the tests found some embarrassing bugs in the implementation). I don’t think they’re hugely readable either – but I was very keen to keep the value separated from the key, rather than just ordering by "x => x" in tests. If anyone fancies cloning the repository and writing better tests, I’d be happy to merge them :)

What I deliberately don’t have yet is a test for how many times the key selector is executed: I’ll add one before post 26d, so I can prove we’re doing the right thing eventually.
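
Just to give an idea of the shape of such a test, it could look something like this – hypothetical, of course: the eventual test may assert something subtly different, and the current implementation would certainly fail it:

[Test]
public void KeySelectorIsOnlyCalledOncePerElement()
{
    int keySelectorCallCount = 0;
    var source = new[] { 3, 1, 4, 1, 5 };
    // Force the ordering to be fully evaluated. At the moment the key selector
    // is called on every comparison, so this count ends up well over 5.
    source.OrderBy(x => { keySelectorCallCount++; return x; }).ToList();
    Assert.AreEqual(source.Length, keySelectorCallCount);
}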

Let’s implement them!

We’ve got two bits of implementation to do before we can run the tests:

  • The extension methods
  • The GetEnumerator() method of OrderedEnumerable

The extension methods are extremely easy. All of the overloads without comparers simply delegate to the ones with comparers (using Comparer<TKey>.Default) and the remaining methods look like this:

public static IOrderedEnumerable<TSource> OrderBy<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IComparer<TKey> comparer)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    if (keySelector == null)
    {
        throw new ArgumentNullException("keySelector");
    }
    return new OrderedEnumerable<TSource>(source,
        new ProjectionComparer<TSource, TKey>(keySelector, comparer));
}

public static IOrderedEnumerable<TSource> OrderByDescending<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IComparer<TKey> comparer)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    if (keySelector == null)
    {
        throw new ArgumentNullException("keySelector");
    }
    IComparer<TSource> sourceComparer = new ProjectionComparer<TSource, TKey>(keySelector, comparer);
    sourceComparer = new ReverseComparer<TSource>(sourceComparer);
    return new OrderedEnumerable<TSource>(source, sourceComparer);
}

public static IOrderedEnumerable<TSource> ThenBy<TSource, TKey>(
    this IOrderedEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IComparer<TKey> comparer)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    if (keySelector == null)
    {
        throw new ArgumentNullException("keySelector");
    }
    return source.CreateOrderedEnumerable(keySelector, comparer, false);
}

(To get ThenByDescending, just change the name of the method and change the last argument of CreateOrderedEnumerable to true.)

All very easy. I’m pretty sure I’m going to want to change the OrderedEnumerable constructor to accept the key selector and key comparer in the future (in 26d), which will make the above code even simpler. That can wait a bit though.

Now for the sorting part in OrderedEnumerable. Remember that we need a stable sort, so we can’t just delegate to List<T>.Sort – at least, not without a bit of extra fiddling. (We could project to a type which contained the index, and add that onto the end of the comparer as a final tie-breaker.)
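
Just to illustrate that idea (it’s not the route I’m going to take), the "extra fiddling" might look vaguely like this inside GetEnumerator:

// Sketch only: pair each element with its original index, let List<T>.Sort do
// the work, and use the index as a final tie-breaker so that otherwise-equal
// elements keep their original order - which is exactly what stability requires.
var indexed = source.Select((value, index) => new { value, index }).ToList();
indexed.Sort((x, y) =>
{
    int result = currentComparer.Compare(x.value, y.value);
    return result != 0 ? result : x.index.CompareTo(y.index);
});
foreach (var pair in indexed)
{
    yield return pair.value;
}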

For the minute – and I swear it won’t stay like this – here’s the horribly inefficient (but easy to understand) implementation I’ve got:

public IEnumerator<TElement> GetEnumerator()
{
    // This is a truly sucky way of implementing it. It’s the simplest I could think of to start with.
    // We’ll come back to it!
    List<TElement> elements = source.ToList();
    while (elements.Count > 0)
    {
        TElement minElement = elements[0];
        int minIndex = 0;
        for (int i = 1; i < elements.Count; i++)
        {
            if (currentComparer.Compare(elements[i], minElement) < 0)
            {
                minElement = elements[i];
                minIndex = i;
            }
        }
        elements.RemoveAt(minIndex);
        yield return minElement;
    }
}

We simply copy the input to a list (which is something we may well do in the final implementation – we certainly need to suck it all in somehow) and then repeatedly find the minimum element (favouring earlier elements over later ones, in order to achieve stability), removing them as we go. It’s an O(n²) approach, but hey – we’re going for correctness first.

Conclusion

This morning, I was pretty confident this would be an easy and quick post to write. Since then, I’ve found pain in the following items:

  • Calling key selectors only once per element is more important than it might sound at first blush
  • The default sort order for string isn’t what I’d have guessed
  • My (committed!) extension methods were broken, because I hadn’t edited them properly after a cut and paste
  • Writing tests for situations where there are lots of combinations is irritating

So far these have only extended my estimated number of posts for this group of operators to 4 (26a-26d) but who knows what the next few days will bring…