Reimplementing LINQ to Objects: Part 42 – More optimization

A few parts ago, I jotted down a few thoughts on optimization. Three more topics on that general theme have occurred to me, one of them prompted by the comments.

User-directed optimizations

I mentioned in part 40 that for micro-optimization purposes, we could derive a tiny benefit if there were operators which allowed us to turn off potential optimizations – effectively declare in the LINQ query that we believed the input sequence would never be an IList<T> or an ICollection<T>, so it wasn’t worth checking for them. I still believe that level of optimization would be futile.

However, going the other way is entirely possible. Imagine if we could say, "There are probably a lot of items in this collection, and the operations I want to perform on them are independent and thread-safe. Feel free to parallelize them."

That’s exactly what Parallel LINQ gives you, of course. A simple call to AsParallel() somewhere in the query – often at the start, but it doesn’t have to be – enables parallelism. You need to be careful how you use this, of course, which is why it’s opt-in… and it gives you a fair amount of control in terms of degrees of potential parallelism, whether the results are required in the original order and so on.
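
Just to illustrate the shape of that opt-in, here’s a hedged sketch – "items" and "ProcessItem" are hypothetical stand-ins, and this assumes the work really is independent and thread-safe:

var results = items.AsParallel()
                   .AsOrdered()                  // keep the results in the original order
                   .WithDegreeOfParallelism(4)   // cap the potential parallelism
                   .Select(item => ProcessItem(item))
                   .ToList();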

My "TopBy" proposal is similar in a very small way: it gives information relatively early in the query, allowing the subsequent parts (ThenBy clauses) to take account of the extra information provided by the user. On the other hand, the effect is extremely localized – basically just for the sequence of clauses to do with ordering.

Related to the idea of parallelism is the idea of side-effects, and how they affect LINQ to Objects itself.

Side-effects and optimization

The optimizations in LINQ to Objects appear to make some assumptions about side-effects:

  • Iterating over a collection won’t cause any side-effects
  • Predicates may cause side-effects

Without the first point, all kinds of optimizations would effectively be inappropriate. As the simplest example, Count() won’t use an iterator – it will just take the count of the collection. What if this was an odd collection which mutated something during iteration, though? Or what if accessing the Count property itself had side-effects? At that point we’d be violating our principle of not changing observable behaviour by optimizing. Again, the optimizations are basically assuming "sensible" behaviour from collections.

There’s a rather more subtle possible cause of side-effects which I’ve never seen discussed. In some situations – most obviously Skip – an operator can be implemented to move over an iterator for a time without taking each "current" value. This is due to the separation of MoveNext() from Current. What if we were dealing with an iterator which had side-effects only when Current was fetched? It would be easy to write such a sequence – but again, I suspect there’s an implicit assumption that such sequences simply don’t exist, or that it’s reasonable for the behaviour of LINQ operators with respect to them to be left unspecified.
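
To make that concrete, here’s a deliberately contrived (and single-use) sequence whose only visible side-effect occurs when Current is fetched – purely hypothetical, of course:

sealed class CurrentSideEffectSequence : IEnumerable<int>, IEnumerator<int>
{
    private int value = -1;

    public IEnumerator<int> GetEnumerator() { return this; }
    IEnumerator IEnumerable.GetEnumerator() { return this; }

    public bool MoveNext()
    {
        // No visible side-effect when merely moving along the sequence...
        value++;
        return value < 5;
    }

    public int Current
    {
        get
        {
            // ...but fetching the current value logs the access.
            Console.WriteLine("Fetched " + value);
            return value;
        }
    }

    object IEnumerator.Current { get { return Current; } }
    public void Reset() { throw new NotSupportedException(); }
    public void Dispose() { }
}

A Skip implementation which only calls MoveNext() for the skipped elements would never log anything for them; one which fetches Current each time would log all of them. Neither behaviour is documented as the "right" one.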

Predicates, on the other hand, might not be so sensible. Suppose we were computing "sequence.Last(x => 10 / x > 1)" on the sequence { 5, 0, 2 }. Iterating over the sequence forwards, we end up with a DivideByZeroException – whereas if we detected that the sequence was a list, and worked our way backwards from the end, we’d see that 10 / 2 > 1, and return that last element (2) immediately. Of course, exceptions aren’t the only kind of side-effect that a predicate can have: it could mutate other state. However, it’s generally easier to spot that and cry foul of it not being a proper functional predicate than it is to notice the possibility of an exception.
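
Here’s a sketch to make that difference concrete. LastBackwards is a hypothetical list-optimized implementation, not anything in the framework:

public static T LastBackwards<T>(IList<T> list, Func<T, bool> predicate)
{
    for (int i = list.Count - 1; i >= 0; i--)
    {
        if (predicate(list[i]))
        {
            // Found the last match without evaluating the predicate
            // against any earlier elements.
            return list[i];
        }
    }
    throw new InvalidOperationException("No matching element");
}

Given the sequence from the example:

var numbers = new List<int> { 5, 0, 2 };

// Forwards (the real behaviour): evaluates the predicate for 5, then 0,
// at which point 10 / 0 throws DivideByZeroException.
numbers.Last(x => 10 / x > 1);

// Backwards (the hypothetical optimization): sees 10 / 2 > 1 and returns 2
// immediately, never evaluating the predicate for 0.
LastBackwards(numbers, x => 10 / x > 1);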

I believe this is the reason the predicated Last overload isn’t optimized. It would be nice if these assumptions were documented, however.

Assumptions about performance

There’s a final set of assumptions which the common ICollection<T>/IList<T> optimizations have all been making: that using the more "direct" members of the interfaces (specifically Count and the indexer) is more efficient than simply iterating. The interfaces make no such declarations: there’s no requirement that Count has to be O(1), for example. Indeed, it’s not even the case in the BCL. The first time you ask a view returned by SortedSet<T>.GetViewBetween for its count after the underlying set has changed, it has to count the elements again.

I’ve had this problem before, removing items from a HashSet in Java. The problem is that there’s no way of communicating this information in a standardized way. We could use attributes for everything, but it gets very complicated, and I strongly suspect it would be a complete pain to use. Basically, performance is one area where abstractions just don’t hold up – or rather, the abstractions aren’t designed to include performance characteristics.

Even if we knew the complexity of (say) Count, that wouldn’t necessarily help us. Suppose it’s an O(n) operation – that sounds bad, until you discover that for this particular horrible collection, each iteration step is also O(n) for some reason. Or maybe there’s a collection with an O(1) count but a horrible constant value, whereas iterating is really quick per item… so for small values of n, iteration would be faster. Then you’ve got to bear in mind how much processor time is needed trying to work out the fastest approach… it’s all bonkers.

So instead we make these assumptions, and for the most part they’re correct. Just be aware of their presence.

Conclusion

I have reached the conclusion that I’m tired, and need sleep. I might write about Queryable, IQueryable and query expressions next time.

Reimplementing LINQ to Objects: Part 41 – How query expressions work

Okay, first a quick plug. This won’t be in as much detail as chapter 11 of C# in Depth. If you want more, buy a copy. (Until Feb 1st, there’s 43% off it if you buy it from Manning with coupon code j2543.) Admittedly that chapter has to also explain all the basic operators rather than just query expression translations, but that’s a fair chunk of it.

If you’re already familiar with query expressions, don’t expect to discover anything particularly insightful here. However, you might be interested in the cheat sheet at the end, in case you forget some of the syntax occasionally. (I know I do.)

What is this "query expression" of which you speak?

Query expressions are a little nugget of goodness hidden away in section 7.16 of the C# specification. Unlike some language features like generics and dynamic typing, query expressions keep themselves to themselves, and don’t impinge on the rest of the spec. A query expression is a piece of C# which looks a little like a mangled version of SQL. For example:

from person in people
where person.FirstName.StartsWith("J")
orderby person.Age
select person.LastName

It looks somewhat unlike the rest of C#, which is both a blessing and a curse. On the one hand, queries stand out so it’s easy to see they’re queries. On the other hand… they stand out rather than fitting in with the rest of your code. To be honest I haven’t found this to be an issue, but it can take a little getting used to.

Every query expression can be represented in C# code, but the reverse isn’t true. Query expressions only cover a subset of the standard query operators – and only a limited set of the overloads, at that. It’s not unusual to see a query expression followed by a "normal" query operator call, for example:

var list = (from person in people
            where person.FirstName.StartsWith("J")
            orderby person.Age
            select person.LastName)
           .ToList();

So, that’s very roughly what they look like. That’s the sort of thing I’m dealing with in this post. Let’s start dissecting them.

Compiler translations

First it’s worth introducing the general principle of query expressions: they effectively get translated step by step into C# which eventually doesn’t contain any query expressions. To stick with our first example, that ends up being translated into this code:

people.Where(person => person.FirstName.StartsWith("J"))
      .OrderBy(person => person.Age)
      .Select(person => person.LastName)

It’s important to understand that the compiler hasn’t done anything apart from systematic translation to get to this point. In particular, so far we haven’t depended on what "people" is, nor "Where", "OrderBy" or "Select".

Can you tell what this code does yet? You can probably hazard a pretty good guess, but you can’t tell. Is it going to call Edulinq.Enumerable.Select, or System.Linq.Enumerable.Select, or something entirely different? It depends on the context. Heck, "people" could be the name of a type which has a static Where method. Or maybe it could be a reference to a class which has an instance method called Where… the options are open.

Of course, they don’t stay open for long: the compiler takes that expression and compiles it applying all the normal rules. It converts the lambda expression into either a delegate or an expression tree, tries to resolve Where, OrderBy and Select as normal, and life continues. (Don’t worry if you’re not sure about expression trees yet – I’ll come to them in another post.)

The important point is that the query expression translations don’t know about System.Linq. The spec barely mentions IEnumerable<T>, and certainly doesn’t rely on it. The whole thing is pattern-based. If you have an API which provides some or all of the operators used by the pattern, in an appropriate way, you can use query expressions with it. That’s the secret sauce that allows you to use the same syntax for LINQ to Objects, LINQ to SQL, LINQ to Entities, Reactive Extensions, Parallel Extensions and more.
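
To demonstrate just how pattern-based it is, here’s a hypothetical type which supports a simple query expression without any reference to System.Linq at all:

public class DummySource
{
    public DummySource Where(Func<string, bool> predicate)
    {
        Console.WriteLine("Where called");
        return this;
    }

    public DummySource Select<T>(Func<string, T> projection)
    {
        Console.WriteLine("Select called");
        return this;
    }
}

With that in place, this compiles without a single using directive for System.Linq:

// The compiler just needs *some* Where and Select to call:
var query = from x in new DummySource()
            where x.Length > 3
            select x.ToUpper();

The result is useless, of course – the point is simply that the compiler resolves Where and Select exactly as if you’d written the method calls yourself.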

Range variables and the initial "from" clause

The first part of the query to look at is the first "from" clause, at the start of the query. It’s worth mentioning upfront that this is handled somewhat differently to any later "from" clauses – I’ll explain how they’re translated later.

So we have an expression of the form:

from [type] identifier in expression

The "expression" part is just any expression. In most cases there isn’t a type specified, in which case the translated version is simply the expression, but with the compiler remembering the identifier as a range variable. I’ll do my best to explain what range variables are in a minute :)

If there is a type specified, that represents a call to Cast<type>(). So examples of the two translations so far are:

// Query (incomplete)
from x in people

// Translation (+ range variable of "x")
people

// Query (incomplete)
from Person x in people

// Translation (+ range variable of "x")
(people).Cast<Person>()

These aren’t complete query expressions – queries have very precise rules about how they can start and end. They always start with a "from" clause like this, and always end either with a "group by" clause or a "select" clause.

So what’s the point of the range variable? Well, that’s what gets used as the name of the lambda expression parameter used in all the later clauses. Let’s add a select clause to create a complete expression and demonstrate how the variable could be used.

A "select" clause

A select clause is usually translated into a call to Select, using the "body" of the clause as the body of the lambda expression… and the range variable as the parameter. So to expand our previous query, we might have this translation:

// Query
from x in people
select x.Name

// Translation
people.Select(x => x.Name)

That’s all that range variables are used for: to provide placeholders within lambda expressions, effectively. They’re quite unlike normal variables in most senses. It only makes sense to talk about the "value" of a range variable within a particular clause, at a particular point in time while that clause is executing. Their nearest conceptual neighbour is probably the iteration variable declared in a foreach statement, but even that’s not really the same – particularly given the way iteration variables are captured, often to the surprise of developers.

The body part has to be a single expression – you can’t use "statement lambdas" in query expressions. For example, there’s no query expression which would translate to this:

// Can’t express this in a query expression
people.Select(x => { 
                     Console.WriteLine("Got " + x);
                     return x.Name;
                   })

That’s a perfectly valid C# expression; it’s just that there’s no way of expressing it directly as a query expression.

I mentioned that a select clause usually translates into a Select call. There are two cases where it doesn’t:

  • If it’s the sole clause after a secondary "from" clause, or a "group by", "join" or "join … into" clause, the body is used in the translation of that clause
  • If it’s an "identity" projection coming after another clause, it’s removed entirely.

I’ll deal with the first point when we reach the relevant clauses. The second point leads to these translations:

// Query
from x in people
where x.IsAdult
select x

// Translation: Select is removed
people.Where(x => x.IsAdult)

// Query
from x in people
select x

// Translation: Select is *not* removed
people.Select(x => x)

The point of including the "pointless" select in the second translation is to hide the original source sequence; it’s assumed that there’s no need to do this in the first translation as the "Where" call will already have protected the source sufficiently.

The "where" clause

This one’s really simple – especially as we’ve already seen it! A where clause always just translates into a Where call. Sample translation, this time with no funny business removing degenerate query expressions:

// Query
from x in people
where x.IsAdult
select x.Name

// Translation
people.Where(x => x.IsAdult)
      .Select(x => x.Name)

Note how the range variable is propagated through the query.

The "orderby" clause

Here’s a secret: I can never remember offhand whether it’s "orderby" or "order by" – it’s confusing because it really is "group by", but "orderby" is actually just a single word. Of course, Visual Studio gives a pretty unsubtle hint in terms of colouring.

In the simplest form, an orderby clause might look like this:

// Query
from x in people
orderby x.Age
select x.Name

// Translation
people.OrderBy(x => x.Age)
      .Select(x => x.Name)

There are two things which can add complexity though:

  • You can order by multiple expressions, separating them by commas
  • Each expression can be ordered ascending implicitly, ascending explicitly or descending explicitly.

The first sort expression is always translated into OrderBy or OrderByDescending; subsequent ones always become ThenBy or ThenByDescending. It makes no difference whether you explicitly specify "ascending" or not – I’ve very rarely seen it in real queries. Here’s an example putting it all together:

// Query
from x in people
orderby x.Age, x.FirstName descending, x.LastName ascending
select x.LastName

// Translation
people.OrderBy(x => x.Age)
      .ThenByDescending(x => x.FirstName)
      .ThenBy(x => x.LastName)
      .Select(x => x.LastName)

Top tip: don’t use multiple "orderby" clauses consecutively. This query is almost certainly not what you want:

// Don’t do this!
from x in people 
orderby x.Age
orderby x.FirstName
select x.LastName

That will end up sorting by FirstName and then Age, and doing so rather slowly as it has to sort twice.
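
The translation makes the problem obvious:

// Translation: the second OrderBy re-sorts the whole sequence
people.OrderBy(x => x.Age)
      .OrderBy(x => x.FirstName)

Because the sort is stable, elements with equal first names keep their age-sorted order – hence "FirstName and then Age" – but the first sort is almost entirely wasted work. If you want age as a tie-breaker, use a single "orderby x.FirstName, x.Age" clause instead.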

The "group by" clause

Grouping is another alternative to "select" as the final clause in a query. There are two expressions involved: the element selector (what you want to get in each group) and the key selector (how you want the groups to be organized). Unsurprisingly, this uses the GroupBy operator. So you might have a query to group people in families by their last name, with each group containing the first names of the family members:

// Query expression
from x in people 
group x.FirstName by x.LastName

// Translation
people.GroupBy(x => x.LastName, x => x.FirstName)

If the element selector is trivial, it isn’t specified as part of the translation:

// Query expression
from x in people 
group x by x.LastName

// Translation
people.GroupBy(x => x.LastName)

Query continuations

Both "select" and "group by" can be followed by "into identifier". This is known as a query continuation, and it’s really simple. Its translation in the specification isn’t in terms of a method call, but instead it transforms one query expression into another, effectively nesting one query as the source of another. I find that translation tricky to think about, personally… I prefer to think of it as using a temporary variable, like this:

// Original query
var query = from x in people
            select x.Name into y
            orderby y.Length
            select y[0];

// Query continuation translation
var tmp = from x in people
          select x.Name;

var query = from y in tmp
            orderby y.Length
            select y[0];

// Final translation into methods
var query = people.Select(x => x.Name)
                  .OrderBy(y => y.Length)
                  .Select(y => y[0]);

Obviously that final translation could have been expressed in terms of two statements as well… they’d be equivalent. This is why it’s important that LINQ uses deferred execution – you can split up a query as much as you like, and it won’t alter the execution flow. The query wouldn’t actually execute when the value is assigned to "tmp" – it’s just preparing the query for execution.

Transparent identifiers and the "let" clause

The rest of the query expression clauses all introduce an extra range variable in some form or other. This is the part of query expression translation which is hardest to understand, because it affects how any usage of the range variable in the query expression is translated.

We’ll start with probably the simplest of the remaining clauses: the "let" clause. This simply introduces a new range variable based upon a projection. It’s a bit like a "select", but after a "let" clause both the original range variable and the new one are in scope for the rest of the query. They’re typically used to avoid redundant computations, or simply to make the code simpler to read. For example, suppose computing an employee’s tax is a complicated operation, and we want to display a list of employees and the tax they pay, with the higher tax-payer first:

from x in employees
let tax = x.ComputeTax()
orderby tax descending
select x.LastName + ": " + tax

That’s pretty readable, and we’ve managed to avoid computing the tax twice (once for sorting and once for display).

The problem is, both "x" and "tax" are in scope at the same time… so what are we going to pass to the Select method at the end? We need one entity to pass through our query, which knows the value of both "x" and "tax" at any point (after the "let" clause, obviously). This is precisely the point of a transparent identifier. You can think of the above query as being translated into this:

// Translation from "let" clause to another query expression
from x in employees
select new { x, tax = x.ComputeTax() } into z
orderby z.tax descending
select z.x.LastName + ": " + z.tax

// Final translated query
employees.Select(x => new { x, tax = x.ComputeTax() })
         .OrderByDescending(z => z.tax)
         .Select(z => z.x.LastName + ": " + z.tax)

Here "z" is the transparent identifier – which I’ve made somewhat more opaque by giving it a name. In the specification, the query translations are performed in terms of "*" – which clearly isn’t a valid identifier, but which stands in for the transparent one.

The good news about transparent identifiers is that most of the time you don’t need to think of them at all. They simply let you have multiple range variables in scope at the same time. I find myself only bothering to think about them explicitly when I’m trying to work out the full translation of a query expression which uses them. It’s worth knowing about them to avoid being stumped by the concept of (say) a select clause being able to use multiple range variables, but that’s all.

Now that we’ve got the basic concept, we can move onto the final few clauses.

Secondary "from" clauses

We’ve seen that the introductory "from" clause isn’t actually translated into a method call, but any subsequent ones are. The syntax is still the same, but the translation uses SelectMany. In many cases this is used just like a cross-join (Cartesian product) but it’s more flexible than that, as the "inner" sequence introduced by the secondary "from" clause can depend on the current value from the "outer" sequence. Here’s an example of that, with the call to SelectMany in the translation:

// Query expression
from parent in adults
from child in parent.Children
where child.Gender == Gender.Male
select child.Name + " is a son of " + parent.Name

// Translation (using z for the transparent identifier)
adults.SelectMany(parent => parent.Children,
                  (parent, child) => new { parent, child })
      .Where(z => z.child.Gender == Gender.Male)
      .Select(z => z.child.Name + " is a son of " + z.parent.Name);

Again we can see the effect of the transparent identifier – an anonymous type is introduced to propagate the { parent, child } tuple through the rest of the query.

There’s a special case, however – if "the rest of the query" is just a "select" clause, we don’t need the anonymous type. We can just apply the projection directly in the SelectMany call. Here’s a similar example, but this time without the "where" clause:

// Query expression
from parent in adults
from child in parent.Children
select child.Name + " is a child of " + parent.Name

// Translation (using z for the transparent identifier)
adults.SelectMany(parent => parent.Children,
                  (parent, child) => child.Name + " is a child of " + parent.Name)

This same trick is used in GroupJoin and Join, but I won’t go into the details there. It’s simpler to just provide examples which use the shortcut, instead of including unnecessary extra clauses just to force the transparent identifier to appear in the translation.

Note that just like the introductory "from" clause, you can specify a type for the range variable, which forces a call to "Cast<>".

Simple "join" clauses (no "into")

A "join" clause without an "into" part corresponds to a call to the Join method, which represents an inner equijoin. In some ways this is like an extra "from" clause with a "where" clause to provide the relevant filtering, but there’s a significant difference: while the "from" clause (and SelectMany) allow you to project each element in the outer sequence to an inner sequence, in Join you merely provide the inner sequence directly, once. You also have to specify the two key selectors – one for the outer sequence, and one for the inner sequence. The general syntax is:

join identifier in inner-sequence on outer-key-selector equals inner-key-selector

The identifier names the extra range variable introduced. Here’s an example including the translation:

// Query expression
from customer in customers
join order in orders on customer.Id equals order.CustomerId
select customer.Name + ": " + order.Price

// Translation
customers.Join(orders,
               customer => customer.Id,
               order => order.CustomerId,
               (customer, order) => customer.Name + ": " + order.Price)

Note how if you put the key selectors the wrong way round, it’s highly unlikely that the result will compile – the lambda expression for the outer sequence doesn’t "know about" the inner sequence element, and vice versa. The C# compiler is even nice enough to guess the probable cause, and suggest the fix.

Group joins – "join … into"

Group joins look exactly the same as inner joins, except they have an extra "into identifier" part at the end. Again, this introduces an extra range variable – but it’s the identifier after the "into" which ends up in scope, not the one after "join"; that one is only used in the key selector. This is easier to see when we look at a sample translation:

// Query expression
from customer in customers
join order in orders on customer.Id equals order.CustomerId into customerOrders
select customer.Name + ": " + customerOrders.Count()

// Translation
customers.GroupJoin(orders,
                    customer => customer.Id,
                    order => order.CustomerId,
                    (customer, customerOrders) => customer.Name + ": " + customerOrders.Count())

If we had tried to refer to "order" in the select clause, the result would have been an error: it’s not in scope any more. Note that unlike "select … into" and "group … into", this is not a query continuation: it introduces a new range variable, but all the previous range variables are still in scope.

That’s it! That’s all the translations that the C# compiler supports. VB’s query expressions are rather richer – but I suspect that’s at least partly because it’s more painful to write the "dot notation" syntax in VB, as the lambda expression syntax isn’t as nice as C#’s.

Translation cheat sheet

I thought it would be useful to produce a short table of the kinds of clauses supported in query expressions, with the translation used by the C# compiler. The translation is given assuming a single range variable named "x" is in scope. I haven’t given the alternative options where transparent identifiers are introduced – this table isn’t meant to be a replacement for all the information above! (Likewise this doesn’t mention the optimizations for degenerate query expressions or "identity projection" groupings.)

Query expression clause → Translation

  • First "from [type] x in sequence"
    → Just "sequence" or "sequence.Cast<type>()", but with the introduction of a range variable

  • Subsequent "from [type] y in projection"
    → SelectMany(x => projection, (x, y) => new { x, y })
    or SelectMany(x => projection.Cast<type>(), (x, y) => new { x, y })

  • "where predicate"
    → Where(x => predicate)

  • "select projection"
    → Select(x => projection)

  • "let y = projection"
    → Select(x => new { x, y = projection })

  • "orderby o1, o2 ascending, o3 descending"
    (each ordering may have ascending or descending specified explicitly; the default is ascending)
    → OrderBy(x => o1)
      .ThenBy(x => o2)
      .ThenByDescending(x => o3)

  • "group projection by key-selector"
    → GroupBy(x => key-selector, x => projection)

  • "join y in inner-sequence on outer-key-selector equals inner-key-selector"
    → Join(inner-sequence,
           x => outer-key-selector,
           y => inner-key-selector,
           (x, y) => new { x, y })

  • "join y in inner-sequence on outer-key-selector equals inner-key-selector into z"
    → GroupJoin(inner-sequence,
                x => outer-key-selector,
                y => inner-key-selector,
                (x, z) => new { x, z })

  • "query1 into y query2"
    → Translated in terms of a new query expression: from y in (query1) query2

Conclusion

Hopefully that’s made a certain amount of sense out of a fairly complicated topic. I find it’s one of those "aha!" things – at some point it clicks, and then seems reasonably simple (aside from transparent identifiers, perhaps). Until that time, query expressions can be a bit magical.

As an aside, I have a sneaking suspicion that one of my first blog posts consisted of my initial impressions of LINQ, written in a plane on the way to the MVP conference in Seattle in September 2005. I would check, but I’m finishing this post in another plane, this time on the way to San Francisco. I think I’d have been somewhat surprised to be told in 2005 that I’d still be writing blog posts about LINQ over five years later. Mind you, I can think of any number of things which have happened in the intervening years which would have astonished me to about the same degree.

Next time: some more thoughts on optimization. Oh, and I’m likely to update my wishlist of extra operators as well, but within the existing post.

Reimplementing LINQ to Objects: Part 40 – Optimization

I’m not an expert in optimization, and most importantly I don’t have any real-world benchmarks to support this post, so please take it with a pinch of salt. That said, let’s dive into what optimizations are available in LINQ to Objects.

What do we mean by optimization?

Just as we think of refactoring as changing the internal structure of code without changing its externally visible behaviour, optimization is the art/craft/science/voodoo of changing the performance of code without changing its externally visible behaviour. Sort of.

This requires two definitions: "performance" and "externally visible behaviour". Neither is as simple as it sounds. In almost all cases, performance is a trade-off, whether speed vs memory, throughput vs latency, big-O complexity vs the factors within that complexity bound, or best case vs worst case.

In LINQ to Objects the "best case vs worst case" is the balance which we need to consider most often: in the cases where we can make a saving, how significant is that saving? How much does it cost in every case to make a saving some of the time? How often do we actually take the fast path?

Externally visible behaviour is even harder to pin down, because we definitely don’t mean all externally visible behaviour. Almost all the optimizations in Edulinq are visible if you work hard enough – indeed, that’s how I have unit tests for them. I can test that Count() uses the Count property of an ICollection<T> instead of iterating over it by creating an ICollection<T> implementation which works with Count but throws an exception if you try to iterate over it. I don’t think we care about that sort of change to externally visible behaviour.

What we really mean is, "If we use the code in a sensible way, will we get the same results with the optimized code as we would without the optimization?" Much better. Nothing woolly about the term "sensible" at all, is there? More realistically, we could talk about a system where every type adheres to the contracts of every interface it implements – that would at least get rid of the examples used for unit testing. Still, even the performance of a system is externally visible… it’s easy to tell the difference between an implementation of Count() which is optimized and one which isn’t, if you’ve got a list of 10 million items.

How can we optimize in LINQ to Objects?

Effectively we have one technique for significant optimization in LINQ to Objects: finding out that a sequence implements a more capable interface than IEnumerable<T>, and then using that interface. (Or in the case of my optimization for HashSet<T> and Contains, using a concrete type in the same way.) Count() is the most obvious example of this: if the sequence implements ICollection or ICollection<T>, then we can use the Count property on that interface and we’re done. No need to iterate at all.
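
A minimal sketch of that optimization in the style of Edulinq (not the exact code of any real implementation):

public static int Count<TSource>(this IEnumerable<TSource> source)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    ICollection<TSource> genericCollection = source as ICollection<TSource>;
    if (genericCollection != null)
    {
        return genericCollection.Count;   // O(1), if the collection is "sensible"
    }
    ICollection nonGenericCollection = source as ICollection;
    if (nonGenericCollection != null)
    {
        return nonGenericCollection.Count;
    }
    // Fall back to counting by iterating: O(n)
    checked
    {
        int count = 0;
        using (IEnumerator<TSource> iterator = source.GetEnumerator())
        {
            while (iterator.MoveNext())
            {
                count++;
            }
        }
        return count;
    }
}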

These are generally good optimizations because they allow us to transform an O(n) computation into an O(1) computation. That’s a pretty big win when it’s applicable, and the cost of checking is reasonably small. So long as we hit the right type once every so often – and particularly if in those cases the sequences are long – then it’s a net gain. The performance characteristics are likely to be reasonably fixed here for any one program – it’s not like we’ll sometimes win and sometimes lose for a specific query… it’s that for some queries we win and some we won’t. If we were micro-optimizing, we might want a way of calling a "non-optimized" version which didn’t even bother trying the optimization, because we know it will always fail. I would regard such an approach as a colossal waste of effort in the majority of cases.

Some optimizations are slightly less obvious, particularly because they don’t offer a change in the big-O complexity, but can still make a significant difference. Take ToArray, for example. If we know the sequence is an IList<T> we can construct an array of exactly the right size and ask the list to copy the elements into it. Chances are that copy can be very efficient indeed, basically copying a whole block of bits from one place to another – and we know we won’t need any resizing. Compare that with building up a buffer, resizing periodically including copying all the elements we’ve already discovered. Every part of that process is going to be slower, but they’re both O(n) operations really. This is a good example of where big-O notation doesn’t tell the whole story. Again, the optimization is almost certainly a good one to make.
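
Again, a hedged sketch – the fallback here is deliberately naive, whereas real implementations grow their own buffer:

public static TSource[] ToArray<TSource>(this IEnumerable<TSource> source)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    IList<TSource> list = source as IList<TSource>;
    if (list != null)
    {
        // Exactly the right size, and typically a single block copy.
        TSource[] ret = new TSource[list.Count];
        list.CopyTo(ret, 0);
        return ret;
    }
    // Buffer and resize as we go: still O(n), but with a much larger
    // constant factor.
    return new List<TSource>(source).ToArray();
}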

Then there are distinctly dodgy optimizations which can make a difference, but are unlikely to apply. My optimization for ElementAt and ElementAtOrDefault comes into play here. It’s fine to check whether an object implements IList<T>, and use the indexer if so. That’s an obvious win. But I have an extra optimization to exit quickly if we can find out that the given index is out of the bounds of the sequence. Unfortunately that optimization is only useful when:

  • The sequence implements ICollection<T> or ICollection (but remember it has to implement IEnumerable<T> too – and there aren’t many collections which implement the non-generic ICollection and the generic IEnumerable<T> without also implementing ICollection<T>)
  • The sequence doesn’t implement IList<T> (which gets rid of almost all implementations of ICollection<T>)
  • The given index is actually greater than or equal to the size of the collection

All that comes at the cost of a couple of type checks… not a great cost, and we do potentially save an O(n) check for being given an index out of the bounds of the collection… but how often are we really going to make that win? This is where I’d love to have something like Dapper, but applied to LINQ to Objects and running in a significant number of real-world projects, just logging in as light a way as possible how often we win, how often we lose, and how big the benefit is.
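
For reference, here’s a simplified sketch of ElementAt with that early exit (Edulinq-flavoured; the real code also considers the non-generic ICollection):

public static TSource ElementAt<TSource>(this IEnumerable<TSource> source, int index)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    IList<TSource> list = source as IList<TSource>;
    if (list != null)
    {
        return list[index];   // the indexer does its own range checking
    }
    if (index < 0)
    {
        throw new ArgumentOutOfRangeException("index");
    }
    ICollection<TSource> collection = source as ICollection<TSource>;
    if (collection != null && index >= collection.Count)
    {
        // The dodgy optimization: fail fast rather than iterating all the
        // way through just to discover the index is too big.
        throw new ArgumentOutOfRangeException("index");
    }
    using (IEnumerator<TSource> iterator = source.GetEnumerator())
    {
        while (iterator.MoveNext())
        {
            if (index == 0)
            {
                return iterator.Current;
            }
            index--;
        }
    }
    throw new ArgumentOutOfRangeException("index");
}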

Finally, we come to the optimizations which don’t make sense to me… such as the optimization for First in both Mono and LinqBridge. Both of these projects check whether the sequence is a list, so that they check the count and then use the indexer to fetch item 0 instead of calling GetEnumerator()/MoveNext()/Current. Now yes, there’s a chance this avoids creating an extra object (although not always, as we’ve seen before) – but they’re both O(1) operations which are likely to be darned fast. At this point not only is the payback very small (if it even exists) but the whole operation is likely to be so fast that the tiny check for whether the object implements IList<T> is likely to become more significant. Oh, and then there’s the extra code complexity – yes, that’s only relevant to the implementers, but I’d personally rather they spent their time on other things (like getting OrderByDescending to work properly… smirk). In other words, I think this is a bad target for optimization. At some point I’ll try to do a quick analysis of just how often the collection has to implement IList<T> in order for it to be worth doing this – and whether the improvement is even measurable.

Of course there are other micro-optimizations available. When we don’t need to fetch the current item (e.g. when skipping over items) let’s just call MoveNext() instead of also assigning the return value of a property to a variable. I’ve done that in various places in Edulinq – not as an optimization strategy (I suspect it makes no significant difference) but for readability: to make it clearer to the reader that we’re just moving along the iterator, not examining the contents as we go.

The only other piece of optimization I think I’ve performed in Edulinq is the "yield the first results before sorting the rest" part of my quicksort implementation. I’m reasonably proud of that, at least conceptually. I don’t think it really fits into any other bucket – it’s just a matter of thinking about what we really need and when, deferring work just in case we never need to do it.

What can we not optimize in LINQ to Objects?

I’ve found a few optimizations in both Edulinq and other implementations which I believe to be invalid.

Here’s an example I happened to look at just this morning, when reviewing the code for Skip:

var list = source as IList<TSource>;
if (list != null)
{
    count = Math.Max(count, 0);
    // Note that "count" is the count of items to skip
    for (int index = count; index < list.Count; index++)
    {
        yield return list[index];
    }
    yield break;
}

If our sequence is a list, we can just skip straight to the right part of it and yield the items one at a time. That sounds great, but what if the list changes (or is even truncated!) while we’re iterating over it? An implementation working with the simple iterator would usually throw an exception, as the change would invalidate the iterator. This is definitely a behavioural change. When I first wrote about Skip, I included this as a "possible" optimization – and actually turned it on in the Edulinq source code. I now believe it to be a mistake, and have removed it completely.

Another example is Reverse, and how it should behave. The documentation is fairly unclear, but when I ran the tests, the Mono implementation used an optimization whereby if the sequence is a list, it will just return items from the tail end using the indexer. (This has now been fixed – the Mono team is quick like that!) Again, that means that changes made to the list while iterating will be reflected in the reversed sequence. I believe the documentation for Reverse ought to be clear that:

  • Execution is deferred: the input sequence isn’t read when the method is called.
  • When the result sequence is first read by the caller, a snapshot is taken, and that’s what’s used to return the data.
  • If the result sequence is read more than once (i.e. GetEnumerator is called more than once) then a new snapshot is created each time – so changes to the input sequence between calls to GetEnumerator on the result sequence will be observed.

Now this is still not as precise as it might be in terms of what "reading" a sequence entails – in particular, a simple implementation of Reverse (as per Edulinq) will actually take the snapshot on the first call to MoveNext() on the iterator returned by GetEnumerator() – but that’s probably not too bad. The snapshotting behaviour itself is important though, and should be made explicit in my opinion.
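
Here’s what that snapshotting behaviour looks like in practice (AsEnumerable is needed to pick the extension method rather than List<T>’s own void Reverse method):

var list = new List<int> { 1, 2, 3 };
IEnumerable<int> reversed = list.AsEnumerable().Reverse();   // deferred: no snapshot yet

list.Add(4);

// The snapshot is taken when iteration begins, so this prints 4 3 2 1.
foreach (int item in reversed)
{
    Console.Write(item + " ");
}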

The problem with both of these "optimizations" is arguably that they’re applying list-based optimizations within an iterator block used for deferred execution. Optimizing for lists either upfront at the point of the initial method call or within an immediate execution operator (Count, ToList etc) is fine, because we assume the sequence won’t change during the course of the method’s execution. We can’t make that assumption with an iterator block, because the flow of the code is very different: our code is visited repeatedly based on the caller’s use of MoveNext().

Sequence identity

Another aspect of behaviour which isn’t well-specified is that of identity. When is it valid for an operator to return the input sequence itself as the result sequence?

In the Microsoft implementation, this can occur in two operators: AsEnumerable (which always returns the input sequence reference, even if it’s null) and Cast (which returns the original reference only if it actually implements IEnumerable<TResult>).

In Edulinq, I have two other operators which can return the input sequence: OfType (only if the original reference implements IEnumerable<TResult> and TResult is a non-nullable value type) and Skip (if you provide a count which is zero or negative). Are these valid optimizations? Let’s think about why we might not want them to be…

If you’re returning a sequence from one layer of your code to another, you usually want that sequence to be viewed only as a sequence. In particular, if it’s backed by a List<T>, you don’t want callers casting to List<T> and modifying the list. With any operator implemented by an iterator block, that’s fine – the object returned from the operator has no accessible reference to its input, and the type itself only implements IEnumerable<T> (and IEnumerator<T>, and IDisposable, etc – but not IList<T>). It’s not so good if the operator decides it’s okay to return the original reference.

The C# language specification refers to this in the section about query expression translation: a no-op projection at the end of a query can be omitted if and only if there are other operators in the query. So a query expression of "from foo in bar select foo" will translate to "bar.Select(foo => foo)" but if we had a "where" clause in the query, the Select call would be removed. It’s worth noting that the call to "Cast" generated when you explicitly specify the type of a range variable is not enough to prevent the "no-op" projection from being generated… it’s almost as if the C# team "knows" that Cast can leak sequence identity whereas Where can’t.

Personally I think that the "hiding" of the input sequence should be guaranteed where it makes sense to do so, and explicitly not guaranteed otherwise. We could also add an operator called something like "HideIdentity" which would simply (and unconditionally) add an extra iterator block into the pipeline. That way library authors wouldn’t have to guess, and would have a clear way of expressing their intention. Using Select(x => x) or Skip(0) is not clear, and in the case of Skip it would even be pointless when using Edulinq.
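
Such an operator would be trivial to write. A sketch, under the obvious caveat that it isn’t part of any shipping implementation:

public static IEnumerable<TSource> HideIdentity<TSource>(
    this IEnumerable<TSource> source)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    return HideIdentityImpl(source);
}

private static IEnumerable<TSource> HideIdentityImpl<TSource>(
    IEnumerable<TSource> source)
{
    // The iterator block is the whole point: the object returned to the
    // caller has no public route back to the original sequence.
    foreach (TSource item in source)
    {
        yield return item;
    }
}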

As for whether my optimizations are valid – that’s up for debate, really. It seems hard to justify why leaking sequence identity would be okay for Cast but not okay for OfType, whereas I think there’s a better case for claiming that Skip should always hide sequence identity.

The Contains issue…

If you remember, I have a disagreement around what Contains should do when you don’t provide an equality comparer, and when the sequence implements ICollection<T>. I believe it should be consistent with the rest of LINQ to Objects, which always uses the default equality comparer for the element type when it needs one but the user hasn’t specified one. Everyone else (Microsoft, Mono, LinqBridge) has gone with delegating to the collection’s implementation of ICollection<T>.Contains. That plays well in terms of consistency of what happens if you call Contains on that object, so that it doesn’t matter what the compile-time type is. That’s a debate to go into in another post, but I just want to point out that this is not an example of optimization. In some cases it may be faster (notably for HashSet<T>) but it stands a very good chance of changing the behaviour. There is absolutely nothing to suggest that the equality comparer used by ICollection<T> should be the default one for the type – and in some cases it definitely isn’t.
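
A quick illustration of why this is a question of correctness rather than speed (AsEnumerable forces the extension method rather than HashSet<T>’s own Contains):

var names = new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "Jon" };

// Delegating to ICollection<T>.Contains uses the set's case-insensitive
// comparer, so this returns true; using EqualityComparer<string>.Default,
// as Edulinq does, it returns false.
bool found = names.AsEnumerable().Contains("JON");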

It’s therefore a matter of what result we want to get, not how to get that result faster. It’s correctness, not optimization – but both the LinqBridge and Mono tests which fail for Edulinq are called "Contains_CollectionOptimization_ReturnsTrueWithoutEnumerating" – and I think that shows a mistaken way of thinking about this.

Can we go further?

I’ve been considering a couple of optimizations which I believe to be perfectly legitimate, but which none of the implementations I’ve seen have used. One reason I haven’t implemented them myself yet is that they would reduce the effectiveness of all my unit tests. You see, I’ve generally used Enumerable.Range as a good way of testing a non-list-based sequence… but what’s to stop Range and Repeat from returning IList<T> implementations?

All the non-mutating members are easy to implement, and we can just throw exceptions from the mutating members (as other read-only collections do).

Would this be more efficient? Well yes, if you ever performed a Count(), ElementAt(), ToArray(), ToList() etc operation on a range or a repeated element… but how often is that going to happen? I suspect it’s pretty rare – probably rare enough not to make it worth my time, particularly when you then consider all the tests that would have to be rewritten to use something other than Range when I wanted a non-list sequence…

Conclusion

Surprise, surprise – doing optimization well is difficult. When it’s obvious what can be done, it’s not obvious what should be done… and sometimes it’s not even obvious what is valid in the first place.

Note that none of this has really talked about data structures and algorithms. I looked at some options when implementing ordering, and I’m still thinking about the best approach for implementing TopBy (probably either a heap or a self-balancing tree – something which could take advantage of the size being constant would be nice) – but in general the optimizations here haven’t required any cunning knowledge of computer science. That’s quite a good thing, because it’s many years since I’ve studied CS seriously…

I suspect that with this post more than almost any other, I’m likely to want to add extra items in the future (or amend mistakes which reveal my incompetence). Watch this space.

Next up, I think it would be worth revisiting query expressions from scratch. Anyone who’s read C# in Depth or has followed this blog for long enough is likely to be able to skip it, but I think the series would be incomplete without a quick dive into the compiler translations involved.

Reimplementing LINQ to Objects: Part 39 – Comparing implementations

While implementing Edulinq, I only focused on two implementations: .NET 4.0 and Edulinq. However, I was aware that there were other implementations available, notably LinqBridge and the one which comes with Mono. Obviously it’s interesting to see how other implementations behave, so I’ve now made a few changes in order to make the test code run in these different environments.

The test environments

I’m using Mono 2.8 (I can’t remember the minor version number offhand) but I tend to think of it as "Mono 3.5" or "Mono 4.0" depending on which runtime I’m using and which base libraries I’m compiling against, to correspond with the .NET versions. Both runtimes ship as part of Mono 2.8. I will use these version numbers for this post, and ask forgiveness for my lack of precision: whenever you see "Mono 3.5" please just think "Mono 2.8 running against the 2.0 runtime, possibly using some of the class libraries normally associated with .NET 3.5".

LinqBridge is a bit like Edulinq – a clean room implementation of LINQ to Objects, but built against .NET 2.0. It contains its own Func delegate declarations and its own version of ExtensionAttribute for extension methods. In my experience this makes it difficult to use with the "real" .NET 3.5, so my build targets .NET 2.0 when running against LinqBridge. This means that tests using HashSet had to be disabled. The version of LinqBridge I’m running against is 1.2 – the latest binary available on the web site. This has AsEnumerable as a plain static method rather than an extension method; the code has been fixed in source control, but I wanted to run against a prebuilt binary, so I’ve just disabled my own AsEnumerable tests for LinqBridge. Likewise the tests for Zip are disabled both for LinqBridge and the "Mono 3.5" tests as Zip was only introduced in .NET 4.

The other issue of not having .NET 4 available in the tests is that the string.Join<T>(string, IEnumerable<T>) overload is unavailable – something I’d used quite a lot in the test code. I’ve created a new static class called "StringEx" and replaced string.Join with StringEx.Join everywhere.
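
The helper only needs to do one simple thing – this is my assumption of its rough shape rather than the actual Edulinq test code:

public static class StringEx
{
    public static string Join<T>(string separator, IEnumerable<T> values)
    {
        if (values == null)
        {
            throw new ArgumentNullException("values");
        }
        StringBuilder builder = new StringBuilder();
        bool first = true;
        foreach (T value in values)
        {
            if (!first)
            {
                builder.Append(separator);
            }
            builder.Append(value);
            first = false;
        }
        return builder.ToString();
    }
}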

There are batch files under a new "testing" directory which will build and run:

  • Microsoft’s LINQ to Objects and Edulinq under .NET
  • LinqBridge, Mono 3.5’s LINQ to Objects and Edulinq under Mono 3.5
  • Mono 4.0’s LINQ to Objects and Edulinq under Mono 4.0

Although I have LinqBridge running under .NET 2.0 in Visual Studio, it’s a bit of a pain building the tests from a batch file (at least without just calling msbuild). The failures running under Mono 3.5 are the same as those running under .NET 2.0 as far as I can tell, so I’m not too worried.

Note that while I have built the Mono tests under both the 3.5 and 4.0 profiles, the results were the same other than due to generic variance, so I’ve only included the results of the 4.0 profile below.

What do the tests cover?

Don’t forget that the Edulinq tests were written in the spirit of investigation. They cover aspects of LINQ’s behaviour which are not guaranteed, both in terms of optimization and simple correctness of behaviour. I have included a test which demonstrates the "issue" with calling Contains on an ICollection<T> which uses a non-default equality comparer, as well as the known issue with OrderByDescending using a comparer which returns int.MinValue. There are optimizations which are present in Edulinq but not in LINQ to Objects, and I have tests for those, too.

The tests which fail against Microsoft’s implementation (for known reasons) are normally marked with an [Ignore] attribute to prevent them from alarming me unduly during development. NUnit categories would make more sense here, but I don’t believe ReSharper supports them, and that’s the way I run the tests normally. Likewise the tests which take a very long time (such as counting more than int.MaxValue elements) are normally suppressed.

In order to truly run all my tests, I now have a horrible hack using conditional compilation: if the ALL_TESTS preprocessor symbol is defined, I build my own IgnoreAttribute class in the Edulinq.Tests namespace, which effectively takes precedence over the NUnit one… so NUnit will ignore the [Ignore], so to speak. Frankly all this conditional compilation is pretty horrible, and I wouldn’t use it for a "real" project, but this is a slightly unusual situation.
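
The hack looks something like this – my reconstruction of the shape described, not the exact Edulinq code:

#if ALL_TESTS
namespace Edulinq.Tests
{
    // Because this lives in the same namespace as the tests, it wins over
    // NUnit's IgnoreAttribute during name lookup – so [Ignore] becomes a no-op.
    internal class IgnoreAttribute : System.Attribute
    {
        public IgnoreAttribute() { }
        public IgnoreAttribute(string reason) { }
    }
}
#endif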

EDIT: It turns out that ReSharper does support categories. I’m not sure how far that support goes yet, but at the very least there’s "Group by categories" available. I may go through all my tests and apply a category to each one: optimization, execution mode, time-consuming etc. We’ll see whether I can find the energy for that :)

So, let’s have a look at what the test results are…

Edulinq

Unsurprisingly, Edulinq passes all its own tests, with the minor exception of CastTest.OriginalSourceReturnedDueToGenericCovariance running under Mono 3.5, which doesn’t include covariance. Arguably this test should be conditionalised to not even run in that situation, as it’s not expected to work.

Microsoft’s LINQ to Objects

8 failures, all expected:

  • Contains delegates to the ICollection<T>.Contains implementation if it exists, rather than using the default comparer for the type. This is a design and documentation issue which I’ve discussed in more detail in the Contains part of this series.
  • Optimization: ElementAt and ElementAtOrDefault don’t validate the specified index eagerly when the input sequence implements ICollection<T> but not IList<T>.
  • Optimization: OfType always uses an intermediate iterator even when the input sequence already implements IEnumerable<T> and T is a non-nullable value type.
  • Optimization: SequenceEqual doesn’t compare the counts of the sequences eagerly even when both sequences implement ICollection<T>
  • Correctness: OrderByDescending doesn’t work if you use a key comparer which returns int.MinValue
  • Consistency: Single and SingleOrDefault (with a predicate) don’t throw InvalidOperationException as soon as they encounter a second element matching the predicate; the predicate-less overloads do throw as soon as they see a second element.

All of these have been discussed already, so I won’t go into them now.

LinqBridge

LinqBridge had a total of 33 failures. I haven’t looked into them in detail, but just going from the test output I’ve broken them down into the following broad categories:

  • Optimization:
    • Cast never returns the original source, presumably always introducing an intermediate iterator.
    • All three of Microsoft’s "missed opportunities" listed above are also missed in LinqBridge
  • Use of input sequences:
    • Except and Intersect appear to read the first sequence first (possibly completely?) and then the second sequence. Edulinq and LINQ to Objects read the second sequence completely and then stream the first sequence. This behaviour is undocumented.
    • Join, GroupBy and GroupJoin appear not to be deferred at all. If I’m right, this is a definite bug.
  • Aggregation accuracy: both Average and Sum over an IEnumerable<float> appear to use a float accumulator instead of a double. This is probably worth fixing for the sake of both range and accuracy, but isn’t specified in the documentation.
  • OrderBy (etc) appears to apply the key selector multiple times while sorting. The behaviour here isn’t documented, but as I mentioned before, it could produce performance issues unnecessarily.
  • Exceptions:
    • ToDictionary should throw an exception if you give it duplicate keys; it appears not to – at least when a custom comparer is used. (It’s possible it’s just not passing the comparer along.)
    • The generic Max and Min methods don’t return the null value for the element type when that type is nullable. Instead, they throw an exception – which is the normal behaviour if the element type is non-nullable. This behaviour isn’t well documented, but is consistent with the behaviour of the non-generic overloads. See the Min/Max post for more details.
  • General bugs:
    • The generic form of Min/Max appears not to ignore null values when the element type is nullable.
    • OrderByDescending appears to be broken in the same way as Microsoft’s implementation
    • Range appears to be broken around its boundary testing.
    • Join, GroupJoin, GroupBy and ToLookup break when presented with null keys

Mono 4.0 (and 3.5, effectively)

Mono failed 18 of the tests. There are fewer definite bugs than in LinqBridge, but it’s definitely not perfect. Here’s the breakdown:

  • Optimization:
    • Mono misses the same three opportunities that LinqBridge and Microsoft miss.
    • Contains(item) delegates to ICollection<T> when it’s implemented, just like in the Microsoft implementation. (I assume the authors would call this an "optimization", hence its location in this section.) I believe that LinqBridge has the same behaviour, but that test didn’t run in the LinqBridge configuration as it uses HashSet.
  • Average/Sum accumulator types:
    • Mono appears to use float when working with float values, leading to more accumulated error than is necessary.
  • Average overflow for integer types:
    • Mono appears to use checked arithmetic when summing a sequence, but not when taking the average of a sequence. So the average of { long.MaxValue, long.MaxValue, 2 } is 0. (This originally confused me into thinking it was using floating point types during the summation, but I now believe it’s just a checked/unchecked issue.)
  • Bugs:
    • Count doesn’t overflow either with or without a predicate
    • The Max handling of double.NaN isn’t in line with .NET. I haven’t investigated the reason for this yet.
    • OrderByDescending is broken in the same way as for LinqBridge and the Microsoft implementation.
    • Range is broken for both Range(int.MinValue, 0) and Range(int.MaxValue, 1). Test those boundary cases, folks :)
    • When reversing a list, Mono doesn’t buffer the current contents. In other words, changes made while iterating over the reversed list are visible in the returned sequence. The documentation isn’t very clear about the desired behaviour here, admittedly.
    • GroupJoin and Join match null keys, unlike Microsoft’s implementation.

How does Edulinq fare against other unit tests?

It didn’t seem fair to only test other implementations against the Edulinq tests. After all, it’s only natural that my tests should work against my own code. What happens if we run the Mono and LinqBridge tests against my code?

The LinqBridge tests didn’t find anything surprising. There were two failures:

  • I don’t have the "delegate Contains to ICollection<T>.Contains" behaviour, which the tests check for.
  • I don’t optimize First in the case of the collection implementing IList<T>. I view this as a pretty dubious optimization to be honest – I doubt that creating an iterator to get to the first item is going to be much slower than checking for IList<T>, fetching the count, and then fetching the first item via the indexer… and it means that all non-list implementations also have to check whether the sequence implements IList<T>. I don’t intend to change Edulinq for this.

The Mono tests picked up the same two failures as above, and two genuine bugs:

  • By implementing Take via TakeWhile, I was iterating too far: in order for the condition to become false, we had to iterate to the first item we wouldn’t return. (See the sketch after this list.)
  • ToLookup didn’t accept null keys – a fault which propagated to GroupJoin, Join and GroupBy too. (EDIT: It turns out that it’s more subtle than that. Nothing should break, but the MS implementation ignores null keys for Join and GroupJoin. Edulinq now does the same, but I’ve raised a Connect issue to suggest this should at least be documented.)
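
To make the Take bug concrete, here’s a rough sketch of the two approaches – this is illustrative rather than the exact Edulinq code:

// Broken: for the predicate to return false, TakeWhile has to fetch
// the element at position "count" - one further than we should go.
//     return source.TakeWhile((x, index) => index < count);

// Direct: check the count before calling MoveNext(), so the element
// after the last yielded one is never requested.
private static IEnumerable<TSource> TakeImpl<TSource>(
    IEnumerable<TSource> source,
    int count)
{
    using (IEnumerator<TSource> iterator = source.GetEnumerator())
    {
        for (int i = 0; i < count && iterator.MoveNext(); i++)
        {
            yield return iterator.Current;
        }
    }
}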

I’ve fixed these in source control, and will add an addendum to each of the relevant posts (Take, ToLookup) when I have a moment spare.

There’s one additional failure, trying to find the average of a sequence of two Int64.MaxValue values. That overflows on both Edulinq and LINQ to Objects – that’s the downside of using an Int64 to sum the values. As mentioned, Mono suffers a degree of inaccuracy instead; it’s all a matter of trade-offs. (A really smart implementation might use Int64 while possible, and then go up to using Double where necessary, I suppose.)

Unfortunately I don’t have the tests for the Microsoft implementation, of course… I’d love to know whether there’s anything I’ve failed with there.

Conclusion

This was very interesting – there’s a mixture of failure conditions around, and plenty of "non-failures" where each implementation’s tests are enforcing their own behaviour.

I do find it amusing that all three of the "mainstream" implementations have the same OrderByDescending bug though. Other than that, the clear bugs between Mono and LinqBridge don’t intersect, which is slightly surprising.

It’s nice to see that despite not setting out to create a "production-quality" implementation of LINQ to Objects, that’s mostly what I’ve ended up with. Who knows – maybe some aspects of my implementation or tests will end up in Mono in the future :)

Given the various different optimizations mentioned in this post, I think it’s only fitting that next time I’ll discuss where we can optimize, where it’s worth optimizing, and some more tricks we could still pull out of the bag…

Reimplementing LINQ to Objects: Part 38 – What’s missing?

I mentioned before that the Zip operator was only introduced in .NET 4, so clearly there’s a little wiggle room for LINQ to Objects’ query operators to grow in number. This post mentions some of the ones I think are most sorely lacking – either because I’ve wanted them myself, or because I’ve seen folks on Stack Overflow want them for entirely reasonable use cases.

There is an issue with respect to other LINQ providers, of course: as soon as some useful operators are available for LINQ to Objects, there will be people who want to apply them to LINQ to SQL, the Entity Framework and the like. Worse, if they’re not included in Queryable with overloads based on expression trees, the LINQ to Objects implementation will silently get picked – leading to what looks like a lovely query performing like treacle while the client slurps over the entire database. If they are included in Queryable, then third party LINQ providers could end up with a nasty versioning problem. In other words, some care is needed and I’m glad I’m not the one who has to decide how new features are introduced.

I’ve deliberately not looked at the extra set of operators introduced in the System.Interactive part of Reactive Extensions… nor have I looked back over what we’ve implemented in MoreLINQ (an open source project I started specifically to create new operators). I figured it would be worth thinking about this afresh – but look at both of those projects for actual implementations instead of just ideas.

Currently there’s no implementation of any of this in Edulinq – but I could potentially create an "Edulinq.Extras" assembly which made it all available. Let me know if any of these sounds particularly interesting to see in terms of implementation.

FooBy

I love OrderBy and ThenBy, with their descending cousins. They’re so much cleaner than building a custom comparer which just performs a comparison between two properties. So why stop with ordering? There’s a whole bunch of operators which could do with some "FooBy" love. For example, imagine we have a list of files, and we want to find the longest one. We don’t want to perform a total ordering by size descending, nor do we want to find the maximum file size itself: we want the file with the maximum size. I’d like to be able to write that query as:

FileInfo biggestFile = files.MaxBy(file => file.Length);

Note that we can get a similar result by performing one pass to find the maximum length, and then another pass to find the file with that length. However, that’s inefficient and assumes we can read the sequence twice (and get the same results both times). There’s no need for that. We could get the same result using Aggregate with a pretty complicated aggregation, but I think this is a sufficiently common case to deserve its own operator.

We’d want to specify which value would be returned if multiple files had the same length (my suggestion would be the first one we encountered with that length) and we could also specify a key comparer to use. The signatures would look like this:

public static TSource MaxBy<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector)

public static TSource MaxBy<TSource, TKey>(
    this IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IComparer<TKey> comparer)
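
Just to show how little magic would be involved, here’s my sketch of what the guts of the comparer-taking overload might look like – argument validation elided, and throwing for an empty sequence, consistent with Max over non-nullable element types:

private static TSource MaxByImpl<TSource, TKey>(
    IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    IComparer<TKey> comparer)
{
    using (IEnumerator<TSource> iterator = source.GetEnumerator())
    {
        if (!iterator.MoveNext())
        {
            throw new InvalidOperationException("Sequence was empty");
        }
        TSource maxElement = iterator.Current;
        TKey maxKey = keySelector(maxElement);
        while (iterator.MoveNext())
        {
            TSource candidate = iterator.Current;
            TKey candidateKey = keySelector(candidate);
            // Strict inequality, so ties keep the first element we saw
            if (comparer.Compare(candidateKey, maxKey) > 0)
            {
                maxElement = candidate;
                maxKey = candidateKey;
            }
        }
        return maxElement;
    }
}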

Now it’s not just Max and Min that gain from this "By" idea. It would be useful to apply the same idea to the set operators. The simplest of these to think about would be DistinctBy, but UnionBy, IntersectBy and ExceptBy would be reasonable too. In the case of ExceptBy and IntersectBy we could potentially take the key collection to indicate the keys of the elements we wanted to exclude/include, but it would probably be more consistent to force the two input sequences to be of the same type (as they would have to be for UnionBy, of course). ContainsBy might be useful, but that would effectively be a Select followed by a normal Contains – possibly not useful enough to merit its own operator.

TopBy and TopByDescending

These may sound like they belong in the FooBy section, but they’re somewhat different: they’re effectively specializations of OrderBy and OrderByDescending where you already know how many elements you want to preserve. The return type would be IOrderedEnumerable<T> so you could still use ThenBy/ThenByDescending as normal. That would make the following two queries equivalent – but the second might be a lot more efficient than the first:

var takeQuery = people.OrderBy(p => p.LastName)
                      .ThenBy(p => p.FirstName)
                      .Take(3);

var topQuery = people.TopBy(p => p.LastName, 3)
                     .ThenBy(p => p.FirstName);

An implementation could easily delegate to various different strategies depending on the number given – for example, if you asked for more than 10 values, it may not be worth doing anything more than a simple sort, restricting the output afterwards. If you asked for just the top 3 values, that could return an IOrderedEnumerable implementation specifically hard-coded to 3 values, etc.
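
To make the "keep only the best n" idea concrete, here’s a sketch of what the simple strategy might look like – ignoring all the IOrderedEnumerable plumbing a real TopBy would need for ThenBy, and with no claim that this is the best approach:

private static IEnumerable<TSource> TopByImpl<TSource, TKey>(
    IEnumerable<TSource> source,
    Func<TSource, TKey> keySelector,
    int count,
    IComparer<TKey> comparer)
{
    // At most "count" elements, kept sorted by key; for small counts
    // the quadratic insertion cost is irrelevant.
    List<KeyValuePair<TKey, TSource>> buffer =
        new List<KeyValuePair<TKey, TSource>>(count + 1);
    foreach (TSource item in source)
    {
        TKey key = keySelector(item);
        int index = buffer.Count;
        // Walk backwards to find the insertion point; equal keys stop
        // the walk, keeping the ordering stable.
        while (index > 0 && comparer.Compare(key, buffer[index - 1].Key) < 0)
        {
            index--;
        }
        if (index < count)
        {
            buffer.Insert(index, new KeyValuePair<TKey, TSource>(key, item));
            if (buffer.Count > count)
            {
                buffer.RemoveAt(count);
            }
        }
        // Anything else is thrown away immediately - this is where the
        // memory saving over a full sort comes from.
    }
    foreach (KeyValuePair<TKey, TSource> pair in buffer)
    {
        yield return pair.Value;
    }
}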

Aside from anything else, if you were confident in what the implementation did (and that’s a very big "if") you could use a potentially huge input sequence with such a query – larger than you could fit into memory in one go. That’s fine if you’re only keeping the top three values you’ve seen so far, but would fail for a complete ordering, even one which was able to yield results before performing all the ordering: if it doesn’t know you’re going to stop after three elements, it can’t throw anything away.

Perhaps this is too specialized an operator – but it’s an interesting one to think about. It’s worth noting that this probably only makes sense for LINQ to Objects, which never gets to see the whole query in one go. Providers like LINQ to SQL can optimize queries of the form OrderBy(…).ThenBy(…).Take(…) because by the time they need to translate the query into SQL, they will have an expression tree representation which includes the "Take" part.

TryFastCount and TryFastElementAt

One of the implementation details of Edulinq is its TryFastCount method, which basically encapsulates the logic around attempting to find the count of a sequence if it implements ICollection or ICollection<T>. Various built-in LINQ operators find this useful, and anyone writing their own operators has a reasonable chance of bumping into it as well. It seems pointless to duplicate the code all over the place… why not expose it? The signatures might look something like this:

public static bool TryFastCount<TSource>(
    this IEnumerable<TSource> source,
    out int count)

public static bool TryFastElementAt<TSource>(
    this IEnumerable<TSource> source,
    int index,
    out TSource value)

I would expect TryFastElementAt to use the indexer if the sequence implemented IList<T>, without performing any validation: that ought to be the responsibility of the caller. TryFastCount could use a Nullable<int> return type instead of the return value / out parameter split, but I’ve kept it consistent with the methods which exist elsewhere in the framework.
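
Edulinq’s TryFastCount appears in full later in this document (in the SequenceEqual post); TryFastElementAt might look something like this sketch, following the "no validation" decision described above:

public static bool TryFastElementAt<TSource>(
    this IEnumerable<TSource> source,
    int index,
    out TSource value)
{
    IList<TSource> list = source as IList<TSource>;
    if (list != null)
    {
        // Deliberately no range check: as described above, validating
        // the index is the caller's responsibility.
        value = list[index];
        return true;
    }
    value = default(TSource);
    return false;
}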

Scan and SelectAdjacent

These are related operators in that they deal with wanting a more global view than just the current element. Scan would act similarly to Aggregate – except that it would yield the accumulator value after each element. Here’s an example of keeping a running total:

// Signature:
public static IEnumerable<TAccumulate> Scan<TSource, TAccumulate>(
    this IEnumerable<TSource> source,
    TAccumulate seed,
    Func<TAccumulate, TSource, TAccumulate> func)
    

int[] source = new int[] { 3, 5, 2, 1, 4 };
var query = source.Scan(0, (current, item) => current + item);
query.AssertSequenceEqual(3, 8, 10, 11, 15);
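
The implementation would be about as simple as iterator blocks get. Here’s a sketch of the non-validating part, assuming the usual split between eager argument validation and deferred iteration:

private static IEnumerable<TAccumulate> ScanImpl<TSource, TAccumulate>(
    IEnumerable<TSource> source,
    TAccumulate seed,
    Func<TAccumulate, TSource, TAccumulate> func)
{
    TAccumulate current = seed;
    foreach (TSource item in source)
    {
        // Yield the accumulator after incorporating each element -
        // this is the only real difference from Aggregate.
        current = func(current, item);
        yield return current;
    }
}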

There could be a more complicated overload with an extra TResult type parameter and a conversion delegate from TAccumulate to TResult. That would let us write a Fibonacci sequence query in one line, if we really wanted to…

The SelectAdjacent operator would simply present a selector function with pairs of adjacent items. Here’s a similar example, this time calculating the difference between each pair:

// Signature:
public static IEnumerable<TResult> SelectAdjacent<TSource, TResult>(
    this IEnumerable<TSource> source,
    Func<TSource, TSource, TResult> selector)
    

int[] source = new int[] { 3, 5, 2, 1, 4 };
var query = source.SelectAdjacent((current, next) => next - current);
query.AssertSequenceEqual(2, -3, -1, 3);
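
Again, a sketch of a possible implementation (validation elided) – the only wrinkle is remembering the previous element between iterations:

private static IEnumerable<TResult> SelectAdjacentImpl<TSource, TResult>(
    IEnumerable<TSource> source,
    Func<TSource, TSource, TResult> selector)
{
    using (IEnumerator<TSource> iterator = source.GetEnumerator())
    {
        // An empty source produces an empty result
        if (!iterator.MoveNext())
        {
            yield break;
        }
        TSource previous = iterator.Current;
        while (iterator.MoveNext())
        {
            TSource current = iterator.Current;
            yield return selector(previous, current);
            previous = current;
        }
    }
}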

One oddity here is that the result sequence always contains one item fewer than the source sequence. If we wanted to keep the length the same, there are various approaches we could take – but the best one would depend on the situation.

This sounds like a pretty obscure operator, but I’ve actually seen quite a few LINQ questions on Stack Overflow where it could have been useful. Is it useful often enough to deserve its own operator? Maybe… maybe not.

DelimitWith

This one is really just a bit of a peeve – but again, it’s a pretty common requirement. We often want to take a sequence and create a single string which is (say) a comma-delimited version. Yay, String.Join does exactly what we need – particularly in .NET 4, where there’s an overload taking IEnumerable<T> so you don’t need to convert it to a string array first. However, it’s still a static method on string – and the name "Join" also looks slightly odd in the context of a LINQ query, as it’s got nothing to do with a LINQ-style join.

Compare these two queries: which do you think reads better, and feels more "natural" in LINQ?

// Current state of play…
var names = string.Join(",",
                        people.Where(p => p.Age < 18)
                              .Select(p => p.FirstName));

// Using DelimitWith
var names = people.Where(p => p.Age < 18)
                  .Select(p => p.FirstName)
                  .DelimitWith(",");

I know which I prefer :)
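
In .NET 4 the implementation could simply delegate to string.Join; here’s a sketch which would work on earlier versions too, with argument validation elided:

public static string DelimitWith<TSource>(
    this IEnumerable<TSource> source,
    string delimiter)
{
    // Argument validation elided
    StringBuilder builder = new StringBuilder();
    foreach (TSource item in source)
    {
        // Only prepend the delimiter from the second element onwards
        if (builder.Length != 0)
        {
            builder.Append(delimiter);
        }
        builder.Append(item);
    }
    return builder.ToString();
}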

ToHashSet

(Added on February 23rd 2011.)

I’m surprised I missed this one first time round – I’ve bemoaned its omission in various places before now. It’s easy to create a list, dictionary, lookup or array from an anonymous type, but you can’t create a set that way. That’s mad, given how simple the relevant operator is, even with an overload for a custom equality comparer:

public static HashSet<TSource> ToHashSet<TSource>(
    this IEnumerable<TSource> source)
{
    return source.ToHashSet(EqualityComparer<TSource>.Default);
}

public static HashSet<TSource> ToHashSet<TSource>(
    this IEnumerable<TSource> source,
    IEqualityComparer<TSource> comparer)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    return new HashSet<TSource>(source, comparer ?? EqualityComparer<TSource>.Default);
}

This also makes it much simpler to create a HashSet in a readable way from an existing query expression, without either wrapping the whole query in the constructor call or using a local variable.
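
Usage from a query would then be as readable as the other conversion operators. A hypothetical example, with the Person type and its properties invented purely for illustration:

// Hypothetical query - Person, Age and FirstName are invented here
var adultNames = people.Where(p => p.Age >= 18)
                       .Select(p => p.FirstName)
                       .ToHashSet();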

Conclusion

These are just the most useful extra methods I thought of, based on the kinds of query folks on Stack Overflow have asked about. I think it’s interesting that some are quite general – MaxBy, ExceptBy, Scan and so on – whereas others (TopBy, SelectAdjacent and particularly DelimitWith) are simply aimed at making some very specific but common situations simpler. It feels to me like the more general operators really are missing from LINQ – they would fit quite naturally – but the more specific ones probably deserve to be in a separate static class, as "extras".

This is only scratching the surface of what’s possible, of course – System.Interactive.EnumerableEx in Reactive Extensions has loads of options. Some of them are deliberate parallels of the operators in Observable, but plenty make sense on their own too.

One operator you may have expected to see in this list is ForEach. This is a controversial topic, but Eric Lippert has written about it very clearly (no surprise there, then). Fundamentally LINQ is about querying a sequence, not taking action on it. ForEach breaks that philosophy, which is why I haven’t included it here. Usually a foreach statement is a perfectly good alternative, and it makes the "action" aspect clearer.

Reimplementing LINQ to Objects: Part 37 – Guiding principles

Now that I’m "done" reimplementing LINQ to Objects – in that I’ve implemented all the methods in System.Linq.Enumerable – I wanted to write a few posts looking at the bigger picture. I’m not 100% sure of what this will consist of yet; I want to avoid this blog series continuing forever. However, I’m confident it will contain (in no particular order):

  • This post: principles governing the behaviour of LINQ to Objects
  • Missing operators: what else I’d have liked to see in Enumerable
  • Optimization: where the .NET implementation could be further optimized, and why some obvious-sounding optimizations may be inappropriate
  • How query expression translations work, in brief (and with a cheat sheet)
  • The difference between IQueryable<T> and IEnumerable<T>
  • Sequence identity, the "Contains" issue, and other knotty design questions
  • Running the Edulinq tests against other implementations

If there are other areas you want me to cover, please let me know.

The principles behind the LINQ to Objects implementation

The design of LINQ to Objects is built on a few guiding principles, both in terms of design and implementation details. You need to understand these as a caller, and implementations should be clear about what they’re doing in these terms too.

Extension method targets and argument validation

IEnumerable<T> is the core sequence type, not just for LINQ but for .NET as a whole. Almost everything is written in terms of IEnumerable<T> at least as input, with the following exceptions:

  • Empty, Range and Repeat don’t have input sequences (these are the only non-extension methods)
  • OfType and Cast work on the non-generic IEnumerable type instead
  • ThenBy and ThenByDescending work on IOrderedEnumerable<T>

All operators other than AsEnumerable verify that any input sequence is non-null. This validation is performed eagerly (i.e. when the method is called) even if the operator uses deferred execution for the results. Any delegate used (typically a projection or predicate of some kind) must be non-null. Again, this validation is performed eagerly.
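
The way eager validation coexists with deferred execution is the split-method pattern which has cropped up throughout this series – roughly like this, using Where as an example sketch:

public static IEnumerable<TSource> Where<TSource>(
    this IEnumerable<TSource> source,
    Func<TSource, bool> predicate)
{
    // Eager validation: this runs as soon as Where is called...
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    if (predicate == null)
    {
        throw new ArgumentNullException("predicate");
    }
    return WhereImpl(source, predicate);
}

private static IEnumerable<TSource> WhereImpl<TSource>(
    IEnumerable<TSource> source,
    Func<TSource, bool> predicate)
{
    // ... whereas this iterator block doesn't execute at all until
    // the caller starts iterating over the result.
    foreach (TSource item in source)
    {
        if (predicate(item))
        {
            yield return item;
        }
    }
}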

IEqualityComparer<T> is used for all custom equality comparisons. Any parameter of this type may be null, in which case the default equality comparer for the type is used. In most cases the default equality comparer for the type is also used when no custom equality comparer is used, but Contains has some odd behaviour around this. Equality comparers are expected to be able to handle null values. IComparer<T> is only used by the OrderBy/ThenBy operators and their descending counterparts – and only then if you want custom comparisons between keys. Again, a null IComparer<T> means "use the default for the type".

Timing of input sequence "opening"

Any operator with a return type of IEnumerable<T> or IOrderedEnumerable<T> uses deferred execution. This means that the method doesn’t read anything from any input sequences until someone starts reading from the result sequence. It’s not clearly defined exactly when input sequences will first be accessed – for some operators it may be when GetEnumerator() is called; for others it may be on the first call to MoveNext() on the resulting iterator. Callers should not depend on these slight variations. Deferred execution is common for operators in the middle of queries. Operators which use deferred execution effectively represent queries rather than the results of queries – so if you change the contents of the original source of the query and then iterate over the query itself again, you’ll see the change. For example:

List<string> source = new List<string>();
var query = source.Select(x => x.ToUpper());
        
// This loop won’t write anything out
foreach (string x in query)
{
    Console.WriteLine(x);
}
        
source.Add("foo");
source.Add("bar");

// This loop will write out "FOO" and "BAR" – even
// though we haven’t changed the value of "query"
foreach (string x in query)
{
    Console.WriteLine(x);
}

Deferred execution is one of the hardest parts of LINQ to understand, but once you do, everything becomes somewhat simpler.

All other operators use immediate execution, fetching all the data they need from the input before they return a value… so that by the time they do return, they will no longer see or care about changes to the input sequence. For operators returning a scalar value (such as Sum and Average) this is blatantly obvious – the value of a variable of type double isn’t going to change just because you’ve added something to a list. However, it’s slightly less obvious for the "ToXXX" methods: ToLookup, ToArray, ToList and ToDictionary. These do not return views on the original sequence, unlike the "As" methods: AsEnumerable which we’ve seen, and Queryable.AsQueryable which I didn’t implement. Focus on the prefix part of the name: the "To" part indicates a conversion to a particular type. The "As" prefix indicates a wrapper of some kind. This is consistent with other parts of the framework, such as List<T>.AsReadOnly and Array.AsReadOnly<T>.

Very importantly, LINQ to Objects only iterates over any input sequence at most once, whether the execution is deferred or immediate. Some operators would be easier to implement if you could iterate over the input twice – but it’s important that they don’t do so. Of course if you provide the same sequence for two inputs, it will treat those as logically different sequences. Similarly if you iterate over a result sequence more than once (for operators that return IEnumerable<T> or a related interface, rather than List<T> or an array etc), that will iterate over the input sequence again.

This means it’s fine to use LINQ to Objects with sequences which may only be read once (such as a network stream), or which are relatively expensive to reread (imagine a log file reader over a huge set of logs) or which give inconsistent results (imagine a sequence of random numbers). In some cases it’s okay to use LINQ to Objects with an infinite sequence – in others it’s not. It’s usually fairly obvious which is the case.

Timing of input sequence reading, and memory usage

Where possible within deferred execution, operators act in a streaming fashion, only reading from the input sequence when they have to, and "forgetting" data as soon as they can. This allows for long – potentially infinite – sequences to be handled elegantly without memory running out.

Some operators naturally need to read all the data in before they can return anything. The most obvious example of this is Reverse, which will always yield the last element of the input stream as the first element in the result stream.

A third pattern occurs with operators such as Distinct, which yield data as they go, but accumulate elements too, taking more and more memory until the caller stops iterating (usually either by jumping out of the foreach loop, or letting it terminate naturally).

Where an operator takes two input sequences – such as Join – you need to understand the consumption of each one separately. For example, Join uses deferred execution, but as soon as you ask for the first element of the result set, it will read the "second" sequence completely and buffer it – whereas the "first" sequence is streamed. This isn’t the case for all operators with two inputs, of course – Zip streams both input sequences, for example. Check the documentation – and the relevant Edulinq blog post – for details.

Obviously any operator which uses immediate execution has to read all the data it’s interested in before it returns. This doesn’t necessarily mean they will read to the end of the sequence though, and they may not need to buffer the data they read. (Simple examples are ToList which has to keep everything, and Sum which doesn’t.)

Queries vs data

Closely related to the details of when the input is read is the concept of what the result of an operator actually represents. Operators which use deferred execution return queries: each time you iterate over the result sequence, the query will look at the input sequence again. The query itself doesn’t contain the data – it just knows how to get at the data.

Operators which use immediate execution work the other way round: they read all the data they need, and then forget about the input sequence. For operators like Average and Sum this is obvious as it’s just a simple scalar value – but for operators like ToList, ToDictionary, ToLookup and ToArray, it means that the operator has to make a copy of everything it needs. (This is potentially a shallow copy of course – depending on what user-defined projections are applied. The normal behaviour of mutable reference types is still valid.)

I realise that in many ways I’ve just said the same thing multiple times now – but hopefully that will help this crucial aspect of LINQ behaviour sink in, if you were still in any doubt.

Exception handling

I’m unaware of any situation in which LINQ to Objects will catch an exception. If your predicate or projection throws an exception, it will propagate in the obvious way.

However, LINQ to Objects does ensure that any iterator it reads from is disposed appropriately – assuming that the caller disposes of any result sequences properly, of course. Note that the foreach statement implicitly disposes of the iterator in a finally block.
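
As a reminder, a foreach loop over an IEnumerable<T> expands to something roughly like this, which is why the iterator gets disposed even if the loop body throws or exits early:

IEnumerator<T> iterator = source.GetEnumerator();
try
{
    while (iterator.MoveNext())
    {
        T item = iterator.Current;
        // Loop body goes here
    }
}
finally
{
    // IEnumerator<T> extends IDisposable, so no type check is needed
    iterator.Dispose();
}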

Optimization

Various operators are optimized when they detect at execution time that the input sequence they’re working on offers a shortcut.

The types most commonly detected are:

  • ICollection<T> and ICollection for their Count property
  • IList<T> for its random access indexer

I’ll look at optimization in much more detail in a separate post.

Conclusion

This post hasn’t been about the guiding principles behind LINQ itself – lambda calculus or anything like that. It’s been more of a summary of the various aspects of behaviour we’ve seen across the various operators we’ve implemented. They’re the rules I’ve had to follow in order to make Edulinq reasonably consistent with LINQ to Objects.

Next time I’ll talk about some of the operators which I think should have made it into the core framework, at least for LINQ to Objects.

Gotcha around iterator blocks

This will be a short interlude from Edulinq, although as it will become obvious, it was in the process of writing the next Edulinq post that I made this discovery. It may be known about elsewhere, but it’s the first time I’d come across it – or rather, it’s the first time I’ve considered the impact of the compiler’s decisions in this area.

Sequences and iterators

This post is about details of iterator blocks and memory usage. It’s going to be quite important that you can easily follow exactly what I’m talking about, so I’m going to define a few things up-front.

When I talk about a sequence, I mean an IEnumerable<T>. So a List<T> is a sequence, as is the return value of Enumerable.Range, or many of the other LINQ queries.

When I talk about an iterator, I mean the result of calling GetEnumerator() on a sequence – it’s an IEnumerator<T>. In particular, there can be many iterators corresponding to a single sequence at the same time. A foreach loop will take a sequence and hide the mechanics of obtaining the iterator and calling MoveNext()/Current on it repeatedly.

The C# language specification calls a sequence an enumerable object and it calls an iterator an enumerator object. However, those terms are easy to get mixed up – partly because they differ in so few letters – hence my use of sequence and iterator.

An iterator block is a method declared to return IEnumerable, IEnumerator, IEnumerable<T> or IEnumerator<T>, and containing one or more "yield" statements. The compiler deals with iterator blocks differently to normal methods. It creates an extra type behind the scenes, effectively converting local variables of the iterator block into instance variables in the extra type. The method itself is stripped bare, so that it just creates an object of the new type, without running any of the original source code until MoveNext() is first called on the corresponding iterator (either immediately or after a call to GetEnumerator(), depending on whether the method is declared to return a sequence or an iterator).

Compiler optimizations

The details of exactly how the compiler transforms iterator blocks are outside the scope of the C# specification. It does talk about some details, and gives an implementation example, but there’s plenty of scope for compiler-specific variation.

The Microsoft C# compiler does a "neat trick" which is the source of the gotcha I’m describing today. When it creates a hidden type to represent a sequence, the same type is used to represent the iterator, and the first time that GetEnumerator() is called on the same thread as the one which created the sequence, it returns a reference to "this" rather than creating a new object.

This is easier to follow with an example. Consider this method:

static IEnumerable<int> GetSimpleSequence(int count)
{
    for (int i = 0; i < count; i++)
    {
        yield return i;
    }
}

After compilation, my method ended up looking like this:

private static IEnumerable<int> GetSimpleSequence(int count)
{
    <GetSimpleSequence>d__0 d__ = new <GetSimpleSequence>d__0(-2);
    d__.<>3__count = count;
    return d__;
}

We can see that the <>3__count field of the generated type represents the initial value for the "count" parameter – although my code happens not to change the parameter value, it’s important that it’s preserved for all iterators generated from the same sequence. The -2 argument to the constructor is to represent the state of the object: -2 is used to represent a sequence rather than an iterator.

I’m not going into all the details of the generated type in this post (I have an article which goes into some more details) but the important part is the GetEnumerator() method:

[DebuggerHidden]
IEnumerator<int> IEnumerable<int>.GetEnumerator()
{
    Test.<GetSimpleSequence>d__0 d__;
    if ((Thread.CurrentThread.ManagedThreadId == this.<>l__initialThreadId)
        && (this.<>1__state == -2))
    {
        this.<>1__state = 0;
        d__ = this;
    }
    else
    {
        d__ = new Test.<GetSimpleSequence>d__0(0);
    }
    d__.count = this.<>3__count;
    return d__;
}

The line "d__ = this;" is the crucial one for the purpose of this post: a sequence’s first iterator is itself.

We’ll look into the negative ramifications of this in a minute, but first let’s think about the upsides. Basically, it means that if we only need to iterate over the sequence once, we only need a single object. It’s reasonable to suspect that the "fetch and iterate once" scenario is an extremely common one, so it’s fair to optimize for it. I’m not sure it’s actually worth the extra complexity introduced though – especially when there can be a heavy penalty in some odd cases. I assume Microsoft has performed more performance testing of this than I have, but they may not have considered the specific issue mentioned in this post. Let’s look at it now.

The bulky iterator problem

I’ll present the problem using the context in which I discovered it: the LINQ "Distinct" operator. First I’ll demonstrate it using the LINQ to Objects implementation, then show a few fixes we can make if we use our own one.

Here’s the sample program we’ll be using throughout, sometimes with a few modifications:

using System;
using System.Collections.Generic;
using System.Linq;

class Test
{
    static void Main()
    {
        var source = Enumerable.Range(0, 1000000);
        var query = source.Distinct();
        
        ShowMemory("Before Count()");
        query.Count();
        ShowMemory("After Count()");
        GC.KeepAlive(query);
        ShowMemory("After query eligible for GC");
    }
    
    static void ShowMemory(string explanation)
    {
        long memory = GC.GetTotalMemory(true);
        Console.WriteLine("{0,-30} {1}",
                          explanation + ":",
                          memory);
    }
}

Obviously ShowMemory is just a diagnostic method. What we’re interested in is what happens in Main:

  • We create a source sequence. In this case it’s a range, but it could be anything that returns a lot of distinct elements.
  • We create a query by calling Distinct. So far, so good – we haven’t done anything with the range yet, or iterated over anything. We dump the total memory used.
  • We count the elements in the query results: this has to iterate over the query, and will dispose of the iterator afterwards. We dump the total memory used.
  • We call GC.KeepAlive simply to make sure that the query sequence itself isn’t eligible for garbage collection until this point. We no longer explicitly have a reference to the iterator used by Count, but we do have a reference to the sequence up until now.
  • We dump memory one final time… but because this occurs after the GC.KeepAlive call (and there are no later references to query) the sequence can be disposed.

Note that the GetTotalMemory(true) call performs a full GC each time. I dare say it’s not foolproof, but it gives a pretty reasonable indication of memory that can’t be collected yet.

Now in a sensible world, I would expect the memory to stay reasonably constant. The Count() method may well generate a lot of garbage as it iterates over the query, but that’s fine – it should all be transient, right?

Wrong. Here’s one sample set of results:

Before Count():                47296
After Count():                 16828480
After query eligible for GC:   51140

Until we allow the sequence to be garbage collected, we’re left with all the garbage which was associated with the iterator used by Count. "All the garbage" is a set of elements which have previously been returned – Distinct needs to remember them all to make sure it doesn’t return any duplicates.

The problem is the "optimization" performed by the C# compiler: the iterator used by Count() is the same object that "query" refers to, so it’s got a reference to that large set of integers… a set we no longer actually care about.

As soon as the sequence can be garbage collected, we’re back down to modest memory use – the difference between the first and third lines of output is probably due to the code being loaded and JIT compiled over the course of the program.

Note that if we extended our query to call Where, Select etc after Distinct, we’d have the same problem – each sequence in the chain would keep a reference to its own "source" sequence, and eventually you’d get back to the Distinct sequence and all its post-iteration garbage.

Okay, enough doom and gloom… how can we fix this?

Solution #1: undo the compiler optimization (caller code)

If we’re aware of the problem, we can sometimes fix it ourselves, even if we only own the "client" code. Our sample program can be fixed with a single line:

using (query.GetEnumerator());

Just put that anywhere before the call to Count(), and we’re in the clear. The compiler gives a warning due to a possibly-accidental empty statement, mind you… why would we want to ask for a value only to dispose of it?

If you remember, the GetEnumerator method only returns "this" the first time you call it. So we’re just creating that iterator and immediately disposing of it. None of the code that was originally within the Distinct implementation will be run, so we won’t generate any garbage.

Sure enough, the results bear out the theory:

Before Count():                47296
After Count():                 51228
After query eligible for GC:   51140

Unfortunately, this relies on two things:

  • Every caller has to know this is a potential problem. I’ve never heard anyone mention it before and I only discovered it myself yesterday. Note that it’s only likely to be a problem for sequence types which are generated by the C# compiler… how much do you want to be at the mercy of the implementation details of whatever you’re calling?
  • It only works if you can force GetEnumerator() to be called on the problematic sequence. If we add a call to Select(x => x) to the end of our query declaration, we’re back to square one – because calling GetEnumerator() on the Select sequence doesn’t force GetEnumerator() to be called on the Distinct sequence.

It’s an interesting fix, but basically not a good idea in the long run.

Solution #2: clean up in the iterator block (implementation code)

Obviously I can’t change LINQ to Objects, so at this point I’ll resort to a very simplistic custom implementation of Distinct. I’m not going to bother about the overload with a custom equality comparer, and I won’t even perform argument validation – although I will split the implementation into two methods as we’d have to for normal argument validation to work eagerly. That will be useful later. Here’s the code in its broken state:

public static class CustomEnumerable
{
    public static IEnumerable<TSource> Distinct<TSource>(
        this IEnumerable<TSource> source)
    {
        // Argument validation elided
        return DistinctImpl(source);
    }
    
    private static IEnumerable<TSource> DistinctImpl<TSource>(
        IEnumerable<TSource> source)
    {
        HashSet<TSource> seen = new HashSet<TSource>();
        foreach (TSource item in source)
        {
            if (seen.Add(item))
            {
                yield return item;
            }
        }
    }
}

Readers who’ve been following my Edulinq series should be pretty familiar with this. It suffers from the same problem as the LINQ to Objects implementation, which is what I’d expect. Now let’s do something really horrible – set a "local" variable to null when we know it will no longer be referenced within the method:

private static IEnumerable<TSource> DistinctImpl<TSource>(
    IEnumerable<TSource> source)
{
    HashSet<TSource> seen = new HashSet<TSource>();
    try
    {
        foreach (TSource item in source)
        {
            if (seen.Add(item))
            {
                yield return item;
            }
        }
    }
    finally
    {
        seen = null;
    }
}

This is horrible code. If I hadn’t come across this problem, and someone presented me with the above code in a code review, I would be sighing and shaking my head. Setting variables to null at the end of the method for the sake of garbage collection normally shows a hideous misunderstanding of how variables and the GC work.

In this case, it fixes the problem… but at the cost of the ability to look our fellow developers in the eye.

Solution #3: Side-step the compiler optimization (implementation code)

The problem we’re facing stems from the compiler optimization used when an iterator block returns a sequence type. What if it only returns an iterator? Well, we need something to implement IEnumerable<T>, so how about we create a type which just delegates the responsibility:

public sealed class DelegatingSequence<T> : IEnumerable<T>
{
    private readonly Func<IEnumerator<T>> func;
    
    public DelegatingSequence(Func<IEnumerator<T>> func)
    {
        this.func = func;
    }
    
    public IEnumerator<T> GetEnumerator()
    {
        return func();
    }
    
    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

Now we know that the sequence will be separate from the iterator. Of course we could be given a delegate which does the wrong thing, but it’s easy to use correctly. Here’s the corresponding code for Distinct:

public static IEnumerable<TSource> Distinct<TSource>(
    this IEnumerable<TSource> source)
{
    // Argument validation elided
    return new DelegatingSequence<TSource>(() => DistinctImpl(source));
}
    
private static IEnumerator<TSource> DistinctImpl<TSource>(
    IEnumerable<TSource> source)
{
    // Original code without try/finally
}

This now solves the "bulky iterator problem", and it’s a pretty easy fix to propagate wherever it’s necessary… but again, only if you’re aware of the issue to begin with.

Solution #4: Make the optimization optional (compiler code)

There are already a few attributes which change the generated IL: we could add another one to suppress this optimization, and apply it to any iterator blocks we want to behave differently:

[AliasSequenceAndIterator(false)]
private static IEnumerable<TSource> DistinctImpl<TSource>(
    IEnumerable<TSource> source)

I very much doubt that this will make it into the C# compiler any time soon – and frankly I’m not too happy with it anyway, especially as it still requires anyone using an iterator block to be aware of the issue.

Solution #5: Make the compiler perform solution #2 automatically (compiler code)

The compiler knows what "local variables" we’re using. It’s already implementing a Dispose method for various reasons, including setting the iterator state appropriately. Why can’t it just set all the fields associated with local variables from the source code to their default values as part of disposal?

There’s one reason I can think of, although it would come up rarely: the variables might be captured by anonymous functions within the iterator block. The delegates created from those anonymous functions may still be alive somewhere. Fortunately, the compiler is presumably aware of which variables have been captured. It could simply have a policy of turning off the optimization completely for iterator blocks which capture variables in anonymous functions, and clearing the appropriate variables for all the other situations.

At the same time, the generated code could be changed to clear the field which underlies the Current property – at the moment, if you ask an auto-generated iterator for Current after you’ve finished iterating over it, the result will be the last value yielded. This could be another subtle pseudo-leak, unlikely as it is. (Aside from anything else, it’s just not neat at the moment.)

Solution #6: Just disable the optimization completely (compiler code)

I’d love to see the data on which Microsoft based the decision to optimize the generated code this way. I have confidence that there is such data somewhere – although I wonder whether it has been updated since the advent of LINQ, which I suspect is one of the heaviest users of autogenerated iterators. It’s quite possible that this is still a "win" in most cases – just like the dodgy-sounding mutable struct iterator returned by the public List<T>.GetEnumerator is a win in very many cases, and only causes issues if you’re being awkward. I think it’s worth a re-evaluation though :)

If the optimization were removed, there would be two options:

  • Stick with a single generated type implementing both the sequence and iterator interfaces, but always create a new instance for iterators
  • Create one sequence type and one iterator type

Personally the second sounds cleaner to me – and it means that each instance would only need the fields relevant to it. But cleanliness isn’t terribly important for generated code, so long as it doesn’t have significant issues, so I wouldn’t be too upset if the team took the first approach. I expect that in either case it would simplify the code – there’d be no need to remember the thread a sequence was created on, and so on.

Conclusion

The very fact that I’ve never heard of this problem before suggests that it’s not very serious – but it feels serious. While it’s possibly rare to hold onto a query for a long time, it ought to be reasonable to do so without incurring a memory penalty related to the size of the query the first time it was executed. This doesn’t affect all query operators, of course – and I haven’t investigated which ones it does affect. The ones I’d look at are those which have to buffer their data for some reason – including ones which operate on two sequences, buffering one and streaming the other.

Out of all the possible solutions, I would personally favour 5 or 6. This really ought to be a compiler fix – it’s unreasonable to suggest that all developers should become aware of this idiosyncrasy and work round it. The part of me which longs for simplicity prefers solution 6; in reality I suspect that 5 is more likely. It’s quite possible that the C# team will decide that it’s a sufficiently harmless situation that they won’t fix it at all – and I’m not going to claim to know their product and priorities better than they do. (Obviously I’m making them aware of my findings, but not in a fix-this-or-else way.)

Whatever happens, it’s been a real blast investigating this… I never get tired of learning more about the details of source transformations and their (sometimes inadvertent) side-effects.

Reimplementing LINQ to Objects: Part 36 – AsEnumerable

Our last operator is the simplest of all. Really, really simple.

What is it?

AsEnumerable has a single signature:

public static IEnumerable<TSource> AsEnumerable<TSource>(this IEnumerable<TSource> source)

I can describe its behaviour pretty easily: it returns source.

That’s all it does. There’s no argument validation, it doesn’t create another iterator. It just returns source.

You may well be wondering what the point is… and it’s all about changing the compile-time type of the expression. I’m going to talk about IQueryable<T> in another post (although probably not implement anything related to it) but hopefully you’re aware that it’s usually used for "out of process" queries – most commonly in databases.

Now it’s not entirely uncommon to want to perform some aspects of the query in the database, and then a bit more manipulation in .NET – particularly if there are aspects you basically can’t implement in LINQ to SQL (or whatever provider you’re using). For example, you may want to build a particular in-memory representation which isn’t really amenable to the provider’s model.

In that case, a query can look something like this:

var query = db.Context
              .Customers
              .Where(c => some filter for SQL)
              .OrderBy(c => some ordering for SQL)
              .Select(c => some projection for SQL)
              .AsEnumerable() // Switch to "in-process" for rest of query
              .Where(c => some extra LINQ to Objects filtering)
              .Select(c => some extra LINQ to Objects projection);

All we’re doing is changing the compile-time type of the sequence which is propagating through our query from IQueryable<T> to IEnumerable<T> – but that means that the compiler will use the methods in Enumerable (taking delegates, and executing in LINQ to Objects) instead of the ones in Queryable (taking expression trees, and usually executing out-of-process).

Sometimes we could do this with a simple cast or variable declaration. However, for one thing that’s ugly, whereas the above query is fluent and quite readable, so long as you appreciate the importance of AsEnumerable. The more important point is that it’s not always possible, because we may very well be dealing with a sequence of an anonymous type. An extension method lets the compiler use type inference to work out what the T should be for IEnumerable<T>, but you can’t actually express that in your code.

In short – it’s not nearly as useless an operator as it seems at first sight. That doesn’t make it any more complicated to test or implement though…

What are we going to test?

In the spirit of exhaustive testing, I have actually tested:

  • A normal sequence
  • A null reference
  • A sequence which would throw an exception if you actually tried to use it

The tests just assert that the result is the same reference as we’ve passed in.

I have one additional test which comes as close as I can to demonstrating the point of AsEnumerable without using Queryable:

[Test]
public void AnonymousType()
{
    var list = new[] { 
        new { FirstName = "Jon", Surname = "Skeet" },
        new { FirstName = "Holly", Surname = "Skeet" }
    }.ToList();

    // We can’t cast to IEnumerable<T> as we can’t express T.
    var sequence = list.AsEnumerable();
    // This will now use Enumerable.Contains instead of List.Contains
    Assert.IsFalse(sequence.Contains(new { FirstName = "Tom", Surname = "Skeet" }));
}

And finally…

Let’s implement it!

There’s not much scope for an interesting implementation here I’m afraid. Here it is, in its totality:

public static IEnumerable<TSource> AsEnumerable<TSource>(this IEnumerable<TSource> source)
{
    return source;
}

It feels like a fittingly simple end to the Edulinq implementation.

Conclusion

I think that’s all I’m going to actually implement from LINQ to Objects. Unless I’ve missed something, that covers all the methods of Enumerable from .NET 4.

That’s not the end of this series though. I’m going to take a few days to write up some thoughts about design choices, optimizations, other operators which might have been worth including, and a little bit about how IQueryable<T> works.

Don’t forget that the source code is freely available on Google Code. I’ll be happy to patch any embarrassing bugs :)

Reimplementing LINQ to Objects: Part 35 – Zip

Zip will be a familiar operator to any readers who use Python. It was introduced in .NET 4 – it’s not entirely clear why it wasn’t part of the first release of LINQ, to be honest. Perhaps no-one thought of it as a useful operator until it was too late in the release cycle, or perhaps implementing it in the other providers (e.g. LINQ to SQL) took too long. Eric Lippert blogged about it in 2009, and I find it interesting to note that aside from braces, layout and names we’ve got exactly the same code. (I read the post at the time of course, but implemented it tonight without looking back at what Eric had done.) It’s not exactly surprising, given how trivial the implementation is. Anyway, enough chit-chat…

What is it?

Zip has a single signature, which isn’t terribly complicated:

public static IEnumerable<TResult> Zip<TFirst, TSecond, TResult>(
    this IEnumerable<TFirst> first,
    IEnumerable<TSecond> second,
    Func<TFirst, TSecond, TResult> resultSelector)

Just from the signature, the name, and experience from the rest of this blog series it should be easy enough to guess what Zip does:

  • It uses deferred execution, not reading from either sequence until the result sequence is read
  • All three parameters must be non-null; this is validated eagerly
  • Both sequences are iterated over "at the same time": it calls GetEnumerator() on each sequence, then moves each iterator forward, then reads from it, and repeats.
  • The result selector is applied to each pair of items obtained in this way, and the result yielded
  • It stops when either sequence terminates
  • As a natural consequence of how the sequences are read, we don’t need to perform any buffering: we only care about one element from each sequence at a time.

There are really only two things that I could see might have been designed differently:

  • It could have just returned IEnumerable<Tuple<TFirst, TSecond>> but that would have been less efficient in many cases (in terms of the GC) and inconsistent with the rest of LINQ
  • It could have provided different options for what to do with sequences of different lengths. For example:
    • Throw an exception
    • Use the default value of the shorter sequence type against the remaining items of the longer sequence
    • Use a specified default value of the shorter sequence in the same way

I don’t have any problem with the design that’s been chosen here though.

What are we going to test?

There are no really interesting test cases here. We test argument validation, deferred execution, and the obvious "normal" cases. I do have tests where "first" is longer than "second" and vice versa.

The one test case which is noteworthy isn’t really present for the sake of testing at all – it’s to demonstrate a technique which can occasionally be handy. Sometimes we really want to perform a projection on adjacent pairs of elements. Unfortunately there’s no LINQ operator to do this naturally (although it’s easy to write one) but Zip can provide a workaround, so long as we don’t mind evaluating the sequence twice. (That could be a problem in some cases, but is fine in others.)

Obviously if you just zip a sequence with itself directly you get each element paired with the same one. We effectively need to "shift" or "delay" one sequence somehow. We can do this using Skip, as shown in this test:

[Test]
public void AdjacentElements()
{
    string[] elements = { "a", "b", "c", "d", "e" };
    var query = elements.Zip(elements.Skip(1), (x, y) => x + y);
    query.AssertSequenceEqual("ab", "bc", "cd", "de");
}

It always takes me a little while to work out whether I want to skip on first or second – but wanting the second element of the source to appear as the first element of second (try getting that right ten times in a row – it makes sense, honest!) means that we want to call Skip on the sequence used as the argument for second. Obviously it would work the other way round too – we’d just get the pairs presented with the values switched, so the results of the query above would be "ba", "cb" etc.

Let’s implement it!

Guess what? It’s yet another operator with a split implementation between the argument validation and the "real work". I’ll skip argument validation, and get into the tricky stuff. Are you ready? Sure you don’t want another coffee?

private static IEnumerable<TResult> ZipImpl<TFirst, TSecond, TResult>(
    IEnumerable<TFirst> first,
    IEnumerable<TSecond> second,
    Func<TFirst, TSecond, TResult> resultSelector)
{
    using (IEnumerator<TFirst> iterator1 = first.GetEnumerator())
    using (IEnumerator<TSecond> iterator2 = second.GetEnumerator())
    {
        while (iterator1.MoveNext() && iterator2.MoveNext())
        {
            yield return resultSelector(iterator1.Current, iterator2.Current);
        }
    }
}

Okay, so possibly "tricky stuff" was a bit of an overstatement. Just about the only things to note are:

  • I’ve "stacked" the using statements instead of putting the inner one in braces and indenting it. For using statements with different variable types, this is one way to keep things readable, although it can be a pain when tools try to reformat the code. (Also, I don’t usually omit optional braces like this. It does make me feel a bit dirty.)
  • I’ve used the "symmetric" approach again instead of a using statement with a foreach loop inside it. That wouldn’t be hard to do, but it wouldn’t be as simple.

That’s just about it. The code does exactly what it looks like, which doesn’t make for a very interesting blog post, but does make for good readability.

Conclusion

Two operators to go, one of which I might not even tackle fully (AsQueryable – it is part of Queryable rather than Enumerable, after all).

AsEnumerable should be pretty easy…

Reimplementing LINQ to Objects: Part 34 – SequenceEqual

Nearly there now…

What is it?

SequenceEqual has two overloads – the obvious two given that we’re dealing with equality:

public static bool SequenceEqual<TSource>(
    this IEnumerable<TSource> first,
    IEnumerable<TSource> second)

public static bool SequenceEqual<TSource>(
    this IEnumerable<TSource> first,
    IEnumerable<TSource> second,
    IEqualityComparer<TSource> comparer)

The purpose of the operator is to determine if two sequences are equal; that is, if they consist of the same elements, in the same order. A custom equality comparer can be used to compare each individual pair of elements. Characteristics:

  • The first and second parameters mustn’t be null, and are validated immediately.
  • The comparer parameter can be null, in which case the default equality comparer for TSource is used.
  • The overload without a comparer uses the default equality comparer for TSource (no funny discrepancies if ICollection<T> instances are involved, this time).
  • It uses immediate execution.
  • It returns as soon as it notices a difference, without evaluating the rest of either sequence.

So far, so good. Note that no optimizations are mentioned above. There are questions you might consider:

  • Should foo.SequenceEqual(foo) always return true?
  • If either or both of the sequences implements another collection interface, does that help us?

The first question sounds like it should be a no-brainer, but it’s not as simple as it sounds. Suppose we have a sequence which always generates 10 random numbers. Is it equal to itself? If you iterate over it twice, you’ll usually get different results. What about a sequence which explicitly changes each time you iterate over it, based on some side-effect? Both the .NET and Edulinq implementations say that these are non-equal. (The random case is interesting, of course – it could happen to yield the same elements as we iterate over the two sequences.)

The second question feels a little simpler to me. We can’t take a shortcut to returning true, but it seems reasonably obvious to me that if you have two collections which allow for a fast "count" operation, and the two counts are different, then the sequences are unequal. Unfortunately, LINQ to Objects appears not to optimize for this case: if you create two huge arrays of differing sizes but equal elements as far as possible, it will take a long time for SequenceEqual to return false. Edulinq does perform this optimization. Note that just having one count isn’t useful: you might expect it to be, but it turns out that by the time we could realize that the lengths were different in that case, we’re about to find that out in the "normal" fashion anyway, so there’s no point in complicating the code to make use of the information.

What are we going to test?

As well as the obvious argument validation, I have tests for:

  • Collections of different lengths
  • Ranges of different lengths, both with first shorter than second and vice versa
  • Using a null comparer
  • Using a custom comparer
  • Using no comparer
  • Equal arrays
  • Equal ranges
  • The non-optimization of foo.SequenceEqual(foo) (using side-effects)
  • The optimization using Count (fails on LINQ to Objects)
  • Ordering: { 1, 2 } should not be equal to { 2, 1 }
  • The use of a HashSet<string> with a case-insensitive comparer: the default (case-sensitive) comparer is still used when no comparer is provided
  • Infinite first sequence with finite second sequence, and vice versa
  • Sequences which differ just before they would go bang

None of the test code is particularly interesting, to be honest.
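
For completeness though, here’s roughly what a test for the final bullet might look like – a sketch with hypothetical names rather than the exact Edulinq test:

[Test]
public void UnequalSequencesReturnBeforeGoingBang()
{
    // Both sequences would throw DivideByZeroException on their
    // final element, but they differ at the second element - so
    // SequenceEqual should return false without going bang.
    var first = new[] { 1, 2, 0 }.Select(x => 10 / x);
    var second = new[] { 1, 5, 0 }.Select(x => 10 / x);
    Assert.IsFalse(first.SequenceEqual(second));
}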

Let’s implement it!

I’m not going to show the comparer-less overload, as it just delegates to the one with a comparer.

Before we get into the guts of SequenceEqual, it’s time for a bit of refactoring. If we’re going to optimize for count, we’ll need to perform the same type tests as Count() twice. That would be horrifically ugly inline, so let’s extract the functionality out into a private method (which Count() can then call as well):

private static bool TryFastCount<TSource>(
    IEnumerable<TSource> source,
    out int count)
{
    // Optimization for ICollection<T>
    ICollection<TSource> genericCollection = source as ICollection<TSource>;
    if (genericCollection != null)
    {
        count = genericCollection.Count;
        return true;
    }

    // Optimization for ICollection
    ICollection nonGenericCollection = source as ICollection;
    if (nonGenericCollection != null)
    {
        count = nonGenericCollection.Count;
        return true;
    }

    // Can’t retrieve the count quickly. Oh well.
    count = 0;
    return false;
}

Pretty simple. Note that we always have to set the out parameter to some value. We use 0 on failure – which happens to work out nicely in Count(), as we can just start incrementing if TryFastCount has returned false.
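
So the refactored Count() ends up looking something like this – a sketch of the shape rather than the exact Edulinq code:

public static int Count<TSource>(this IEnumerable<TSource> source)
{
    if (source == null)
    {
        throw new ArgumentNullException("source");
    }
    int count;
    if (TryFastCount(source, out count))
    {
        return count;
    }
    // Slow path: TryFastCount has set count to 0 for us,
    // so we can just keep incrementing.
    using (IEnumerator<TSource> iterator = source.GetEnumerator())
    {
        checked
        {
            while (iterator.MoveNext())
            {
                count++;
            }
        }
    }
    return count;
}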

Now we can make a start on SequenceEqual. Here’s the skeleton before we start doing the real work:

public static bool SequenceEqual<TSource>(
    this IEnumerable<TSource> first,
    IEnumerable<TSource> second,
    IEqualityComparer<TSource> comparer)
{
    if (first == null)
    {
        throw new ArgumentNullException("first");
    }
    if (second == null)
    {
        throw new ArgumentNullException("second");
    }

    int count1;
    int count2;
    if (TryFastCount(first, out count1) && TryFastCount(second, out count2))
    {
        if (count1 != count2)
        {
            return false;
        }
    }

    comparer = comparer ?? EqualityComparer<TSource>.Default;

    // Main part of implementation goes here
}

I could have included the comparison between count1 and count2 within the single "if" condition, like this:

if (TryFastCount(first, out count1) && 
    TryFastCount(second, out count2) &&
    count1 != count2)
{
    return false;
}

… but I don’t usually like using the values of out parameters like this. The behaviour is well-defined and correct, but it just feels a little ugly to me.

Okay, now let’s implement the "Main part" which is at the bottom of the skeleton. The idea is simple:

  • Get the iterators for both sequences
  • Use the iterators "in parallel" (not in the multithreading sense, but in the movement of the logical cursor down the sequence) to compare pairs of elements; we can return false if we ever see an unequal pair
  • If we ever see that one sequence has finished and the other hasn’t, we can return false
  • If we get to the end of both sequences in the same iteration, we can return true

I’ve got three different ways of representing the basic algorithm in code though. Fundamentally, the problem is that we don’t have a way of iterating over pairs of elements in two sequences with foreach – we can’t use one foreach loop inside another, for hopefully obvious reasons. So we’ll have to call GetEnumerator() explicitly on at least one of the sequences… and we could do it for both if we want.
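
Just to spell out why nesting the loops wouldn’t work: two nested foreach loops iterate over the full cross product of the sequences, not over corresponding pairs:

// Broken: this visits every combination of (item1, item2) -
// comparing each element of first against *every* element of
// second, rather than the elements at matching positions.
foreach (TSource item1 in first)
{
    foreach (TSource item2 in second)
    {
        // ...
    }
}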

The first implementation (and my least favourite) does use a foreach loop:

using (IEnumerator<TSource> iterator2 = second.GetEnumerator())
{
    foreach (TSource item1 in first)
    {
        // second is shorter than first
        if (!iterator2.MoveNext())
        {
            return false;
        }
        if (!comparer.Equals(item1, iterator2.Current))
        {
            return false;
        }
    }
    // If we can get to the next element, first was shorter than second.
    // Otherwise, the sequences are equal.
    return !iterator2.MoveNext();
}

I don’t have a desperately good reason for picking them this way round (i.e. foreach over first, and GetEnumerator() on second) other than that it seems to still give primacy to first somehow… only first gets the "special treatment" of a foreach loop. (I can almost hear the chants now, "Equal rights for second sequences! Don’t leave us out of the loop! Stop just ‘using’ us!") Although I’m being frivolous, I dislike the asymmetry of this.

The second attempt is a half-way house: it’s still asymmetric, but slightly less so as we’re explicitly fetching both iterators:

using (IEnumerator<TSource> iterator1 = first.GetEnumerator(),
                            iterator2 = second.GetEnumerator())
{
    while (iterator1.MoveNext())
    {
        // second is shorter than first
        if (!iterator2.MoveNext())
        {
            return false;
        }
        if (!comparer.Equals(iterator1.Current, iterator2.Current))
        {
            return false;
        }
    }
    // If we can get to the next element, first was shorter than second.
    // Otherwise, the sequences are equal.
    return !iterator2.MoveNext();
}

Note the use of the multi-variable "using" statement; this is equivalent to nesting one statement inside another, of course.
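
In other words, the declaration above behaves just like this nested form:

using (IEnumerator<TSource> iterator1 = first.GetEnumerator())
{
    using (IEnumerator<TSource> iterator2 = second.GetEnumerator())
    {
        // ... loop body as before ...
    }
}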

The similarities between these two implementations are obvious – but the differences are worth pointing out. In the latter approach, we call MoveNext() on both sequences, and then we access the Current property on both sequences. In each case we use iterator1 before iterator2, but it still feels like they’re being treated more equally somehow. There’s still the fact that iterator1 is being used in the while loop condition, whereas iterator2 has to be used both inside and outside the while loop. Hmm.

The third implementation takes this even further, changing the condition of the while loop:

using (IEnumerator<TSource> iterator1 = first.GetEnumerator(),
       iterator2 = second.GetEnumerator())
{
    while (true)
    {
        bool next1 = iterator1.MoveNext();
        bool next2 = iterator2.MoveNext();
        // Sequences aren’t of same length. We don’t
        // care which way round.
        if (next1 != next2)
        {
            return false;
        }
        // Both sequences have finished – done
        if (!next1)
        {
            return true;
        }
        if (!comparer.Equals(iterator1.Current, iterator2.Current))
        {
            return false;
        }
    }
}

This feels about as symmetric as we can get. The use of next1 in the middle "if" condition is incidental – it could just as easily be next2, as we know the values are equal. We could switch round the order of the calls to MoveNext(), or the order of the arguments to comparer.Equals – the structure is symmetric.

I’m not generally a fan of while(true) loops, but in this case I think I rather like it. It makes it obvious that we’re going to keep going until we’ve got a good reason to stop: one of the three return statements. (I suppose I should apologise to fans of the dogma around single exit points for methods, if any are reading. This must be hell for you…)

Arguably this is all a big fuss about nothing – but writing Edulinq has given me a new appreciation for diving into this level of detail to find the most readable code. As ever, I’d be interested to hear your views. (All three versions are in source control. Which one is active is defined with a #define.)

Conclusion

I really don’t know why Microsoft didn’t implement the optimization around different lengths for SequenceEqual. Arguably in the context of LINQ you’re unlikely to be dealing with two materialized collections at a time – it’s much more common to have one collection and a lazily-evaluated query, or possibly just two queries… but it’s a cheap optimization and the benefits can be significant. Maybe it was just an oversight.

Our next operator also deals with pairs of elements, so we may be facing similar readability questions around it. It’s Zip – the only new LINQ query operator in .NET 4.