Category Archives: Books

How many Jedi?

(There’s no technical content in this post… but you may get a bit of a giggle from it. When I get the second edition web site notes together I’ll include this as well… but I thought it was fun enough to deserve a blog post too.)

The second edition of C# in Depth is nearing the end of its technical review cycle, as performed by the great Eric Lippert. Yesterday I received the comments for chapter 13, which includes this section heading:

The revenge of optional parameters and named arguments

Now, my copy editor (Ben) wasn’t too keen on this. He suggested an alternative form:

I think "have their revenge" has more of a ring to it than "the revenge of"

Personally I’m quite fond of the original, but Eric suggested another alternative, with customary flair:

I’m not buying it Ben. Your way vs Jon’s way:
"The Clones Attack"       / "Attack of the Clones"
"The Sith Have Revenge"   / "The Revenge of the Sith"
"The Empire Strikes Back" / "The Counter-attack of the Empire"
"The Jedi Return"         / "The Return of the Jedi"

I would argue – I have before and I will again – that the proper title for episode two is not the passive, wimpy "Attack of the Clones" but rather the aggressive, dynamic, active "The Clones Attack", preferably with an exclamation point, "The Clones Attack!"

"The Sith Have Revenge" has that awkward verb form. "The Counter-attack of the Empire" is too wordy. And "The Jedi Return" is just… wrong.  So I would score these as the winners being Ben / Jon / Ben / Jon.

I say "the revenge of" is superior to "have their revenge", but that the best would be "Optional and named parameters strike back".

Also, NOOOOOO! You’re not my father!

This intrigued me mightily, so I dashed off an email to Eric:

Hi Eric,

I’m just going through your notes for chapter 13, and they’ve brought up an issue which I think would bother me if I didn’t consult you about it.

You suggested that the alternative to "Return of the Jedi" (1) would be "The Jedi Return." That implies multiple Jedi returning – does this include Anakin returning from the Dark Side? Leia’s nascent ability being revealed? I had always imagined it as referring only to Luke’s return, suggesting "The Jedi Returns" as the parallel title. This could change everything.

Jon

—-

(1) There’s no leading "The" in the English title, as far as I can tell – although in French it’s "Le retour du Jedi." Does this alter your argument at all?

Eric’s reply was as prompt as ever:

First off, you’re right, there’s no leading “The”. I had not realized that.

I had always assumed that the “Jedi” of the title “Return of the Jedi” referred to the beginning of the restoration of the Jedi as an institution. With the downfall of the Emperor and Lord Vader, the Jedi are back. Even though technically there’s only one of them alive in the club right now, there will be more.

However, I must admit that in light of episodes one through three, it now seems plausible that in fact the Jedi referred to in the title is neither the Jedi as a class nor Luke as an individual, but rather the redemption of Anakin.

Beyond the dialogue…

So, that’s the end of that, right? We can’t really tell what Lucas was thinking… except that when I relayed all of this at the office over breakfast, someone suggested that we have a look at some other translations – and that we pay more attention to the French than just the inclusion of "Le" to start with.

The fact that the French version uses "du" suggests it’s the return of a singular Jedi rather than many individual Jedi knights… but apparently the singular form can also be used for a collective institution, in line with Eric’s "Jedi as a class" theory.

The German version is interesting, though still ambiguous as far as I can tell: "Die Rückkehr der Jedi-Ritter" – a colleague suggested that this implies knights plural, but "the return of the knight" and "the return of the knights" translate the same way in Google Translate. The fact that "Ritter" is both plural and singular (like sheep in English) looks like it foils us. EDIT: As noted in comments, the genitive form would be "des" for a singular knight, so it really is "knights". I was misled by automated translation – I should have trusted my colleague :) But does this mean "the return of several individual Jedi knights" or "the return of an institution of Jedi knights"? Without knowing the subtleties of the German language, I certainly can’t tell for sure.

There’s a whole host of titles of course – if any reader gifted in languages wishes to analyse some more of them, I’d be very grateful. One thing I will point out is the alternative US working title: "Revenge of the Jedi." Who really had their revenge in episode VI? Arguably Luke avenged Han by killing Jabba… and perhaps Anakin himself took revenge on the Emperor? Given that Obi Wan effectively started Luke on the crusade against Vader, perhaps it’s his revenge from beyond the grave?

These are serious matters which I lament I am unable to explore adequately in this post – but comments are more welcome than ever.

Conclusion

So what happened to the heading in the end? Well, for the moment I’ve left it as it is. It may yet change before printing though – we’ll see. Possibly I should take this opportunity to make Eric’s dream change apply in a different context… how about "Attack of the optional parameters and named arguments!" Perhaps not.

Anyway, I’m sure you will all be glad to see that such weighty technical matters are being given the thorough attention they deserve. Yes, the book really will come out sometime reasonably soon.

How do we raise our game?

A couple of weeks ago, I was in the Seattle area for work reasons. I took the opportunity to meet up with a lot of smart folks, including some working on the Reactive Extensions team and the C# team. I asked pretty much the same question of almost everyone:

How is Microsoft intending to make developers smarter?

Let me explain what I mean a bit more clearly…

The problem

Now, let me be quite clear: I only have visibility of a small section of the community. In particular, I get to see:

  • Comments from blog readers
  • Questions and answers on Stack Overflow
  • MSDN forum posts (typically the Code Contracts and Reactive Extensions forums, and then only occasionally)
  • Random emails, often from C# in Depth readers
  • User group and conference attendees

Intuition would suggest that these are some of the more interested developers. Not necessarily the smartest, but ones who are at least engaged with the community. It’s been a while since I’ve worked in a regular company – I don’t think Google engineers are representative of software engineers as a whole – so I’m not going to suggest I have a good grasp of how able the "disengaged" engineers are. I doubt that they’re significantly smarter though.

My concern is that some of the new technologies that I’m getting excited by (in particular Parallel Extensions, Code Contracts and Reactive Extensions) may make it easier for awesome people to write awesome code – but leave everyone else behind. Even though I’m excited about Reactive Extensions, I still regularly get confused by it – this is partly a matter of thin documentation (a natural corollary of an API which is still under development; I’m sure it will improve markedly over time) but partly because a push model is simply harder to think about.

What can we do about it?

Taking Reactive Extensions as an example, how is anyone going to learn about Reactive Extensions to such a degree that they can really use it productively, rather than just for experimentation?

As I see it there are currently five main ways of disseminating information:

User groups and conferences

These are good for getting people interested – but they’re not really great at deep dives. To be absolutely cynical about it, I think they’re better at entertaining than educating. To be slightly more positive, they can be good at inspiring people to look further for themselves… but unless you have a relatively long session or start off with developers who already have a fair amount of experience in the topic, I don’t think they’re a great way of imparting detailed information. (Organisers of such events, please note that this is in no way a condemnation of events in general. I’m still interested in speaking at them – I just question their ability to create experts.)

Training courses

I have little experience of training courses – it could be that they’re highly effective – but they clearly don’t scale in terms of getting information to a wide audience. As training companies make their money by getting people to the courses in person, they’re unlikely to provide videos of the training to let everyone take advantage of an individual tutor’s experience. We can hope for a network effect here and in some of the other options, of course – an expert trains a novice, the novice gains experience and then trains other novices in their company. There’s a risk of truth dilution, but at least this has the possibility of reaching those who would otherwise never voluntarily learn about new technologies.

Blogs

Blogs can be effective, but are difficult to navigate and often outdated. The "navigation" problem is one of picking the right course through the posts in order to try to get a well-rounded knowledge of the subject in an appropriate order.

To give a concrete example, a large amount of the information available in C# in Depth is covered in significantly more detail in Eric Lippert’s blog – which is great for those who already know enough to read it, but it doesn’t provide a 1, 2, 3 experience for learning C# 2-4. (Aside: Over time I’ve been coming to the conclusion that treating a subject "in the round" with a carefully considered ordering is the primary feature of tech books these days – you can almost always find more detailed, accurate and timely information online if you know what you’re looking for.)

I’ve been tremendously impressed at the Microsoft developer blog output in the last few years. There are many well-written, informative and enlightening blogs out there, which can certainly go a long way to filling an educational void. Right at this minute, it feels like they’re probably the most effective teaching tool in this list… but it somehow feels wrong to try to learn a new technology from scratch from a blog. It’s easy to get caught up in details and miss the big picture.

Is there a market for third parties collating blog posts into effective teaching material? I don’t just mean "link blogs" – but well-maintained and organised sequences of posts designed to teach particular topics… possibly with extra explanatory posts to provide cohesion. Is there sufficient coverage of basic topics in blogs, or are most bloggers aiming at developers who are already experienced in the topic in question? Answers on a postcard…

Books

Books have problems of freshness and detail, as mentioned before. There’s also a problem of readership – I’m not convinced that books have a wide enough reach these days. I suspect this is largely because blogs do a good enough job in many areas, and also because technology often moves faster than a book market can keep up. Of course this ends up being a cycle to some extent – with fewer and fewer readers, top authors will have more incentive to blog than to write books, so the overall quality could drop, leading to fewer book readers etc. I’m not going to pretend to know enough about the book market to make concrete predictions – but it does concern me.

(I’m not particularly bothered in terms of my own income – I’ve never been writing C# in Depth for the money, although of course that provides more incentive for "polish". I wouldn’t have put in as much editorial work in a purely amateur capacity. There’s a very strange motivational force going on when it comes to tech writing and publishing. That’s possibly a seed for another post some time.)

Will there be a book teaching how to think about push sequences and use Reactive Extensions effectively? Maybe. Heck, I’d potentially be interested in writing one – if only to understand it better myself. Would it reach all the people who could benefit from it though? I’m more dubious about that.

Documentation and tutorials

This is probably the most traditional and widespread route to knowledge of new technologies. Does it cut it? I simply don’t know. I can’t remember the last time I read docs in a "from scratch" mode – my experience is much more in reference mode. I’ve occasionally tried – but found either my patience or the documentation to be lacking. It’s entirely possible that my expectations and methods of learning have changed, and that that’s what’s gone wrong… but this may be a reasonably common phenomenon.

Again there’s a problem of reach – although it’s possible that, as the most traditional form of learning, it has a greater reach into the non-community, if you see what I mean.

Non-conclusion

This has been somewhat rambly, partly because I’m demob-happy: I’m about to go to Athens on holiday with Holly for a few days. (Don’t expect any comments to be approved until Sunday.) It’s a real concern though, and one which goes way beyond the Microsoft technologies mentioned here.

The challenges faced in computing are growing, and so are the technologies trying to help us meet those challenges… but we need to grow too, in order to take advantage of what’s available.

I’m not pronouncing doom on the industry – but I’d like your thoughts on how we can keep up, and how we can help others to do likewise.

Just how lazy are you?

I’ve been reviewing chapter 10 of C# in Depth, which is about extension methods. This is where I start introducing some of the methods in System.Linq.Enumerable, such as Where and Reverse. I introduce a few pieces of terminology in callouts – and while I believe I’m using this terminology correctly according to MSDN, I suspect that some of it isn’t quite what many developers expect… in particular, what does it mean for something to be "lazy"?

Let’s start off with a question: is Enumerable.Reverse lazy? Just so we’re all on the same page, here are the interesting bits of the behaviour of Reverse:

  • The call to Reverse doesn’t fetch any items – it merely checks that you’ve not given it a null sequence, stores a reference to that sequence, and returns a new sequence.
  • Once the first item is returned from the returned sequence, all the items in the original sequence are fetched and buffered. Obviously this is required, as the first item in the reversed sequence is the last item in the original sequence.

So, is that lazy or not?

One simple definition of lazy

This morning I tweeted on the somewhat ambiguous notion of something being "lazy" – and the responses I received were along the lines of "it’s deferred execution". You could potentially sum up this notion of laziness as:

An operation is lazy if it defers work until it is required in order to return a result.

By that definition, Reverse is lazy. Assuming we don’t want to perform special optimisations for IList<T> (which change the exact behaviour), Reverse does as little work as it can – it just so happens that when the first item is requested, it has to drain the source sequence.
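
Just to illustrate what I mean, here’s a minimal sketch of a Reverse-style operator written with an iterator block. It’s not the real implementation (argument validation and the ICollection<T>/IList<T> optimisations are omitted) but it shows how an operator can be deferred and yet still buffer everything:

public static class BufferingSketches
{
    // Deferred but buffering: nothing runs until the first MoveNext() call on
    // the result, at which point the whole source is drained into a list.
    public static IEnumerable<T> ReverseSketch<T>(this IEnumerable<T> source)
    {
        List<T> buffer = new List<T>(source);
        for (int i = buffer.Count - 1; i >= 0; i--)
        {
            yield return buffer[i];
        }
    }
}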

The MSDN definition of lazy

MSDN describes deferred execution, lazy evaluation and eager evaluation somewhat differently. Admittedly the page I’m taking these definitions from is in the context of LINQ to XML, but that effectively means it’s describing LINQ to Objects. It defines them like this:

Deferred execution means that the evaluation of an expression is delayed until its realized value is actually required.

In lazy evaluation, a single element of the source collection is processed during each call to the iterator. This is the typical way in which iterators are implemented.

In eager evaluation, the first call to the iterator will result in the entire collection being processed. A temporary copy of the source collection might also be required. For example, the OrderBy method has to sort the entire collection before it returns the first element.

Now, I take slight issue with the definition of lazy evaluation here as it specifies that a single element of the source collection is processed on each call. That’s fine for Cast, Select and a few other operators – but it would preclude Where (and OfType, for that matter), which might have to pull several source elements before finding one which matches the specified filter. I still think of Where as being lazy.
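
To show why, here’s a minimal sketch of a Where-style operator as an iterator block (argument validation omitted, and this isn’t the real code). A single call to MoveNext() on the result may pull several elements from the source before anything is yielded:

public static class StreamingSketches
{
    // Deferred and streaming: each MoveNext() on the result pulls source
    // elements until one matches the predicate or the source runs out.
    public static IEnumerable<T> WhereSketch<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (T item in source)
        {
            if (predicate(item))
            {
                yield return item;
            }
        }
    }
}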

My definition of lazy

Thinking about this more, I believe the following definition of laziness is helpful:

(This space left intentionally blank.)

I don’t believe lazy is a useful term, basically. It’s like "strong typing" – you get some sort of woolly feeling about how something will behave if it’s lazy, but chances are you’ll need something more precise anyway.

For the purposes of LINQ to Objects-style operators which return IEnumerable<T> or any interface derived from it (including IOrderedEnumerable<T>), I propose the following definitions:

An operation uses deferred execution if no elements are fetched from the source sequence until the first element is fetched from the result sequence. This applies to basically all LINQ to Objects operators which return a sequence interface. (It doesn’t apply to ToArray or ToList of course.)

An operation is streaming if it only fetches elements from the source sequence as it requires them, and does not store those elements internally unless otherwise specified.

An operation is buffering if it drains the source sequence (i.e. fetches all the elements) when the first element of the result sequence is fetched, and stores the items internally.

I’m not even entirely comfortable with this: you could claim that Reverse is "streaming but with internal storage" – but that’s against the spirit of the definitions. Why did I mention the internal storage at all? Well, consider Distinct… it streams data, in that the result sequence will return its first element immediately after reading the first element from the source sequence, and so on – but it has to remember all the elements it’s already returned, for obvious reasons.
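
A minimal sketch of a Distinct-style operator shows what I mean – again this is just an illustration, not the real implementation:

public static class DistinctSketches
{
    // Streams its results - each new element is yielded as soon as it's seen -
    // but it has to remember every element it has already returned.
    public static IEnumerable<T> DistinctSketch<T>(this IEnumerable<T> source)
    {
        HashSet<T> seen = new HashSet<T>();
        foreach (T item in source)
        {
            if (seen.Add(item))
            {
                yield return item;
            }
        }
    }
}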

Some operations are buffering in one input and streaming in another – for example, Join will read all of the "inner" sequence as soon as it’s asked for its first element, but streams the "outer" sequence.

Why does this matter?

Is this just another example of my pedantry and desire for precision? Not really. Consider my old favourite example of LINQ: processing huge log files. Suppose each log entry contains a user ID, and we’ve got a handy log reader which will iterate over all the log files, yielding one log entry at a time.

  • Using entries.Reverse() would be disastrous if we ever actually used the result. We really, really don’t want to load everything into memory.
  • Using entries.Select(x => x.UserId) would be fine, so long as we used the result without buffering it ourselves.
  • Using entries.Select(x => x.UserId).Distinct() might be fine – it would depend on how many users we have. If you’re processing some company-internal application logs, that’s probably okay even if you’ve generated a huge number of log entries. If you’re processing Facebook logs, you could have more of a problem.

Basically, you need to know what an operation will do with its input before you can decide whether or not it’s usable for a particular situation.

The best answer for this (IMO) is to document any such methods meticulously. Yes, you then lose the flexibility of changing the behaviour at a later date – but at least you’ve then provided something that can be used with confidence.

Note that Reactive Extensions has a similar problem, but in a slightly different form – in that case, the distinction between "hot" and "cold" observables can make a big difference, along with scheduling etc. Again, documentation is the key in my view.

Optimisations in LINQ to Objects

(Edited on February 11th, 2010 to take account of a few mistakes and changes in the .NET 4.0 release candidate.)

I’ve just been fiddling with the first appendix of C# in Depth, which covers the standard query operators in LINQ, and describes a few details of the LINQ to Objects implementations. As well as specifying which operators stream and which buffer their results, and immediate vs deferred execution, I’ve noted where LINQ optimises for different collection types – or where it doesn’t, but could. I’m not talking about optimisations which require knowledge of the projection being applied or an overall picture of the query (e.g. seq.Reverse().Any() being optimised to seq.Any()) – this is just about optimisations which can be done on a pretty simple basis.

There are two main operations which can be easily optimised in LINQ: random access to an element by index, and the count of a collection. The tricky thing about optimisations like this is that we do have to make assumptions about implementation: I’m going to assume that any implementation of an interface with a Count property will be able to return the count very efficiently (almost certainly straight from a field) and likewise that random access via an indexer will be speedy. Given that both these operations already have their own LINQ operators (when I say "operator" in this blog post, I mean "LINQ query method" rather than an operator at the level of C# as a language) let’s look at those first.

Count

Count() should be pretty straightforward. Just to be clear, I’m only talking about the overload which doesn’t take a predicate: it’s pretty hard to see how you can do better than iterating through and counting matches when there is a predicate involved.

There are actually two common interfaces which declare a Count property: ICollection<T> and ICollection. While many implementations of ICollection<T> will also implement ICollection (including List<T> and T[]), it’s not guaranteed: they’re independent interfaces, unrelated other than by the fact that they both extend IEnumerable.

The MSDN documentation for Enumerable.Count() states:

If the type of source implements ICollection<T>, that implementation is used to obtain the count of elements. Otherwise, this method determines the count.

This is accurate for .NET 3.5, but in .NET 4.0 it does optimise for ICollection as well. (In beta 2 it only optimised for ICollection, skipping the generic interface.)
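
As a sketch, the checks are roughly along these lines – this is just my illustration of the shape of the .NET 4.0 behaviour, not the framework source:

public static class CountSketch
{
    // Try the generic interface first, then the non-generic one, then fall
    // back to iterating and counting.
    public static int CountSketched<T>(this IEnumerable<T> source)
    {
        ICollection<T> genericCollection = source as ICollection<T>;
        if (genericCollection != null)
        {
            return genericCollection.Count;
        }
        ICollection nonGenericCollection = source as ICollection;
        if (nonGenericCollection != null)
        {
            return nonGenericCollection.Count;
        }
        int count = 0;
        foreach (T item in source)
        {
            count++;
        }
        return count;
    }
}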

ElementAt

The equivalent of an indexer in LINQ is the ElementAt operator. Note that it can’t really be an indexer as there’s no such thing as an "extension indexer" which is arguably a bit of a pity, but off-topic. Anyway, the obvious interface to look for here is IList<T>… and that’s exactly what ElementAt does. It ignores the possibility that you’ve only implemented the nongeneric IList – but I think that’s fairly reasonable. After all, the extension method extends IEnumerable<T>, so your collection has to be aware of generics – why would you implement IList but not IList<T>? Also, using the implementation of IList would involve a conversion from object to T, which would at the very least be ugly.
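
Here’s a sketch of the kind of check involved – again, my own illustration with argument validation trimmed, rather than the actual implementation:

public static class ElementAtSketch
{
    public static T ElementAtSketched<T>(this IEnumerable<T> source, int index)
    {
        IList<T> list = source as IList<T>;
        if (list != null)
        {
            return list[index]; // random access; the indexer does the range checking
        }
        foreach (T item in source)
        {
            if (index == 0)
            {
                return item;
            }
            index--;
        }
        throw new ArgumentOutOfRangeException("index");
    }
}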

So ElementAt doesn’t actually do too badly. Now that we’ve got the core operations, what else could be optimised?

SequenceEqual

If you were going to write a method to compare two lists, you might end up with something like this (ignoring nullity concerns for the sake of brevity):

public static bool ListEqual<T>(List<T> first, List<T> second, IEqualityComparer<T> comparer)
{
    // Optimise for reflexive comparison
    if (first == second)
    {
        return true;
    }
    // If the counts are different we know the lists are different
    if (first.Count != second.Count)
    {
        return false;
    }
    // Compare each pair of elements in turn
    for (int i = 0; i < first.Count; i++)
    {
        if (!comparer.Equals(first[i], second[i]))
        {
            return false;
        }
    }
    return true;
}

Note the two separate optimisations. The first is always applicable, unless you really want to deal with sequences which will yield different results if you call GetEnumerator() on them twice. You could certainly argue that that would be a legitimate implementation, but I’d be interested to see a situation in which it made sense to try to compare such a sequence with itself and return false. SequenceEqual doesn’t perform this optimisation.

The second optimisation – checking for different counts – is only really applicable in the case where we know that Count is optimised for both lists. In particular, I always make a point of only iterating through each source sequence once when I write a custom LINQ operator – you never know when you’ll be given a sequence which reads a huge log file from disk, yielding one line at a time. (Yes, that is my pet example, but it’s a good one and I’m sticking to it.) But we can certainly tell if both sequences implement ICollection or ICollection<T>, so it would make sense to have an "early negative" in that situation.
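
Here’s a sketch of what SequenceEqual could look like with both early outs in place – hypothetical code, not the framework implementation, and the overload taking a custom comparer is omitted:

public static class SequenceEqualSketch
{
    public static bool SequenceEqualSketched<T>(this IEnumerable<T> first, IEnumerable<T> second)
    {
        // Reflexive comparison: a sequence is equal to itself
        if (ReferenceEquals(first, second))
        {
            return true;
        }
        // Early negative: only compare counts when both are cheaply available
        ICollection<T> firstCollection = first as ICollection<T>;
        ICollection<T> secondCollection = second as ICollection<T>;
        if (firstCollection != null && secondCollection != null &&
            firstCollection.Count != secondCollection.Count)
        {
            return false;
        }
        EqualityComparer<T> comparer = EqualityComparer<T>.Default;
        using (IEnumerator<T> firstIterator = first.GetEnumerator())
        using (IEnumerator<T> secondIterator = second.GetEnumerator())
        {
            while (true)
            {
                bool firstHasNext = firstIterator.MoveNext();
                bool secondHasNext = secondIterator.MoveNext();
                if (firstHasNext != secondHasNext)
                {
                    return false; // one sequence ran out before the other
                }
                if (!firstHasNext)
                {
                    return true; // both finished, all pairs matched
                }
                if (!comparer.Equals(firstIterator.Current, secondIterator.Current))
                {
                    return false;
                }
            }
        }
    }
}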

Last(predicate)

(All of this applies to LastOrDefault as well, by the way.) The implementation of Last which doesn’t take a predicate is already optimised for the IList<T> case: in that situation the method finds out the count, and returns list[count - 1] as you might expect. We certainly can’t do that when we’ve been given a predicate, as the last value might not match that predicate. However, we could walk backwards from the end of the list… if you have a list which contains a million items, and the last-but-one matches the predicate, you don’t really want to test the first 999998 items, do you? Again, this assumes that we can keep using random access on the list, but I think that’s reasonable for IList<T>.
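
Here’s a sketch of the kind of thing I mean – hypothetical code, not what the framework actually does:

public static class LastSketch
{
    public static T LastSketched<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        IList<T> list = source as IList<T>;
        if (list != null)
        {
            // Walk backwards: the first match from the end is the last match
            for (int i = list.Count - 1; i >= 0; i--)
            {
                if (predicate(list[i]))
                {
                    return list[i];
                }
            }
            throw new InvalidOperationException("No matching element");
        }
        // Fall back to a single forwards pass for arbitrary sequences
        T last = default(T);
        bool found = false;
        foreach (T item in source)
        {
            if (predicate(item))
            {
                last = item;
                found = true;
            }
        }
        if (!found)
        {
            throw new InvalidOperationException("No matching element");
        }
        return last;
    }
}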

Reverse

Reverse is an interesting case, because although it uses deferred execution, it doesn’t stream data: it always takes a complete copy of the sequence (which in itself does optimise for the case where the source implements ICollection<T>; in that situation you know the count to start with and can use source.CopyTo(destinationArray) to speed things up). You might consider an optimisation which uses random access if the source is an implementation of IList<T> – you could just lazily yield the elements in reverse order using random access. However, that would change behaviour. Admittedly the behaviour of Reverse may not be what people expect in the first place. What would you predict that this code does?

string[] array = { "a", "b", "c", "d" };

var query = array.Reverse();
array[0] = "a1";
        
var iterator = query.GetEnumerator();
array[0] = "a2";
        
// We’ll assume we know when this will stop :)
iterator.MoveNext();
Console.WriteLine(iterator.Current);
array[0] = "a3";
        
iterator.MoveNext();
Console.WriteLine(iterator.Current);
array[0] = "a4";

iterator.MoveNext();
Console.WriteLine(iterator.Current);
array[0] = "a5";
        
iterator.MoveNext();
Console.WriteLine(iterator.Current);

After careful thought, I accurately predicted the result (d, c, b, a2) – but you do need to take deferred execution *and* eager buffering into account. If nothing else, this should be a lesson in not changing the contents of query sources while you’re iterating over the query unless you’re really sure of what you’re doing.

With the candidate "optimisation" in place, we’d see (d, c, b, a5), but only when working on array directly. Working on array.Select(x => x) would have to give the original results, as it would have to iterate through all the initial values before finding the last one.
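
For reference, the candidate "optimisation" is essentially this – hypothetical code, not the real operator:

public static class LazyReverseSketch
{
    // Lazily yields from the end of the list using random access. Note that
    // it re-reads the list on each MoveNext(), which is exactly why the
    // observed output changes in the example above.
    public static IEnumerable<T> ReverseViaRandomAccess<T>(this IList<T> source)
    {
        for (int i = source.Count - 1; i >= 0; i--)
        {
            yield return source[i];
        }
    }
}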

LongCount

This is an interesting one… LongCount really doesn’t make much sense unless you expect your sequence to have more than 2^31 elements, but there’s no optimisation present. The contract for IList<T> doesn’t state what Count should do if the list has more than Int32.MaxValue elements, so that can’t really be used – but potentially Array.LongLength could be used for large arrays.

A bigger question is when this would actually be useful. I haven’t tried timing Enumerable.Range(0, int.MaxValue) to see how long it would take to become relevant, but I suspect it would be a while. I can see how LongCount could be useful in LINQ to SQL – but does it even make sense in LINQ to Objects? Maybe it will be optimised in a future version with ILongList<T> for large lists…

EDIT: In fact, given comments, it sounds like the time taken to iterate over int.MaxValue items isn’t that high after all. That’ll teach me to make assumptions about running times without benchmarking… I still can’t say I’ve seen LongCount used in anger in LINQ to Objects, but it’s not quite as silly as I thought :)

Conclusion

The optimisations I’ve described here all have the potential to take a long-running operation down to almost instantaneous execution, in the "right" situation. There may well be other opportunities lurking – things I haven’t thought of. The good news is that missing optimisations could be applied in future releases without breaking any sane code. I do wonder whether supporting methods (e.g. TryFastCount and TryFastElementAt) would be useful to encourage other people writing their own LINQ operators to optimise appropriately – but that’s possibly a little too much of a niche case.
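
To make the TryFastCount idea slightly more concrete, here’s roughly what I have in mind – just a sketch, using the same interface checks as Count:

public static class SequenceHelpers
{
    // Returns false rather than iterating, so callers can decide what to do
    // when counting would mean draining the sequence.
    public static bool TryFastCount<T>(this IEnumerable<T> source, out int count)
    {
        ICollection<T> genericCollection = source as ICollection<T>;
        if (genericCollection != null)
        {
            count = genericCollection.Count;
            return true;
        }
        ICollection nonGenericCollection = source as ICollection;
        if (nonGenericCollection != null)
        {
            count = nonGenericCollection.Count;
            return true;
        }
        count = 0;
        return false;
    }
}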

Blog post frequency may well change in either direction for the near future – I’m going to be very busy with last-minute changes, fixes, indexing etc for the book, which will give me even less time for blogging. On the other hand, it can be mind-numbingly tedious, so I may resort to blogging as a form of relief…

First encounters with Reactive Extensions

I’ve been researching Reactive Extensions for the last few days, with an eye to writing a short section in chapter 12 of the second edition of C# in Depth. (This is the most radically changed chapter from the first edition; it will be covering LINQ to SQL, IQueryable, LINQ to XML, Parallel LINQ, Reactive Extensions, and writing your own LINQ to Objects operators.) I’ve watched various videos from Channel 9, but today was the first time I actually played with it. I’m half excited, and half disappointed.

My excited half sees that there’s an awful lot to experiment with, and loads to learn about join patterns etc. I’m also looking forward to trying genuine events (mouse movements etc) – so far my tests have been to do with collections.

My disappointed half thinks it’s missing something. You see, Reactive Extensions shares some concepts with my own Push LINQ library… except it’s had smarter people (no offense meant to Marc Gravell) working harder on it for longer. I’d expect it to be easier to use, and make it a breeze to do anything you could do in Push LINQ. Unfortunately, that’s not quite the case.

Subscription model

First, the way that subscription is handled for collections seems slightly odd. I’ve been imagining two kinds of observable sources:

  • Genuine "event streams" which occur somewhat naturally – for instance, mouse movement events. Subscribing to such an observable wouldn’t do anything to it other than adding subscribers.
  • Collections (and the like) where the usual use case is "set up the data pipeline, then tell it to go". In that case calling Subscribe should just add the relevant observers, but not actually "start" the sequence – after all, you may want to add more observers (we’ll see an example of this in a minute).

In the latter case, I could imagine an extension method to IEnumerable<T> called ToObservable which would return a StartableObservable<T> or something like that – you’d subscribe what you want, and then call Start on the StartableObservable<T>. That’s not what appears to happen though – if you call ToObservable(), you get an implementation which iterates over the source sequence as soon as anything subscribes to it – which just doesn’t feel right to me. Admittedly it makes life easy in the case where that’s really all you want to do, but it’s a pain otherwise.
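
To show what I mean, here’s a very rough sketch of the kind of type I was imagining. The names are mine (I’ve called the extension method ToStartableObservable to avoid clashing with the real ToObservable), error handling and unsubscription are ignored, and none of this is part of Reactive Extensions:

public static class StartableObservableExtensions
{
    public static StartableObservable<T> ToStartableObservable<T>(this IEnumerable<T> source)
    {
        return new StartableObservable<T>(source);
    }
}

public sealed class StartableObservable<T> : IObservable<T>
{
    private readonly IEnumerable<T> source;
    private readonly List<IObserver<T>> observers = new List<IObserver<T>>();

    public StartableObservable(IEnumerable<T> source)
    {
        this.source = source;
    }

    // Subscribing just registers the observer - nothing is pushed yet
    public IDisposable Subscribe(IObserver<T> observer)
    {
        observers.Add(observer);
        return new NoOpDisposable();
    }

    // Start is what actually drains the source and pushes it to everyone
    public void Start()
    {
        foreach (T item in source)
        {
            foreach (IObserver<T> observer in observers)
            {
                observer.OnNext(item);
            }
        }
        foreach (IObserver<T> observer in observers)
        {
            observer.OnCompleted();
        }
    }

    private sealed class NoOpDisposable : IDisposable
    {
        public void Dispose() {}
    }
}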

There’s a way of working round this in Reactive Extensions: there’s Subject<T> which is both an observer and an observable. You can create a Subject<T>, Subscribe all the observers you want (so as to set up the data pipeline) and then subscribe the subject to the real data source. It’s not exactly hard, but it took me a while to work out, and it feels a little unwieldy. The next issue was somewhat more problematic.

Blocking aggregation

When I first started thinking about Push LINQ, it was motivated by a scenario from the C# newsgroup: someone wanted to group a collection in a particular way, and then count how many items were in each group. This is effectively the "favourite colour voting" scenario outlined in the link at the top of this post. The problem to understand is that the normal Count() call is blocking: it fetches items from a collection until there aren’t any more; it’s in control of the execution flow, effectively. That means if you call it in a grouping construct, the whole group has to be available before you call Count(). So, you can’t stream an enormous data set, which is unfortunate.

In Push LINQ, I addressed this by making Count() return Future<int> instead of int. The whole query is evaluated, and then you can ask each future for its actual result. Unfortunately, that isn’t the approach that the Reactive Framework has taken – it still returns int from Count(). I don’t know the reason for this, but fortunately it’s somewhat fixable. We can’t change Observable of course, but we can add our own future-based extensions:

public static class ObservableEx
{
    public static Task<TResult> FutureAggregate<TSource, TResult>
        (this IObservable<TSource> source,
        TResult seed, Func<TResult, TSource, TResult> aggregation)
    {
        TaskCompletionSource<TResult> result = new TaskCompletionSource<TResult>();
        TResult current = seed;
        source.Subscribe(value => current = aggregation(current, value),
            error => result.SetException(error),
            () => result.SetResult(current));
        return result.Task;
    }

    public static Task<int> FutureMax(this IObservable<int> source)
    {
        // TODO: Make this generic and throw exception on
        // empty sequence. Left as an exercise for the reader.
        return source.FutureAggregate(int.MinValue, Math.Max);
    }

    public static Task<int> FutureMin(this IObservable<int> source)
    {
        // TODO: Make this generic and throw exception on
        // empty sequence. Left as an exercise for the reader.
        return source.FutureAggregate(int.MaxValue, Math.Min);
    }

    public static Task<int> FutureCount<T>(this IObservable<T> source)
    {
        return source.FutureAggregate(0, (count, _) => count + 1);
    }
}

This uses Task<T> from Parallel Extensions, which gives us an interesting ability, as we’ll see in a moment. It’s all fairly straightforward – TaskCompletionSource<T> makes it very easy to specify a value when we’ve finished, or indicate that an error occurred. As mentioned in the comments, the maximum/minimum implementations leave something to be desired, but it’s good enough for a blog post :)

Using the non-blocking aggregation operators

Now that we’ve got our extension methods, how can we use them? First I decided to do a demo which would count the number of lines in a file, and find the maximum and minimum line lengths:

private static IEnumerable<string> ReadLines(string filename)
{
    using (TextReader reader = File.OpenText(filename))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            yield return line;
        }
    }
}

var subject = new Subject<string>();
var lengths = subject.Select(line => line.Length);
var min = lengths.FutureMin();
var max = lengths.FutureMax();
var count = lengths.FutureCount();
            
var source = ReadLines("../../Program.cs");
source.ToObservable(Scheduler.Now).Subscribe(subject);
Console.WriteLine("Count: {0}, Min: {1}, Max: {2}",
                  count.Result, min.Result, max.Result);

As you can see, we use the Result property of a task to find its eventual result – this call will block until the result is ready, however, so you do need to be careful about how you use it. Each line is only read from the file once, and pushed to all three observers, who carry their state around until the sequence is complete, whereupon they publish the result to the task.

I got this working fairly quickly – then went back to the "grouping lines by line length" problem I’d originally set myself. I want to group the lines of a file by their length (all lines of length 0, all lines of length 1 etc) and count each group. The result is effectively a histogram of line lengths. Constructing the query itself wasn’t a problem – but iterating through the results was. Fundamentally, I don’t understand the details of ToEnumerable yet, particularly the timing. I need to look into it more deeply, but I’ve got two alternative solutions for the moment.

The first is to implement my own ToList extension method. This simply creates a list and subscribes an observer which adds items to the list as it goes. There’s no attempt at "safety" here – if you access the list before the source sequence has completed, you’ll see whatever has been added so far. I am still just experimenting :) Here’s the implementation:

public static List<T> ToList<T>(this IObservable<T> source)
{
    List<T> ret = new List<T>();
    source.Subscribe(x => ret.Add(x));
    return ret;
}

Now we can construct a query expression, project each group using our future count, make sure we’ve finished pushing the source before we read the results, and everything is fine:

var subject = new Subject<string>();
var groups = from line in subject
             group line.Length by line.Length into grouped
             select new { Length = grouped.Key, Count = grouped.FutureCount() };
var results = groups.ToList();

var source = ReadLines("../../Program.cs");
source.ToObservable(Scheduler.Now).Subscribe(subject);
foreach (var group in results)
{
    Console.WriteLine("Length: {0}; Count: {1}", group.Length, group.Count.Result);
}

Note how the call to ToList is required before calling source.ToObservable(...).Subscribe – otherwise everything would have been pushed before we started collecting it.

All well and good… but there’s another way of doing it too. We’ve only got a single task being produced for each group – instead of waiting until everything’s finished before we dump the results to the console, we can use Task.ContinueWith to write it (the individual group result) out as soon as that group has been told that it’s finished. We force this extra action to occur on the same thread as the observer just to make things easier in a console app… but it all works very neatly:

var subject = new Subject<string>();
var groups = from line in subject
             group line.Length by line.Length into grouped
             select new { Length = grouped.Key, Count = grouped.FutureCount() };
                                    
groups.Subscribe(group =>
{
    group.Count.ContinueWith(
         x => Console.WriteLine("Length: {0}; Count: {1}",
                                group.Length, x.Result),
         TaskContinuationOptions.ExecuteSynchronously);
});
var source = ReadLines("../../Program.cs");
source.ToObservable(Scheduler.Now).Subscribe(subject);

Conclusion

That’s the lot, so far. It feels like I’m sort of in the spirit of Reactive Extensions, but that maybe I’m pushing it (no pun intended) in a direction which Erik and Wes either didn’t anticipate, or at least don’t view as particularly valuable/elegant. I very much doubt that they didn’t consider deferred aggregates – it’s much more likely that either I’ve missed some easy way of doing this, or there are good reasons why it’s a bad idea. I hope to find out which at some point… but in the meantime, I really ought to work out a more idiomatic example for C# in Depth.

Where do you benefit from dynamic typing?

Disclaimer: I don’t want this to become a flame war in the comments. I’m coming from a position of ignorance, and well aware of it. While I’d like this post to provoke thought, it’s not meant to be provocative in the common use of the term.

Chapter 14 of C# in Depth is about dynamic typing in C#. A couple of reviewers have justifiably said that I’m fairly keen on the mantra of "don’t use dynamic typing unless you need it" – and that possibly I’m doing dynamic typing a disservice by not pointing out more of its positive aspects. I completely agree, and I’d love to be more positive – but the problem is that I’m not (yet) convinced about why dynamic typing is something I would want to embrace.

Now I want to start off by making something clear: this is meant to be about dynamic typing. Often advocates for dynamically typed languages will mention:

  • REPL (read-eval-print-loop) abilities which allow for a very fast feedback loop while experimenting
  • Terseness – the lack of type names everywhere makes code shorter
  • Code evaluated at execution time (so config files can be scripts etc)

I don’t count any of these as benefits of dynamic typing per se. They’re benefits which often come alongside dynamic typing, but they’re not dependent on dynamic typing. The terseness argument is the one most closely tied to their dynamic nature, but various languages with powerful type inference show that being statically typed doesn’t mean having to specify type names everywhere. (C#’s var keyword is a very restricted form of type inference, compared with – say – that of F#.)

What I’m talking about is binding being performed at execution time and only at execution time. That allows for:

  • Duck typing
  • Dynamic reaction to previously undeclared messages
  • Other parts of dynamic typing I’m unaware of (how could there not be any?)

What I’m interested in is how often these are used within real world (rather than demonstration) code. It may well be that I’m suffering from Blub’s paradox – that I can’t see the valid uses of these features simply because I haven’t used them enough. Just to be clear, I’m not saying that I never encounter problems where I would welcome dynamic typing – but I don’t run into them every day, whereas I get help from the compiler every day.

Just as an indicator of how set in my statically typed ways I am, at the recent Stack Overflow DevDays event in London, Michael Sparks went through Peter Norvig’s spelling corrector. It’s a neat piece of code (and yes, I’ll finish that port some time) but I kept finding it hard to understand simply because the types weren’t spelled out. Terseness can certainly be beneficial, but in this case I would personally have found it simpler if the variable and method types had been explicitly declared.

So, for the dynamic typing fans (and I’m sure several readers will come into that category):

  • How often do you take advantage of dynamic typing in a way that really wouldn’t be feasible (or would be very clunky) in a statically typed language?
  • Is it usually the same single problem which crops up regularly, or do you find a wide variety of problems benefit from dynamic typing?
  • When you declare a variable (or first assign a value to a variable, if your language doesn’t use explicit declarations) how often do you really either not know its type or want to use some aspect of it which wouldn’t typically have been available in a statically typed environment?
  • What balance do you find in your use of duck typing (the same method/member/message has already been declared on multiple types, but there’s no common type or interface) vs truly dynamic reaction based on introspection of the message within code (e.g. building a query based on the name of the method, such as FindBooksByAuthor("Josh Bloch"))? There’s a rough sketch of the latter just after this list.
  • What aspects of dynamic typing do I appear to be completely unaware of?
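
To make that last contrast a little more concrete, here’s a rough sketch of what I mean by reacting to a message which was never declared, using C# 4’s DynamicObject (from System.Dynamic). The QueryBuilder type and the FindBooksBy convention are entirely made up for illustration:

public class QueryBuilder : DynamicObject
{
    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        const string prefix = "FindBooksBy";
        if (binder.Name.StartsWith(prefix) && args.Length == 1)
        {
            // The method name itself drives the behaviour; in real code this
            // would build and execute a query rather than returning SQL text
            string property = binder.Name.Substring(prefix.Length);
            result = string.Format("SELECT * FROM Books WHERE {0} = '{1}'", property, args[0]);
            return true;
        }
        result = null;
        return false; // let the binder report the failure as usual
    }
}

// Usage:
// dynamic books = new QueryBuilder();
// Console.WriteLine(books.FindBooksByAuthor("Josh Bloch"));
// Console.WriteLine(books.FindBooksByTitle("Effective Java"));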

Hopefully someone will be able to turn the light bulb on for me, so I can be more genuinely enthusiastic about dynamic typing, and perhaps even diversify from my comfort zone of C#…

Contract classes and nested types within interfaces

I’ve just been going through some feedback for the draft copy of the second edition of C# in Depth. In the contracts section, I have an example like this:

[ContractClass(typeof(ICaseConverterContracts))]
public interface ICaseConverter
{
    string Convert(string text);
}

[ContractClassFor(typeof(ICaseConverter))]
internal class ICaseConverterContracts : ICaseConverter
{
    string ICaseConverter.Convert(string text)
    {
        Contract.Requires(text != null);
        Contract.Ensures(Contract.Result<string>() != null);
        return default(string);
    }

    private ICaseConverterContracts() {}
}

public class InvariantUpperCaseFormatter : ICaseConverter
{
    public string Convert(string text) 
    {
        return text.ToUpperInvariant();
    }
}

The point is to demonstrate how contracts can be specified for interfaces, and then applied automatically to implementations. In this case, ICaseConverter is the interface, ICaseConverterContracts is the contract class which specifies the contract for the interface, and InvariantUpperCaseFormatter is the real implementation. The binary rewriter effectively copies the contract into each implementation, so you don’t need to duplicate the contract in the source code.

The reader feedback asked where the contract class code should live – should it go in the same file as the interface itself, or in a separate file as normal? Now normally, I’m firmly of the "one top-level type per file" persuasion, but in this case I think it makes sense to keep the contract class with the interface. It has no meaning without reference to the interface, after all – it’s not a real implementation to be used in the normal way. It’s essentially metadata. This does, however, leave me feeling a little bit dirty. What I’d really like to be able to do is nest the contract class inside the interface, just like I do with other classes which are tightly coupled to an "owner" type. Then the code would look like this:

[ContractClass(typeof(ICaseConverterContracts))]
public interface ICaseConverter
{
    string Convert(string text);

    [ContractClassFor(typeof(ICaseConverter))]
    internal class ICaseConverterContracts : ICaseConverter
    {
        string ICaseConverter.Convert(string text)
        {
            Contract.Requires(text != null);
            Contract.Ensures(Contract.Result<string>() != null);
            return default(string);
        }

        private ICaseConverterContracts() {}
    }
}

public class InvariantUpperCaseFormatter : ICaseConverter
{
    public string Convert(string text) 
    {
        return text.ToUpperInvariant();
    }
}

That would make me feel happier – all the information to do with the interface would be specified within the interface type’s code. It’s possible that with that as a convention, the Code Contracts tooling could cope without the attributes – if interface IFoo contains a nested class IFooContracts which implements IFoo, assume it’s a contract class and handle it appropriately. That would be sweet.

You know the really galling thing? I’m pretty sure VB does allow nested types in interfaces…

Generic collections – relegate to an appendix?

(I tweeted a brief version of this suggestion and the results have been overwhelmingly positive so far, but I thought it would be worth fleshing out anyway.)

I’m currently editing chapter 3 of C# in Depth. In the first edition, it’s nearly 48 pages long – the longest in the book, and longer than I want it to be.

One of the sections in there (only 6 pages, admittedly) is a description of various .NET 2.0 collections. However, it’s mostly comparing them with the nongeneric collections from .NET 1.0, which probably isn’t relevant any more. I suspect my readership has now moved on from "I only know C# 1" to "I’ve used C# 2 and I’m reasonably familiar with the framework, but I want to know the details of the language."

I propose moving the collections into an appendix. This will mean:

  • I’ll cover all versions of .NET, not just 2.0
  • It will all be done in a fairly summary form, like the current appendix. (An appendix doesn’t need as much of a narrative structure as a main chapter, IMO.)
  • I’ll cover the interfaces as well as the classes – possibly even with pictures (type hierarchies)!
  • Chapter 3 can be a bit slimmer (although I’ve been adding a little bit here and there, so I’m not going to save a massive amount)
  • It will be easier to find as a quick reference (and I’ll write it in a way which makes it easy to use as a reference too, hopefully)
  • I don’t have to edit it right now :)

Does this sound like a plan? I don’t know why I didn’t think of it before, but I think it’s the right move. In particular, it’s in keeping with the LINQ operator coverage in the existing appendix.

Recent activities

It’s been a little while since I’ve blogged, and quite a lot has been going on. In fact, there are a few things I’d have blogged about already if it weren’t for “things” getting in the way.

Rather than writing a whole series of very short blog posts, I thought I’d wrap them all up here…

C# in Depth: next MEAP drop available soon – Code Contracts

Thanks to everyone who gave feedback on my writing dilemma. For the moment, the plan is to have a whole chapter about Code Contracts, but not include a chapter about Parallel Extensions. My argument for making this decision is that Code Contracts really change the feel of the code, making it almost like a language feature – and its applicability is almost ubiquitous, unlike PFX.

I may write a PFX chapter as a separate download, but I’m sensitive to those who (like me) appreciate slim books. I don’t want to “bulk out” the book with extra topics.

The Code Contracts chapter is in the final stages before becoming available to MEAP subscribers. (It’s been “nearly ready” for a couple of weeks, but I’ve been on holiday, amongst other things.) After that, I’m going back to the existing chapters and revising them.

Talking in Dublin – C# 4 and Parallel Extensions

Last week I gave two talks in Dublin at Epicenter. One was on C# 4, and the other on Code Contracts and Parallel Extensions. Both are now available in a slightly odd form on the Talks page of the C# in Depth web site. I no longer write “formal” PowerPoint slides, so the downloads are for simple bullet points of text, along with silly hand-drawn slides. No code yet – I want to tidy it up a bit before including it.

Podcasting with The Connected Show

I recently recorded a podcast episode with The Connected Show. I’m “on” for the last two thirds of the show – about an hour of me blathering on about the new features of C# 4. If you can understand generic variance just by listening to me talking about it, you’re a smart cookie ;)

(Oh, and if you like it, please express your amusement on Digg / DZone / Shout / Kicks.)

Finishing up with Functional Programming for the Real World

Well, this hasn’t been taking much of my time recently (I bowed out of all the indexing etc!) but Functional Programming for the Real World is nearly ready to go. Hard copy should be available in the next couple of months… it’ll be really nice to see how it fares. Much kudos to Tomas for all his hard work – I’ve really just been helping out a little.

Starting on Groovy in Action, 2nd edition

No sooner does one book finish than another one starts. The second edition of Groovy in Action is in the works, which should prove interesting. To be honest, I haven’t played with Groovy much since the first edition of the book was finished, so it’ll be interesting to see what’s happened to the language in the meantime. I’ll be applying the same sort of spit and polish that I did in the first edition, and asking appropriately ignorant questions of the other authors.

Tech Reviewing C# 4.0 in a Nutshell

I liked C# 3.0 in a Nutshell, and I feel honoured that Joe asked me to be a tech reviewer for the next edition, which promises to be even better. There’s not a lot more I can say about it at the moment, other than it’ll be out in 2010 – and I still feel that C# in Depth is a good companion book.

MoreLINQ now at 1.0 beta

A while ago I started the MoreLINQ project, and it gained some developers with more time than I’ve got available :) Basically the idea is to add some more useful LINQ extension methods to LINQ to Objects. Thanks to Atif Aziz, the first beta version has been released. This doesn’t mean we’re “done” though – just that we think we’ve got something useful. Any suggestions for other operators would be welcome.

Manning Pop Quiz and discounts

While I’m plugging books etc, it’s worth mentioning the Manning Pop Quiz – multiple choice questions on a wide variety of topics. Fabulous prizes available, as well as one-day discounts:

  • Monday, Sept 7th: 50% off all print books (code: pop0907)
  • Monday, Sept 14: 50% off all ebooks  (code: pop0914)
  • Thursday, Sept 17: $25 for C# in Depth, 2nd Edition MEAP print version (code: pop0917) + C# Pop Quiz question
  • Monday, Sept 21: 50% off all books  (code: pop0921)
  • Thursday, Sept 24: $12 for C# in Depth, 2nd Edition MEAP ebook (code: pop0924) + another C# Pop Quiz question

Future speaking engagements

On September 16th I’m going to be speaking to Edge UG (formerly Vista Squad) in London about Code Contracts and Parallel Extensions. I’m already very much looking forward to the Stack Overflow DevDays London conference on October 28th, at which I’ll be talking about how humanity has screwed up computing.

Future potential blog posts

Some day I may get round to writing about:

  • Revisiting StaticRandom with ThreadLocal<T>
  • Volatile doesn’t mean what I thought it did

There’s a lot more writing than coding in that list… I’d like to spend some more time on MiniBench at some point, but you know what deadlines are like.

Anyway, that’s what I’ve been up to and what I’ll be doing for a little while…

The “dream book” for C# and .NET

This morning I showed my hand a little on Twitter. I’ve had a dream for a long time about the ultimate C# book. It’s a dream based on Effective Java, which is my favourite Java book, along with my experiences of writing C# in Depth.

Effective Java is written by Josh Bloch, who is an absolute giant in the Java world… and that’s both the problem and the opportunity. There’s no-one of quite the equivalent stature in the .NET world. Instead, there are many very smart people, a lot of whom blog and some of whom have their own books.

There are "best practices" books, of course: Microsoft’s own Framework Design Guidelines, and Bill Wagner’s Effective C# and More Effective C# being the most obvious examples. I’m in no way trying to knock these books, but I feel we could do even better. The Framework Design Guidelines (also available free to browse on MSDN) are really about how to create a good API – which is important, but not the be-all-and-end-all for many application developers who aren’t trying to ship a reusable class library and may well have different concerns. They want to know how to use the language most effectively, as well as the core types within the framework.

Bill’s books – and many others which cover the core framework, such as CLR via C#, Accelerated C# 2008 and C# 3.0 in a Nutshell – give plenty of advice, but often I’ve felt it’s a little one-sided. Each of these books is the work of a single person (or brothers, in the case of Nutshell). Reading them, I’ve often wanted to present a different point of view – or alternatively, to give a hearty "hear, hear." I believe that a book giving guidance would benefit greatly from being more of a conversation: where the authors all agree on something, that’s great; where they differ, it would be good to hear about the pros and cons of various approaches. The reader can then weigh up those factors as they apply to each particular real-world scenario.

Scope

So what would such a book contain? Opinions will vary of course, but I would like to see:

  • Effective ways of using language features such as lambda expressions, generic type inference (and indeed generics in general), optional parameters, named arguments and extension methods. Assume that the reader knows roughly what C# does, but give some extra details around things like iterator blocks and anonymous functions.
  • Guidance around class design (in a similar fashion to the FDG, but with more input from others in the community)
  • Core framework topics (again, assume the basics are understood):
    • Resource management (disposal etc)
    • Exceptions
    • Collections (including LINQ fundamentals)
    • Streams
    • Text (including internationalization)
    • Numeric types
    • Time-related APIs
    • Concurrency
    • Contracts
    • AppDomains
    • Security
    • Performance

I would prefer to avoid anything around the periphery of .NET (WPF, WinForms, ASP.NET, WCF) – I believe those are better handled in different topics.

Obstacles and format

There’s one big problem with this idea, but I think it may be a saving grace too. Many of the leading authors work for different publishers. Clearly no single publisher is going to attract all the best minds in the C# and .NET world. So how could this work in practice? Well…

Imagine a web site for the book, paid for jointly by all interested publishers. The web site would be the foremost delivery mechanism for the content, both to browse and probably to download in formats appropriate for offline reading (PDF etc). The content would be edited in a collaborative style obviously, but exactly how that would work is a detail to be thrashed out. If you’ve read the annotated C# or CLI specifications, they have about the right feel – opinions can be attributed in places, but not everything has a label.

Any contributing publisher could also take the material and publish it as hard copy if they so wished. Quite how this would work – with potentially multiple hard copy editions of the same content – would be interesting to see. There’s another reason against hard copy ever appearing though, which is that it would be immovable. I’d like to see this work evolve as new features appear and as more best practices are discovered. Publishers could monetize the web site via adverts, possibly according to how much they’re kicking into the site.

I don’t know how the authors would get paid, admittedly, and that’s another problem. Would this cannibalize the sales of the books listed earlier? It wouldn’t make them redundant – certainly not for the Nutshell type of book, which teaches the basics as well as giving guidance. It would hit Effective C# harder, I suspect – and I apologise to Bill Wagner in advance; if this ever takes off and it hurts his bottom line, I’m very sorry – I think it’s in a good cause though.

Dream Team

So who would contribute to this? Part of me would like to say "anyone and everyone" in a Wikipedia kind of approach – but I think that practically, it makes sense for industry experts to take their places. (A good feedback/comments mechanism for anyone to use would be crucial, however.) Here’s a list which isn’t meant to be exhaustive, but would make me happy – please don’t take offence if your name isn’t on here but should be, and I wouldn’t expect all of these people to be interested anyway.

  • Anders Hejlsberg
  • Eric Lippert
  • Mads Torgersen
  • Don Box
  • Brad Abrams
  • Krzysztof Cwalina
  • Joe Duffy
  • Vance Morrison
  • Rico Mariani
  • Erik Meijer
  • Don Syme
  • Wes Dyer
  • Jeff Richter
  • Joe and Ben Albahari
  • Andrew Troelsen
  • Bill Wagner
  • Trey Nash
  • Mark Michaelis
  • Jon Skeet (yeah, I want to contribute if I can)

I imagine "principal" authors for specific topics (e.g. Joe Duffy for concurrency) but with all the authors dropping in comments in other places too.

Dream or reality?

I have no idea whether this will ever happen or not. I’d dearly love it to, and I’ve spoken to a few people before today who’ve been encouraging about the idea. I haven’t been putting any work into getting it off the ground – don’t worry, it’s not been delaying the second edition of C# in Depth. One day though, one day…

Am I being hopelessly naïve to even consider such a venture? Is the scope too broad? Is the content valuable but not money-making? We’ll see.