
Propagating multiple async exceptions (or not)

In an earlier post, I mentioned that in the CTP, an asynchronous method will throw away anything other than the first exception in an AggregateException thrown by one of the tasks it’s waiting for. Reading the TAP documentation, it seems this is partly expected behaviour and partly not. TAP claims (in a section about how "await" is achieved by the compiler):

It is possible for a Task to fault due to multiple exceptions, in which case only one of these exceptions will be propagated; however, the Task’s Exception property will return an AggregateException containing all of the errors.

Unfortunately, that appears not to be the case. Here’s a test program demonstrating the difference between an async method and a somewhat-similar manually written method. The full code is slightly long, but here are the important methods:

static async Task ThrowMultipleAsync()
{
    Task t1 = DelayedThrow(500);
    Task t2 = DelayedThrow(1000);
    await TaskEx.WhenAll(t1, t2);
}

static Task ThrowMultipleManually()
{
    Task t1 = DelayedThrow(500);
    Task t2 = DelayedThrow(1000);
    return TaskEx.WhenAll(t1, t2);
}

static Task DelayedThrow(int delayMillis)
{
    return TaskEx.Run(delegate {
        Thread.Sleep(delayMillis);
        throw new Exception("Went bang after " + delayMillis);
    });
}

The difference is that the async method is generating an extra task, instead of returning the task from TaskEx.WhenAll. It’s waiting for the result of WhenAll itself (via EndAwait). The results show one exception being swallowed:

Waiting for From async method
Thrown exception: 1 error(s):
Went bang after 500

Task exception: 1 error(s):
Went bang after 500

Waiting for From manual method
Thrown exception: 2 error(s):
Went bang after 500
Went bang after 1000

Task exception: 2 error(s):
Went bang after 500
Went bang after 1000

The fact that the "manual" method still shows two exceptions means we can’t blame WhenAll – it must be something to do with the async code. Given the description in the TAP documentation, I’d expect (although not desire) the thrown exception to just be a single exception, but the returned task’s exception should have both in there. That’s clearly not the case at the moment.

Waiter! There’s an exception in my soup!

I can think of one reason why we’d perhaps want to trim down the exception to a single one: if we wanted to remove the aggregation aspect entirely. Given that the async method always returns a Task (or void), I can’t see how that’s feasible anyway… a Task will always throw an AggregateException if its underlying operation fails. If it’s already throwing an AggregateException, why restrict it to just one?

My guess is that this makes it easier to avoid the situation where one AggregateException would contain another, which would contain another, etc.

To demonstrate this, let’s try to write our own awaiting mechanism, instead of using the one built into the async CTP. GetAwaiter() is an extension method, so we can just make our own extension method which has priority over the original one. I’ll go into more detail about that in another post, but here’s the code:

public static class TaskExtensions
{
    public static NaiveAwaiter GetAwaiter(this Task task)
    {
        return new NaiveAwaiter(task);
    }
}

public class NaiveAwaiter
{
    private readonly Task task;

    public NaiveAwaiter(Task task)
    {
        this.task = task;
    }

    public bool BeginAwait(Action continuation)
    {
        if (task.IsCompleted)
        {
            return false;
        }
        task.ContinueWith(_ => continuation());
        return true;
    }

    public void EndAwait()
    {
        task.Wait();
    }
}

Yes, it’s almost the simplest implementation you could come up with. (Hey, we do check whether the task is already completed…) There’s no scheduler or SynchronizationContext magic… and importantly, EndAwait does nothing with any exceptions. If the task throws an AggregateException when we wait for it, that exception is propagated to the generated code responsible for the async method.

So, what happens if we run exactly the same client code with these classes present? Well, the results for the first part are different:

Waiting for From async method
Thrown exception: 1 error(s):
One or more errors occurred.

Task exception: 1 error(s):
One or more errors occurred.

We have to change the formatting somewhat to see exactly what’s going on – because we now have an AggregateException containing an AggregateException. The previous formatting code simply printed out how many exceptions there were, and their messages. That wasn’t an issue because we immediately got to the exceptions we were throwing. Now we’ve got an actual tree. Just printing out the exception itself results in huge gobbets of text which are unreadable, so here’s a quick and dirty hack to provide a bit more formatting:

static string FormatAggregate(AggregateException e)
{
    StringBuilder builder = new StringBuilder();
    FormatAggregate(e, builder, 0);
    return builder.ToString();
}

static void FormatAggregate(AggregateException e, StringBuilder builder, int level)
{
    string padding = new string(' ', level);
    builder.AppendFormat("{0}AggregateException with {1} nested exception(s):", padding, e.InnerExceptions.Count);
    builder.AppendLine();
    foreach (Exception nested in e.InnerExceptions)
    {
        AggregateException nestedAggregate = nested as AggregateException;
        if (nestedAggregate != null)
        {
            FormatAggregate(nestedAggregate, builder, level + 1);
            builder.AppendLine();
        }
        else
        {
            builder.AppendFormat("{0} {1}: {2}", padding, nested.GetType().Name, nested.Message);
            builder.AppendLine();
        }
    }
}

Now we can see what’s going on better:

AggregateException with 1 nested exception(s):
 AggregateException with 2 nested exception(s):
  Exception: Went bang after 500
  Exception: Went bang after 1000

Hooray – we actually have all our exceptions, eventually… but they’re nested. Now if we introduce another level of nesting – for example by creating an async method which just waits on the task created by ThrowMultipleAsync – we end up with something like this:

AggregateException with 1 nested exception(s):
 AggregateException with 1 nested exception(s):
  AggregateException with 2 nested exception(s):
   Exception: Went bang after 500
   Exception: Went bang after 1000

You can imagine that for a deep stack trace of async methods, this could get messy really quickly.

However, I don’t think that losing the information is really the answer. There’s already the Flatten method in AggregateException which will flatten the tree appropriately. I’d be reasonably happy for the exceptions to be flattened at any stage, but I really don’t like the behaviour of losing them.
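Flatten collapses exactly the kind of tree we saw above; a quick sketch:

```csharp
using System;

class FlattenDemo
{
    static void Main()
    {
        // Build the shape we saw earlier: an AggregateException wrapping
        // another AggregateException wrapping the real errors.
        var inner = new AggregateException(
            new Exception("Went bang after 500"),
            new Exception("Went bang after 1000"));
        var outer = new AggregateException(inner);

        Console.WriteLine(outer.InnerExceptions.Count);           // 1
        Console.WriteLine(outer.Flatten().InnerExceptions.Count); // 2
    }
}
```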

It does get complicated by how the async language feature has to handle exceptions, however. Only one exception can ever be thrown at a time, even though a task can have multiple exceptions set on it. One option would be for the autogenerated code to handle AggregateException differently, setting all the nested exceptions separately (in the single task which has been returned) rather than either setting the AggregateException which causes nesting (as we’ve seen above) or relying on the awaiter picking just one exception (as is currently the case). It’s definitely a decision I think the community should get involved with.

Conclusion

As we’ve seen, the current behaviour of async methods doesn’t match the TAP documentation or what I’d personally like.

This isn’t down to the language features, but it’s the default behaviour of the extension methods which provide the "awaiter" for Task. That doesn’t mean the language aspect can’t be changed, however – some responsibility could be moved from awaiters to the generated code. I’m sure there are pros and cons each way – but I don’t think losing information is the right approach.

Next up: using extension method resolution rules to add diagnostics to task awaiters.

Configuring waiting

One of the great things about working at Google is that almost all of my colleagues are smarter than me. True, they don’t generally know as much about C#, but they know about language design, and they sure as heck know about distributed/parallel/async computing.

One of the great things about having occasional contact with the C# team is that when Mads Torgersen visits London later in the week, I can introduce him to these smart colleagues. So, I’ve been spreading the word about C# 5’s async support and generally encouraging folks to think about the current proposal so they can give feedback to Mads.

One particularly insightful colleague has persistently expressed a deep concern over who gets to control how the asynchronous execution works. This afternoon, I found some extra information which looks like it hasn’t been covered much so far which may allay his fears somewhat. It’s detailed in the Task-based Asynchronous Pattern documentation, which I strongly recommend you download and read right now.

More than ever, this post is based on documentation rather than experimentation. Please take with an appropriately large grain of salt.

What’s the problem?

In a complex server handling multiple types of request and processing them asynchronously – with some local CPU-bound tasks and other IO-bound tasks – you may well not want everything to be treated equally. Some operations (health monitoring, for example) may require high priority and a dedicated thread pool, some may be latency sensitive but support load balancing easily (so it’s fine to have a small pool of medium/high priority tasks, trusting load balancing to avoid overloading the server) and some may be latency-insensitive but be server-specific – a pool of low-priority threads with a large queue may be suitable here, perhaps.

If all of this is going to work, you need to know for each asynchronous operation:

  • Whether it will take up a thread
  • What thread will be chosen, if one is required (a new one? one from a thread pool? which pool?)
  • Where the continuation will run (on the thread which initiated the asynchronous operation? the thread the asynchronous operation ran on? a thread from a particular pool?)

In many cases reusable low-level code doesn’t know this context… but in the async model, it’s that low-level code which is responsible for actually starting the task. How can we reconcile the two requirements?

Controlling execution flow from the top down

Putting the points above into the concrete context of the async features of C# 5:

  • When an async method is called, it will start on the caller’s thread
  • When it creates a task (possibly as the target of an await expression) that task has control over how it will execute
  • The awaiter created by an await expression has control (or at the very least significant influence) over where the next part of the async method (the continuation) is executed
  • The caller gets to decide what they will do with the returned task (assuming there is one) – it may be the target of another await expression, or it may be used more directly without any further use of the new language features

Whether a task requires an extra thread really is pretty much up to the task. A task will be IO-bound, CPU-bound, or a mixture (perhaps IO-bound to fetch data, and then CPU-bound to process it). As far as I can tell, it’s assumed that IO-bound asynchronous tasks will all use IO completion ports, leaving no real choice available. On other platforms, there may be other choices – there may be multiple IO channels for example, some reserved for higher priority traffic than others. Although the TAP doesn’t explicitly call this out, I suspect that other platforms could create a similar concept of context to the one described below, but for IO-specific operations.

The two concepts that TAP appears to rely on (and I should be absolutely clear that I could be misreading things; I don’t know as much about the TPL that all of this is based on as I’d like) are a SynchronizationContext and a TaskScheduler. The exact difference between the two remains slightly hazy to me, as both give control over which thread delegates are executed on – but I get the feeling that SynchronizationContext is aimed at describing the thread you should return to for callbacks (continuations) and TaskScheduler is aimed at describing the thread you should run work on – whether that’s new work or getting back for a continuation. (In other words, TaskScheduler is more general than SynchronizationContext – so you can use it for continuations, but you can also use it for other things.)

One vital point is that although these aren’t enforced, they are designed to be the easiest way to carry out work. If there are any places where that isn’t true, that probably represents an issue. For example, the TaskEx.Run method (which will be Task.Run eventually) always uses the default TaskScheduler rather than the current TaskScheduler – so tasks started in that way will always run on the system thread pool. I have doubts about that decision, although it fits in with the general approach of TPL to use a single thread pool.
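A small sketch of that split, using the released names (Task.Run rather than TaskEx.Run) and ConcurrentExclusiveSchedulerPair as a convenient stand-in for a custom scheduler: StartNew can be pointed at any scheduler explicitly, but Task.Run always targets TaskScheduler.Default.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SchedulerDemo
{
    static async Task Main()
    {
        TaskScheduler custom = new ConcurrentExclusiveSchedulerPair().ExclusiveScheduler;

        // StartNew lets the caller choose the scheduler explicitly...
        bool onCustom = await Task.Factory.StartNew(
            () => TaskScheduler.Current == custom,
            CancellationToken.None, TaskCreationOptions.None, custom);

        // ...but Task.Run (TaskEx.Run in the CTP) always uses the default
        // thread-pool scheduler, regardless of any ambient context.
        bool onDefault = await Task.Run(
            () => TaskScheduler.Current == TaskScheduler.Default);

        Console.WriteLine(onCustom);  // True
        Console.WriteLine(onDefault); // True
    }
}
```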

If everything representing an async operation follows the TAP, it should make it possible to control how things are scheduled "from this point downwards" in async methods.

ConfigureAwait, SwitchTo, Yield

Various "plain static" and extension methods have been provided to make it easy to change your context within an async method.

SwitchTo allows you to change your context to the ThreadPool or a particular TaskScheduler or Dispatcher. You may not need to do any more work on a particular high priority thread until you’ve actually got your final result – so you’re happy with the continuations being executed either "inline" with the asynchronous tasks you’re executing, or on a random thread pool thread (perhaps from some specific pool). This may also allow the new CPU-bound tasks to be scheduled appropriately too (I thought it did, but I’m no longer sure). Once you’ve got all your ducks in a row, then you can switch back for the final continuation which needs to provide the results on your original thread.

ConfigureAwait takes an existing task and returns a TaskAwaiter – essentially allowing you to control just the continuation part.

Yield does exactly what it sounds like – yields control temporarily, basically allowing for cooperative multitasking by allowing other work to make progress before continuing. I’m not sure that this one will be particularly useful, personally – it feels a little too much like Application.DoEvents. I dare say there are specialist uses though – in particular, it’s cleaner than Application.DoEvents because it really is yielding, rather than running the message pump in the current stack.

All of these are likely to be used in conjunction with await. For example (these are not expected to all be in the same method, of course!):

// Continue in custom context (may affect where CPU-bound tasks are run too)
await customScheduler.SwitchTo();

// Now get back to the dispatcher thread to manipulate the UI
await control.Dispatcher.SwitchTo();

var task = new WebClient().DownloadStringTaskAsync(url);
// Don’t bother continuing on this thread after downloading; we don’t
// care for the next bit.
await task.ConfigureAwait(flowContext: false);

foreach (Job job in jobs)
{
    // Do some work that has to be done in this thread
    job.Process();

    // Let someone else have a turn – we may have a lot to
    // get through.
    // This will be Task.Yield eventually
    await TaskEx.Yield();
}

Is this enough?

My gut feeling is that this will give enough control over the flow of the application if:

  • The defaults in TAP are chosen appropriately so that the easiest way of starting a computation is also an easily "top-down-configurable" one
  • The top-level application programmer pays attention to what they’re doing, and configures things appropriately
  • Each component programmer lower down pays attention to the TAP and doesn’t do silly things like starting arbitrary threads themselves

In other words, everyone has to play nicely. Is that feasible in a complex system? I suspect it has to be really. If you have any "rogue" elements they’ll manage to screw things up in any system which is flexible enough to meet real-world requirements.

My colleague’s concern is (I think – I may be misrepresenting him) largely that the language shouldn’t be neutral about how the task and continuation are executed. It should allow or even force the caller to provide context. That would make the context hard to ignore lower down. The route I believe Microsoft has chosen is to do this implicitly by propagating context through the "current" SynchronizationContext and TaskScheduler, in the belief that developers will honour them.

We’ll see.

Conclusion

A complex asynchronous system is like a concerto played by an orchestra. Each musician is responsible for keeping time, but they are given direction from the conductor. It only takes one viola player who wants to play fast and loud to ruin the whole effect – so everyone has to behave. How do you force the musicians to watch the conductor? How much do you trust them? How easy is it to conduct in the first place? These are the questions which are hard to judge from documentation, frankly. I’m currently optimistic that by the time C# 5 is actually released, the appropriate balance will have been struck, the default tempo will be appropriate, and we can all listen to some beautiful music. In the key of C#, of course.

Evil code – overload resolution workaround

Another quick break from asynchrony, because I can’t resist blogging about this thoroughly evil idea which came to me on the train.

Your task: to write three static methods such that this C# 4 code:

static void Main() 
{ 
    Foo<int>(); 
    Foo<string>(); 
    Foo<int?>(); 
}

resolves one call to each of them – and will act appropriately for any non-nullable value type, reference type, and nullable value type respectively.

You’re not allowed to change anything in the Main method above, and they have to just be methods – no tricks using delegate-type fields, for example. (I don’t know whether such tricks would help you or not, admittedly. I suspect not.) It can’t just call one method which then determines other methods to call at execution time – we want to resolve this at compile time.

If you want to try this for yourself, look away now. I’ve deliberately included an attempt which won’t work below, so that hopefully you won’t see the working solution accidentally.

The simple (but failed) attempt

You might initially want to try this:

class Test 
{ 
    static void Foo<T>() where T : class {} 

    static void Foo<T>() where T : struct {} 

    // Let's hope the compiler thinks this is "worse"
    // than the others because it has no constraints 
    static void Foo<T>() {} 

    static void Main() 
    { 
        Foo<int>(); 
        Foo<int?>(); 
        Foo<string>(); 
    }  
}

That’s no good at all. I wrote about why it’s no good in this very blog, last week. The compiler only checks generic constraints on the type parameters after overload resolution.

Fail.

First steps towards a solution

You may remember that the compiler does check that parameter types make sense when working out the candidate set. That gives us some hope… all we’ve got to do is propagate our desired constraints into parameters.

Ah… but we’re calling a method with no arguments. So there can’t be any parameters, right?

Wrong. We can have an optional parameter. Okay, now we’re getting somewhere. What parameter type can we use to make the method valid only when a generic type parameter T is a non-nullable value type? The simplest option which occurs to me is to use Nullable<T> – that has an appropriate constraint. So, we end up with a method like this:

static void Foo<T>(T? ignored = default(T?)) where T : struct {}

Okay, so that’s the first call sorted out – it will be valid for the above method, but neither of the others will.

What about the reference type parameter? That’s slightly trickier – I can’t think of any common generic types in the framework which require their type parameters to be reference types. There may be one, of course – I just can’t think of one offhand. Fortunately, it’s easy to declare such a type ourselves, and then use it in another method:

class ClassConstraint<T> where T : class {} 

static void Foo<T>(ClassConstraint<T> ignored = default(ClassConstraint<T>))
    where T : class {}

Great. Just one more to go. Unfortunately, there’s no constraint which only satisfies nullable value types… Hmm.

The awkwardness of nullable value types

We want to effectively say, “Use this method if neither of the other two work – but use the other methods in preference.” Now if we weren’t already using optional parameters, we could potentially do it that way – by introducing a single optional parameter, we could have a method which was still valid for the other calls, but would be deemed “worse” by overload resolution. Unfortunately, overload resolution takes a binary view of optional parameters: either the compiler is having to fill in some parameters itself, or it’s not. It doesn’t think that filling in two parameters is “worse” than only filling in one.

Luckily, there’s a way out… inheritance to the rescue! (It’s not often you’ll hear me say that.)

The compiler will always prefer applicable methods in a derived class to applicable methods in a base class, even if they’d otherwise be better. So we can write a parameterless method with no type constraints at all in a base class. We can even keep it as a private method, so long as we make the derived class a nested type within its own base class.

Final solution

This leads to the final code – this time with diagnostics to prove it works:

using System;
class Base 
{ 
    static void Foo<T>() 
    { 
        Console.WriteLine("nullable value type"); 
    }

    class Test : Base 
    { 
        static void Foo<T>(T? ignored = default(T?)) 
            where T : struct 
        { 
            Console.WriteLine("non-nullable value type"); 
        } 

        class ClassConstraint<T> where T : class {} 

        static void Foo<T>(ClassConstraint<T> ignored = default(ClassConstraint<T>))
            where T : class 
        { 
            Console.WriteLine("reference type"); 
        } 

        static void Main() 
        { 
            Foo<int>(); 
            Foo<string>(); 
            Foo<int?>(); 
        } 
    } 
}

And the output…

non-nullable value type 
reference type 
nullable value type

Conclusion

This is possibly the most horrible code I’ve ever written.

Please, please don’t use it in real life. Use different method names or something like that.

Still, it’s a fun little puzzle, isn’t it?

Dreaming of multiple tasks again… with occasional exceptions

Yesterday I wrote about waiting for multiple tasks to complete. We had three asynchronous tasks running in parallel, fetching a user’s settings, reputation and recent activity. Broadly speaking, there were two approaches. First we could use TaskEx.WhenAll (which will almost certainly be folded into the Task class for release):

var settingsTask = service.GetUserSettingsAsync(userId); 
var reputationTask = service.GetReputationAsync(userId); 
var activityTask = service.GetRecentActivityAsync(userId); 

await TaskEx.WhenAll(settingsTask, reputationTask, activityTask); 

UserSettings settings = settingsTask.Result; 
int reputation = reputationTask.Result; 
RecentActivity activity = activityTask.Result;

Second we could just wait for each result in turn:

var settingsTask = service.GetUserSettingsAsync(userId);  
var reputationTask = service.GetReputationAsync(userId);  
var activityTask = service.GetRecentActivityAsync(userId);  
      
UserSettings settings = await settingsTask; 
int reputation = await reputationTask; 
RecentActivity activity = await activityTask;

These look very similar, but actually they behave differently if any of the tasks fails:

  • In the first form we will always wait for all the tasks to complete; if the settings task fails within a millisecond but the recent activity task takes 5 minutes, we’ll be waiting 5 minutes. In the second form we only wait for one at a time, so if one task fails, we won’t wait for any currently-unawaited ones to complete. (Of course if the first two tasks both succeed and the last one fails, the total waiting time will be the same either way.)
  • In the first form we should probably get to find out about the errors from all the asynchronous tasks; in the second form we only see the errors from whichever task fails first.
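The second point is easy to demonstrate with a couple of already-faulted tasks – a sketch using Task.FromException from the released API:

```csharp
using System;
using System.Threading.Tasks;

class SequentialAwaits
{
    static async Task Main()
    {
        Task settingsTask = Task.FromException(new Exception("settings went bang"));
        Task activityTask = Task.FromException(new Exception("activity went bang"));

        try
        {
            // Awaiting one at a time: the first failure throws immediately,
            // and we never observe the second task's exception at all.
            await settingsTask;
            await activityTask;
        }
        catch (Exception e)
        {
            Console.WriteLine(e.Message); // settings went bang
        }
    }
}
```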

The second point is interesting, because in fact it looks like the CTP will throw away all but the first inner exception of an aggregated exception thrown by a Task that’s being awaited. That feels like a mistake to me, but I don’t know whether it’s by design or just due to the implementation not being finished yet. I’m pretty sure this is the same bit of code (in EndAwait for Task and Task<T>) which makes sure that we don’t get multiple levels of AggregateException wrapping the original exception as it bubbles up. Personally I’d like to at least be able to find all the errors that occurred in an asynchronous operation. Occasionally, that would be useful…

… but actually, in most cases I’d really like to just abort the whole operation as soon as any task fails. I think we’re missing a method – something like WhenAllSuccessful. If any operation is cancelled or faulted, the whole lot should end up being cancelled – with that cancellation propagating down the potential tree of async tasks involved, ideally. Now I still haven’t investigated cancellation properly, but I believe that the cancellation tokens of Parallel Extensions should make this all possible. In many cases we really need success for all of the operations – and we would like to communicate any failures back to our caller as soon as possible.

Now I believe that we could write this now – somewhat inefficiently. We could keep a collection of tasks which still haven’t completed, and wait for any of them to complete. At that point, look for all the completed ones in the set (because two could complete at the same time) and see whether any of them have faulted or been cancelled. If so, cancel the remaining operations and rethrow the exception (aka set our own task as faulted). If we ever get to the stage where all the tasks have completed – successfully – we just return so that the results can be fetched.

My guess is that this could be written more efficiently by the PFX team though. I’m actually surprised that there isn’t anything in the framework that does this. That usually means that either it’s there and I’ve missed it, or it’s not there for some terribly good reason that I’m too dim to spot. Either way, I’d really like to know.

Of course, all of this could still be implemented as extension methods on tuples of tasks, if we ever get language support for tuples. Hint hint.

Conclusion

It’s often easy to concentrate on the success path and ignore possible failure in code. Asynchronous operations make this even more of a problem, as different things could be succeeding and failing at the same time.

If you do need to write code like the second option above, consider ordering the various "await" statements so that the expected time taken in the failure case is minimized. Always consider whether you really need all the results in all cases… or whether any failure is enough to mess up your whole operation.

Oh, and if you know the reason for the lack of something like WhenAllSuccessful, please enlighten me in the comments :)

C# in Depth 2nd edition: ebook available, but soon to be updated

Just a quick interrupt while I know many of you are awaiting more asynchronous fun…

Over the weekend, the ebook of C# in Depth 2nd edition went out – and a mistake was soon spotted. Figure 2.2 was accidentally replaced by figure 13.1. I’ve included it in the book’s errata but we’re hoping to issue another version of the ebook shortly. Fortunately this issue only affects the ebook version – the files shipped to the printer are correct. Speaking of which, I believe the book should come off the printing press some time this week, so it really won’t be much longer before you can all physically scribble in the margins.

We’re going to give it a couple of days to see if anything else is found (and I’m going to check all the figures to see if the same problem has manifested itself elsewhere) – but I expect we’ll be issuing the second "final" version of the ebook late this week.

EDIT: A lot of people have asked about an epub/mobi version of the ebook. I don’t have any dates on it, but I know it’s something Manning is keen on, and there’s a page declaring that all new releases will have an epub/mobi version. I’m not sure how that’s all going to pan out just yet, but rest assured that it’s important to me too.

Control flow redux: exceptions in asynchronous code

Warning: as ever, this is only the result of reading the spec and experimentation. I may well have misinterpreted everything. Eric Lippert has said that he’ll blog about exceptions soon, but I wanted to put down my thoughts first, partly to see the difference between what I’ve worked out and what the real story is.

So far, I’ve only covered "success" cases – where tasks complete without being cancelled or throwing exceptions. I’m leaving cancellation for another time, but let’s look at what happens when exceptions are thrown by async methods.

What happens when an async method throws an exception?

There are three types of async methods:

  • Ones that are declared to return void
  • Ones that are declared to return Task
  • Ones that are declared to return Task<T> for some T

The distinction between Task and Task<T> isn’t important in terms of exceptions. I’ll call async methods that return Task or Task<T> taskful methods, and ones that return void taskless methods. These aren’t official terms and they’re not even nice terms, but I don’t have any better ones for the moment.

It’s actually pretty easy to state what happens when an exception is thrown – but the ramifications are slightly more complicated:

  • If code in a taskless method throws an exception, the exception propagates up the stack
  • If code in a taskful method throws an exception, the exception is stored in the task, which transitions to the faulted state
    • If we’re still in the original context, the task is then returned to the caller
    • If we’re in a continuation, the method just returns

The inner bullet points are important here. At any time it’s executing, an async method is either still in its original context – i.e. the caller is one level up the stack – or it’s in a continuation, which takes the form of an Action delegate. In the latter case, we must have previously returned control to the caller, usually returning a task (in the "taskful method" case).

This means that if you call a taskful method, you should expect to be given a task without an exception being thrown. An exception may well be thrown if you wait for the result of that task (possibly via an await operation) but the method itself will complete normally. (Of course, there’s always the possibility that we’ll run out of memory while constructing the task, or other horrible situations. I think it’s fair to classify those as pathological and ignore them for most applications.)

A taskless method is much more dangerous: not only might it throw an exception to the original caller, but it might alternatively throw an exception to whatever calls the continuation. Note that it’s the awaiter that gets to determine where the continuation runs for any await operation… it may be an awaiter which uses the current SynchronizationContext, for example, or one which always calls the continuation on a new thread… or anything else you care to think of. In some cases, an exception thrown there may be enough to bring down the process. Maybe that’s what you want… or maybe not. It’s worth being aware of.

Here’s a trivial app to demonstrate the more common taskful behaviour – although it’s unusual in that we have an async method with no await statements:

using System;
using System.Threading.Tasks;

public class Test
{
    static void Main()
    {
        Task task = GoBangAsync();
        Console.WriteLine("Method completed normally");
        task.Wait();
    }
    
    static async Task GoBangAsync()
    {
        throw new Exception("Bang!");
    }
}

And here’s the result:

Method completed normally

Unhandled Exception: System.AggregateException: One or more errors occurred.
 ---> System.Exception: Bang!
   at Test.<GoBangAsync>d__0.MoveNext()
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at Test.Main()

As you can see, the exception was only thrown when we waited for the asynchronous task to complete – and it was wrapped in an AggregateException which is the normal behaviour for tasks.
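
For contrast, here's a sketch of the taskless equivalent, based on the rules above: a void-returning async method which throws before its first await is still in its original context, so the exception should propagate straight up the stack to the caller. (I'm going by the CTP behaviour described earlier; the exact details may well change in later releases.)

```csharp
using System;

public class Test
{
    static void Main()
    {
        try
        {
            // Calling a taskless (void-returning) async method: there's
            // no task for the exception to be stored in.
            GoBangVoid();
        }
        catch (Exception e)
        {
            Console.WriteLine("Caught directly: " + e.Message);
        }
    }

    // Throws before any await, so we're still in the original context
    // and the exception propagates straight to the caller.
    static async void GoBangVoid()
    {
        throw new Exception("Bang!");
    }
}
```

If the method had awaited something first, the same exception would instead be thrown to whatever invoked the continuation, which is exactly the danger described above.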

If an awaited task throws an exception, that is propagated to the async method which was awaiting it. You might expect this to result in an AggregateException wrapping the original AggregateException and so on, but it seems that something is smart enough to perform some unwrapping. I’m not sure what yet, but I’ll investigate further when I get more time. EDIT: I’m pretty sure it’s the EndAwait code used when you await a Task or Task<T>. There’s certainly no mention of AggregateException in the spec, so I don’t believe the compiler-generated code does any of this.

How eagerly can we validate arguments?

If you remember, iterator blocks have a bit of a usability problem when it comes to argument validation: because the iterator code is only run when the caller first starts iterating, it’s hard to get eager validation. You basically need to have one non-iterator-block method which validates the arguments, then calls the "real" implementation with known-to-be valid arguments. (If this doesn’t ring any bells, you might want to read this blog post, where I’m coming up with an implementation of LINQ’s Where method.)
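
As a quick reminder, that iterator-block pattern looks something like this – a condensed sketch along the lines of the Where implementation from that post (the Sequences class name is just for the sake of the example):

```csharp
using System;
using System.Collections.Generic;

public static class Sequences
{
    // The public method validates eagerly and then delegates...
    public static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> predicate)
    {
        if (source == null)
        {
            throw new ArgumentNullException("source");
        }
        if (predicate == null)
        {
            throw new ArgumentNullException("predicate");
        }
        return WhereImpl(source, predicate);
    }

    // ...to the iterator block, whose body only executes when the
    // caller actually starts iterating.
    private static IEnumerable<T> WhereImpl<T>(IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach (T item in source)
        {
            if (predicate(item))
            {
                yield return item;
            }
        }
    }
}
```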

We’re in a similar situation here, if we want arguments to be validated eagerly, causing an exception to be thrown directly to the caller. As an example of this, what would you expect this code to do? (Note that it doesn’t involve us writing any async methods at all.)

using System;
using System.Net;
using System.Threading.Tasks;

class Test
{
    static void Main()
    {
        Uri uri = null;
        Task<string> task = new WebClient().DownloadStringTaskAsync(uri);
    }
}

It could throw an exception eagerly, or it could set the exception into the return task. In many cases this will have a very similar effect – if you call DownloadStringTaskAsync as part of an await statement, for example. But it’s something you should be aware of anyway, as sometimes you may well want to call such methods outside the context of an async method.

In this particular case, the exception is thrown eagerly – so even though we’re not trying to wait on a task, the above program blows up. So, how could we achieve the same thing?

First let’s look at the code which wouldn’t work:

// This will only throw the exception when the caller waits
// on the returned task.
public async Task<string> DownloadStringTaskAsync(Uri uri)
{
    if (uri == null)
    {
        throw new ArgumentNullException("uri");
    }
        
    // Good, we’ve got an argument… now we can use it.
    // Real implementation goes here.
    return "Just a dummy implementation";
}

The problem is that we’re in an async method, so the compiler is writing code to catch any exceptions we throw, and propagate them through the task instead. We can get round this by using exactly the same trick as with iterator blocks – using a first non-async method which then calls an async method after validating the arguments:

public Task<string> DownloadStringTaskAsync(Uri uri)
{
    if (uri == null)
    {
        throw new ArgumentNullException("uri");
    }
        
    // Good, we’ve got an argument… now we can use it.
    return DownloadStringTaskAsyncImpl(uri);
}
    
public async Task<string> DownloadStringTaskAsyncImpl(Uri uri)
{
    // Real implementation goes here.
    return "Just a dummy implementation";
}

There’s a nicer solution though – because C# 5 allows us to make anonymous functions (anonymous methods or lambda expressions) asynchronous too. So we can create a delegate which will return a task, and then call it:

public Task<string> DownloadStringTaskAsync(Uri uri)
{
    if (uri == null)
    {
        throw new ArgumentNullException("uri");
    }
        
    // Good, we’ve got an argument… now we can use it.
    Func<Task<string>> taskBuilder = async delegate {
        // Real implementation goes here.
        return "Just a dummy implementation";
    };
    return taskBuilder();
}

This is slightly neater for methods which don’t need an awful lot of code. For more involved methods, it’s quite possibly worth using the "split the method in two" approach instead.

Conclusion

The general exception flow in asynchronous methods is actually reasonably straightforward – which is a good job, as normally error handling in asynchronous flows is a pain.

You need to be aware of the consequences of writing (or calling) a taskless asynchronous method… I expect that the vast majority of asynchronous methods and delegates will be taskful ones.

Finally, you also need to work out when you want exceptions to be thrown. If you want to perform argument validation, decide whether it should throw exceptions eagerly – and if so, use one of the patterns shown above. (I haven’t spent much time thinking about which approach is "better" yet – I generally like eager argument validation, but I also like the consistency of all errors being propagated through the task.)

Next up: dreaming of multiple possibly faulting tasks.

Dreaming of multiple tasks

I apologise in advance if this post ends up slightly rambly. Unlike the previous entries, this only partly deals with existing features. Instead, it considers how we might like to handle some situations, and where the language and type system thwarts us a little. Most of the code will be incomplete, as producing something realistic and complete is tricky.

Almost everything I’ve written about so far has dealt with the situation where we just want to execute one asynchronous operation at a time. That’s the case which the await keyword makes particularly easy to work with. That’s certainly a useful case, but it’s not the only one. I’m not going to write about that case in this post. (At least, not much.)

At the other extreme, we’ve got the situation where you have a large number of items to deal with, possibly dynamically generated – something like "fetch all the URLs in this list". We may well be able to launch some or even all of those operations in parallel, but we’re likely to use the results as a general collection. We’re doing the same thing with each of multiple inputs. This is data parallelism. I’m not going to write about that in this post either.

This post is about task parallelism. We want to execute multiple tasks in parallel (which may or may not mean using multiple threads – there could be multiple asynchronous web service calls, for example) and get all the results back before we proceed. Some of the tasks may be the same, but in general they’re not.

Describing the sample scenario

To help make everything sound vaguely realistic, I’m going to use a potentially genuine scenario, based on Stack Overflow. I’d like to make it clear that I have no idea how Stack Overflow really works, so please don’t make any assumptions. In particular, we’re only dealing with a very small portion of what’s required to render a single page. Nevertheless, it gives us something to focus on. (This is a situation I’ve found myself in several times at work, but obviously the internal services at Google are confidential, so I can’t start talking about a really real example.)

As part of rendering a Stack Overflow page for a logged in user, let’s suppose we need to:

  • Authenticate the user’s cookie (which gives us the user ID)
  • Find out the preferences for the user (so we know which tags to ignore etc)
  • Find out the user’s current reputation
  • Find out if there have been any recent comments or badges

All of these can be asynchronous operations. The first one needs to be executed before any of the others, but the final three can all be executed in parallel. We need all the results before we can make any more progress.

I’m going to assume an appropriate abstraction which contains the relevant asynchronous methods. Something like this:

public interface IStackService
{
    Task<int?> AuthenticateUserAsync(string cookie);
    Task<UserSettings> GetUserSettingsAsync(int userId);
    Task<int> GetReputationAsync(int userId);
    Task<RecentActivity> GetRecentActivityAsync(int userId);
}

The simple "single-threaded" implementation

I’ve put "single-threaded" in quotes here, because this may actually run across multiple-threads, but only one operation will be executed at a time. This is really just here for reference – because it’s so easy, and it would be nice to get the same simplicity with more parallelism.

public async Task<Page> RenderPage(Request request)
{
    int? maybeUserId = await service.AuthenticateUserAsync(request.Cookie);
    if (maybeUserId == null)
    {
        return RenderUnauthenticatedPage();
    }
    int userId = maybeUserId.Value;
    
    UserSettings settings = await service.GetUserSettingsAsync(userId);
    int reputation = await service.GetReputationAsync(userId);
    RecentActivity activity = await service.GetRecentActivityAsync(userId);

    // Now combine the results and render the page
}

Just to be clear, this is still better than the obvious synchronous equivalent. While those asynchronous calls are executing, we won’t be sat in a blocked thread, taking very little CPU but hogging a decent chunk of stack space. The scheduler will have less work to do. Life will be all flowers and sunshine… except for the latency.

If each of those requests takes about 100ms, it will take 400ms for the whole lot to complete – when it could take just 200ms. We can’t do better than 200ms with the operations we’ve got to work with: we’ve got to get the user ID before we can perform any of the other operations – but we can do all the other three in parallel. Let’s try doing that using the tools we’ve got available to us and no neat tricks, to start with.

Declaring tasks and waiting for them

First, let’s talk about TaskEx.WhenAll(). This is a method provided in the CTP library, and I wouldn’t be surprised to see it move around a bit over time. There are a bunch of overloads for this – some taking multiple Task<TResult> items, and some being more weakly typed. It simply lets you wait for multiple tasks to complete – and because it returns a task itself, we can "await" it in the usual asynchronous way. In this case we have to use the weakly typed version, because our tasks are of different types. That’s fine though, because we’re not going to use the result anyway, except for waiting. (And in fact we’ll let the compiler deal with that for us.)

The code for this isn’t too bad, but it’s a bit more long-winded:

public async Task<Page> RenderPage(Request request)
{
    int? maybeUserId = await service.AuthenticateUserAsync(request.Cookie);
    if (maybeUserId == null)
    {
        return RenderUnauthenticatedPage();
    }
    int userId = maybeUserId.Value;
    
    var settingsTask = service.GetUserSettingsAsync(userId);
    var reputationTask = service.GetReputationAsync(userId);
    var activityTask = service.GetRecentActivityAsync(userId);
    
    // This overload of WhenAll just returns Task, so there’s no result
    // to wait for: we’ll get the various results from the tasks themselves
    await TaskEx.WhenAll(settingsTask, reputationTask, activityTask);

    // By now we know that the result of each task is available
    UserSettings settings = settingsTask.Result;
    int reputation = reputationTask.Result;
    RecentActivity activity = activityTask.Result;

    // Now combine the results and render the page
}

This is still nicer than the pre-C# 5 code to achieve the same results, but I’d like to think we can do better. Really we just want to express the tasks once, wait for them all to complete, and get the results into variables, just like we did in the one-at-a-time code. I’ve thought of two approaches for this: one using anonymous types, and one using tuples. Both require changes to be viable – although the tuple approach is probably more realistic. Let’s look at it.

EDIT: Just before we do, I’d like to include code from one of the comments. If we’re going to use all the results directly, we can just await them in turn rather than using WhenAll – it’s like joining one thread after another. That leads to code like this:

public async Task<Page> RenderPage(Request request)  
{  
    int? maybeUserId = await service.AuthenticateUserAsync(request.Cookie);  
    if (maybeUserId == null)  
    {  
        return RenderUnauthenticatedPage();  
    } 
    int userId = maybeUserId.Value; 
     
    var settingsTask = service.GetUserSettingsAsync(userId); 
    var reputationTask = service.GetReputationAsync(userId); 
    var activityTask = service.GetRecentActivityAsync(userId); 
     
    UserSettings settings = await settingsTask;
    int reputation = await reputationTask;
    RecentActivity activity = await activityTask;

    // Now combine the results and render the page
}

I definitely like that more. Not sure why I didn’t think of it before…

Now back to the original post…

An ideal world of tuples

I’m assuming you’re aware of the family of System.Tuple types. They were introduced in .NET 4, and are immutable and strongly typed, both of which are nice features. The downsides are that even with type inference they’re still slightly awkward to create, and extracting the component values requires using properties such as Item1, Item2 etc. The C# compiler is completely unaware of tuples, which is slightly annoying. I would like two new features in C# 5:

  • Tuple literals: the ability to write something like var tuple = ("Foo", 10); to create a Tuple<string, int> – I’m not overly bothered with the exact syntax, so long as it’s concise.
  • Assignment to multiple variables from a single tuple. For example: var (ok, value) = int.TryParseToTuple("10");. Assuming a method with signature Tuple<bool, int> TryParseToTuple(string text) this would make ok a variable of type bool, and value a variable of type int.

Just to pre-empt others, I’m aware that F# helps on this front already. C# could do with catching up :)

Anyway, imagine we’ve got those language features. Then imagine a set of extension methods looking like this, but with another overload for 2-value tuples, another for 4-value tuples etc:

public static class TupleExtensions
{
    public static async Task<Tuple<T1, T2, T3>> WhenAll<T1, T2, T3>
        (this Tuple<Task<T1>, Task<T2>, Task<T3>> tasks)
    {
        await TaskEx.WhenAll(tasks.Item1, tasks.Item2, tasks.Item3);
        return Tuple.Create(tasks.Item1.Result, tasks.Item2.Result, tasks.Item3.Result);
    }
}

It can look a bit confusing because of all the type arguments and calls to Result and ItemX properties… but essentially it takes a tuple of tasks, and returns a task of a tuple returning the component values. How does this help us? Well, take a look at our Stack Overflow code now:

public async Task<Page> RenderPage(Request request)
{
    int? maybeUserId = await service.AuthenticateUserAsync(request.Cookie);
    if (maybeUserId == null)
    {
        return RenderUnauthenticatedPage();
    }
    int userId = maybeUserId.Value;
    
    var (settings, reputation, activity) = await (service.GetUserSettingsAsync(userId),
                                                  service.GetReputationAsync(userId),
                                                  service.GetRecentActivityAsync(userId))
                                                 .WhenAll();

    // Now combine the results and render the page
}

If we knew we always wanted to wait for all the tasks, we could actually change our extension method to one called GetAwaiter which returned a TupleAwaiter or something like that – so we could get rid of the call to WhenAll completely. However, I’m not sure that would be a good thing. I like explicitly stating how we’re awaiting the completion of all of these tasks.
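
Just to show what I mean, here's a sketch of that idea, assuming the CTP's GetAwaiter/BeginAwait/EndAwait pattern. TupleAwaiter is an invented name, and I'm assuming the awaiter you get from a Task exposes BeginAwait and EndAwait in the way shown here:

```csharp
// A sketch only: delegate the actual awaiting to a combined WhenAll task,
// then package the component results into a tuple in EndAwait.
public static class TupleAwaiterExtensions
{
    public static TupleAwaiter<T1, T2, T3> GetAwaiter<T1, T2, T3>(
        this Tuple<Task<T1>, Task<T2>, Task<T3>> tasks)
    {
        return new TupleAwaiter<T1, T2, T3>(tasks);
    }
}

public class TupleAwaiter<T1, T2, T3>
{
    private readonly Tuple<Task<T1>, Task<T2>, Task<T3>> tasks;
    private readonly Task combined;

    public TupleAwaiter(Tuple<Task<T1>, Task<T2>, Task<T3>> tasks)
    {
        this.tasks = tasks;
        // One combined task representing the completion of all three.
        this.combined = TaskEx.WhenAll(tasks.Item1, tasks.Item2, tasks.Item3);
    }

    public bool BeginAwait(Action continuation)
    {
        return combined.GetAwaiter().BeginAwait(continuation);
    }

    public Tuple<T1, T2, T3> EndAwait()
    {
        combined.GetAwaiter().EndAwait(); // propagate any exception
        return Tuple.Create(tasks.Item1.Result, tasks.Item2.Result, tasks.Item3.Result);
    }
}
```

With that in place we could await the tuple of tasks directly and get a tuple of results back – but as I say, I prefer stating the WhenAll explicitly.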

The real world of tuples

Back in the real world, we don’t have these language features on tuples. We can still use the extension method, but it’s not quite as nice:

public async Task<Page> RenderPage(Request request)  
{  
    int? maybeUserId = await service.AuthenticateUserAsync(request.Cookie);  
    if (maybeUserId == null)  
    {  
        return RenderUnauthenticatedPage();  
    } 
    int userId = maybeUserId.Value; 
     
    var results = await Tuple.Create(service.GetUserSettingsAsync(userId), 
                                     service.GetReputationAsync(userId), 
                                     service.GetRecentActivityAsync(userId)) 
                             .WhenAll(); 

    var settings = results.Item1;
    var reputation = results.Item2;
    var activity = results.Item3;

    // Now combine the results and render the page
}

We’ve got an extra local variable we don’t need, and the ugliness of the ItemX properties is back. Oh well. Maybe tuples aren’t the best approach. Let’s look at a closely related cousin, anonymous types…

An ideal world of anonymous types

Extension methods on anonymous types are somewhat evil. They’re potentially powerful, but they definitely have drawbacks. Aside from anything else, you can’t add a generic constraint to require that a type is anonymous, and you certainly can’t add a generic constraint to say that each member of the anonymous type must be a task (which is what we want here). But the difficulties go further than that. I would like to be able to use something like this:

public async Task<Page> RenderPage(Request request)  
{  
    int? maybeUserId = await service.AuthenticateUserAsync(request.Cookie);  
    if (maybeUserId == null)  
    {  
        return RenderUnauthenticatedPage();  
    } 
    int userId = maybeUserId.Value; 
     
    var results = await new { Settings = service.GetUserSettingsAsync(userId), 
                              Reputation = service.GetReputationAsync(userId), 
                              Activity = service.GetRecentActivityAsync(userId) }
                        .WhenAll(); 

    // Use results.Settings, results.Reputation and results.Activity to render
    // the page
}

Now in this magical world, we’d have an extension method on T where T : class which would check that all the properties were of type Task<TResult> (with a different TResult for each property, potentially) and return a task of a new anonymous type which had the same properties… but without the task part. Essentially, we’re trying to perform the same inversion that we did with tuples, moving where the task "decorator" bit comes. We can’t do that with anonymous types – there’s simply no way of expressing it in the language. We could potentially generate a new type at execution time, but there’s no way of getting compile-time safety.

These two problems suggest two different solutions though. Firstly – and more simply – if we’re happy to lose compile-time safety, we can use the dynamic typing from C# 4.

The real world of anonymous types and dynamic

We can fairly easily write an async extension method to create a Task<dynamic>. The code would involve reflection to extract the tasks from the instance, call TaskEx.WhenAll to wait for them to complete, and then populate an ExpandoObject. I haven’t included the extension method here because frankly reflection code is pretty boring, and the async part of it is what we’ve seen everywhere else. Here’s what the consuming code might look like though:
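
For the curious, here's a rough sketch of what it might look like. To be clear, WhenAllDynamic and all its details are my invention, and I'm assuming every public property of the anonymous type is a Task<T> of some kind:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.Linq;
using System.Threading.Tasks;

public static class DynamicTaskExtensions
{
    public static async Task<dynamic> WhenAllDynamic(this object taskHolder)
    {
        // Pull the component tasks out of the anonymous type's properties.
        var properties = taskHolder.GetType().GetProperties();
        var tasks = properties.Select(p => (Task) p.GetValue(taskHolder, null))
                              .ToArray();
        await TaskEx.WhenAll(tasks);

        // All tasks have now completed: copy each Result into an
        // ExpandoObject, keyed by the original property names.
        IDictionary<string, object> expando = new ExpandoObject();
        for (int i = 0; i < properties.Length; i++)
        {
            var resultProperty = tasks[i].GetType().GetProperty("Result");
            expando[properties[i].Name] = resultProperty.GetValue(tasks[i], null);
        }
        return expando;
    }
}
```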

public async Task<Page> RenderPage(Request request)  
{  
    int? maybeUserId = await service.AuthenticateUserAsync(request.Cookie);  
    if (maybeUserId == null)  
    {  
        return RenderUnauthenticatedPage();  
    } 
    int userId = maybeUserId.Value; 
     
    dynamic results = await new { Settings = service.GetUserSettingsAsync(userId), 
                                  Reputation = service.GetReputationAsync(userId), 
                                  Activity = service.GetRecentActivityAsync(userId) }
                            .WhenAllDynamic(); 

    UserSettings settings = results.Settings;
    int reputation = results.Reputation;
    RecentActivity activity = results.Activity;

    // Use our statically typed variables in the rest of the code
}

The extra local variables are back, because I don’t like being dynamic for any longer than I can help. Here, once we’ve copied the results into our local variables, we can ignore the dynamic results variable for the rest of the code.

This is pretty ugly, but it would work. I’m not sure that it’s significantly better than the "works but uses ItemX" tuple version though.

Now, what about the second thought, about the difficulty of translating (Task<X>, Task<Y>) into a Task<(X, Y)>?

Monads

I’m scared. Writing this post has made me start to think I might be starting to grok monads. This is almost certainly an incorrect belief, but I’m sure I’m at least making progress. If we think of "wrapping a task around a type" as a sort of type decoration, it starts sounding similar to the description of monads that I’ve read before. The fact that async workflows in F# are one example of its monad support encourages me too. I have a sneaking suspicion that the async/await support in C# 5 is partial support for this specific monad – in particular, you express the result of an async method via a non-task-related return statement, but the declared return type is the corresponding "wrapped" type.

Now, C#’s major monadic support comes in the form of LINQ, and particularly SelectMany. Therefore – and I’m writing as I think here – I would like to end up being able to write something like this:

public async Task<Page> RenderPage(Request request)  
{  
    int? maybeUserId = await service.AuthenticateUserAsync(request.Cookie);  
    if (maybeUserId == null)  
    {  
        return RenderUnauthenticatedPage();  
    } 
    int userId = maybeUserId.Value; 

    var results = await from settings in service.GetUserSettingsAsync(userId)
                        from reputation in service.GetReputationAsync(userId)
                        from activity in service.GetRecentActivityAsync(userId)
                        select new { settings, reputation, activity };

    // Use results.settings, results.reputation, results.activity for the
    // rest of the code
}

That feels like it should work, but as I write this I genuinely don’t know whether or not it will.

What I do know is that we only actually need to write a single method to get that to work: SelectMany. We don’t even need to implement a Select method, because if there’s only a select clause following an extra from clause, the compiler just uses SelectMany and puts a projection at the end. We want to be able to take an existing task and a way of creating a new task from it, and somehow combine them.

Just to make it crystal clear, the way we’re going to use LINQ is not for sequences at all. It’s for tasks. So we don’t want to see IEnumerable<T> anywhere in our final signatures. Let’s see what we can do.

(10 minutes later.) Okay, wow. I’d expected it to be at least somewhat difficult to get it to compile. I’m not quite there yet in terms of parallelization, but I’ve worked out a way round that. Just getting it to work at all is straightforward. I started off by looking at the LINQ to Objects signature used by the compiler:

public static IEnumerable<TResult> SelectMany<TSource, TCollection, TResult>(
    this IEnumerable<TSource> source,
    Func<TSource, IEnumerable<TCollection>> collectionSelector,
    Func<TSource, TCollection, TResult> resultSelector
)

Now we want our tasks to end up being independent, but let’s start off simply, just changing IEnumerable to Task everywhere, and changing the type parameter names:

public static Task<TResult> SelectMany<T1, T2, TResult>(
    this Task<T1> source,
    Func<T1, Task<T2>> taskSelector,
    Func<T1, T2, TResult> resultSelector
)

There’s still that nagging doubt about the dependency of the second task on the first, but let’s at least try to implement it.

We know we want to return a Task<TResult>, and we know that given a T1 and a T2 we can get a TResult. We also know that by writing an async method, we can ask the compiler to go from a return statement involving a TResult to a method with a declared return type of Task<TResult>. Once we’ve got that hint, the rest is really straightforward:

public static async Task<TResult> SelectMany<T1, T2, TResult>
    (this Task<T1> source,
     Func<T1, Task<T2>> taskSelector,
     Func<T1, T2, TResult> resultSelector)
{
    T1 t1 = await source;
    T2 t2 = await taskSelector(t1);
    return resultSelector(t1, t2);
}

There it is. We asynchronously await the result of the first task, feed the result into taskSelector to get the second task, await that task to get a second value, and then combine the two values with the simple projection to give the result we want to return asynchronously.

In monadic terms as copied from Wikipedia, I believe that:

  • The type constructor for the async monad is simply that T goes to Task<T> for any T.
  • The unit function is essentially what the compiler does for us when we declare a method as async – it provides the "wrapping" to get from a return statement using T to a method with a return type of Task<T>.
  • The binding operation is what we’ve got above – which should be no surprise, as SelectMany is the binding function in "normal" LINQ.
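
To make the second bullet concrete, the unit function can be written as an almost embarrassingly trivial async method – everything interesting is done by the compiler:

```csharp
// The unit function for the "async monad": take a plain T and wrap
// it into a Task<T>. The compiler provides all the actual wrapping;
// the only thing we write is the return statement.
public static async Task<T> Unit<T>(T value)
{
    return value;
}
```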

I’m breathless with the simultaneous simplicity, beauty and complexity of it all. It’s simple because once I’d worked out the method signature (which is essentially what the definition of the binding function requires) the method wrote itself. It’s beautiful because once I’d picked the right method to use, the compiler did everything else for me – despite it sounding really significantly different to LINQ. It’s complex because I’m still feeling my way through all of this.

It’s a shame that after all of this, we still haven’t actually got what we wanted. To do that, we have to fake it.

Improper monads

("Improper monads" isn’t a real term. It scores 0 hits on Google at the moment – by the time you read this, that count will probably be higher, but only because of this post.)

We wanted to execute the tasks in parallel. We’re not actually doing so. We’re executing one task, then another. Oops. The problem is that our monadic definition says that we’re going to rely on the result of one task to generate the other one. We don’t want to do that. We want to get both tasks, and execute them at the same time.

Unfortunately, I don’t think there’s anything in LINQ which represents that sort of operation. The closest I can think of is a join – but we’re not joining on anything. I’m pretty sure we could do this by implementing Join and just ignoring the key selectors, but if we’re going to cheat anyway, we might as well cheat with the signature we’ve got. In this cheating version of LINQ, we assume that the task selector (which produces the second task) doesn’t actually rely on the argument it’s given. So let’s just give it anything – the default value, for example. Then we’ve got two tasks which we can await together using WhenAll as before.

public static async Task<TResult> SelectMany<T1, T2, TResult>
    (this Task<T1> task1,
     Func<T1, Task<T2>> taskSelector,
     Func<T1, T2, TResult> resultSelector)
{
    Task<T2> task2 = taskSelector(default(T1));
    await TaskEx.WhenAll(task1, task2);
    return resultSelector(task1.Result, task2.Result);
}

Okay, that was easy. But it looks like it’s only going to wait for two tasks at a time. We’ve got three in our example. What’s going to happen? Well, we’ll start waiting for the first two tasks when SelectMany is first called… but then we’ll return back to the caller with the result as a task. We’ll then call SelectMany again with the third task. We’ll then wait for [tasks 1 and 2] and [task 3]… which means waiting for all of them. Bingo! Admittedly I’ve a sneaking suspicion that if any task fails it might mean more deeply nested exceptions than we’d want, but I haven’t investigated that yet.
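
To see why, it's worth looking at roughly what the compiler does with the three-clause query from earlier. This is a simplified translation – the compiler really introduces a transparent identifier rather than my explicit intermediate anonymous type – but the shape is right:

```csharp
// Roughly the compiler's translation of the three-"from" query.
// The first SelectMany waits for the settings and reputation tasks
// together (via the cheating implementation above)...
var results = await service.GetUserSettingsAsync(userId)
    .SelectMany(settings => service.GetReputationAsync(userId),
                (settings, reputation) => new { settings, reputation })
    // ...and the second SelectMany combines that intermediate task
    // with the activity task, so all three complete before we continue.
    .SelectMany(x => service.GetRecentActivityAsync(userId),
                (x, activity) => new { x.settings, x.reputation, activity });
```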

I believe that this implementation lets us basically do what we want… but like everything else, it’s ugly in its own way. In this case it’s ugly because it allows us to express something (a dependency from one task to another) that we then don’t honour. I don’t like that. We could express the fact that getting a user’s reputation depends on authenticating the user first – but we’d end up finding the reputation of user 0, because that’s the result we’d pass in. That sucks.

EDIT: Along the same lines of the previous edit, we can make this code neater and avoid using WhenAll:

public static async Task<TResult> SelectMany<T1, T2, TResult>
    (this Task<T1> task1,
     Func<T1, Task<T2>> taskSelector,
     Func<T1, T2, TResult> resultSelector)
{
    Task<T2> task2 = taskSelector(default(T1));
    return resultSelector(await task1, await task2);
}

Back to the original post…

Ironically, someone on Twitter mentioned a new term to me today, which seems strikingly relevant: joinads. They pointed to a research paper written by Tomas Petricek and Don Syme – which on first glance is quite possibly exactly what I’ve been mostly-independently coming up with here. The reason LINQ query expressions don’t quite fit what we want is that they’re based on monads – if they’d been based on joinads, maybe it would all have worked well. I’ll read the paper and see if that gives me the answer. Then I’ll watch Bart de Smet’s PDC 2010 presentation which I gather is rather good.

Conclusion

I find myself almost disappointed. Those of you who already understand monads are quite possibly shaking your heads, saying to yourself that it was about time I started to "get" them (and that I’ve got a long way to go). Those of you who didn’t understand them before almost certainly don’t understand them any better now, given the way this post has been written.

So I’m not sure whether I’ll have any readers left by now… and I’ve failed to come up with a good solution to the original problem. In my view the nicest approach by far is the one using tuples, and that requires more language support. (I’m going to nag Mads about that very shortly.) And yet I’m simultaneously on a huge high. I’m very aware of my own limitations when it comes to computer science theory, but today it feels like I’ve at least grasped the edge of something beautiful.

And now, I must stop blogging before my family life falls apart or my head explodes. Goodnight all.

C# 5 async: experimenting with member resolution (GetAwaiter, BeginAwait, EndAwait)

Some of you may remember the bizarre query expressions I’ve come up with before now. These rely on LINQ finding the members it needs (Select, Where, SelectMany etc) statically but without relying on any particular interface. I was pleased to see that the C# 5 async support is based on the same idea. Here’s the relevant bit of the draft spec:

The expression t of an await-expression await t is called the task of the await expression. The task t is required to be awaitable, which means that all of the following must hold:

  • (t).GetAwaiter() is a valid expression of type A.
  • Given an expression a of type A and an expression r of type System.Action, (a).BeginAwait(r) is a valid boolean-expression.
  • Given an expression a of type A, (a).EndAwait() is a valid expression.

A is called the awaiter type for the await expression. The GetAwaiter method is used to obtain an awaiter for the task.

The BeginAwait method is used to sign up a continuation on the awaited task. The continuation passed is the resumption delegate, which is further explained below.

The EndAwait method is used to obtain the outcome of the task once it is complete.

The method calls will be resolved syntactically, so all of GetAwaiter, BeginAwait and EndAwait can be either instance members or extension methods, or even bound dynamically, as long as the calls are valid in the context where the await expression appears. All of them are intended to be “non-blocking”; that is, not cause the calling thread to wait for a significant amount of time, e.g. for an operation to complete.

As far as I can tell, either the CTP release hasn’t fully implemented this, or I’ve interpreted it a little too broadly. Still, let’s see what works and what doesn’t. For simplicity, each example is completely self-contained… and does absolutely nothing interesting. It’s only the resolution part which is interesting. (The fact that the Main method is async is quite amusing though, and takes advantage of the fact that async methods can return void instead of a task.)

Example 1: Boring instance members

This is closest to the examples I’ve given so far.

using System;

class Test
{
    static async void Main()
    {
        await new Awaitable();
    }
}

class Awaitable
{
    public Awaiter GetAwaiter()
    {
        return new Awaiter();
    }    
}

class Awaiter
{
    public bool BeginAwait(Action continuation)
    {
        return false;
    }
    
    public int EndAwait()
    {
        return 1;
    }
}

Hopefully this needs no further explanation. Obviously it works fine with the CTP. The compiler generates a call to GetAwaiter, then calls BeginAwait and EndAwait on the returned Awaiter.
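To make that concrete, here’s a rough sketch of the expansion the compiler performs for `int result = await new Awaitable();`. This is pseudocode rather than the real generated code, which uses a state machine and also deals with exceptions – but it matches the three steps described in the spec quote above:

```csharp
// Pseudocode sketch – "resumption" stands for the resumption delegate,
// the "rest of the method" that the compiler packages up for us.
var awaiter = (new Awaitable()).GetAwaiter();
if (awaiter.BeginAwait(resumption))
{
    // BeginAwait returned true: the operation is completing
    // asynchronously, so return control to the caller now. The
    // resumption delegate will re-enter the method below when
    // the operation completes.
    return;
}
// Resumption point: either BeginAwait returned false (synchronous
// completion), or the resumption delegate has brought us back here.
int result = awaiter.EndAwait();
```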

Example 2: Extension methods

The CTP uses extension methods to get an awaiter for existing types such as Task – but I don’t think it uses them for BeginAwait/EndAwait. Fortunately, there’s nothing to stop us from using them for everything – and nothing forcing us to put the extension methods on sensible types, either – as demonstrated below:

using System;

class Test
{
    static async void Main()
    {
        Guid guid = await 5;
        Console.WriteLine("Got result: {0}", guid);
    }
}

static class Extensions
{
    public static string GetAwaiter(this int number)
    {
        return number.ToString();
    }
    
    public static bool BeginAwait(this string text, Action continuation)
    {
        Console.WriteLine("Waiting for {0} to finish", text);
        return false;
    }
    
    public static Guid EndAwait(this string text)
    {
        Console.WriteLine("Finished waiting for {0}", text);
        return Guid.NewGuid();
    }
}

I should just emphasize that this code is purely for the sake of experimentation. If I ever see anyone actually extending int and string in this way in production code and blaming me for giving them the idea, I’ll be very cross.

However, it all does actually work. This example is silly but not particularly exotic. Let’s start going a bit further, using dynamic typing.

Example 3: Dynamic resolution

The spec explicitly says that the methods can be bound dynamically, so I’d expect this to work:

using System;
using System.Dynamic;

class Test
{
    static async void Main()
    {
        dynamic d = new ExpandoObject();
        d.GetAwaiter = (Func<dynamic>) (() => d);
        d.BeginAwait = (Func<Action, bool>) (action => {
            Console.WriteLine("Awaiting");
            return false;
        });
        d.EndAwait = (Func<string>)(() => "Finished dynamically");

        string result = await d;
        Console.WriteLine("Result: {0}", result);
    }
}

Unfortunately, in the CTP this doesn’t work – it fails at compile time with this error:

Test.cs(16,25): error CS1991: Cannot await ‘dynamic’

All is not lost, however. We may not be able to get GetAwaiter called dynamically, but what about BeginAwait/EndAwait? Let’s try again:

using System;
using System.Dynamic;

class DynamicAwaitable
{
    public dynamic GetAwaiter()
    {
        dynamic d = new ExpandoObject();
        d.BeginAwait = (Func<Action, bool>) (action => {
            Console.WriteLine("Awaiting");
            return false;
        });
        d.EndAwait = (Func<string>)(() => "Finished dynamically");
        return d;
    }
}

class Test
{
    static async void Main()
    {
        string result = await new DynamicAwaitable();
        Console.WriteLine("Result: {0}", result);
    }
}

This time we get more errors:

Test.cs(22,25): error CS1061: ‘dynamic’ does not contain a definition for ‘BeginAwait’ and no extension method ‘BeginAwait’ accepting a first argument of type ‘dynamic’ could be found (are you missing a using directive or an assembly reference?)

Test.cs(22,25): error CS1061: ‘dynamic’ does not contain a definition for ‘EndAwait’ and no extension method ‘EndAwait’ accepting a first argument of type ‘dynamic’ could be found (are you missing a using directive or an assembly reference?)

Test.cs(22,25): error CS1986: The ‘await’ operator requires that its operand ‘DynamicAwaitable’ have a suitable public GetAwaiter method

This is actually worse than before: not only is it not working as I’d expect it to, but even the error message has a bug. The await operator doesn’t require that its operand has a suitable public GetAwaiter method – it just has to be accessible. At least, that’s the case with the current CTP. In my control flow post for example, the methods were all internal. It’s possible that the error message is by design, and the compiler shouldn’t have allowed that code, of course – but it would seem a little odd.

Okay, so dynamic resolution doesn’t work. Oh well… let’s go back to static typing, but use delegates, fields and properties.

Example 4: Fields and properties of delegate types

This time we’re back to the style of my original "odd query expressions" post, using fields and properties returning delegates instead of methods:

using System;

class FieldAwaiter
{
    public readonly Func<Action, bool> BeginAwait = continuation => false;
    public readonly Func<string> EndAwait = () => "Result from a property";
}

class PropertyAwaitable
{
    public Func<FieldAwaiter> GetAwaiter
    {
        get { return () => new FieldAwaiter(); }
    }
}

class Test
{
    static async void Main()
    {
        string result = await new PropertyAwaitable();
        Console.WriteLine("Result: {0}", result);
    }
}

Again, I believe this should work according to the spec. After all, this block of code compiles with no problems:

var t = new PropertyAwaitable();
var a = (t).GetAwaiter();
bool sync = (a).BeginAwait(() => {});
string result = (a).EndAwait();

Unfortunately, nothing doing. The version using await fails with this error:

Test.cs(21,25): error CS1061: ‘PropertyAwaitable’ does not contain a definition for ‘GetAwaiter’ and no extension method ‘GetAwaiter’ accepting a first argument of type ‘PropertyAwaitable’ could be found (are you missing a using directive or an assembly reference?)

Test.cs(21,25): error CS1986: The ‘await’ operator requires that its operand ‘PropertyAwaitable’ have a suitable public GetAwaiter method

This wasn’t trying truly weird things like awaiting a class name. Oh well :(

Conclusion

Either I’ve misread the spec, or the CTP doesn’t fully comply with it. This should come as no surprise. It’s not a final release or even a beta. However, it’s fun to investigate the limits of what should be valid. The next question is whether the compiler should be changed, or the spec… I can’t immediately think of any really useful patterns involving returning delegates from properties, for example… so is it really worth changing the compiler to allow it?

 

Update (7th November 2010)

On Thursday I spoke to Mads Torgersen and Lucian Wischik about this. Some changes being considered:

  • The C# spec being tightened up to explicitly say that GetAwaiter/BeginAwait/EndAwait have to be methods, at least when statically typed. In other words, the delegate/property version wouldn’t be expected to work.
  • The BeginAwait pattern may be tightened up to require a return type of exactly Boolean or dynamic (rather than the call being "a boolean-expression" which is very slightly more lax)
  • The dynamic version working – this is more complicated in terms of implementation than one might expect

Just to emphasize, these are changes under consideration rather than promises. They seem entirely reasonable to me. (Dynamic binding sounds potentially useful; property/field resolution definitely less so. Making the spec match the implementation is important to me though :)

C# 5 async: investigating control flow

Yes, this has been a busy few days for blog posts. One of the comments on my previous blog post suggested there may be some confusion over how the control flow works with async. It’s always very possible that I’m the one who’s confused, so I thought I’d investigate.

This time I’m not even pretending to come up with a realistic example. As with my previous code, I’m avoiding creating a .NET Task myself, and sticking with my own custom "awaiter" classes. Again, the aim is to give more insight into what the C# compiler is doing for us. I figure other blogs are likely to concentrate on useful patterns and samples – I’m generally better at explaining what’s going on under the hood from a language point of view. That’s easier to achieve when almost everything is laid out in the open, instead of using the built-in Task-related classes.

That’s not to say that Task doesn’t make an appearance at all, however – because the compiler generates one for us. Note in the code below how the return type of DemonstrateControlFlow is Task<int>, whereas the return value is only an int. The compiler uses Task<T> to wrap up the asynchronous operation. As I mentioned before, that’s the one thing the compiler does which actually requires knowledge of the framework.

The aim of the code is purely to demonstrate how control flows. I have a single async method which executes 4 other "possibly asynchronous" operations:

  • The first operation completes synchronously
  • The second operation completes asynchronously, starting a new thread
  • The third operation completes synchronously
  • The fourth operation completes asynchronously
  • The result is then "returned"

At various points in the code I log where we’ve got to and on what thread. In order to execute a "possibly asynchronous" operation, I’m simply calling a method and passing in a string. If the string is null, the operation completes synchronously. If the string is non-null, it’s used as the name of a new thread. The BeginAwait method creates the new thread, and returns true to indicate that the operation is completing asynchronously. The new thread waits half a second (to make things clearer) and then executes the continuation passed to BeginAwait. If you remember, that continuation represents "the rest of the method" – the work to do after the asynchronous operation has completed.

Without further ado, here’s the complete code. As is almost always the case with my samples, it’s a console app:

using System;
using System.Threading;
using System.Threading.Tasks;

public class ControlFlow
{
    static void Main()
    {
        Thread.CurrentThread.Name = "Main";
        
        Task<int> task = DemonstrateControlFlow();
        LogThread("Main thread after calling DemonstrateControlFlow");
        
        // Waits for task to complete, then retrieves the result
        int result = task.Result;        
        LogThread("Final result: " + result);
    }
    
    static void LogThread(string message)
    {
        Console.WriteLine("Thread: {0}  Message: {1}",
                          Thread.CurrentThread.Name, message);
    }
    
    static async Task<int> DemonstrateControlFlow()
    {
        LogThread("Start of method");
        
        // Returns synchronously (still on main thread)
        int x = await MaybeReturnAsync(null);        
        LogThread("After first await (synchronous)");
        
        // Returns asynchronously (return task to caller, new
        // thread is started by BeginAwait, and continuation
        // runs on that new thread).
        x += await MaybeReturnAsync("T1");        
        LogThread("After second await (asynchronous)");
        
        // Returns synchronously – so we’re still running on
        // the first extra thread
        x += await MaybeReturnAsync(null);
        LogThread("After third await (synchronous)");
            
        // Returns asynchronously – starts up another new
        // thread, leaving the first extra thread to terminate,
        // and executing the continuation on the second extra thread.
        x += await MaybeReturnAsync("T2");
        LogThread("After fourth await (asynchronous)");
        
        // Sets the result of the task which was returned ages ago; when
        // this occurs, the main thread (blocked on task.Result) is unblocked.
        return 5;
    }
    
    /// <summary>
    /// Returns a ResultFetcher which can have GetAwaiter called on it.
    /// If threadName is null, the awaiter will complete synchronously
    /// with a return value of 1. If threadName is not null, the
    /// awaiter will complete asynchronously, starting a new thread
    /// with the given thread name for the continuation. When EndAwait
    /// is called on such an asynchronous awaiter, the result will be 2.
    /// </summary>
    static ResultFetcher<int> MaybeReturnAsync(string threadName)
    {
        return new ResultFetcher<int>(threadName, threadName == null ? 1 : 2);
    }
}

/// <summary>
/// Class returned by MaybeReturnAsync; only exists so that the compiler
/// can include a call to GetAwaiter, which returns an Awaiter[T].
/// </summary>
class ResultFetcher<T>
{
    private readonly string threadName;
    private readonly T result;
    
    internal ResultFetcher(string threadName, T result)
    {
        this.threadName = threadName;
        this.result = result;
    }
    
    internal Awaiter<T> GetAwaiter()
    {
        return new Awaiter<T>(threadName, result);
    }
}

/// <summary>
/// Awaiter which actually starts a new thread (or not, depending on its
/// constructor arguments) and supplies the result in EndAwait.
/// </summary>
class Awaiter<T>
{
    private readonly string threadName;
    private readonly T result;
    
    internal Awaiter(string threadName, T result)
    {
        this.threadName = threadName;
        this.result = result;
    }
    
    internal bool BeginAwait(Action continuation)
    {
        // If we haven’t been given the name of a new thread, just complete
        // synchronously.
        if (threadName == null)
        {
            return false;
        }

        // Start a new thread which waits for half a second before executing
        // the supplied continuation.
        Thread thread = new Thread(() =>
        {
            Thread.Sleep(500);
            continuation();
        });
        thread.Name = threadName;
        thread.Start();
        return true;
    }

    /// <summary>
    /// This is called by the async method to retrieve the result of the operation,
    /// whether or not it actually completed synchronously.
    /// </summary>
    internal T EndAwait()
    {
        return result;
    }
}

And here’s the result:

Thread: Main  Message: Start of method
Thread: Main  Message: After first await (synchronous)
Thread: Main  Message: Main thread after calling DemonstrateControlFlow
Thread: T1  Message: After second await (asynchronous)
Thread: T1  Message: After third await (synchronous)
Thread: T2  Message: After fourth await (asynchronous)
Thread: Main  Message: Final result: 5

A few things to note:

  • I’ve used two separate classes for the asynchronous operation: the one returned by the MaybeReturnAsync method (ResultFetcher<T>),  and the Awaiter<T> class returned by ResultFetcher<T>.GetAwaiter(). In the previous blog post I used the same class for both aspects, and GetAwaiter() returned this. It’s not entirely clear to me under what situations a separate awaiter class is desirable. It feels like it should mirror the IEnumerable<T>/IEnumerator<T> reasoning for iterators, but I haven’t thought through the details of that just yet.
  • If a "possibly asynchronous" operation actually completes synchronously, it’s almost as if we didn’t use "await" at all. Note how the "After first await" is logged on the Main thread, and "After third await" is executed on T1 (the same thread as the "After second await" message). I believe there could be some interesting differences if BeginAwait throws an exception, but I’ll investigate that in another post.
  • When the first "properly asynchronous" operation executes, that’s when control is returned to the main thread. It doesn’t have the result yet of course, but it has a task which it can use to find out the result when it’s ready – as well as checking the status and so on.
  • The compiler hasn’t created any threads for us – the only extra threads were created explicitly when we began an asynchronous operation. One possible difference between this code and a real implementation is that MaybeReturnAsync doesn’t actually start the operation itself at all. It creates something which is able to start the operation, but waits until the BeginAwait call before it starts the thread. This made our example easier to write, because it meant we could wait until we knew what the continuation would be before we started the thread.
  • The return statement in our async method basically sets the result in the task. At that point in this particular example, our main thread is blocking, waiting for the result – so it becomes unblocked as soon as the task has completed.

If you’ve been following along with Eric, Anders and Mads, I suspect that none of this is a surprise to you, other than possibly the details of the methods called by the compiler (which are described clearly in the specification). I believe it’s worth working through a simple-but-useless example like this just to convince you that you know what’s going on. If the above isn’t clear – either in terms of what’s going on or why – I’d be happy to draw out a diagram with what’s happening on each thread. As I’m rubbish at drawing, I’ll only do that when someone asks for it.

Next topic: well, what would you like it to be? I know I’ll want to investigate exception handling a bit soon, but if there’s anything else you think I should tackle first, let me know in the comments.

C# 5 async and choosing terminology: why I’m against “yield”

Eric Lippert’s latest (at the time of this writing) blog post asks for suggestions for alternatives to the new "await" keyword, which has inappropriate connotations that confuse people. In particular, it sounds like the thread is going to sit there waiting for the result, when the whole point is that it doesn’t.

There have been many suggestions in the comments, and lots of them involve "yield". I was initially in favour of this too, but on further reflection I don’t think it’s appropriate, for the same reason: it has a connotation which may not be true. It sounds like it’s always going to yield control, when sometimes it doesn’t. To demonstrate this, I’ve come up with a tiny example. It’s a stock market class which allows you to compute the total value of your holdings asynchronously. The class would make web service calls to fetch real prices, but then cache values for some period. The caching bit is the important part here – and in fact it’s the only part I’ve actually implemented.

The point is that when the asynchronous "total value" computation can fetch a price from the cache, it doesn’t need to wait for anything, so it doesn’t need to yield control. This is the purpose of the return value of BeginAwait: if it returns false, the task has been completed and EndAwait can be called immediately. In this case the continuation is ignored – when BeginAwait returns, the ComputeTotalValueAsync method keeps going rather than returning control to its caller at that point.

Here’s the complete code:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class Test
{
    static void Main()
    {
        StockMarket market = new StockMarket();
        market.AddCached("GOOG", 613.70m);
        market.AddCached("MSFT", 26.67m);
        market.AddCached("AAPL", 300.98m);
        
        Task<decimal> total = market.ComputeTotalValueAsync(
            Tuple.Create("AAPL", 10),
            Tuple.Create("GOOG", 20),
            Tuple.Create("MSFT", 25)
        );
        Console.WriteLine("Completed already? {0}", total.IsCompleted);
        Console.WriteLine("Total value: {0}", total.Result);
    }
}

class StockMarket
{
    private readonly Dictionary<string, decimal> cache =
        new Dictionary<string, decimal>();
    
    internal void AddCached(string ticker, decimal price)
    {
        cache[ticker] = price;
    }
    
    public async Task<decimal> ComputeTotalValueAsync(params Tuple<string, int>[] holdings)
    {
        // In real code we may well want to parallelize this, of course
        decimal total = 0m;
        foreach (var pair in holdings)
        {
            total += await new StockFetcher(this, pair.Item1) * pair.Item2;
        }
        Console.WriteLine("Diagnostics: completed ComputeTotalValue");
        return total;
    }
    
    private class StockFetcher
    {
        private readonly StockMarket market;
        private readonly string ticker;
        private decimal value;
        
        internal StockFetcher(StockMarket market, string ticker)
        {
            this.market = market;
            this.ticker = ticker;
        }
        
        internal StockFetcher GetAwaiter()
        {
            return this;
        }
        
        internal bool BeginAwait(Action continuation)
        {
            // If it’s in the cache, we can complete synchronously
            if (market.cache.TryGetValue(ticker, out value))
            {            
                return false;
            }
            // Otherwise, we need to make an async web request and do
            // cunning things. Not implemented :)
            throw new NotImplementedException("Oops.");
        }
        
        internal decimal EndAwait()
        {
            // Usually more appropriate checking here, of course
            return value;
        }
    }
}

(Note that we’d probably have a public method to fetch a single stock value asynchronously too, and that would probably return a task – in this case I wanted to keep everything as simple as possible, not relying on any other implementation of asynchrony. This also shows how the compiler uses GetAwaiter/BeginAwait/EndAwait… and that they don’t even need to be public methods.)

The result shows that everything was actually computed synchronously – the returned task is complete by the time the method returns. You may be wondering why we’ve bothered using async at all here – and the key is the bit that throws the NotImplementedException. While everything returns synchronously in this case, we’ve allowed for the possibility of asynchronous fetching, and the only bit of code which would need to change is BeginAwait.
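For what it’s worth, here’s one hypothetical way that asynchronous path could be filled in. FetchPriceFromWebService is an invented helper, the sketch assumes a using directive for System.Threading, and real code would need error handling rather than blocking a thread-pool thread – this is just to show that only BeginAwait would change:

```csharp
internal bool BeginAwait(Action continuation)
{
    // Synchronous path: the price is already cached.
    if (market.cache.TryGetValue(ticker, out value))
    {
        return false;
    }
    // Hypothetical asynchronous path: fetch the price on a thread-pool
    // thread, cache it, then invoke the continuation from that thread.
    ThreadPool.QueueUserWorkItem(ignored =>
    {
        value = FetchPriceFromWebService(ticker); // invented helper
        market.cache[ticker] = value;
        continuation();
    });
    return true;
}
```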

So what does this have to do with the choice of keywords? It shows that "yield" really isn’t appropriate here. When the action completes very quickly and synchronously, it isn’t yielding at all.

What’s the alternative?

There are two aspects of the behaviour of the current "await" contextual keyword:

  • We might yield control, returning a task to the caller.
  • We will continue processing at some point after the asynchronous subtask has completed – whether it’s completed immediately or whether our continuation is called back.

It’s hard to capture both of those aspects in one or two words, but I think it makes sense to at least capture the aspect which is always valid. So I propose something like "continue after":

foreach (var pair in holdings)
{
    total += continue after new StockFetcher(this, pair.Item1) * pair.Item2;
}

I’m not particularly wedded to the "after" bit – it could be "when" or "with" for example.

I don’t think this is perfect – I’m really just trying to move the debate forward a bit. I think the community doesn’t really have enough of a "feeling" for the new feature yet to come up with a definitive answer at the moment (and I include myself there). I think focusing on which aspects we want to emphasize – with a clear understanding of how the feature actually behaves – is a good start though.