The importance of context, and a question of explicitness

(Just to be clear: I hate the word "explicitness". It reminds me of Rowan Atkinson as Marcus Browning MP, saying we should be purposelessnessless. But I can’t think of anything better here.)

For the last few days, I’ve been thinking about context – in the context of C# 5’s proposed asynchronous functionality. Now many of the examples which have been presented have been around user interfaces, with the natural restriction of doing all UI operations on the UI thread. While that’s all very well, I’m more interested in server-side programming, which has a slightly different set of restrictions and emphases.

Back in my post about configuring waiting, I talked about the context of where execution would take place – both for tasks which require their own thread and for the continuations. However, thinking about it further, I suspect we could do with richer context.

What might be included in a context?

We’re already used to the idea of using context, but we’re not always aware of it. When trying to service a request on a server, some or all of the following may be part of our context:

  • Authentication information: who are we acting as? (This may not be the end user, of course. It may be another service who we trust in some way.)
  • Cultural information: how should text destined for an end user be rendered? What other regional information is relevant?
  • Threading information: as mentioned before, what threads should be used both for "extra" tasks and continuations? Are we dealing with thread affinity?
  • Deadlines and cancellation: the overall operation we’re trying to service may have a deadline, and operations we create may have their own deadlines too. Cancellation tokens in TPL can perform this role for us pretty easily.
  • Logging information: if the logs need to tie everything together, there may be some ID generated which should be propagated.
  • Other request information: very much dependent on what you’re doing, of course…
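The deadline/cancellation point is the one with the most direct framework support today. Here's a quick sketch using the TPL's cancellation types (the timings are purely illustrative, and members like CancelAfter postdate the CTP era): an overall request deadline, with a sub-operation given its own tighter deadline via a linked token source.

```csharp
using System;
using System.Threading;

public class DeadlineDemo
{
    public static void Main()
    {
        // Overall deadline for servicing the whole request: 5 seconds.
        var requestCts = new CancellationTokenSource(TimeSpan.FromSeconds(5));

        // A sub-operation with its own, tighter deadline. Linking means it's
        // cancelled when either its own timeout fires or the whole request
        // is cancelled - whichever comes first.
        var operationCts = CancellationTokenSource.CreateLinkedTokenSource(requestCts.Token);
        operationCts.CancelAfter(TimeSpan.FromMilliseconds(50));

        // Block until the sub-operation's deadline passes.
        operationCts.Token.WaitHandle.WaitOne();

        Console.WriteLine("Sub-operation cancelled: {0}", operationCts.Token.IsCancellationRequested);
        Console.WriteLine("Request still live: {0}", !requestCts.Token.IsCancellationRequested);
    }
}
```

The nice property is that the token itself becomes the piece of context you pass around: the sub-operation doesn't need to know whose deadline killed it.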

We’re used to some of this being available via properties such as CultureInfo.CurrentCulture and HttpContext.Current – but those are tied to a particular thread. Will they be propagated to threads used for new tasks or continuations? Historically I’ve found that documentation has been very poor around this area. It can be very difficult to work out what’s going to happen, even if you’re aware that there’s a potential problem in the first place.

Explicit or implicit?

It’s worth considering what the above items have in common. Why did I include those particular pieces of information but not others? How can we avoid treating them as ambient context in the first place?

Well, fairly obviously we can pass all the information we need along via method arguments. C# 5’s async feature actually makes this easier than it was before (and much easier than it would have been without anonymous functions) because the control flow is simpler. There should be fewer method calls, each of which would require decoration with all the contextual information required.

However, in my experience that becomes quite problematic in terms of separation of concerns. If you imagine the request as a tree of asynchronous operations working down from a top node (whatever code initially handles the request), each node has to provide all the information required for all the nodes within its subtree. If some piece of information is only required 5 levels down, it still needs to be handed on at each level above that.

The alternative is to use an implicit context – typically via static methods or properties which have to do the right thing, typically based on something thread-local. The context code itself (in conjunction with whatever is distributing the work between threads) is responsible for keeping track of everything.
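A small illustration of why that "keeping track" matters: a thread-local "request ID" (purely hypothetical) set on one thread simply isn't visible to work running on another thread, unless whatever distributes the work propagates it explicitly.

```csharp
using System;
using System.Threading;

public class AmbientContextDemo
{
    // Hypothetical ambient "request ID", stored per-thread.
    private static readonly ThreadLocal<string> RequestId =
        new ThreadLocal<string>(() => "(no request)");

    public static void Main()
    {
        RequestId.Value = "request-42";
        Console.WriteLine("On the original thread: {0}", RequestId.Value);

        string seenOnWorker = null;
        var worker = new Thread(() => seenOnWorker = RequestId.Value);
        worker.Start();
        worker.Join();

        // The worker thread gets the factory default, not our value: the
        // context code has to propagate the value across threads itself.
        Console.WriteLine("On the worker thread:   {0}", seenOnWorker);
    }
}
```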

It’s easy to point out pros and cons to both approaches:

  • Passing everything through methods makes the dependencies very obvious
  • Changes to "lower" tasks (even for seemingly innocuous reasons such as logging) end up causing chains of changes higher up the task tree – possibly to developers working on completely different projects, depending on how your components work
  • It feels like there’s a lot of work for very little benefit in passing everything explicitly through many layers of tasks
  • Implicit context can be harder to unit test elegantly – as is true of so many things using static calls
  • Implicit context requires everyone to use the same context. It’s no good high level code indicating which thread pool to use in one setting when some lower level code is going to use a different context

Ultimately it feels like a battle between purity and pragmatism: being explicit helps to keep your code purer, but it can mean a lot of fluff around your real logic, just to maintain the required information to pass onward. Different developers will have different approaches to this, but I suspect we want to at least keep the door open to both designs.

The place of Task/Task<T>

Even if Task/Task<T> can pass on the context for scheduling, what do we do about other information (authentication etc)? We have types like ThreadLocal<T> – in a world where threads are more likely to be reused, and aren’t really our unit of asynchrony, do we effectively need a TaskLocal<T>? Can context within a task be pushed and automatically popped, to allow one subtree to "override" the context for its nodes, while another subtree works with the original context?
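To make the push/pop idea concrete, here's a deliberately simplified, thread-bound sketch (ScopedContext is an invented name, not a real API): Push overrides the current value for the duration of a using block, and Dispose restores the previous one. A genuine TaskLocal<T> would need the scheduler's cooperation to flow values across tasks and threads - a [ThreadStatic] field like this one stops working the moment a continuation runs on a different thread.

```csharp
using System;
using System.Collections.Generic;

// Sketch only: push/pop ambient context, scoped via IDisposable.
public static class ScopedContext<T>
{
    [ThreadStatic] private static Stack<T> values;

    public static T Current =>
        (values == null || values.Count == 0) ? default(T) : values.Peek();

    public static IDisposable Push(T value)
    {
        if (values == null)
        {
            values = new Stack<T>();
        }
        values.Push(value);
        return new Popper();
    }

    private sealed class Popper : IDisposable
    {
        public void Dispose() => values.Pop();
    }
}

public class Demo
{
    public static void Main()
    {
        using (ScopedContext<string>.Push("outer"))
        {
            Console.WriteLine(ScopedContext<string>.Current); // outer
            using (ScopedContext<string>.Push("inner"))
            {
                // This subtree sees the overriding value...
                Console.WriteLine(ScopedContext<string>.Current); // inner
            }
            // ... and the original value is automatically restored.
            Console.WriteLine(ScopedContext<string>.Current); // outer
        }
    }
}
```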

I’ve been trying to think about whether this can be provided in "userland" code instead of in the TPL itself, but I’m not sure it can, easily… at least not without reinventing a lot of the existing code, which is never a good idea when it’s tricky parallelization code.

Should this be general support, or would it be okay to stick to just TaskScheduler.Current, leaving developers to pass other context explicitly?

Conclusion

These are thoughts which I’m hoping will be part of a bigger discussion. I think it’s something the community should think about and give feedback to Microsoft on well before C# 5 (and whatever framework it comes with) ships. I have lots of contradictory feelings about the right way to go, and I’m fully expecting comments to have mixed opinions too.

I’m sure I’ll be returning to this topic as time goes on.

Addendum (March 27th 2012)

Lucian Wischik recently mailed me about this post, to mention that F#’s support for async has had the ability to retain explicit context from the start. It’s also more flexible than the C# async support – effectively, it allows you to swap out AsyncTaskMethodBuilder etc for your own types, so you don’t always have to go via Task/Task<T>. I’ll take Lucian’s word for that, not knowing much about F# myself. One day…

Multiple exceptions yet again… this time with a resolution

I’ve had a wonderful day with Mads Torgersen, and amongst other things, we discussed multiple exceptions and the way that the default awaiter for Task<T> handles an AggregateException by taking the first exception and discarding the rest.

I now have a much clearer understanding of why this is the case, and also a workaround for the cases where you really want to avoid that truncation.

Why truncate in the first place?

(I’ll use the term "truncate" throughout this post to mean "when an AggregateException with at least one nested exception is caught by EndAwait, throw the first nested exception instead". It’s just a shorthand.)

Yesterday’s post on multiple exceptions showed what you got if you called Wait() on a task returned from an async method. You still get an AggregateException, so why bother to truncate it?

Let’s consider a slightly different situation: where we’re awaiting an async method that throws an exception, and we want to be able to catch some specific exception that will be thrown by that asynchronous method. Imagine we used my NaiveAwaiter class. That would mean we would have to catch AggregateException, check whether the exception we were interested in was actually present, and then handle that. There’d then be an open question about what to do if there were other exceptions as well… but that would be a relatively rare case. (Remember, we’re talking about multiple "top level" exceptions within the AggregateException – not just one exception nested in another, nested in another etc.)
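Just to show how much boilerplate that is, here's roughly what catching a specific exception looks like when all you have is the AggregateException (written against the released TPL types - Task.Run rather than the CTP's TaskEx - and self-contained, so it redeclares BangException):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public class BangException : Exception
{
    public BangException(string message) : base(message) {}
}

public class Test
{
    static void GoBang() => throw new BangException("Bang!");

    public static void Main()
    {
        Task task = Task.Run(GoBang);
        try
        {
            task.Wait();
        }
        catch (AggregateException e)
        {
            // Dig through the wrapper for the exception we actually expected...
            BangException bang = e.InnerExceptions.OfType<BangException>().FirstOrDefault();
            if (bang == null)
            {
                throw; // ... and rethrow if it wasn't there after all.
            }
            Console.WriteLine("Caught it! ({0})", bang.Message);
        }
    }
}
```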

With the current awaiter behaviour, you can catch the exception exactly as you would have done in synchronous code. Here’s an example:

using System;
using System.Threading.Tasks;
using System.Collections.Generic;

public class BangException : Exception 
{
    public BangException(string message) : base(message) {}
}

public class Test
{
    public static void Main()
    {
        FrobAsync().Wait();
    }
    
    private static async Task FrobAsync()
    {
        Task fuse = DelayedThrow(500);
        try
        {
            await fuse;
        }
        catch (BangException e)
        {
            Console.WriteLine("Caught it! ({0})", e.Message);
        }
    }
    
    static async Task DelayedThrow(int delayMillis) 
    { 
        await TaskEx.Delay(delayMillis);
        throw new BangException("Went bang after " + delayMillis + "ms");
    }
}

Nice and clean exception handling… assuming that the task we awaited asynchronously didn’t have multiple exceptions. (Note the improved DelayedThrow method, by the way. Definitely cleaner than my previous version.)

This aspect of "the async code looks like the synchronous code" is the important bit. One of the key aims of the language feature is to make it easy to write asynchronous code as if it were synchronous – because that’s what we’re used to, and what we know how to reason about. We’re fairly used to the idea of catching one exception… not so much on the "multiple things can go wrong at the same time" front.

So that handles the primary case where we really expect to only have one exception (if any) because we’re only performing one job.

What about cases where multiple exceptions are somewhat expected?

Let’s go back to the case where we really want to propagate multiple exceptions. I think it’s reasonable that this should be an explicit opt-in, so let’s think about an extension method. For the sake of simplicity I’ll use Task – in real life we’d want Task<T> as well, of course. So for example, this line:

await TaskEx.WhenAll(t1, t2);

would become this:

await TaskEx.WhenAll(t1, t2).PreserveMultipleExceptions();

(Yes, the name is too long… but you get the idea.)

Now, there are two ways we could make this work:

  • We could make the extension method return something which had a GetAwaiter method, returning something which in turn had BeginAwait and EndAwait methods. This means making sure we get all of the awaiter code right, of course – and the returned value has little meaning outside an await expression.
  • We could wrap the task in another task, and use the existing awaiter code. We know that the EndAwait extension method associated with Task (and Task<T>) will go into a single level of AggregateException – but I don’t believe it will do any more than that. So if it’s going to strip one level of exception aggregation off, all we need to do is add another level.

According to Mads, the latter of these is easier. Let’s see if he’s right.

We need an extension method on Task, and we’re going to return Task too. How can we implement that?

  • We can’t await the task, because that will strip the exception before we get to it.
  • We can’t write an async task but call Wait() on the original task, because that will block immediately – we still want to be async.
  • We can use a TaskCompletionSource<T> to build a task. We don’t care about the actual result, so we’ll use TaskCompletionSource<object>. This will actually build a Task<object>, but we’ll return it as a Task anyway, and use a null result if it completes with no exception. (This was Mads’ suggestion.)

So, we know how to build a Task, and we’ve been given a Task – how do we hook the two together? The answer is to ask the original task to call us back when it completes, via the ContinueWith method. We can then set the result of our task accordingly. Without further ado, here’s the code:

public static Task PreserveMultipleExceptions(this Task originalTask)
{
    var tcs = new TaskCompletionSource<object>();
    originalTask.ContinueWith(t => {
        switch (t.Status) {
            case TaskStatus.Canceled:
                tcs.SetCanceled();
                break;
            case TaskStatus.RanToCompletion:
                tcs.SetResult(null);
                break;
            case TaskStatus.Faulted:
                tcs.SetException(originalTask.Exception);
                break;
        }
    }, TaskContinuationOptions.ExecuteSynchronously);
    return tcs.Task;
}

This was thrown together in 5 minutes (in the middle of a user group talk by Mads) so it’s probably not as robust as it might be… but the idea is that when the original task completes, we just piggy-back on the same thread very briefly to make our own task respond appropriately. Now when some code awaits our returned task, we’ll add an extra wrapper of AggregateException on top, ready to be unwrapped by the normal awaiter.

Note that the extra wrapper is actually added for us really, really easily – we just call TaskCompletionSource<T>.SetException with the original task’s AggregateException. Usually we’d call SetException with a single exception (like a BangException) and the method automatically wraps it in an AggregateException – which is exactly what we want.
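If you want to see that wrapping behaviour in isolation, here's a tiny demonstration using just the released TPL types - SetException puts the argument inside a fresh AggregateException even when the argument is itself an AggregateException:

```csharp
using System;
using System.Threading.Tasks;

public class WrappingDemo
{
    public static void Main()
    {
        var original = new AggregateException(
            new Exception("first"), new Exception("second"));

        var tcs = new TaskCompletionSource<object>();
        tcs.SetException(original);

        // Task.Exception is an AggregateException with exactly one nested
        // exception: our original AggregateException, preserved intact.
        AggregateException outer = tcs.Task.Exception;
        Console.WriteLine(outer.InnerExceptions.Count);
        var nested = (AggregateException) outer.InnerExceptions[0];
        Console.WriteLine(nested.InnerExceptions.Count);
    }
}
```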

So, how do we use it? Here’s a complete sample (just add the extension method above):

using System;
using System.Threading.Tasks;

public class BangException : Exception
{
    public BangException(string message) : base(message) {}
}

public class Test
{
    public static void Main()
    {
        FrobAsync().Wait();
    }
    
    public static async Task FrobAsync()
    {
        try
        {
            Task t1 = DelayedThrow(500);
            Task t2 = DelayedThrow(1000);
            Task t3 = DelayedThrow(1500);
            
            await TaskEx.WhenAll(t1, t2, t3).PreserveMultipleExceptions();
        }
        catch (AggregateException e)
        {
            Console.WriteLine("Caught {0} aggregated exceptions", e.InnerExceptions.Count);
        }
        catch (Exception e)
        {
            Console.WriteLine("Caught non-aggregated exception: {0}", e.Message);
        }
    }
    
    static async Task DelayedThrow(int delayMillis)  
    {  
        await TaskEx.Delay(delayMillis); 
        throw new BangException("Went bang after " + delayMillis + "ms"); 
    }
}

The result is what we were after:

Caught 3 aggregated exceptions

The blanket catch (Exception e) block is there so you can experiment with what happens if you remove the call to PreserveMultipleExceptions – in that case we get the original behaviour of a single BangException being caught, and the others discarded.

Conclusion

So, we now have answers to both of my big questions around multiple exceptions with async:

  • Why is the default awaiter truncating exceptions? To make asynchronous exception handling look like synchronous exception handling in the common case.
  • What can we do if that’s not the behaviour we want? Either write our own awaiter (whether that’s invoked explicitly or implicitly via "extension method overriding" as shown yesterday) or wrap the task in another one to wrap exceptions.

I’m happy again. Thanks Mads :)

Using extension method resolution rules to decorate awaiters

This post is a mixture of three things:

  • Revision of how extension methods are resolved
  • Application of this to task awaiting in async methods
  • A rant about void not being a type

Compared with my last few posts, there’s almost nothing to do with genuine asynchronous behaviour here. It’s to do with how the language supports asynchronous behaviour, and how we can hijack that support :)

Extension methods redux

I’m sure almost all of you could recite the C# 4 spec section 7.6.5.2 off by heart, but for the few readers who can’t (Newton Microkitchen Breakfast Club, I’m looking at you) here’s a quick summary.

The compiler looks for extension methods (the ones that "pretend" to be instance methods on other types, and are declared in non-generic top-level static classes) when it comes across a method invocation expression¹ and finds no applicable methods. We’ll assume we’ve got to that point.

The compiler then looks in successive contexts for extension methods. It only considers non-generic static types directly declared in namespaces (as opposed to being nested classes) but it’s the order in which the namespaces are searched which is interesting. Imagine that the compiler is looking at code in a namespace X.Y.Z. That has to be within at least one namespace declaration, and can have up to three, like this:

namespace X
{
    namespace Y
    {
        namespace Z
        {
            // Code being compiled
        }
    }
}

The compiler starts with the "innermost" namespace, and works outwards to the global namespace. At each level, it first considers types within that namespace, then types within any using namespace directives within the namespace declaration. So, to give a really full example, consider this:

using UD.U0;

namespace X
{
    using UD.U1;

    namespace Y
    {
        using UD.U2;

        namespace Z
        {
            using UD.U3;

            // Code being compiled
        }
    }
}

The namespaces would be searched in this order:

  • Z
  • UD.U3
  • Y
  • UD.U2
  • X
  • UD.U1
  • "global"
  • UD.U0

Note that UD itself would not be searched. If a namespace declaration contains more than one using namespace directive, they’re considered as a set of directives – the order doesn’t matter, and all types within the referenced namespaces are considered equally.

As soon as an eligible method has been found, this brings the search to a halt – even if a "better" method might be available elsewhere. This allows us to effectively prioritise extension methods within a particular namespace by including a using namespace directive in a more deeply nested namespace declaration than the methods we want to ignore.
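Here's a minimal, self-contained illustration of that prioritisation (the namespaces and the Describe method are invented for the example). Both Lib.A and Lib.B declare an eligible extension method, but because the using directive for Lib.B sits in a more deeply nested namespace declaration than the one for Lib.A, it wins:

```csharp
using System;
using Lib.A;

namespace Lib.A
{
    public static class Extensions
    {
        public static string Describe(this int x) => "from Lib.A";
    }
}

namespace Lib.B
{
    public static class Extensions
    {
        public static string Describe(this int x) => "from Lib.B";
    }
}

namespace Demo
{
    using Lib.B; // Searched before the file-level "using Lib.A;"

    class Program
    {
        static void Main()
        {
            // Lib.B's extension method is found first, so the search stops
            // there - Lib.A's method is never even considered.
            Console.WriteLine(5.Describe());
        }
    }
}
```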

Async methods and extensions on Task/Task<T>

So, where am I heading with all of this? Well, I wanted to work out a way of getting the compiler to use my extension methods for Task and Task<T> instead of the ones that come in the CTP library. The GetAwaiter() methods are in a type called AsyncCtpThreadingExtensions, and they both return System.Runtime.CompilerServices.TaskAwaiter instances. You can tell this just by decompiling your own code, and see what it calls when you "await" a task.

Now, we can create our own complete awaiter methods, as shown in my previous post… but it’s potentially more useful just to be able to add diagnosis tools without changing the actual behaviour. For the sake of brevity, here are some extension methods and supporting types just for Task<T> – the full code targets the non-generic Task type as well.

using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

namespace JonSkeet.Diagnostics
{
    public static class DiagnosticTaskExtensions
    {
        /// <summary>
        /// Associates a task with a user-specified name before GetAwaiter is called
        /// </summary>
        public static NamedTask<T> WithName<T>(this Task<T> task, string name)
        {
            return new NamedTask<T>(task, name);
        }

        /// <summary>
        /// Gets a diagnostic awaiter for a task, based only on its ID.
        /// </summary>
        public static NamedAwaiter<T> GetAwaiter<T>(this Task<T> task)
        {
            return new NamedTask<T>(task, "[" + task.Id + "]").GetAwaiter();
        }

        public struct NamedTask<T>
        {
            private readonly Task<T> task;
            private readonly string name;

            public NamedTask(Task<T> task, string name)
            {
                this.task = task;
                this.name = name;
            }

            public NamedAwaiter<T> GetAwaiter()
            {
                Console.WriteLine("GetAwaiter called for task \"{0}\"", name);
                return new NamedAwaiter<T>(AsyncCtpThreadingExtensions.GetAwaiter(task), name);
            }
        }

        public struct NamedAwaiter<T>
        {
            private readonly TaskAwaiter<T> awaiter;
            private readonly string name;

            public NamedAwaiter(TaskAwaiter<T> awaiter, string name)
            {
                this.awaiter = awaiter;
                this.name = name;
            }

            public bool BeginAwait(Action continuation)
            {
                Console.WriteLine("BeginAwait called for task \"{0}\"…", name);
                bool ret = awaiter.BeginAwait(continuation);
                Console.WriteLine("… BeginAwait for task \"{0}\" returning {1}", name, ret);
                return ret;
            }

            public T EndAwait()
            {
                Console.WriteLine("EndAwait called for task \"{0}\"", name);
                // We could potentially report the result here
                return awaiter.EndAwait();
            }
        }
    }
}

So this lets us give a task a name for clarity (optionally), and logs when the GetAwaiter/BeginAwait/EndAwait methods get called.

The neat bit is how easy this is to use. Consider this code:

using System;
using System.Net;
using System.Threading.Tasks;

namespace Demo
{
    using JonSkeet.Diagnostics;

    class Program
    {
        static void Main(string[] args)
        {
            Task<int> task = SumPageSizes();
            Console.WriteLine("Result: {0}", task.Result);
        }

        static async Task<int> SumPageSizes()
        {
            Task<int> t1 = FetchPageSize("http://www.microsoft.com");
            Task<int> t2 = FetchPageSize("http://csharpindepth.com");

            return await t1.WithName("MS web fetch") +
                   await t2.WithName("C# in Depth web fetch");
        }

        static async Task<int> FetchPageSize(string url)
        {
            string page = await new WebClient().DownloadStringTaskAsync(url);
            return page.Length;
        }
    }
}

The JonSkeet.Diagnostics namespace effectively has higher priority when we’re looking for extension methods, so our GetAwaiter is used instead of the ones in the CTP (which we delegate to, of course).

Remove the using namespace directive for JonSkeet.Diagnostics, remove the calls to WithName, and it all compiles and runs as normal. If you don’t want to have to do anything to the code, you could put the using namespace directive within #if DEBUG / #endif and write a small extension method in the System.Threading.Tasks namespace like this:

namespace System.Threading.Tasks
{
    public static class NamedTaskExtensions
    {
        public static Task<T> WithName<T>(this Task<T> task, string name)
        {
            return task;
        }
    }
}

… and bingo, diagnostics only in debug mode. The no-op WithName method will be ignored in favour of the higher-priority diagnostic version in debug builds, and will be harmless in a release build.

The diagnostics themselves can be quite enlightening, by the way. For example, here’s the result of the previous program:

GetAwaiter called for task "[1]"
BeginAwait called for task "[1]"…
… BeginAwait for task "[1]" returning True
GetAwaiter called for task "[2]"
BeginAwait called for task "[2]"…
… BeginAwait for task "[2]" returning True
GetAwaiter called for task "MS web fetch"
BeginAwait called for task "MS web fetch"…
… BeginAwait for task "MS web fetch" returning True
EndAwait called for task "[2]"
EndAwait called for task "[1]"
EndAwait called for task "MS web fetch"
GetAwaiter called for task "C# in Depth web fetch"
BeginAwait called for task "C# in Depth web fetch"…
… BeginAwait for task "C# in Depth web fetch" returning False
EndAwait called for task "C# in Depth web fetch"
Result: 6009

This shows us waiting to fetch both web pages, and both of those awaits being asynchronous. (Note that we launched the tasks before any diagnostics were displayed – it’s only awaiting the tasks that causes all of this to kick in.) After both of those "fetch and take the length" tasks have started, we await the result of the first one (for microsoft.com). This corresponds to task 1 – but task 2 (fetching csharpindepth.com) finishes first. When the microsoft.com page has finished fetching, the length is computed and that task completes. Now when we await the result of fetching the length of csharpindepth.com, we see that it’s already finished, and the await completes synchronously.

Obviously this was a small example – I deliberately left two tasks with just their IDs – and there could be a lot more information (such as timestamps and thread IDs, to start with), but I suspect this sort of thing could be invaluable when trying to work out what’s going on in async code.

And finally… a short rant

I’ve written all the diagnostic code twice. Not because it was wrong the first time, but because it only covered Task<T>, not Task. I couldn’t write it just on Task, because then EndAwait would have had the wrong signature… but the code was pretty much a case of "cut, paste, remove <T> everywhere".

I’ve never been terribly bothered by the void type before, and it not being a "proper" type like unit in functional programming languages. Now, I suddenly begin to see the point.

Perhaps the TPL should have introduced the Unit type before the Rx team got in there. With a single Task<T> type, I suspect there’d be significantly less code duplication in the framework (including the async CTP).

Is it enough to make me wish we didn’t have void at all? Maybe. Maybe not. Perhaps with sufficient knowledge in the CLR, there wouldn’t have to be any stack penalty for copying a "pretend" return value onto the stack every time we call a method which would currently return void. I’ll certainly be keeping an eye out for other places where it would make life easier.
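For the record, here's the sort of thing I mean – a minimal Unit type (a struct with exactly one value), letting a single generic method serve both the "returns a value" and "returns nothing" cases, with the void case as a thin adapter rather than a full copy:

```csharp
using System;

// A minimal unit type: one value, no information.
public struct Unit : IEquatable<Unit>
{
    public static readonly Unit Value = new Unit();
    public bool Equals(Unit other) => true;
    public override string ToString() => "()";
}

public class Demo
{
    // One generic helper instead of a void/non-void pair...
    static T RunLogged<T>(Func<T> func)
    {
        Console.WriteLine("Running…");
        return func();
    }

    // ... with the void case adapting onto the generic one.
    static Unit RunLogged(Action action) =>
        RunLogged(() => { action(); return Unit.Value; });

    public static void Main()
    {
        int n = RunLogged(() => 42);
        RunLogged(() => Console.WriteLine("side effect; result was " + n));
    }
}
```

With a Unit type in the framework, Task could in principle just be Task<Unit>, and the awaiter/extension code in this post would only need writing once.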

Conclusion

I don’t normally advocate language tricks like the extension method "priority boost" described here. I love talking about them, but I think they’re nasty enough to avoid most of the time.

But in this case the diagnostic benefit is potentially huge! I don’t know how it would fit into the full framework – or where it would dump its diagnostics to – but I’d really like to see something like this in the final release, particularly with the ability to associate a name with a task.

Even if you don’t want to actually use this, I hope you’ve enjoyed it as an intellectual exercise and a bit of reinforcement about how GetAwaiter/BeginAwait/EndAwait works.


¹ It has to be a method invocation on an expression, too. So if you’re writing code within an IEnumerable<T> implementation and you want to call the LINQ Count() method, you have to call this.Count() rather than just Count(), for example.

Propagating multiple async exceptions (or not)

In an earlier post, I mentioned that in the CTP, an asynchronous method will throw away anything other than the first exception in an AggregateException thrown by one of the tasks it’s waiting for. Reading the TAP documentation, it seems this is partly expected behaviour and partly not. TAP claims (in a section about how "await" is achieved by the compiler):

It is possible for a Task to fault due to multiple exceptions, in which case only one of these exceptions will be propagated; however, the Task’s Exception property will return an AggregateException containing all of the errors.

Unfortunately, that appears not to be the case. Here’s a test program demonstrating the difference between an async method and a somewhat-similar manually written method. The full code is slightly long, but here are the important methods:

static async Task ThrowMultipleAsync()
{
    Task t1 = DelayedThrow(500);
    Task t2 = DelayedThrow(1000);
    await TaskEx.WhenAll(t1, t2);
}

static Task ThrowMultipleManually()
{
    Task t1 = DelayedThrow(500);
    Task t2 = DelayedThrow(1000);
    return TaskEx.WhenAll(t1, t2);
}

static Task DelayedThrow(int delayMillis)
{
    return TaskEx.Run(delegate {
        Thread.Sleep(delayMillis);
        throw new Exception("Went bang after " + delayMillis);
    });
}

The difference is that the async method is generating an extra task, instead of returning the task from TaskEx.WhenAll. It’s waiting for the result of WhenAll itself (via EndAwait). The results show one exception being swallowed:

Waiting for From async method
Thrown exception: 1 error(s):
Went bang after 500

Task exception: 1 error(s):
Went bang after 500

Waiting for From manual method
Thrown exception: 2 error(s):
Went bang after 500
Went bang after 1000

Task exception: 2 error(s):
Went bang after 500
Went bang after 1000

The fact that the "manual" method still shows two exceptions means we can’t blame WhenAll – it must be something to do with the async code. Given the description in the TAP documentation, I’d expect (although not desire) the thrown exception to just be a single exception, but the returned task’s exception should have both in there. That’s clearly not the case at the moment.

Waiter! There’s an exception in my soup!

I can think of one reason why we’d perhaps want to trim down the exception to a single one: if we wanted to remove the aggregation aspect entirely. Given that the async method always returns a Task (or void), I can’t see how that’s feasible anyway… a Task will always throw an AggregateException if its underlying operation fails. If it’s already throwing an AggregateException, why restrict it to just one?

My guess is that this makes it easier to avoid the situation where one AggregateException would contain another, which would contain another, etc.

To demonstrate this, let’s try to write our own awaiting mechanism, instead of using the one built into the async CTP. GetAwaiter() is an extension method, so we can just make our own extension method which has priority over the original one. I’ll go into more detail about that in another post, but here’s the code:

public static class TaskExtensions
{
    public static NaiveAwaiter GetAwaiter(this Task task)
    {
        return new NaiveAwaiter(task);
    }
}

public class NaiveAwaiter
{
    private readonly Task task;

    public NaiveAwaiter(Task task)
    {
        this.task = task;
    }

    public bool BeginAwait(Action continuation)
    {
        if (task.IsCompleted)
        {
            return false;
        }
        task.ContinueWith(_ => continuation());
        return true;
    }

    public void EndAwait()
    {
        task.Wait();
    }
}

Yes, it’s almost the simplest implementation you could come up with. (Hey, we do check whether the task is already completed…) There’s no scheduler or SynchronizationContext magic… and importantly, EndAwait does nothing with any exceptions. If the task throws an AggregateException when we wait for it, that exception is propagated to the generated code responsible for the async method.

So, what happens if we run exactly the same client code with these classes present? Well, the results for the first part are different:

Waiting for From async method
Thrown exception: 1 error(s):
One or more errors occurred.

Task exception: 1 error(s):
One or more errors occurred.

We have to change the formatting somewhat to see exactly what’s going on – because we now have an AggregateException containing an AggregateException. The previous formatting code simply printed out how many exceptions there were, and their messages. That wasn’t an issue because we immediately got to the exceptions we were throwing. Now we’ve got an actual tree. Just printing out the exception itself results in huge gobbets of text which are unreadable, so here’s a quick and dirty hack to provide a bit more formatting:

static string FormatAggregate(AggregateException e)
{
    StringBuilder builder = new StringBuilder();
    FormatAggregate(e, builder, 0);
    return builder.ToString();
}

static void FormatAggregate(AggregateException e, StringBuilder builder, int level)
{
    string padding = new string(' ', level);
    builder.AppendFormat("{0}AggregateException with {1} nested exception(s):", padding, e.InnerExceptions.Count);
    builder.AppendLine();
    foreach (Exception nested in e.InnerExceptions)
    {
        AggregateException nestedAggregate = nested as AggregateException;
        if (nestedAggregate != null)
        {
            FormatAggregate(nestedAggregate, builder, level + 1);
            builder.AppendLine();
        }
        else
        {
            builder.AppendFormat("{0} {1}: {2}", padding, nested.GetType().Name, nested.Message);
            builder.AppendLine();
        }
    }
}

Now we can see what’s going on better:

AggregateException with 1 nested exception(s):
 AggregateException with 2 nested exception(s):
  Exception: Went bang after 500
  Exception: Went bang after 1000

Hooray – we actually have all our exceptions, eventually… but they’re nested. Now if we introduce another level of nesting – for example by creating an async method which just waits on the task created by ThrowMultipleAsync – we end up with something like this:

AggregateException with 1 nested exception(s):
 AggregateException with 1 nested exception(s):
  AggregateException with 2 nested exception(s):
   Exception: Went bang after 500
   Exception: Went bang after 1000

You can imagine that for a deep stack trace of async methods, this could get messy really quickly.

However, I don’t think that losing the information is really the answer. There’s already the Flatten method in AggregateException which will flatten the tree appropriately. I’d be reasonably happy for the exceptions to be flattened at any stage, but I really don’t like the behaviour of losing them.
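
Just to show what that buys us, here’s a quick standalone sketch – nothing asynchronous, just the AggregateException type itself – demonstrating that Flatten collapses the tree while keeping every leaf exception:

```csharp
using System;

class FlattenDemo
{
    static void Main()
    {
        // The same shape we saw above: an AggregateException wrapping
        // an AggregateException wrapping the real errors.
        var inner = new AggregateException(
            new Exception("Went bang after 500"),
            new Exception("Went bang after 1000"));
        var outer = new AggregateException(inner);

        // Flatten removes the nesting but keeps every leaf exception.
        AggregateException flat = outer.Flatten();
        Console.WriteLine(flat.InnerExceptions.Count); // 2
        foreach (Exception leaf in flat.InnerExceptions)
        {
            Console.WriteLine(leaf.Message);
        }
    }
}
```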

It does get complicated by how the async language feature has to handle exceptions, however. Only one exception can ever be thrown at a time, even though a task can have multiple exceptions set on it. One option would be for the autogenerated code to handle AggregateException differently, setting all the nested exceptions separately (in the single task which has been returned) rather than either setting the AggregateException which causes nesting (as we’ve seen above) or relying on the awaiter picking just one exception (as is currently the case). It’s definitely a decision I think the community should get involved with.

Conclusion

As we’ve seen, the current behaviour of async methods doesn’t match the TAP documentation or what I’d personally like.

This isn’t down to the language features, but it’s the default behaviour of the extension methods which provide the "awaiter" for Task. That doesn’t mean the language aspect can’t be changed, however – some responsibility could be moved from awaiters to the generated code. I’m sure there are pros and cons each way – but I don’t think losing information is the right approach.

Next up: using extension method resolution rules to add diagnostics to task awaiters.

Configuring waiting

One of the great things about working at Google is that almost all of my colleagues are smarter than me. True, they don’t generally know as much about C#, but they know about language design, and they sure as heck know about distributed/parallel/async computing.

One of the great things about having occasional contact with the C# team is that when Mads Torgersen visits London later in the week, I can introduce him to these smart colleagues. So, I’ve been spreading the word about C# 5’s async support and generally encouraging folks to think about the current proposal so they can give feedback to Mads.

One particularly insightful colleague has persistently expressed a deep concern over who gets to control how the asynchronous execution works. This afternoon, I found some extra information which looks like it hasn’t been covered much so far which may allay his fears somewhat. It’s detailed in the Task-based Asynchronous Pattern documentation, which I strongly recommend you download and read right now.

More than ever, this post is based on documentation rather than experimentation. Please take with an appropriately large grain of salt.

What’s the problem?

In a complex server handling multiple types of request and processing them asynchronously – with some local CPU-bound tasks and other IO-bound tasks – you may well not want everything to be treated equally. Some operations (health monitoring, for example) may require high priority and a dedicated thread pool; some may be latency-sensitive but support load balancing easily (so it’s fine to have a small pool of medium/high priority tasks, trusting load balancing to avoid overloading the server); and some may be latency-insensitive but server-specific – a pool of low-priority threads with a large queue may be suitable there, perhaps.

If all of this is going to work, you need to know for each asynchronous operation:

  • Whether it will take up a thread
  • What thread will be chosen, if one is required (a new one? one from a thread pool? which pool?)
  • Where the continuation will run (on the thread which initiated the asynchronous operation? the thread the asynchronous operation ran on? a thread from a particular pool?)

In many cases reusable low-level code doesn’t know this context… but in the async model, it’s that low-level code which is responsible for actually starting the task. How can we reconcile the two requirements?

Controlling execution flow from the top down

Putting the points above into the concrete context of the async features of C# 5:

  • When an async method is called, it will start on the caller’s thread
  • When it creates a task (possibly as the target of an await expression) that task has control over how it will execute
  • The awaiter created by an await expression has control (or at the very least significant influence) over where the next part of the async method (the continuation) is executed
  • The caller gets to decide what they will do with the returned task (assuming there is one) – it may be the target of another await expression, or it may be used more directly without any further use of the new language features

Whether a task requires an extra thread really is pretty much up to the task. A task will be either IO-bound, CPU-bound, or a mixture (perhaps IO-bound to fetch data, and then CPU-bound to process it). As far as I can tell, it’s assumed that IO-bound asynchronous tasks will all use IO completion ports, leaving no real choice available. On other platforms, there may be other choices – there may be multiple IO channels for example, some reserved for higher priority traffic than others. Although the TAP doesn’t explicitly call this out, I suspect that other platforms could create a similar concept of context to the one described below, but for IO-specific operations.

The two concepts that TAP appears to rely on (and I should be absolutely clear that I could be misreading things; I don’t know as much about the TPL that all of this is based on as I’d like) are a SynchronizationContext and a TaskScheduler. The exact difference between the two remains slightly hazy to me, as both give control over which thread delegates are executed on – but I get the feeling that SynchronizationContext is aimed at describing the thread you should return to for callbacks (continuations) and TaskScheduler is aimed at describing the thread you should run work on – whether that’s new work or getting back for a continuation. (In other words, TaskScheduler is more general than SynchronizationContext – so you can use it for continuations, but you can also use it for other things.)

One vital point is that although these aren’t enforced, they are designed to be the easiest way to carry out work. If there are any places where that isn’t true, that probably represents an issue. For example, the TaskEx.Run method (which will be Task.Run eventually) always uses the default TaskScheduler rather than the current TaskScheduler – so tasks started in that way will always run on the system thread pool. I have doubts about that decision, although it fits in with the general approach of TPL to use a single thread pool.
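
To make the division of responsibility concrete, here’s a toy scheduler of my own – not a framework type – which executes every queued task synchronously on whichever thread queues it. Handing it to StartNew shows that it’s the scheduler, rather than the task body, which decides where the work runs:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Toy scheduler: every queued task runs synchronously on the thread
// which queued it. A real scheduler would hand work to its own pool.
class InlineTaskScheduler : TaskScheduler
{
    protected override void QueueTask(Task task)
    {
        TryExecuteTask(task);
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        return TryExecuteTask(task);
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return Enumerable.Empty<Task>();
    }
}

class SchedulerDemo
{
    static void Main()
    {
        int mainThread = Thread.CurrentThread.ManagedThreadId;
        int taskThread = 0;

        // The overload taking a scheduler: the scheduler, not the task,
        // decides which thread the delegate runs on.
        Task task = Task.Factory.StartNew(
            () => { taskThread = Thread.CurrentThread.ManagedThreadId; },
            CancellationToken.None,
            TaskCreationOptions.None,
            new InlineTaskScheduler());
        task.Wait();

        Console.WriteLine(taskThread == mainThread); // True
    }
}
```

A real scheduler would queue to its own prioritised pool, of course – but the override points are the same three shown here.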

If everything representing an async operation follows the TAP, it should be possible to control how things are scheduled "from this point downwards" in async methods.

ConfigureAwait, SwitchTo, Yield

Various "plain static" and extension methods have been provided to make it easy to change your context within an async method.

SwitchTo allows you to change your context to the ThreadPool or a particular TaskScheduler or Dispatcher. You may not need to do any more work on a particular high priority thread until you’ve actually got your final result – so you’re happy with the continuations being executed either "inline" with the asynchronous tasks you’re executing, or on a random thread pool thread (perhaps from some specific pool).  This may also allow the new CPU-bound tasks to be scheduled appropriately too (I thought it did, but I’m no longer sure). Once you’ve got all your ducks in a row, then you can switch back for the final continuation which needs to provide the results on your original thread.

ConfigureAwait takes an existing task and returns a TaskAwaiter – essentially allowing you to control just the continuation part.

Yield does exactly what it sounds like – yields control temporarily, basically allowing for cooperative multitasking by allowing other work to make progress before continuing. I’m not sure that this one will be particularly useful, personally – it feels a little too much like Application.DoEvents. I dare say there are specialist uses though – in particular, it’s cleaner than Application.DoEvents because it really is yielding, rather than running the message pump in the current stack.

All of these are likely to be used in conjunction with await. For example (these are not expected to all be in the same method, of course!):

// Continue in custom context (may affect where CPU-bound tasks are run too)
await customScheduler.SwitchTo();

// Now get back to the dispatcher thread to manipulate the UI
await control.Dispatcher.SwitchTo();

var task = new WebClient().DownloadStringTaskAsync(url);
// Don’t bother continuing on this thread after downloading; we don’t
// care for the next bit.
await task.ConfigureAwait(flowContext: false);

foreach (Job job in jobs)
{
    // Do some work that has to be done in this thread
    job.Process();

    // Let someone else have a turn – we may have a lot to
    // get through.
    // This will be Task.Yield eventually
    await TaskEx.Yield();
}

Is this enough?

My gut feeling is that this will give enough control over the flow of the application if:

  • The defaults in TAP are chosen appropriately so that the easiest way of starting a computation is also an easily "top-down-configurable" one
  • The top-level application programmer pays attention to what they’re doing, and configures things appropriately
  • Each component programmer lower down pays attention to the TAP and doesn’t do silly things like starting arbitrary threads themselves

In other words, everyone has to play nicely. Is that feasible in a complex system? I suspect it has to be really. If you have any "rogue" elements they’ll manage to screw things up in any system which is flexible enough to meet real-world requirements.

My colleague’s concern is (I think – I may be misrepresenting him) largely that the language shouldn’t be neutral about how the task and continuation are executed. It should allow or even force the caller to provide context. That would make the context hard to ignore lower down. The route I believe Microsoft has chosen is to do this implicitly by propagating context through the "current" SynchronizationContext and TaskScheduler, in the belief that developers will honour them.

We’ll see.

Conclusion

A complex asynchronous system is like a concerto played by an orchestra. Each musician is responsible for keeping time, but they are given direction from the conductor. It only takes one viola player who wants to play fast and loud to ruin the whole effect – so everyone has to behave. How do you force the musicians to watch the conductor? How much do you trust them? How easy is it to conduct in the first place? These are the questions which are hard to judge from documentation, frankly. I’m currently optimistic that by the time C# 5 is actually released, the appropriate balance will have been struck, the default tempo will be appropriate, and we can all listen to some beautiful music. In the key of C#, of course.

Evil code – overload resolution workaround

Another quick break from asynchrony, because I can’t resist blogging about this thoroughly evil idea which came to me on the train.

Your task: to write three static methods such that this C# 4 code:

static void Main() 
{ 
    Foo<int>(); 
    Foo<string>(); 
    Foo<int?>(); 
}

resolves one call to each of them – and will act appropriately for any non-nullable value type, reference type, and nullable value type respectively.

You’re not allowed to change anything in the Main method above, and they have to just be methods – no tricks using delegate-type fields, for example. (I don’t know whether such tricks would help you or not, admittedly. I suspect not.) It can’t just call one method which then determines other methods to call at execution time – we want to resolve this at compile time.

If you want to try this for yourself, look away now. I’ve deliberately included an attempt which won’t work below, so that hopefully you won’t see the working solution accidentally.

The simple (but failed) attempt

You might initially want to try this:

class Test 
{ 
    static void Foo<T>() where T : class {} 

    static void Foo<T>() where T : struct {} 

    // Let's hope the compiler thinks this is "worse"
    // than the others because it has no constraints 
    static void Foo<T>() {} 

    static void Main() 
    { 
        Foo<int>(); 
        Foo<int?>(); 
        Foo<string>(); 
    }  
}

That’s no good at all. I wrote about why it’s no good in this very blog, last week. The compiler only checks generic constraints on the type parameters after overload resolution.

Fail.

First steps towards a solution

You may remember that the compiler does check that parameter types make sense when working out the candidate set. That gives us some hope… all we’ve got to do is propagate our desired constraints into parameters.

Ah… but we’re calling a method with no arguments. So there can’t be any parameters, right?

Wrong. We can have an optional parameter. Okay, now we’re getting somewhere. What parameter type can we use to make the method valid only when a generic type parameter T is a non-nullable value type? The simplest option which occurs to me is Nullable<T> – that has exactly the constraint we want. So, we end up with a method like this:

static void Foo<T>(T? ignored = default(T?)) where T : struct {}

Okay, so that’s the first call sorted out – it will be valid for the above method, but neither of the others will.

What about the reference type parameter? That’s slightly trickier – I can’t think of any common generic types in the framework which require their type parameters to be reference types. There may be one, of course – I just can’t think of one offhand. Fortunately, it’s easy to declare such a type ourselves, and then use it in another method:

class ClassConstraint<T> where T : class {} 

static void Foo<T>(ClassConstraint<T> ignored = default(ClassConstraint<T>))
    where T : class {}

Great. Just one more to go. Unfortunately, there’s no constraint which only satisfies nullable value types… Hmm.

The awkwardness of nullable value types

We want to effectively say, “Use this method if neither of the other two work – but use the other methods in preference.” Now if we weren’t already using optional parameters, we could potentially do it that way – by introducing a single optional parameter, we could have a method which was still valid for the other calls, but would be deemed “worse” by overload resolution. Unfortunately, overload resolution takes a binary view of optional parameters: either the compiler is having to fill in some parameters itself, or it’s not. It doesn’t think that filling in two parameters is “worse” than only filling in one.

Luckily, there’s a way out… inheritance to the rescue! (It’s not often you’ll hear me say that.)

The compiler will always prefer applicable methods in a derived class to applicable methods in a base class, even if they’d otherwise be better. So we can write a parameterless method with no type constraints at all in a base class. We can even keep it as a private method, so long as we make the derived class a nested type within its own base class.
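
That preference is easy to demonstrate in isolation (with deliberately artificial names). Here the base class method is an exact match for the argument, but the derived class method – applicable only via the implicit int-to-long conversion – still wins:

```csharp
using System;

class OverloadBase
{
    public static string Called;

    // An exact match for the argument below... but never chosen.
    public void M(int x) { Called = "base"; }
}

class OverloadDerived : OverloadBase
{
    // Applicable only via the implicit int -> long conversion, yet it
    // still wins: base-class candidates are only considered when no
    // applicable method exists in the derived class.
    public void M(long x) { Called = "derived"; }
}

class HidingDemo
{
    static void Main()
    {
        new OverloadDerived().M(5);
        Console.WriteLine(OverloadBase.Called); // derived
    }
}
```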

Final solution

This leads to the final code – this time with diagnostics to prove it works:

using System;
class Base 
{ 
    static void Foo<T>() 
    { 
        Console.WriteLine("nullable value type"); 
    }

    class Test : Base 
    { 
        static void Foo<T>(T? ignored = default(T?)) 
            where T : struct 
        { 
            Console.WriteLine("non-nullable value type"); 
        } 

        class ClassConstraint<T> where T : class {} 

        static void Foo<T>(ClassConstraint<T> ignored = default(ClassConstraint<T>))
            where T : class 
        { 
            Console.WriteLine("reference type"); 
        } 

        static void Main() 
        { 
            Foo<int>(); 
            Foo<string>(); 
            Foo<int?>(); 
        } 
    } 
}

And the output…

non-nullable value type 
reference type 
nullable value type

Conclusion

This is possibly the most horrible code I’ve ever written.

Please, please don’t use it in real life. Use different method names or something like that.

Still, it’s a fun little puzzle, isn’t it?

Dreaming of multiple tasks again… with occasional exceptions

Yesterday I wrote about waiting for multiple tasks to complete. We had three asynchronous tasks running in parallel, fetching a user’s settings, reputation and recent activity. Broadly speaking, there were two approaches. First we could use TaskEx.WhenAll (which will almost certainly be folded into the Task class for release):

var settingsTask = service.GetUserSettingsAsync(userId); 
var reputationTask = service.GetReputationAsync(userId); 
var activityTask = service.GetRecentActivityAsync(userId); 

await TaskEx.WhenAll(settingsTask, reputationTask, activityTask); 

UserSettings settings = settingsTask.Result; 
int reputation = reputationTask.Result; 
RecentActivity activity = activityTask.Result;

Second we could just wait for each result in turn:

var settingsTask = service.GetUserSettingsAsync(userId);  
var reputationTask = service.GetReputationAsync(userId);  
var activityTask = service.GetRecentActivityAsync(userId);  
      
UserSettings settings = await settingsTask; 
int reputation = await reputationTask; 
RecentActivity activity = await activityTask;

These look very similar, but actually they behave differently if any of the tasks fails:

  • In the first form we will always wait for all the tasks to complete; if the settings task fails within a millisecond but the recent activity task takes 5 minutes, we’ll be waiting 5 minutes. In the second form we only wait for one at a time, so if one task fails, we won’t wait for any currently-unawaited ones to complete. (Of course if the first two tasks both succeed and the last one fails, the total waiting time will be the same either way.)
  • In the first form we should probably get to find out about the errors from all the asynchronous tasks; in the second form we only see the errors from whichever task fails first.
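
The second point is easy to verify with WhenAll (shown here as Task.WhenAll, where it ended up in the framework; TaskEx.WhenAll in the CTP): waiting on the combined task surfaces every failure, whereas awaiting it would throw only the first.

```csharp
using System;
using System.Threading.Tasks;

class WhenAllErrors
{
    static void Main()
    {
        Task first = Task.Factory.StartNew(() => { throw new Exception("settings failed"); });
        Task second = Task.Factory.StartNew(() => { throw new Exception("activity failed"); });

        try
        {
            // Wait (rather than await) on the combined task...
            Task.WhenAll(first, second).Wait();
        }
        catch (AggregateException e)
        {
            // ...and the aggregate contains both failures.
            Console.WriteLine(e.Flatten().InnerExceptions.Count); // 2
        }
    }
}
```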

The second point is interesting, because in fact it looks like the CTP will throw away all but the first inner exception of an aggregated exception thrown by a Task that’s being awaited. That feels like a mistake to me, but I don’t know whether it’s by design or just due to the implementation not being finished yet. I’m pretty sure this is the same bit of code (in EndAwait for Task and Task<T>) which makes sure that we don’t get multiple levels of AggregateException wrapping the original exception as it bubbles up. Personally I’d like to at least be able to find all the errors that occurred in an asynchronous operation. Occasionally, that would be useful…

… but actually, in most cases I’d really like to just abort the whole operation as soon as any task fails. I think we’re missing a method – something like WhenAllSuccessful. If any operation is cancelled or faulted, the whole lot should end up being cancelled – with that cancellation propagating down the potential tree of async tasks involved, ideally. Now I still haven’t investigated cancellation properly, but I believe that the cancellation tokens of Parallel Extensions should make this all possible. In many cases we really need success for all of the operations – and we would like to communicate any failures back to our caller as soon as possible.

Now I believe that we could write this now – somewhat inefficiently. We could keep a collection of tasks which still haven’t completed, and wait for any of them to complete. At that point, look for all the completed ones in the set (because two could complete at the same time) and see whether any of them have faulted or been cancelled. If so, cancel the remaining operations and rethrow the exception (aka set our own task as faulted). If we ever get to the stage where all the tasks have completed – successfully – we just return so that the results can be fetched.
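
That description could be sketched roughly as follows – WhenAllSuccessful is a name I’ve invented, and this version cheerfully burns a blocking thread, so treat it as a statement of intent rather than a proper implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class TaskCombinators
{
    // Hypothetical combinator: succeeds only when every task succeeds,
    // but faults as soon as any task faults or is cancelled, cancelling
    // the supplied token source so well-behaved siblings can stop early.
    public static Task WhenAllSuccessful(CancellationTokenSource cts,
                                         params Task[] tasks)
    {
        // Deliberately inefficient: a dedicated thread doing blocking waits.
        return Task.Factory.StartNew(() =>
        {
            var remaining = tasks.ToList();
            while (remaining.Count > 0)
            {
                // Block until at least one task finishes.
                Task.WaitAny(remaining.ToArray());

                // Several tasks may have completed at once; check them all.
                foreach (Task done in remaining.Where(t => t.IsCompleted).ToList())
                {
                    if (done.IsFaulted || done.IsCanceled)
                    {
                        cts.Cancel();   // ask the others to give up
                        done.Wait();    // rethrows the failure, faulting us
                    }
                    remaining.Remove(done);
                }
            }
        });
    }
}
```

Note that waiting on the returned task still wraps the original AggregateException in another one – exactly the nesting problem discussed earlier – so a real version would probably use a TaskCompletionSource and set the inner exceptions directly.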

My guess is that this could be written more efficiently by the PFX team though. I’m actually surprised that there isn’t anything in the framework that does this. That usually means that either it’s there and I’ve missed it, or it’s not there for some terribly good reason that I’m too dim to spot. Either way, I’d really like to know.

Of course, all of this could still be implemented as extension methods on tuples of tasks, if we ever get language support for tuples. Hint hint.

Conclusion

It’s often easy to concentrate on the success path and ignore possible failure in code. Asynchronous operations make this even more of a problem, as different things could be succeeding and failing at the same time.

If you do need to write code like the second option above, consider ordering the various "await" statements so that the expected time taken in the failure case is minimized. Always consider whether you really need all the results in all cases… or whether any failure is enough to mess up your whole operation.

Oh, and if you know the reason for the lack of something like WhenAllSuccessful, please enlighten me in the comments :)

C# in Depth 2nd edition: ebook available, but soon to be updated

Just a quick interrupt while I know many of you are awaiting more asynchronous fun…

Over the weekend, the ebook of C# in Depth 2nd edition went out – and a mistake was soon spotted. Figure 2.2 was accidentally replaced by figure 13.1. I’ve included it in the book’s errata but we’re hoping to issue another version of the ebook shortly. Fortunately this issue only affects the ebook version – the files shipped to the printer are correct. Speaking of which, I believe the book should come off the printing press some time this week, so it really won’t be much longer before you can all physically scribble in the margins.

We’re going to give it a couple of days to see if anything else is found (and I’m going to check all the figures to see if the same problem has manifested itself elsewhere) – but I expect we’ll be issuing the second "final" version of the ebook late this week.

EDIT: A lot of people have asked about an epub/mobi version of the ebook. I don’t have any dates on it, but I know it’s something Manning is keen on, and there’s a page declaring that all new releases will have an epub/mobi version. I’m not sure how that’s all going to pan out just yet, but rest assured that it’s important to me too.

Control flow redux: exceptions in asynchronous code

Warning: as ever, this is only the result of reading the spec and experimentation. I may well have misinterpreted everything. Eric Lippert has said that he’ll blog about exceptions soon, but I wanted to put down my thoughts first, partly to see the difference between what I’ve worked out and what the real story is.

So far, I’ve only covered "success" cases – where tasks complete without being cancelled or throwing exceptions. I’m leaving cancellation for another time, but let’s look at what happens when exceptions are thrown by async methods.

What happens when an async method throws an exception?

There are three types of async methods:

  • Ones that are declared to return void
  • Ones that are declared to return Task
  • Ones that are declared to return Task<T> for some T

The distinction between Task and Task<T> isn’t important in terms of exceptions. I’ll call async methods that return Task or Task<T> taskful methods, and ones that return void taskless methods. These aren’t official terms and they’re not even nice terms, but I don’t have any better ones for the moment.

It’s actually pretty easy to state what happens when an exception is thrown – but the ramifications are slightly more complicated:

  • If code in a taskless method throws an exception, the exception propagates up the stack
  • If code in a taskful method throws an exception, the exception is stored in the task, which transitions to the faulted state
    • If we’re still in the original context, the task is then returned to the caller
    • If we’re in a continuation, the method just returns

The inner bullet points are important here. At any time it’s executing, an async method is either still in its original context – i.e. the caller is one level up the stack – or it’s in a continuation, which takes the form of an Action delegate. In the latter case, we must have previously returned control to the caller, usually returning a task (in the "taskful method" case).

This means that if you call a taskful method, you should expect to be given a task without an exception being thrown. An exception may well be thrown if you wait for the result of that task (possibly via an await operation) but the method itself will complete normally. (Of course, there’s always the possibility that we’ll run out of memory while constructing the task, or other horrible situations. I think it’s fair to classify those as pathological and ignore them for most applications.)

A taskless method is much more dangerous: not only might it throw an exception to the original caller, but it might alternatively throw an exception to whatever calls the continuation. Note that it’s the awaiter that gets to determine that for any await operation… it may be an awaiter which uses the current SynchronizationContext for example, or it may be one which always calls the continuation on a new thread… or anything else you care to think of. In some cases, that may be enough to bring down the process. Maybe that’s what you want… or maybe not. It’s worth being aware of.
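
One way to see "whatever calls the continuation" receiving the exception is to install a SynchronizationContext which captures anything posted to it. (This sketch reflects the behaviour as it ended up shipping, where even a taskless method’s exceptions are delivered via the context that was current when the method started, rather than being thrown straight to the caller.)

```csharp
using System;
using System.Threading;

// A context which captures, rather than rethrows, anything posted to it.
class CatchingContext : SynchronizationContext
{
    public Exception Caught;

    public override void Post(SendOrPostCallback d, object state)
    {
        try { d(state); }
        catch (Exception e) { Caught = e; }
    }
}

class TasklessDemo
{
    public static async void GoBang()
    {
        throw new Exception("Bang!");
    }

    static void Main()
    {
        var context = new CatchingContext();
        SynchronizationContext.SetSynchronizationContext(context);

        // From the caller's point of view, this completes normally...
        GoBang();

        // ...but the exception was delivered via the context instead.
        Console.WriteLine(context.Caught.Message);
    }
}
```

With no context installed, that exception would have nowhere sensible to go – which is exactly why a taskless method can bring down the process.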

Here’s a trivial app to demonstrate the more common taskful behaviour – although it’s unusual in that we have an async method with no await statements:

using System;
using System.Threading.Tasks;

public class Test
{
    static void Main()
    {
        Task task = GoBangAsync();
        Console.WriteLine("Method completed normally");
        task.Wait();
    }
    
    static async Task GoBangAsync()
    {
        throw new Exception("Bang!");
    }
}

And here’s the result:

Method completed normally

Unhandled Exception: System.AggregateException: One or more errors occurred. 
        ---> System.Exception: Bang!
   at Test.<GoBangAsync>d__0.MoveNext()
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
   at Test.Main()

As you can see, the exception was only thrown when we waited for the asynchronous task to complete – and it was wrapped in an AggregateException which is the normal behaviour for tasks.

If an awaited task throws an exception, that is propagated to the async method which was awaiting it. You might expect this to result in an AggregateException wrapping the original AggregateException and so on, but it seems that something is smart enough to perform some unwrapping. I’m not sure what yet, but I’ll investigate further when I get more time. EDIT: I’m pretty sure it’s the EndAwait code used when you await a Task or Task<T>. There’s certainly no mention of AggregateException in the spec, so I don’t believe the compiler-generated code does any of this.

How eagerly can we validate arguments?

If you remember, iterator blocks have a bit of a usability problem when it comes to argument validation: because the iterator code is only run when the caller first starts iterating, it’s hard to get eager validation. You basically need to have one non-iterator-block method which validates the arguments, then calls the "real" implementation with known-to-be valid arguments. (If this doesn’t ring any bells, you might want to read this blog post, where I’m coming up with an implementation of LINQ’s Where method.)
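
As a refresher, the iterator-block version of that pattern looks like this (a cut-down Where):

```csharp
using System;
using System.Collections.Generic;

static class Enumerable2
{
    // Public method: validates eagerly, before any iteration happens.
    public static IEnumerable<T> Where<T>(this IEnumerable<T> source,
                                          Func<T, bool> predicate)
    {
        if (source == null) throw new ArgumentNullException("source");
        if (predicate == null) throw new ArgumentNullException("predicate");
        return WhereImpl(source, predicate);
    }

    // Private iterator block: only runs when iteration actually starts,
    // so it can safely assume its arguments are valid.
    private static IEnumerable<T> WhereImpl<T>(IEnumerable<T> source,
                                               Func<T, bool> predicate)
    {
        foreach (T item in source)
        {
            if (predicate(item)) yield return item;
        }
    }
}
```

Calling Where with a null argument throws immediately; only WhereImpl is turned into a state machine, so the validation runs eagerly.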

We’re in a similar situation here, if we want arguments to be validated eagerly, causing an exception to be thrown directly to the caller. As an example of this, what would you expect this code to do? (Note that it doesn’t involve us writing any async methods at all.)

using System;
using System.Net;
using System.Threading.Tasks;

class Test
{
    static void Main()
    {
        Uri uri = null;
        Task<string> task = new WebClient().DownloadStringTaskAsync(uri);
    }
}

It could throw an exception eagerly, or it could set the exception into the return task. In many cases this will have a very similar effect – if you call DownloadStringTaskAsync as part of an await statement, for example. But it’s something you should be aware of anyway, as sometimes you may well want to call such methods outside the context of an async method.

In this particular case, the exception is thrown eagerly – so even though we’re not trying to wait on a task, the above program blows up. So, how could we achieve the same thing?

First let’s look at the code which wouldn’t work:

// This will only throw the exception when the caller waits
// on the returned task.
public async Task<string> DownloadStringTaskAsync(Uri uri)
{
    if (uri == null)
    {
        throw new ArgumentNullException("uri");
    }
        
    // Good, we’ve got an argument… now we can use it.
    // Real implementation goes here.
    return "Just a dummy implementation";
}

The problem is that we’re in an async method, so the compiler is writing code to catch any exceptions we throw, and propagate them through the task instead. We can get round this by using exactly the same trick as with iterator blocks – using a first non-async method which then calls an async method after validating the arguments:

public Task<string> DownloadStringTaskAsync(Uri uri)
{
    if (uri == null)
    {
        throw new ArgumentNullException("uri");
    }
        
    // Good, we’ve got an argument… now we can use it.
    return DownloadStringTaskAsyncImpl(uri);
}
    
private async Task<string> DownloadStringTaskAsyncImpl(Uri uri)
{
    // Real implementation goes here.
    return "Just a dummy implementation";
}

There’s a nicer solution though – because C# 5 allows us to make anonymous functions (anonymous methods or lambda expressions) asynchronous too. So we can create a delegate which will return a task, and then call it:

public Task<string> DownloadStringTaskAsync(Uri uri)
{
    if (uri == null)
    {
        throw new ArgumentNullException("uri");
    }
        
    // Good, we’ve got an argument… now we can use it.
    Func<Task<string>> taskBuilder = async delegate {
        // Real implementation goes here.
        return "Just a dummy implementation";
    };
    return taskBuilder();
}

This is slightly neater for methods which don’t need an awful lot of code. For more involved methods, it’s quite possibly worth using the "split the method in two" approach instead.

Conclusion

The general exception flow in asynchronous methods is actually reasonably straightforward – which is a good job, as normally error handling in asynchronous flows is a pain.

You need to be aware of the consequences of writing (or calling) a taskless asynchronous method… I expect that the vast majority of asynchronous methods and delegates will be taskful ones.

Finally, you also need to work out when you want exceptions to be thrown. If you want to perform argument validation, decide whether it should throw exceptions eagerly – and if so, use one of the patterns shown above. (I haven’t spent much time thinking about which approach is "better" yet – I generally like eager argument validation, but I also like the consistency of all errors being propagated through the task.)

Next up: dreaming of multiple possibly faulting tasks.