Configuring waiting

One of the great things about working at Google is that almost all of my colleagues are smarter than me. True, they don’t generally know as much about C#, but they know about language design, and they sure as heck know about distributed/parallel/async computing.

One of the great things about having occasional contact with the C# team is that when Mads Torgersen visits London later in the week, I can introduce him to these smart colleagues. So, I’ve been spreading the word about C# 5’s async support and generally encouraging folks to think about the current proposal so they can give feedback to Mads.

One particularly insightful colleague has persistently expressed a deep concern over who gets to control how the asynchronous execution works. This afternoon, I found some extra information which doesn’t appear to have been covered much so far, and which may allay his fears somewhat. It’s detailed in the Task-based Asynchronous Pattern documentation, which I strongly recommend you download and read right now.

More than ever, this post is based on documentation rather than experimentation. Please take it with an appropriately large grain of salt.

What’s the problem?

In a complex server handling multiple types of request and processing them asynchronously – with some local CPU-bound tasks and other IO-bound tasks – you may well not want everything to be treated equally. Some operations (health monitoring, for example) may require high priority and a dedicated thread pool, some may be latency sensitive but support load balancing easily (so it’s fine to have a small pool of medium/high priority tasks, trusting load balancing to avoid overloading the server) and some may be latency-insensitive but be server-specific – a pool of low-priority threads with a large queue may be suitable here, perhaps.
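
To make that concrete, here’s a minimal sketch of the sort of arrangement I have in mind. Everything in it is my own invention rather than part of the TAP: the three schedulers are assumed to be supplied from elsewhere (perhaps custom TaskScheduler implementations along the lines of the ParallelExtensionsExtras samples), and the point is simply that each category of request gets a TaskFactory bound to its own scheduler.

using System;
using System.Threading.Tasks;

// Sketch only: RequestDispatcher and its scheduler parameters are hypothetical.
// A TaskFactory can be permanently bound to one scheduler, so work started
// through each factory runs on "its" pool.
public class RequestDispatcher
{
    private readonly TaskFactory healthFactory;      // dedicated, high priority
    private readonly TaskFactory requestFactory;     // small pool, latency-sensitive
    private readonly TaskFactory backgroundFactory;  // low priority, large queue

    public RequestDispatcher(TaskScheduler healthScheduler,
                             TaskScheduler requestScheduler,
                             TaskScheduler backgroundScheduler)
    {
        healthFactory = new TaskFactory(healthScheduler);
        requestFactory = new TaskFactory(requestScheduler);
        backgroundFactory = new TaskFactory(backgroundScheduler);
    }

    public Task HandleHealthCheck(Action check)
    {
        return healthFactory.StartNew(check);
    }

    public Task HandleUserRequest(Action work)
    {
        return requestFactory.StartNew(work);
    }

    public Task HandleBatchJob(Action work)
    {
        return backgroundFactory.StartNew(work);
    }
}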

If all of this is going to work, you need to know for each asynchronous operation:

  • Whether it will take up a thread
  • What thread will be chosen, if one is required (a new one? one from a thread pool? which pool?)
  • Where the continuation will run (on the thread which initiated the asynchronous operation? the thread the asynchronous operation ran on? a thread from a particular pool?)

In many cases reusable low-level code doesn’t know this context… but in the async model, it’s that low-level code which is responsible for actually starting the task. How can we reconcile the two requirements?

Controlling execution flow from the top down

Putting the points above into the concrete context of the async features of C# 5:

  • When an async method is called, it will start on the caller’s thread
  • When it creates a task (possibly as the target of an await expression) that task has control over how it will execute
  • The awaiter created by an await expression has control (or at the very least significant influence) over where the next part of the async method (the continuation) is executed
  • The caller gets to decide what they will do with the returned task (assuming there is one) – it may be the target of another await expression, or it may be used more directly without any further use of the new language features

Whether a task requires an extra thread really is pretty much up to the task. A task will be either IO-bound, CPU-bound, or a mixture (perhaps IO-bound to fetch data, and then CPU-bound to process it). As far as I can tell, it’s assumed that IO-bound asynchronous tasks will all use IO completion ports, leaving no real choice available. On other platforms, there may be other choices – there may be multiple IO channels for example, some reserved for higher priority traffic than others. Although the TAP doesn’t explicitly call this out, I suspect that other platforms could create a similar concept of context to the one described below, but for IO-specific operations.

The two concepts that TAP appears to rely on (and I should be absolutely clear that I could be misreading things; I don’t know as much as I’d like about the TPL that all of this is based on) are a SynchronizationContext and a TaskScheduler. The exact difference between the two remains slightly hazy to me, as both give control over which thread delegates are executed on – but I get the feeling that SynchronizationContext is aimed at describing the thread you should return to for callbacks (continuations) and TaskScheduler is aimed at describing the thread you should run work on – whether that’s new work or getting back for a continuation. (In other words, TaskScheduler is more general than SynchronizationContext – so you can use it for continuations, but you can also use it for other things.)
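
To illustrate that distinction, here’s a small sketch. This is purely my own illustration using plain TPL types from .NET 4, not something taken from the TAP documentation, and the helper methods are placeholders:

using System.Threading;
using System.Threading.Tasks;

// A SynchronizationContext is something you Post callbacks back to; a
// TaskScheduler decides where task delegates run. The two meet in
// TaskScheduler.FromCurrentSynchronizationContext().
class ContextVersusScheduler
{
    // uiContext is assumed to be the context of some UI (or otherwise affinitized) thread.
    static void Demo(SynchronizationContext uiContext, byte[] data)
    {
        // SynchronizationContext: "get this callback back onto the right thread".
        uiContext.Post(state => UpdateUi((byte[]) state), data);

        // TaskScheduler: "run this work somewhere appropriate".
        Task<int> work = Task.Factory.StartNew(() => Process(data),
            CancellationToken.None, TaskCreationOptions.None, TaskScheduler.Default);

        // The bridge between the two: a scheduler which executes its tasks by
        // posting them to a SynchronizationContext. (This call has to be made
        // on a thread where SynchronizationContext.Current is the one you want.)
        TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();
        work.ContinueWith(t => ShowResult(t.Result), uiScheduler);
    }

    static void UpdateUi(byte[] data) { }
    static int Process(byte[] data) { return data.Length; }
    static void ShowResult(int result) { }
}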

One vital point is that although these aren’t enforced, they are designed to be the easiest way to carry out work. If there are any places where that isn’t true, that probably represents an issue. For example, the TaskEx.Run method (which will be Task.Run eventually) always uses the default TaskScheduler rather than the current TaskScheduler – so tasks started in that way will always run on the system thread pool. I have doubts about that decision, although it fits in with the general approach of TPL to use a single thread pool.
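
As a tiny example of the difference this makes – TaskEx.Run is the CTP spelling, and I’m assuming its simple Action overload here; the custom scheduler is hypothetical:

using System.Threading;
using System.Threading.Tasks;

class SchedulerChoice
{
    static void StartWork(TaskScheduler customScheduler)
    {
        // Always uses the default scheduler (i.e. the system thread pool),
        // regardless of any ambient scheduler.
        Task onThreadPool = TaskEx.Run(() => DoCpuBoundWork());

        // Explicitly targets the scheduler of your choice.
        Task onCustom = Task.Factory.StartNew(() => DoCpuBoundWork(),
            CancellationToken.None, TaskCreationOptions.None, customScheduler);

        // With no scheduler argument, StartNew uses TaskScheduler.Current -
        // it "inherits" the scheduler of the task it is started from, falling
        // back to the default scheduler otherwise.
        Task onAmbient = Task.Factory.StartNew(() => DoCpuBoundWork());
    }

    static void DoCpuBoundWork() { }
}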

If everything representing an async operation follows the TAP, it should make it possible to control how things are scheduled "from this point downwards" in async methods.

ConfigureAwait, SwitchTo, Yield

Various "plain static" and extension methods have been provided to make it easy to change your context within an async method.

SwitchTo allows you to change your context to the ThreadPool or a particular TaskScheduler or Dispatcher. You may not need to do any more work on a particular high priority thread until you’ve actually got your final result – so you’re happy with the continuations being executed either "inline" with the asynchronous tasks you’re executing, or on a random thread pool thread (perhaps from some specific pool). This may allow new CPU-bound tasks to be scheduled appropriately too (I thought it did, but I’m no longer sure). Once you’ve got all your ducks in a row, you can switch back for the final continuation which needs to provide the results on your original thread.

ConfigureAwait takes an existing task and returns a TaskAwaiter – essentially allowing you to control just the continuation part.

Yield does exactly what it sounds like – yields control temporarily, basically allowing for cooperative multitasking by allowing other work to make progress before continuing. I’m not sure that this one will be particularly useful, personally – it feels a little too much like Application.DoEvents. I dare say there are specialist uses though – in particular, it’s cleaner than Application.DoEvents because it really is yielding, rather than running the message pump in the current stack.

All of these are likely to be used in conjunction with await. For example (these are not expected to all be in the same method, of course!):

// Continue in custom context (may affect where CPU-bound tasks are run too)
await customScheduler.SwitchTo();

// Now get back to the dispatcher thread to manipulate the UI
await control.Dispatcher.SwitchTo();

var task = new WebClient().DownloadStringTaskAsync(url);
// Don’t bother continuing on this thread after downloading; we don’t
// care for the next bit.
await task.ConfigureAwait(false);

foreach (Job job in jobs)
{
    // Do some work that has to be done in this thread
    job.Process();

    // Let someone else have a turn – we may have a lot to
    // get through.
    // This will be Task.Yield eventually
    await TaskEx.Yield();
}

Is this enough?

My gut feeling is that this will give enough control over the flow of the application if:

  • The defaults in TAP are chosen appropriately so that the easiest way of starting a computation is also an easily "top-down-configurable" one
  • The top-level application programmer pays attention to what they’re doing, and configures things appropriately
  • Each component programmer lower down pays attention to the TAP and doesn’t do silly things like starting arbitrary threads themselves

In other words, everyone has to play nicely. Is that feasible in a complex system? I suspect it has to be really. If you have any "rogue" elements they’ll manage to screw things up in any system which is flexible enough to meet real-world requirements.

My colleague’s concern is (I think – I may be misrepresenting him) largely that the language shouldn’t be neutral about how the task and continuation are executed. It should allow or even force the caller to provide context. That would make the context hard to ignore lower down. The route I believe Microsoft has chosen is to do this implicitly by propagating context through the "current" SynchronizationContext and TaskScheduler, in the belief that developers will honour them.
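
To picture what that implicit propagation looks like, here’s a minimal sketch. LoggingContext is entirely my own invention; the point is just that the top-level code establishes an ambient SynchronizationContext, and well-behaved awaits lower down will resume via its Post method without ever being handed it explicitly.

using System.Threading;

// Sketch only: a context which routes continuations through "our" policy.
// A real implementation might pick a pool, apply priorities or record
// diagnostics; this one just uses the thread pool.
class LoggingContext : SynchronizationContext
{
    public override void Post(SendOrPostCallback d, object state)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            // Re-establish the ambient context so that further awaits in the
            // continuation keep flowing through this policy.
            SetSynchronizationContext(this);
            d(state);
        });
    }
}

class Program
{
    static void Main()
    {
        // The top-level coordinator sets the ambient context...
        SynchronizationContext.SetSynchronizationContext(new LoggingContext());

        // ...and from here on, any async method which awaits while this
        // context is current will have its continuation posted back through
        // LoggingContext.Post.
    }
}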

We’ll see.

Conclusion

A complex asynchronous system is like a concerto played by an orchestra. Each musician is responsible for keeping time, but they are given direction from the conductor. It only takes one viola player who wants to play fast and loud to ruin the whole effect – so everyone has to behave. How do you force the musicians to watch the conductor? How much do you trust them? How easy is it to conduct in the first place? These are the questions which are hard to judge from documentation, frankly. I’m currently optimistic that by the time C# 5 is actually released, the appropriate balance will have been struck, the default tempo will be appropriate, and we can all listen to some beautiful music. In the key of C#, of course.

13 thoughts on “Configuring waiting”

  1. I have also been thinking of this. The TPL doesn’t address passing “hints” to the task-creating Async methods – this could include both TaskCreationOptions and TaskScheduler.

    However, this would really raise the bar for Async method implementations – they’d have to be carefully written to allow this level of flexibility.

    If you haven’t taken a look at the task schedulers in the pfx extensions, I think you’d find a lot of interesting examples in them:
    http://blogs.msdn.com/b/pfxteam/archive/2010/04/04/9990342.aspx
    The ConcurrentExclusiveInterleave task scheduler is part of the Async CTP Dataflow (as ConcurrentExclusiveSchedulerPair).

  2. Hi Jon, thanks for an insightful post. Mads’ visit is at the worst possible time – I live in London now, but will be away later this week!

    Anyway, I think it is useful to quote one bit from the TAP documentation:

    _If a SynchronizationContext is associated with the thread executing the asynchronous method at the time of suspension (such that SynchronizationContext.Current is non-null), the resumption of the asynchronous method will take place on that same SynchronizationContext through usage of the context’s Post method._

    If I understand it correctly, this means that the task (created using await) will continue to run in the same SynchronizationContext. This may be a set of threads such as thread pool or a single thread (e.g. the GUI thread) or some other resource.

    This is probably quite related to the section “How is background work started?” of my recent blog post http://tomasp.net/blog/async-csharp-differences.aspx.

    I wrote a few things about the starting behavior because the default C# option corresponds to ‘StartImmediate’ in F#, but you can configure it to behave like F# ‘Start’ (which doesn’t return to the original context).

  3. I’ve had lots of conversations around things like thread priority and having ThreadPool instances. Every time, when every scenario is thought through to the end, the same conclusion is reached: having one thread pool with all threads being equal is the best solution.
    I don’t think that you should be worried about letting one task get bumped up in priority over another. Design a system which allows for all possible orderings. Doing anything else will result in bugs.
    If your application has multiple thread pool instances, you end up with some pools whose threads sit idle while other pools have more work items than they can keep up with.
    Understanding the intention of the ThreadPool (executing short work items which don’t block), and then designing around that, gives the most resource-efficient yet scalable solution.
    If everyone had to think about what priority their task ran at, they would think “My task is the most important thing in the world and deserves the highest of priorities”. We can’t help it, we’re human: the thing we’re working on at that point in time is always the most important thing.

  4. @Tomas: I’m not quite sure I follow you, because await itself doesn’t create a task – it creates a continuation to run *after* the task has completed. But yes, the SynchronizationContext is key to where the continuation is run. What controls where any new CPU-bound tasks are run is harder to find. Obviously it depends on what the code within the method does, but the information is less clear.

    There’s a PDF/PowerPoint document somewhere (I can’t find the link right now) which goes through design decisions such as the F#/C# cold/hot scenarios, which is good.

    @jader3rd: That’s why it needs to be up to the high level coordinator to decide how each job should really be run… so that the job itself doesn’t get to choose that it’s the most important thing in the world.

    Obviously I can’t go into details of internal Google architecture, but on the projects I’ve worked on, it’s been pretty much a requirement to have multiple thread pools.

  5. @Stephen: My hope would be that the tasks *wouldn’t* have to be carefully written… that the simplest option would do “the right thing” where the caller could decide that. It’s definitely a tricky business, but I think it’s an area which will need addressing. I haven’t had a close look at the DataFlow bits yet – just skimmed the docs.

  6. You’re going to think this is an offense, but it isn’t… it’s just amazing how the heck you manage to be the first guy answering on SOF, blogging, tweeting and working at Google in the 24 hours of a day.

    It wouldn’t surprise me if you did even more stuff in between.

    I look at my own day and I’m frustrated: the work I needed to finish is only half done, I eat and sleep “on the run”, and I barely have time to read your updates… just amazing!

    Keep up the good work by posting and sharing all your experiments and ideas.

    Regards,
    byte_slave

  7. The difference between SynchronizationContext and TaskScheduler is that SynchronizationContext is a generalization of the ISynchronizeInvoke interface from System.ComponentModel, and is intended for similar purposes – making sure that code which has thread affinity is executed on the right thread. TaskScheduler is roughly a generalization of ThreadPool but in a task-oriented (i.e. work item) fashion.

    What complicates matters is that you can get a TaskScheduler instance which dispatches all its tasks in a particular SynchronizationContext (i.e. serially) with the FromCurrentSynchronizationContext method.

  8. “it only takes one viola player who wants to play fast and loud to ruin the whole effect – so everyone has to behave”

    This is not specific to asynchronous programming: when you reuse someone else’s library, or call someone else’s code, if the code you’re calling is poorly written, it will screw you up. For example, if you call a slow sort function from your code, it will make *your* code slow. There is really no difference here.

    The default behavior of await should work fine without configuration in 99.9% of situations, though. It continues on the same synchronization context if there is one when await is used, which is the least surprising behavior, and uses the default scheduler (the thread pool) otherwise, which is really, really good in all sorts of situations. The only situation where I can see configuring this being useful is switching from the UI thread to the thread pool for a long running computation, to avoid blocking the UI thread.

  9. @Flavien: I entirely agree with your point about regular libraries. That’s pretty much what I said to my colleague :)

    And yes, for *most* apps one thread pool and the default behaviour is probably good enough. I’ll be interested to ask Mads whether TPL+async should be good enough for companies who want to write services like the ones we write at Google though, where more control really *is* needed.

  10. I’m afraid I’m not sure what the fuss is all about here. The Microsoft folks seem to me to be being reasonably clear that the new TAP features in the language are implementation-agnostic.

    Yes, there are some default behaviors provided by .NET, but the language doesn’t really care about that. It has its rules about the “awaiter”, and specific methods that are required to exist, but beyond that, C# doesn’t care.

    So, want some specific behavior from your async methods? Just make sure you’re using your own custom implementation that can provide that level of control.

    Ironically, the position you represent your colleague as having seems exactly wrong to me. A language that isn’t neutral about how the task and continuation are executed winds up imposing its own constraints on you. You _do_ want a language that’s neutral about those things, because that allows you to provide the implementation details that are best-suited to your scenario.

    And indeed, it seems to me that all the concerns being described here are in fact implementation details. And it really seems to me that a computer language should be as neutral about implementation details as it can be, focusing instead on the semantics of what the programmer wants to get done.

    I’m also puzzled by this statement: “As far as I can tell, it’s assumed that IO-bound asynchronous tasks will all use IO completion ports, leaving no real choice available.” I don’t see anything in the new C# features that make any such assumption. It’s entirely up to the async method to decide how the task will be carried out, including i/o tasks. By wrapping the existing Begin…/End… pattern, naturally i/o done the conventional way is going to use IOCP (and this is a very good thing). But if you don’t want IOCP to be used, you just implement the async method some other way. It doesn’t _have_ to call the asynchronous methods.

    Anyway, this all seems like much ado about nothing. Which, I guess, should be reassuring to anyone concerned about it. :)

  11. @Peter: Responding to your points one at a time:

    – Yes, I agree that the language is agnostic on most fronts. It requires Task/Task<T>, but I’m comfortable with that. And yes, you can make sure that all of your own tasks use a specific implementation, although due to namespace clashes it’s at least *somewhat awkward* to provide your own awaiting behaviour. It’s doable with tricks (as per other posts) but a little awkward.

    – In terms of my colleague’s view on the language agnosticism: I’m with you, actually. I think in this respect TAP is correct. However, he’s a smart cookie and if he’s concerned, I want to talk about those concerns, see if they resonate with anyone else, try to understand them better etc.

    – The IO-bound async tasks not having choice statement is indeed not codified in the language anywhere, and is up to the implementation. I was merely trying to say that with the assumed default position of TAP, there are fewer choices to make.

    – Whether it’s much ado or not… well, I still think there’s a kernel of genuine concern. Yes, you can replace almost everything – but I think it makes sense for TPL itself to at least try to provide you with everything you need to start with. (And bear in mind that you *can’t* replace the Task type. If you need something it doesn’t provide, you’re in trouble.) I still think the idea of context which is propagated to subtrees of async tasks is important – and reading some feedback from others, it’s potentially important for reasons beyond scheduling, too. (In particular, it would be useful for diagnostic purposes. See http://research.google.com/pubs/pub36356.html for example.)

    But your point is also taken that many of these ideas aren’t entirely baked – I’m voicing concerns/opinions as they occur to me, at the moment, thinking that MS would likely want that feedback ASAP for two reasons:
    – It gives more time to change things that need changing
    – It gives an idea of what other people might be thinking, and therefore what messaging may be required for things that won’t be changed.
