In the last post I showed a method to implement "majority voting" for tasks, allowing a result to become available as soon as possible. At the end, I mentioned that I was reasonably confident that it worked because of the unit tests… but I didn’t show the tests themselves. I felt they deserved their own post, as there’s a bigger point here: it’s possible to unit test async code. At least sometimes.
Testing code involving asynchrony is generally a pain. Introducing the exact order of events that you want is awkward, as is managing the threading within tests. However, async methods give us a few benefits:
- We know that the async method itself will only execute in a single thread at a time
- We can control the thread in which the async method will execute, if it doesn’t configure its awaits explicitly
- Assuming the async method returns Task or Task<T>, we can check whether or not it’s finished
- Between Task<T> and TaskCompletionSource<T>, we have a way of injecting tasks that we understand
Now in our sample method we have the benefit of passing in the tasks that will be awaited – but assuming you’re using some reasonably testable API to fetch any awaitables within your async method, you should be okay. (Admittedly in the current .NET framework that excludes rather a lot of classes… but the synchronous versions of those calls are also generally hard to test too.)
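As a quick illustration of that last point, TaskCompletionSource<T> lets a test hand a task to production code while keeping full control over when and how it completes. A minimal sketch:

```csharp
using System;
using System.Threading.Tasks;

class TaskCompletionSourceDemo
{
    static void Main()
    {
        // The test holds the "control panel"...
        var tcs = new TaskCompletionSource<string>();
        // ...while the production code only ever sees the task.
        Task<string> task = tcs.Task;

        Console.WriteLine(task.IsCompleted);  // False

        // Complete it whenever - and however - the test chooses.
        tcs.SetResult("x");
        Console.WriteLine(task.Status);       // RanToCompletion
        Console.WriteLine(task.Result);       // x
    }
}
```

The same source could instead call SetException or SetCanceled, which is exactly the flexibility the time machine below exploits.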
The plan
For our majority tests, we want to be able to see what happens in various scenarios, with tasks completing at different times and in different ways. I've implemented the following tests:
- NullSequenceOfTasks
- EmptySequenceOfTasks
- NullReferencesWithinSequence
- SimpleSuccess
- InputOrderIsIrrelevant
- MajorityWithSomeDisagreement
- MajorityWithFailureTask
- EarlyFailure
- NoMajority
I’m not going to claim this is a comprehensive set of possible tests – it’s a proof of concept more than anything else. Let’s take one test as an example: MajorityWithFailureTask. The aim of this is to pass three tasks (of type Task<string>) into the method. One will give a result of "x", the second will fail with an exception, and the third will also give a result of "x". The events will occur in that order, and only when all three results are in should the returned task complete, at which point it will also have a success result of "x".
So, the tricky bit (compared with normal testing) is introducing the timing. We want to make it appear as if tasks are completing in a particular order, at predetermined times, so we can check the state of the result between events.
Introducing the TimeMachine class
Okay, so it’s a silly name. But the basic idea is to have something to control the logical flow of time through our test. We’re going to ask the TimeMachine to provide us with tasks which will act in a particular way at a given time, and then when we’ve started our async method we can then ask it to move time forward, letting the tasks complete as they go. It’s probably best to look at the code for MajorityWithFailureTask first, and then see what the implementation of TimeMachine looks like. Here’s the test:
[Test]
public void MajorityWithFailureTask()
{
var timeMachine = new TimeMachine();
// Second task fails with an exception
var task1 = timeMachine.AddSuccessTask(1, "x");
var task2 = timeMachine.AddFaultingTask<string>(2, new Exception("Bang!"));
var task3 = timeMachine.AddSuccessTask(3, "x");
var resultTask = MoreTaskEx.WhenMajority(task1, task2, task3);
Assert.IsFalse(resultTask.IsCompleted);
// Only one result so far – no consensus
timeMachine.AdvanceTo(1);
Assert.IsFalse(resultTask.IsCompleted);
// Second result is a failure
timeMachine.AdvanceTo(2);
Assert.IsFalse(resultTask.IsCompleted);
// Third result gives majority verdict
timeMachine.AdvanceTo(3);
Assert.AreEqual(TaskStatus.RanToCompletion, resultTask.Status);
Assert.AreEqual("x", resultTask.Result);
}
As you can see, there are two types of method:
- AddSuccessTask / AddFaultingTask / AddCancelTask (not used here) – these all take the time at which they’re going to complete as their first parameter, and the method name describes the state they’ll reach on completion. The methods return the task created by the time machine, ready to pass into the production code we’re testing.
- AdvanceTo / AdvanceBy (not used here) – make the time machine "advance time", completing pre-programmed tasks as it goes. When those tasks complete, any continuations attached to them also execute, which is how the whole thing hangs together.
Now forcing tasks to complete is actually pretty simple, if you build them out of TaskCompletionSource<T> to start with. So all we need to do is keep our tasks in "time" order (which I achieve with SortedList), and then when we’re asked to advance time we move through the list and take the appropriate action for all the tasks which weren’t completed before, but are now. I represent the "appropriate action" as a simple Action, which is built with a lambda expression from each of the Add methods. It’s really simple:
public class TimeMachine
{
private int currentTime = 0;
private readonly SortedList<int, Action> actions = new SortedList<int, Action>();
public int CurrentTime { get { return currentTime; } }
public void AdvanceBy(int time)
{
AdvanceTo(currentTime + time);
}
public void AdvanceTo(int time)
{
// Okay, not terribly efficient, but it’s simple.
foreach (var entry in actions)
{
if (entry.Key > currentTime && entry.Key <= time)
{
entry.Value();
}
}
currentTime = time;
}
public Task<T> AddSuccessTask<T>(int time, T result)
{
TaskCompletionSource<T> tcs = new TaskCompletionSource<T>();
actions[time] = () => tcs.SetResult(result);
return tcs.Task;
}
public Task<T> AddCancelTask<T>(int time)
{
TaskCompletionSource<T> tcs = new TaskCompletionSource<T>();
actions[time] = () => tcs.SetCanceled();
return tcs.Task;
}
public Task<T> AddFaultingTask<T>(int time, Exception e)
{
TaskCompletionSource<T> tcs = new TaskCompletionSource<T>();
actions[time] = () => tcs.SetException(e);
return tcs.Task;
}
}
Okay, that’s a fair amount of code for a blog post (and yes, it could do with some doc comments etc!) but considering that it makes life testable, it’s pretty simple.
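For completeness, here’s a sketch of how the two methods not used in the earlier test might appear in another one. (This isn’t one of the tests listed above; it just exercises AddCancelTask and AdvanceBy directly.)

```csharp
[Test]
public void CancelledTaskSketch()
{
    var timeMachine = new TimeMachine();
    var task = timeMachine.AddCancelTask<string>(5);

    Assert.IsFalse(task.IsCompleted);
    timeMachine.AdvanceBy(3);   // Now at time 3 - nothing has happened yet
    Assert.IsFalse(task.IsCompleted);
    timeMachine.AdvanceBy(2);   // Now at time 5 - the task is cancelled
    Assert.AreEqual(TaskStatus.Canceled, task.Status);
}
```

AdvanceBy is just relative rather than absolute time, which can make a long test easier to read when you only care about the ordering of events.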
So, is that it?
It works on my machine… with my test runner… in simple cases…
When I first ran the tests using TimeMachine, they worked almost immediately. This didn’t surprise me nearly as much as it should have done. You see, when the tests execute, they use async/await in the normal way – which means the continuations are scheduled on "the current task scheduler". I have no idea what the current task scheduler is in unit tests. Or rather, it feels like something which is implementation-specific. It could easily have worked when running the tests from ReSharper, but not from NCrunch, or not from the command-line NUnit test runner.
As it happens, I believe all of these run tests on thread pool threads with no task scheduler allocated, which means that the continuation is attached to the task and executes "in-line" – so when the TimeMachine sets the result on a TaskCompletionSource, the continuations execute before that call returns. That means everything happens on one thread, with no ambiguity or flakiness – yay!
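That behaviour is easy to observe in a console application, where the main thread (like a thread pool thread) has no SynchronizationContext. A sketch – note the hedge in the comment, since this is behaviour I’ve observed rather than a documented guarantee:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class InlineContinuationDemo
{
    static async Task AwaitAndReport(Task<string> task)
    {
        await task;
        Console.WriteLine("Continuation ran on thread {0}",
                          Thread.CurrentThread.ManagedThreadId);
    }

    static void Main()
    {
        var tcs = new TaskCompletionSource<string>();
        Task pending = AwaitAndReport(tcs.Task);

        Console.WriteLine("Completing on thread {0}",
                          Thread.CurrentThread.ManagedThreadId);
        // With no SynchronizationContext, the await continuation
        // typically runs inside this call, on this thread...
        tcs.SetResult("done");
        // ...so by the time SetResult returns, pending has completed.
        Console.WriteLine("SetResult returned; pending.IsCompleted = {0}",
                          pending.IsCompleted);
    }
}
```

Both WriteLine calls report the same thread ID, which is what makes the single-threaded testing approach work at all.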
However, there are two problems:
- The words "I believe" aren’t exactly confidence-inspiring when it comes to testing that your software works correctly.
- Our majority voting code only ever sees one completed task at a time – we’re not testing the situation where several tasks complete so quickly together that the continuation doesn’t get a chance to run before they’ve all finished.
Both of these are solvable with a custom TaskScheduler or SynchronizationContext. Without diving into the docs, I’m not sure yet which I’ll need, but the aim will be:
- Make TimeMachine implement IDisposable
- In the constructor, set the current SynchronizationContext (or TaskScheduler) to a custom one, remembering what the previous one was
- On disposal, reset the context
- Make the custom scheduler keep a queue of jobs, such that when we’re asked to advance to time T, we complete all the appropriate tasks but don’t execute any continuations, then we execute all the pending continuations.
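I haven’t written it yet, but a manually-pumped SynchronizationContext along these lines is one plausible shape for the last step. The class and member names here are mine, not from any framework – treat it as a sketch of the idea rather than the eventual implementation:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical sketch: queues posted continuations instead of running
// them immediately, so a test can release them all in one go.
public sealed class ManuallyPumpedSynchronizationContext : SynchronizationContext
{
    private readonly Queue<Action> queue = new Queue<Action>();

    public override void Post(SendOrPostCallback d, object state)
    {
        // Don't run the continuation now - just remember it.
        queue.Enqueue(() => d(state));
    }

    // Runs every continuation queued so far, including any that are
    // queued as a side-effect of running earlier ones.
    public void PumpAll()
    {
        while (queue.Count > 0)
        {
            queue.Dequeue()();
        }
    }
}
```

AdvanceTo could then complete all the due tasks first and call PumpAll afterwards, so that several tasks genuinely appear to have finished "at the same time" from the async method’s point of view.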
I don’t yet know how hard it will be, but hopefully the Parallel Extensions Samples will help me.
Conclusion
I’m not going to claim this is "the" way of unit testing asynchronous methods. It’s clearly a proof-of-concept implementation of what can only be called a "test framework" in the loosest possible sense. However, I hope it gives an example of a path we might take. I’m looking forward to seeing what others come up with, along with rather more polished implementations.
Next time, I’m going to shamelessly steal an idea that a reader mailed me (with permission, of course). It’s insanely cool, simple and yet slightly brain-bending, and I suspect it will come in handy in many situations. Love it.