All posts by jonskeet

LINQ to Rx: second impressions

My previous post had the desired effect: it generated discussion on the LINQ to Rx forum, and Erik and Wes very kindly sent me a very detailed response too. There’s no better way to cure ignorance than to display it to the world.

Rather than regurgitating the email verbatim, I’ve decided to try to write it in my own words, with extra thoughts where appropriate. That way if I’ve misunderstood anything, I may be corrected – and the very act of trying to explain all this is likely to make me explore it more deeply than I would otherwise.

I’m leaving out the bits I don’t yet understand. One of the difficulties with LINQ to Rx at the moment is that the documentation is somewhat sparse – there are loads of videos, and at least there is a CHM file for each of the assemblies bundled in the framework, but many methods just have a single sentence of description. This is entirely understandable – the framework is still in flux, after all. I’d rather have the bits but sparse docs than immaculate docs for a framework I can’t play with – but it makes it tricky to go deeper unless you’ve got time to experiment extensively. There’s an rxwiki site which looks like it may be the community’s attempt to solve this problem – but it needs a bit more input, I think. When I get a bit of time to breathe, I’d like to try to contribute there.

The good news is that I don’t think there were any mechanical aspects that I got definitively wrong in what I wrote… but the bad news is that I wasn’t thinking in Rx terms. We’ll look at the different aspects separately.

Subscriptions and collections

My first "complaint" was about the way that IEnumerable<T>.ToObservable() worked. Just to recap, I was expecting a three stage startup process:

  • Create the observable
  • Subscribe any observers
  • Tell the observable to "start"

Instead, as soon as an observer subscribes, the observable publishes everything in the sequence to it (on a different thread, by default). Separate calls to Subscribe make the observable iterate over the sequence multiple times.

Now, my original viewpoint makes sense if you think of Subscribe as being like an event subscription. It feels like something which should be passive: another viewer turning on their television to watch a live broadcast.

However, as soon as you think of IObservable.Subscribe as being the dual of IEnumerable.GetEnumerator, the Rx way makes more sense. Each call to GetEnumerator starts the sequence from scratch, and so does each call to Subscribe. This is more like inserting a disc into the DVD player – you’re still watching something, but there’s a more active element to it. You put the DVD in, it starts playing. I guess following the analogy further would make my suggested model more like a PVR :)

Additionally, this "subscription as an action" model makes more sense of methods like Return and Repeat, and also works better as a reusable object: my own idea of "now push the collection" feels dreadfully stateful: why can’t I push the collection twice? What happens if an observer subscribes after I’ve pushed?

I suspect this will trip up many an Rx neophyte; the video Wes recorded on hot and cold observables should help. Admittedly I’d already watched it before writing the blog post, so I’ve no excuse…

The subscription model can effectively be modified via composition though; using Subject (as per the blog post), AsyncSubject (which remembers the last value it sees, and yields it only on completion), BehaviorSubject (which remembers the most recent value it’s seen, and replays it to new subscribers), and ReplaySubject (which remembers everything it sees, optionally limited by a buffer) you can do quite a bit.
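
As a quick example of the last of those, here’s a sketch using the modern System.Reactive namespaces rather than the preview bits I was playing with, but the idea is the same:

using System;
using System.Reactive.Subjects;

class ReplayDemo
{
    static void Main()
    {
        var subject = new ReplaySubject<int>();
        subject.OnNext(1);
        subject.OnNext(2);

        // A "late" observer still sees everything published so far,
        // because the subject buffers what it has seen.
        subject.Subscribe(x => Console.WriteLine("Saw {0}", x));
        subject.OnNext(3);
        subject.OnCompleted();
    }
}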

Wes included in his email a StartableObservable which one could start and stop. I’d come up with a slightly similar idea at home – an ObservableSequence (not nearly such a good name) – but mine was limited to sequences: effectively it made the steps listed above explicit for a pull sequence. The code Wes provided was completely isolated from IEnumerable<T>: you create a StartableObservable from any existing observable, subscribe to it, then start it. It uses a Subject to collect the subscriptions – starting the observable merely subscribes that subject to the original observable passed into the constructor.
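
Just to make the shape concrete, here’s my own reconstruction of the idea – this is not Wes’s actual code, and it uses the modern System.Reactive namespaces:

using System;
using System.Reactive.Subjects;

public sealed class StartableObservable<T> : IObservable<T>
{
    private readonly IObservable<T> source;
    private readonly Subject<T> subject = new Subject<T>();

    public StartableObservable(IObservable<T> source)
    {
        this.source = source;
    }

    // Observers attach to the subject, so nothing is pushed yet.
    public IDisposable Subscribe(IObserver<T> observer)
    {
        return subject.Subscribe(observer);
    }

    // Starting merely subscribes the subject to the underlying source;
    // disposing the returned subscription stops the flow.
    public IDisposable Start()
    {
        return source.Subscribe(subject);
    }
}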

The difference between Wes’s solution and mine is more fundamental than his being more general-purpose (although it clearly is). Wes didn’t have to go outside the world of Rx at all. All the building blocks were there; he just put them together – and ended up with another building block, ready to be used with the rest. That’s a common theme in this blog post :)

Asynchronous aggregation

I did get one thing right in my previous post: my suggestion that there should be asynchronous versions of the aggregation operators is apparently not a bad one. We may see the framework come with such things in the future… but they won’t revolve around Task<T>.

What do we have to represent an asynchronous computation? Why, IObservable<T> of course. It will present the result at some future point in time. Ideally, I suppose you would deal with the count (or maximum line length, or whatever) by reacting to it asynchronously too. If necessary though, you can always just take the value produced and stash it somewhere… which is exactly what an AsyncSubject does, as mentioned above. You can get the value from that by just calling First() on it, which will block if the value hasn’t been seen yet – and you don’t need to worry about "missing" it, because of the caching within the subject.
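
As it turns out, later versions of Rx did exactly this – in today’s System.Reactive, Count() already returns IObservable<int> – so the "stash the value in an AsyncSubject and block only when you need it" idea can be sketched as follows, with Wait() standing in for the old blocking First():

using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;

class AsyncAggregationDemo
{
    static void Main()
    {
        var subject = new AsyncSubject<int>();

        // Count() here is already asynchronous: it completes with a
        // single value, which the AsyncSubject caches on completion.
        Observable.Range(0, 1000)
                  .Count()
                  .Subscribe(subject);

        // Blocks until the value is available; there's no risk of
        // "missing" it, because the subject replays the cached result.
        Console.WriteLine(subject.Wait());
    }
}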

When I started this blog post, I didn’t understand Prune, but I’ve found that writing about the whole process has made it somewhat clearer to me. Calling Prune on an observable returns an AsyncSubject – one which also unsubscribes itself from the original observable when the subject is disposed, allowing a more orderly cleanup. So, all we need to do is call Prune on the result of our asynchronous aggregation, and we’re away.

That’s one part of the "non-Rx" framework removed… what else can we take out of the code from the previous blog post? Well, if you look at the FutureAggregate method I posted, it does two things: maintains a running aggregate, and publishes the last result (via a Task<T>). Now the "maintain a running aggregate" looks remarkably like Scan, doesn’t it? All the future aggregates (FutureCount etc) can be built from one new building block: an observable which subscribes to an existing one, and yields the last value it sees before completion.
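
Here’s a sketch of that building block in today’s System.Reactive terms: Scan maintains the running aggregate, TakeLast(1) gives "the last value before completion", and the StartWith call just makes an empty sequence yield the seed, matching FutureAggregate’s behaviour:

using System;
using System.Reactive.Linq;

public static class ObservableAggregates
{
    public static IObservable<TResult> RunningAggregate<TSource, TResult>(
        this IObservable<TSource> source,
        TResult seed,
        Func<TResult, TSource, TResult> aggregation)
    {
        // Scan publishes each intermediate aggregate; TakeLast(1) keeps
        // only the final one, yielded when the source completes.
        return source.Scan(seed, aggregation).StartWith(seed).TakeLast(1);
    }

    public static IObservable<int> AggregateCount<T>(this IObservable<T> source)
    {
        return source.RunningAggregate(0, (count, _) => count + 1);
    }
}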

I’ll check with Wes whether he’s happy for me to share his code – if so, I’ll put that and the original code into a zip file so it’s easy to compare the dull version with the shiny one.

Conclusion

It’s not enough to be able to think about Rx. To really appreciate it, you’ve got to be able to think in Rx. As I’d written a sort of "mini-Rx" before, I was arrogant enough to assume I already knew how to think in observable sequences… but apparently not. (To be fair to myself, it’s been a while and Push LINQ didn’t try to do anything genuinely asynchronous.)

I’m certainly not "in the zone" yet when it comes to Rx… but I think I can see it in the distance now. I’m heartily glad I raised my concerns over asynchronous aggregation – partly as encouragement to the team to consider including them in the framework, but mostly because it’s helped me appreciate the framework a lot better. With any luck, these two somewhat "stream of consciousness" posts may have helped you too.

Now to go over what I wrote last night for the book, and see how much of it was rubbish :)

First encounters with Reactive Extensions

I’ve been researching Reactive Extensions for the last few days, with an eye to writing a short section in chapter 12 of the second edition of C# in Depth. (This is the most radically changed chapter from the first edition; it will be covering LINQ to SQL, IQueryable, LINQ to XML, Parallel LINQ, Reactive Extensions, and writing your own LINQ to Objects operators.) I’ve watched various videos from Channel 9, but today was the first time I actually played with it. I’m half excited, and half disappointed.

My excited half sees that there’s an awful lot to experiment with, and loads to learn about join patterns etc. I’m also looking forward to trying genuine events (mouse movements etc) – so far my tests have been to do with collections.

My disappointed half thinks it’s missing something. You see, Reactive Extensions shares some concepts with my own Push LINQ library… except it’s had smarter people (no offense meant to Marc Gravell) working harder on it for longer. I’d expect it to be easier to use, and make it a breeze to do anything you could do in Push LINQ. Unfortunately, that’s not quite the case.

Subscription model

First, the way that subscription is handled for collections seems slightly odd. I’ve been imagining two kinds of observable sources:

  • Genuine "event streams" which occur somewhat naturally – for instance, mouse movement events. Subscribing to such an observable wouldn’t do anything to it other than adding subscribers.
  • Collections (and the like) where the usual use case is "set up the data pipeline, then tell it to go". In that case calling Subscribe should just add the relevant observers, but not actually "start" the sequence – after all, you may want to add more observers (we’ll see an example of this in a minute).

In the latter case, I could imagine an extension method to IEnumerable<T> called ToObservable which would return a StartableObservable<T> or something like that – you’d subscribe what you want, and then call Start on the StartableObservable<T>. That’s not what appears to happen though – if you call ToObservable(), you get an implementation which iterates over the source sequence as soon as anything subscribes to it – which just doesn’t feel right to me. Admittedly it makes life easy in the case where that’s really all you want to do, but it’s a pain otherwise.

There’s a way of working round this in Reactive Extensions: Subject<T>, which is both an observer and an observable. You can create a Subject<T>, Subscribe all the observers you want (so as to set up the data pipeline) and then subscribe the subject to the real data source. It’s not exactly hard, but it took me a while to work out, and it feels a little unwieldy. The next issue was somewhat more problematic.

Blocking aggregation

When I first started thinking about Push LINQ, it was motivated by a scenario from the C# newsgroup: someone wanted to group a collection in a particular way, and then count how many items were in each group. This is effectively the "favourite colour voting" scenario outlined in the link at the top of this post. The problem is that the normal Count() call is blocking: it fetches items from a collection until there aren’t any more – it’s effectively in control of the execution flow. That means that if you call it in a grouping construct, the whole group has to be available before Count() returns. So you can’t stream an enormous data set, which is unfortunate.
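
To make that concrete, here’s the sort of LINQ to Objects query I mean (hypothetical file name; File.ReadLines needs System.IO and the query needs System.Linq):

// GroupBy has to consume the entire source before it can yield any
// group, and Count() then blocks until its group is complete - so an
// enormous log file means an enormous buffer.
var histogram = from line in File.ReadLines("huge.log")
                group line by line.Length into grouped
                select new { Length = grouped.Key, Count = grouped.Count() };

foreach (var entry in histogram)
{
    Console.WriteLine("Length: {0}; Count: {1}", entry.Length, entry.Count);
}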

In Push LINQ, I addressed this by making Count() return Future<int> instead of int. The whole query is evaluated, and then you can ask each future for its actual result. Unfortunately, that isn’t the approach that the Reactive Framework has taken – it still returns int from Count(). I don’t know the reason for this, but fortunately it’s somewhat fixable. We can’t change Observable of course, but we can add our own future-based extensions:

using System;
using System.Threading.Tasks;

public static class ObservableEx
{
    public static Task<TResult> FutureAggregate<TSource, TResult>
        (this IObservable<TSource> source,
        TResult seed, Func<TResult, TSource, TResult> aggregation)
    {
        TaskCompletionSource<TResult> result = new TaskCompletionSource<TResult>();
        TResult current = seed;
        source.Subscribe(value => current = aggregation(current, value),
            error => result.SetException(error),
            () => result.SetResult(current));
        return result.Task;
    }

    public static Task<int> FutureMax(this IObservable<int> source)
    {
        // TODO: Make this generic and throw exception on
        // empty sequence. Left as an exercise for the reader.
        return source.FutureAggregate(int.MinValue, Math.Max);
    }

    public static Task<int> FutureMin(this IObservable<int> source)
    {
        // TODO: Make this generic and throw exception on
        // empty sequence. Left as an exercise for the reader.
        return source.FutureAggregate(int.MaxValue, Math.Min);
    }

    public static Task<int> FutureCount<T>(this IObservable<T> source)
    {
        return source.FutureAggregate(0, (count, _) => count + 1);
    }
}

This uses Task<T> from Parallel Extensions, which gives us an interesting ability, as we’ll see in a moment. It’s all fairly straightforward – TaskCompletionSource<T> makes it very easy to specify a value when we’ve finished, or indicate that an error occurred. As mentioned in the comments, the maximum/minimum implementations leave something to be desired, but it’s good enough for a blog post :)

Using the non-blocking aggregation operators

Now that we’ve got our extension methods, how can we use them? First I decided to do a demo which would count the number of lines in a file, and find the maximum and minimum line lengths:

private static IEnumerable<string> ReadLines(string filename)
{
    using (TextReader reader = File.OpenText(filename))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            yield return line;
        }
    }
}

var subject = new Subject<string>();
var lengths = subject.Select(line => line.Length);
var min = lengths.FutureMin();
var max = lengths.FutureMax();
var count = lengths.FutureCount();
            
var source = ReadLines("../../Program.cs");
source.ToObservable(Scheduler.Now).Subscribe(subject);
Console.WriteLine("Count: {0}, Min: {1}, Max: {2}",
                  count.Result, min.Result, max.Result);

As you can see, we use the Result property of a task to find its eventual result – this call will block until the result is ready, however, so you do need to be careful about how you use it. Each line is only read from the file once, and pushed to all three observers, who carry their state around until the sequence is complete, whereupon they publish the result to the task.

I got this working fairly quickly – then went back to the "grouping lines by line length" problem I’d originally set myself. I want to group the lines of a file by their length (all lines of length 0, all lines of length 1 etc) and count each group. The result is effectively a histogram of line lengths. Constructing the query itself wasn’t a problem – but iterating through the results was. Fundamentally, I don’t understand the details of ToEnumerable yet, particularly the timing. I need to look into it more deeply, but I’ve got two alternative solutions for the moment.

The first is to implement my own ToList extension method. This simply creates a list and subscribes an observer which adds items to the list as it goes. There’s no attempt at "safety" here – if you access the list before the source sequence has completed, you’ll see whatever has been added so far. I am still just experimenting :) Here’s the implementation:

public static List<T> ToList<T>(this IObservable<T> source)
{
    List<T> ret = new List<T>();
    source.Subscribe(x => ret.Add(x));
    return ret;
}

Now we can construct a query expression, project each group using our future count, make sure we’ve finished pushing the source before we read the results, and everything is fine:

var subject = new Subject<string>();
var groups = from line in subject
             group line.Length by line.Length into grouped
             select new { Length = grouped.Key, Count = grouped.FutureCount() };
var results = groups.ToList();

var source = ReadLines("../../Program.cs");
source.ToObservable(Scheduler.Now).Subscribe(subject);
foreach (var group in results)
{
    Console.WriteLine("Length: {0}; Count: {1}", group.Length, group.Count.Result);
}

Note how the call to ToList is required before calling source.ToObservable(...).Subscribe – otherwise everything would have been pushed before we started collecting it.

All well and good… but there’s another way of doing it too. We’ve only got a single task being produced for each group – instead of waiting until everything’s finished before we dump the results to the console, we can use Task.ContinueWith to write it (the individual group result) out as soon as that group has been told that it’s finished. We force this extra action to occur on the same thread as the observer just to make things easier in a console app… but it all works very neatly:

var subject = new Subject<string>();
var groups = from line in subject
             group line.Length by line.Length into grouped
             select new { Length = grouped.Key, Count = grouped.FutureCount() };
                                    
groups.Subscribe(group =>
{
    group.Count.ContinueWith(
         x => Console.WriteLine("Length: {0}; Count: {1}",
                                group.Length, x.Result),
         TaskContinuationOptions.ExecuteSynchronously);
});
var source = ReadLines("../../Program.cs");
source.ToObservable(Scheduler.Now).Subscribe(subject);

Conclusion

That’s the lot, so far. It feels like I’m sort of in the spirit of Reactive Extensions, but that maybe I’m pushing it (no pun intended) in a direction which Erik and Wes either didn’t anticipate, or at least don’t view as particularly valuable/elegant. I very much doubt that they didn’t consider deferred aggregates – it’s much more likely that either I’ve missed some easy way of doing this, or there are good reasons why it’s a bad idea. I hope to find out which at some point… but in the meantime, I really ought to work out a more idiomatic example for C# in Depth.

“Magic” null argument testing

Warning: here be dragons. I don’t think this is the right way to check for null arguments, but it was an intriguing idea.

Today on Stack Overflow, I answered a question about checking null arguments. The questioner was already using an extension similar to my own one in MiscUtil, allowing code like this:

public void DoSomething(string name)
{
    name.ThrowIfNull("name");

    // Normal code here
}

That’s all very well, but it’s annoying to have to repeat the name part. Now in an ideal world, I’d say it would be nice to add an attribute to the parameter and have the check performed automatically (and when PostSharp works with .NET 4.0, I’m going to give that a go, mixing Code Contracts and AOP…) – but for the moment, how far can we go with extension methods?

I stand by my answer from that question – the code above is the simplest way to achieve the goal for the moment… but another answer raised the interesting prospect of combining anonymous types, extension methods, generics, reflection and manually-created expression trees. Now that’s a recipe for hideous code… but it actually works.

The idea is to allow code like this:

public void DoSomething(string name, string canBeNull, int foo, Stream input)
{
    new { name, input }.CheckNotNull();

    // Normal code here
}

That should check name and input, in that order, and throw an appropriate ArgumentNullException – including the parameter name – if one of them is null. It uses the fact that projection initializers in anonymous types use the primary expression’s name as the property name in the generated type, and the value of that expression ends up in the instance. Therefore, given an instance of an anonymous type created like the one above, we have both the name and the value despite having typed each only once.

Now obviously this could be done with normal reflection – but that would be slow as heck. No, we want to effectively find the properties once, and generate strongly typed delegates to perform the property access. That sounds like a job for Delegate.CreateDelegate, but it’s not quite that simple… to create the delegate, we’d need to know (at compile time) what the property type is. We could do that with another generic type, but we can do better than that. All we really need to know about the value is whether or not it’s null. So given a "container" type T, we’d like a bunch of delegates, one for each property, returning whether that property is null for a specified instance – i.e. a Func<T, bool>. And how do we build delegates at execution time with custom logic? We use expression trees…

I’ve now implemented this, along with a brief set of unit tests. The irony is that the tests took longer than the implementation (which isn’t very unusual) – and so did writing it up in this blog post. I’m not saying that it couldn’t be improved (and indeed in .NET 4.0 I could probably make the delegate throw the relevant exception itself) but it works! I haven’t benchmarked it, but I’d expect it to be nearly as fast as manual tests – insignificant in methods that do real work. (The same wouldn’t be true using reflection every time, of course.)

The full project including test cases is now available, but here’s the (almost completely uncommented) "production" code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Linq.Expressions;

public static class Extensions
{
    public static void CheckNotNull<T>(this T container) where T : class
    {
        if (container == null)
        {
            throw new ArgumentNullException("container");
        }
        NullChecker<T>.Check(container);
    }

    private static class NullChecker<T> where T : class
    {
        private static readonly List<Func<T, bool>> checkers;
        private static readonly List<string> names;

        static NullChecker()
        {
            checkers = new List<Func<T, bool>>();
            names = new List<string>();
            // We can’t rely on the order of the properties, but we
            // can rely on the order of the constructor parameters
            // in an anonymous type – and that there’ll only be
            // one constructor.
            foreach (string name in typeof(T).GetConstructors()[0]
                                             .GetParameters()
                                             .Select(p => p.Name))
            {
                names.Add(name);
                PropertyInfo property = typeof(T).GetProperty(name);
                // I’ve omitted a lot of error checking, but here’s
                // at least one bit…
                if (property.PropertyType.IsValueType)
                {
                    throw new ArgumentException
                        ("Property " + property + " is a value type");
                }
                ParameterExpression param = Expression.Parameter(typeof(T), "container");
                Expression propertyAccess = Expression.Property(param, property);
                Expression nullValue = Expression.Constant(null, property.PropertyType);
                Expression equality = Expression.Equal(propertyAccess, nullValue);
                var lambda = Expression.Lambda<Func<T, bool>>(equality, param);
                checkers.Add(lambda.Compile());
            }
        }

        internal static void Check(T item)
        {
            for (int i = 0; i < checkers.Count; i++)
            {
                if (checkers[i](item))
                {
                    throw new ArgumentNullException(names[i]);
                }
            }
        }
    }
}

Oh, and by some miracle, the expression tree worked first time. I’m no Marc Gravell, but I’m clearly improving :)

Update: Marc Gravell pointed out that the order of the results of Type.GetProperties isn’t guaranteed – something I should have remembered myself. However, the order of the constructor parameters will be the same as in the anonymous type initialization expression, so I’ve updated the code above to reflect that. Marc also showed how it could almost all be put into a single expression tree which returns either null (for no error) or the name of the "failing" parameter. Very clever :)
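
I don’t have Marc’s code to hand to share, but the idea can be sketched like this (my reconstruction, not his exact code; it slots in alongside NullChecker<T> in the Extensions class above and reuses the same usings): a chain of conditional expressions compiled into a single Func<T, string> which returns the name of the first null parameter, or null if everything is present.

private static class SingleTreeNullChecker<T> where T : class
{
    private static readonly Func<T, string> finder;

    static SingleTreeNullChecker()
    {
        ParameterExpression param = Expression.Parameter(typeof(T), "container");
        // Innermost expression: "no failure".
        Expression body = Expression.Constant(null, typeof(string));
        // Wrap a conditional around it for each constructor parameter,
        // working backwards so the first parameter is tested first.
        ParameterInfo[] parameters = typeof(T).GetConstructors()[0].GetParameters();
        for (int i = parameters.Length - 1; i >= 0; i--)
        {
            PropertyInfo property = typeof(T).GetProperty(parameters[i].Name);
            body = Expression.Condition(
                Expression.Equal(Expression.Property(param, property),
                                 Expression.Constant(null, property.PropertyType)),
                Expression.Constant(parameters[i].Name, typeof(string)),
                body);
        }
        finder = Expression.Lambda<Func<T, string>>(body, param).Compile();
    }

    internal static void Check(T item)
    {
        string firstNull = finder(item);
        if (firstNull != null)
        {
            throw new ArgumentNullException(firstNull);
        }
    }
}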

Custom value types are like buses

You wait years to write one… and then six of them come along at once.

(Cross-posted to the Noda Time blog and my coding blog as it’s relevant to both.)

When we started converting Joda Time to .NET, there was always going to be the possibility of using custom value types (structs) – an opportunity which isn’t available in Java. This has meant reducing the type hierarchy a fair amount, but that’s actually made things simpler. However, I didn’t realise quite how many we’d end up with – or how many would basically just wrap a long.

So far, we have 4 value types whose only field is a long. They are:

  • Instant: an instant on the theoretical timeline, which doesn’t know about days of the week, time zones etc. It has a single reference point – the Unix epoch – but that’s only so that we can represent it in a concrete fashion. The long represents the number of ticks since the Unix epoch.
  • LocalInstant: this is a tricky one to explain, and I’m still looking for the right way of doing so. The basic idea is that it represents a day and a time within that day, but without reference to a time zone or calendar system. So if I’m talking to someone in a different time zone who uses an Islamic calendar, we can agree on the idea of "3pm tomorrow" even if we have different ideas of what month that’s in and when it starts. A LocalInstant is effectively the instant at which that date/time would occur if you were considering it in UTC… but importantly it’s not a genuine instant, in that it doesn’t unambiguously represent a point in time.
  • Duration: a number of ticks, which can be added to an instant to get another instant. This is a pure number of ticks – again, it doesn’t really know anything about days, months etc (although you can find the duration for the number of ticks in a standard day – that’s not the same as adding one day to a date and time within a time zone though, due to daylight savings).
  • Offset: very much like a duration, but only used to represent the offset due to a time zone. This is possibly the most unusual of the value types in Noda, because of the operators using it. You can add an offset to an instant to get a local instant, or you can subtract an offset from a local instant to get an instant… but you can’t do those things the other way round.
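
To illustrate the asymmetry, here’s a minimal sketch of the idea – hypothetical member names, not Noda Time’s actual source:

public struct Offset
{
    private readonly long ticks;
    public Offset(long ticks) { this.ticks = ticks; }
    public long Ticks { get { return ticks; } }
}

public struct Instant
{
    private readonly long ticks;
    public Instant(long ticks) { this.ticks = ticks; }
    public long Ticks { get { return ticks; } }

    // Instant + Offset yields a LocalInstant...
    public static LocalInstant operator +(Instant instant, Offset offset)
    {
        return new LocalInstant(instant.Ticks + offset.Ticks);
    }
}

public struct LocalInstant
{
    private readonly long ticks;
    public LocalInstant(long ticks) { this.ticks = ticks; }
    public long Ticks { get { return ticks; } }

    // ...and LocalInstant - Offset yields an Instant; the type system
    // simply doesn't offer those operations the other way round.
    public static Instant operator -(LocalInstant local, Offset offset)
    {
        return new Instant(local.Ticks - offset.Ticks);
    }
}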

The part about the odd nature of the operators using Offset really gets to the heart of what I like about Joda Time and what makes Noda Time even better. You see, Joda Time already has a lot of types for different concepts, where .NET only has DateTime/DateTimeOffset/TimeSpan – having these different types and limiting what you can do with them helps to lead developers into the pit of success; the type system helps you naturally get things right.

However, the Joda Time API uses long internally to represent all of these, presumably for the sake of performance: Java doesn’t have custom value types, so you’d have to create a whole new object every time you manipulated anything. This could be quite significant in some cases. Using the types above has made the code a lot simpler and more obviously correct – except for a couple of cases where the code I’ve been porting appears to do some very odd things, which I’ve only noticed are odd because of the type system. James Keesey, who’s been porting the time zone compiler, has had similar experiences: after he converted his code to use the offset type with its asymmetric operators, a bug in the ported code immediately showed up as a compile-time error.

When I first saw the C# spec, I was dubious about the value of user-defined value types and operator overloading. Indeed I still suspect that both features are overused… but when they’re used appropriately, they’re beautiful.

Noda Time is still a long way from being a useful API, but I’m extremely pleased with how it’s shaping up.

Where do you benefit from dynamic typing?

Disclaimer: I don’t want this to become a flame war in the comments. I’m coming from a position of ignorance, and well aware of it. While I’d like this post to provoke thought, it’s not meant to be provocative in the common use of the term.

Chapter 14 of C# in Depth is about dynamic typing in C#. A couple of reviewers have justifiably said that I’m fairly keen on the mantra of "don’t use dynamic typing unless you need it" – and that possibly I’m doing dynamic typing a disservice by not pointing out more of its positive aspects. I completely agree, and I’d love to be more positive – but the problem is that I’m not (yet) convinced about why dynamic typing is something I would want to embrace.

Now I want to start off by making something clear: this is meant to be about dynamic typing. Often advocates for dynamically typed languages will mention:

  • REPL (read-eval-print-loop) abilities which allow for a very fast feedback loop while experimenting
  • Terseness – the lack of type names everywhere makes code shorter
  • Code evaluated at execution time (so config files can be scripts etc)

I don’t count any of these as benefits of dynamic typing per se. They’re benefits which often come alongside dynamic typing, but they’re not dependent on dynamic typing. The terseness argument is the one most closely tied to their dynamic nature, but various languages with powerful type inference show that being statically typed doesn’t mean having to specify type names everywhere. (C#’s var keyword is a very restricted form of type inference, compared with – say – that of F#.)

What I’m talking about is binding being performed at execution time and only at execution time. That allows for:

  • Duck typing
  • Dynamic reaction to previously undeclared messages
  • Other parts of dynamic typing I’m unaware of (how could there not be any?)
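
Just to pin down what I mean by the first two, here’s a contrived C# 4 sketch (entirely hypothetical types and names):

using System;
using System.Dynamic;

class DynamicDemo
{
    // Dynamic reaction to a previously undeclared message: the method
    // name is only inspected at execution time.
    class QueryBuilder : DynamicObject
    {
        public override bool TryInvokeMember(
            InvokeMemberBinder binder, object[] args, out object result)
        {
            result = "query built from method name: " + binder.Name;
            return true;
        }
    }

    // Duck typing: anything with a suitable Length property will do;
    // no common interface is required.
    static int GetLength(dynamic value)
    {
        return value.Length;
    }

    static void Main()
    {
        dynamic books = new QueryBuilder();
        Console.WriteLine(books.FindBooksByAuthor("Josh Bloch"));

        Console.WriteLine(GetLength("hello"));     // string.Length
        Console.WriteLine(GetLength(new int[10])); // array.Length
    }
}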

What I’m interested in is how often these are used within real world (rather than demonstration) code. It may well be that I’m suffering from Blub’s paradox – that I can’t see the valid uses of these features simply because I haven’t used them enough. Just to be clear, I’m not saying that I never encounter problems where I would welcome dynamic typing – but I don’t run into them every day, whereas I get help from the compiler every day.

Just as an indicator of how set in my statically typed ways I am, at the recent Stack Overflow DevDays event in London, Michael Sparks went through Peter Norvig’s spelling corrector. It’s a neat piece of code (and yes, I’ll finish that port some time) but I kept finding it hard to understand simply because the types weren’t spelled out. Terseness can certainly be beneficial, but in this case I would personally have found it simpler if the variable and method types had been explicitly declared.

So, for the dynamic typing fans (and I’m sure several readers will come into that category):

  • How often do you take advantage of dynamic typing in a way that really wouldn’t be feasible (or would be very clunky) in a statically typed language?
  • Is it usually the same single problem which crops up regularly, or do you find a wide variety of problems benefit from dynamic typing?
  • When you declare a variable (or first assign a value to a variable, if your language doesn’t use explicit declarations) how often do you really either not know its type or want to use some aspect of it which wouldn’t typically have been available in a statically typed environment?
  • What balance do you find in your use of duck typing (the same method/member/message has already been declared on multiple types, but there’s no common type or interface) vs truly dynamic reaction based on introspection of the message within code (e.g. building a query based on the name of the method, such as FindBooksByAuthor("Josh Bloch"))?
  • What aspects of dynamic typing do I appear to be completely unaware of?

Hopefully someone will be able to turn the light bulb on for me, so I can be more genuinely enthusiastic about dynamic typing, and perhaps even diversify from my comfort zone of C#…

Just how spiky is your traffic?

No, this isn’t the post about dynamic languages I promise. That will come soon. This is just a quick interlude. This afternoon, while answering a question on Stack Overflow¹ about the difference between using an array and a Dictionary<string, string> (where each string was actually the string representation of an integer) I posted the usual spiel about preferring readable code to micro-optimisation. The response in a comment – about the performance aspect – was:

Well that’s not so easily said for a .com where performance on a site that receives about 1 million hits a month relies on every little ounce of efficiency gains you can give it.

A million hits a month, eh? That sounds quite impressive, until you actually break it down. Let’s take a month of 30 days – that has 30 * 24 * 60 * 60 = 2,592,000 seconds². In other words, a million hits a month is less than one hit every two seconds. Not so impressive. At Google we tend to measure traffic in QPS (queries per second, even if they’re not really queries – the search terminology becomes pervasive) so this is around 0.39 QPS. Astonished that someone would make such a claim in favour of micro-optimisation at that traffic level, I tweeted about it. Several of the replies were along the lines of "yeah, but traffic’s not evenly distributed." That’s entirely true. Let’s see how high we can make the traffic without going absurd though.

Let’s suppose this is a site which is only relevant on weekdays – that cuts us down to 20 days in the month. Now let’s suppose it’s only relevant for one hour per day – it’s something people look at when they get to work, and most of the users are in one time zone. That’s a pretty massive way of spiking. We’ve gone down from 30 full days of traffic to 20 hours – or 20 * 60 * 60 = 72000 seconds, giving 14 QPS. Heck, let’s say the peak of the spike is double that – a whopping 28 QPS.

A few points about this:

  • 28 QPS is still not a huge amount of traffic.
  • If you’re really interested in handling peak traffic of ~28 QPS without latency becoming huge, it’s worth quoting that figure rather than "a million hits a month" because the latter is somewhat irrelevant, and causes us to make wild (and probably wildly inaccurate) guesses about your load distribution.
  • If you’re going to bring the phrase "a .com" into the picture, attempting to make it sound particularly important, you really shouldn’t be thinking about hosting your web site on one server – so the QPS gets diluted again.
  • Even at 28 QPS, the sort of difference that would be made here is tiny. A quick microbenchmark (with all the associated caveats) showed that on my laptop (hardly a server-class machine) I could build the dictionary and index into it three times, and repeat all of that 2.8 million times, in about 5 seconds. If every request needed to do that 100 times, the cost of serving 28 requests per second on my laptop would still only be 0.5% of each second – not a really significant benefit, despite the hugely exaggerated estimates of how often we needed to do it.

There are various other ways in which it’s not a great piece of code, but the charge of premature optimisation still stands. You don’t need to get every little ounce of efficiency out of your code. Chances are, if you start guessing at where you can get efficiency, you’re going to be wrong. Measure, measure, measure – profile, profile, profile. Once you’ve done all of that and proved that a change reducing clarity has a significant benefit, go for it – but until then, write the most readable code you can. Likewise work out your performance goals in a meaningful fashion before you worry too much – and hits per month isn’t a meaningful figure.

Performance is important – too important to be guessed about instead of measured.


¹ I’m not linking to it because the Streisand effect would render this question more important than it really is. I’m sure you can find it if you really want to, but that’s not the point of the post.

² Anyone who wants to nitpick and talk about months which are a bit longer or shorter than that due to daylight saving time changes (despite still being 30 days) can implement that logic for me in Noda Time.

Noda Time gets its own blog

I’ve decided it’s probably not a good idea to make general Noda Time posts on my personal blog. I’ll still post anything that’s particularly interesting in a "general coding" kind of way here, even if I discover it in Noda Time, but I thought it would be good for the project to have a blog of its very own, which other team members can post to.

I still have plenty of things I want to blog about here. Next up is likely to be a request for help: I want someone to tell me why I should love the "dynamic" bit of dynamic languages. Stay tuned for more details :)

Noda Time is born

There was an amazing response to yesterday’s post – not only did readers come up with plenty of names, but lots of people volunteered to help. As a result, I’m feeling under a certain amount of pressure for this project to actually take shape.

The final name chosen is Noda Time. We now have a Google Code Project and a Google Group (/mailing list). Now we just need some code…

I figured it would be worth explaining a bit more about my vision for the project. Obviously I’m only one contributor, and I’m expecting everyone to add their own views, but this can act as a starting point.

I want this project to be more than just a way of getting better date and time handling on .NET. I want it to be a shining example of how to build, maintain and deploy an open source .NET library. As some of you know, I have a few other open source projects on the go, and they have different levels of polish. Some have downloadable binaries, some don’t. They all have just-about-enough-to-get-started documentation, but not nearly enough, really. They have widely varying levels of test coverage. Some are easier to build than others, depending on what platform you’re using.

In some ways, I’m expecting the code to be the easy part of Noda Time. After all, the implementation is there already – we’ll have plenty of interesting design decisions to make in order to marry the concepts of Joda Time with the conventions of .NET, but that shouldn’t be too hard. Here are the trickier things, which need discussion, investigation and so forth:

  • What platforms do we support? Here’s my personal suggested list:
    • .NET 4.0
    • .NET 3.5
    • .NET 2.0SP1 (requiring the service pack for DateTimeOffset)
    • Mono (versions TBD)
    • Silverlight 2, 3 and 4
    • Compact Framework 2.0 and 3.5
  • What do we ship, and how do we handle different platforms? For example, can we somehow use Code Contracts to give developers a better experience on .NET 4.0 without making it really hard to build for other versions of .NET? Can we take advantage of the availability of TimeZoneInfo in .NET 3.5 and still build fairly easily for earlier versions? Do developers want debug or release binaries? Can we build against the client profile of .NET 3.5/4.0?
  • What should we use to build? I’ve previously used NAnt for the overall build process and MSBuild for the code building part. While this has worked quite well, I’m nervous of the dependency on the NAnt-Contrib library for the <msbuild> task, and of generally being dependent on a build project whose last release was a beta nearly two years ago. Are there better alternatives?
  • How should documentation be created and distributed?
    • Is Sandcastle the best way of building docs? How easy is it to get it running so that any developer can build the docs at any time? (I’ve previously tried a couple of times, and failed miserably.)
    • Would Monodoc be a better approach?
    • How should non-API documentation be handled? Is the wiki which comes with the Google Code project good enough? Do we need to somehow suck the wiki into an offline format for distribution with the binaries?
  • What do we need to do in order to work in low-trust environments, and how easily can we test that?
  • What do we do about signing? Ship with a "public" snk file which anyone can build with, but have a private version which the team uses to validate a "known good" release? Or just have the private key and use deferred signing?
  • While the library itself will support i18n for things like date/time formatting, do we need to apply it to "developer only" messages such as exceptions?
  • I’m used to testing with NUnit and Rhino.Mocks, but they’re not the last word in testing on .NET – what should we use, and why? What about coverage?
  • Do we need any dependencies (e.g. logging)? If so, how do we handle versioning of those dependencies? How are we affected by various licences?

These are all interesting topics, but they’re not really specific to Noda Time. Information about them is available all over the place, but that’s just the problem – it’s all over the place. I would like there to be some sort of documentation saying, "These are the decisions you need to think about, here are the options we chose for Noda Time, and this is why we did so." I don’t know what form that documentation will take yet, but I’m considering an ebook.

As you can tell, I’m aiming pretty high with this project – especially as I won’t even be using Google’s 20% time on it. However, there’s little urgency in it for me personally. I want to work out how to do things right rather than how to do them quickly. If it takes me a bit of time to document various decisions, and the code itself ships later, so be it… it’ll make the next project that much speedier.

I’m expecting a lot of discussion in the group, and no doubt some significant disagreements. I’m expecting to have to ask a bunch of questions on Stack Overflow, revealing just how ignorant I am on a lot of the topics above (and more). I think it’ll be worth it though. I think it’s worth setting a goal:

In one year, I want this to be a first-class project which is the natural choice for any developers wanting to do anything more than the simplest of date/time handling on .NET. In one year, I want to have a guide to developing open source class libraries on .NET which tells you everything you need to know other than how to write the code itself.

A year may seem like a long time, but I’m sure everyone who has expressed an interest in the project has significant other commitments – I know I do. Getting there in a year is going to be a stretch – but I’m expecting it to be a very enlightening journey.

What’s in a name (again)?

I have possibly foolishly decided to stop resisting the urge to port Joda Time to .NET. For those of you who are unaware, "use Joda Time" is almost always the best answer to any question involving "how do I achieve X with java.util.Date/Calendar?" It’s a Java library for handling dates and times, and it rocks. There is a plan to include a somewhat redesigned version in some future edition of Java (JSR-310) but it’s uncertain whether this will ever happen.

Now, .NET has only recently gained the ability to work with time zones other than UTC and the local time zone (using only managed code) – it has a bit of catching up to do. It’s generally easier to work with the .NET BCL than the Java built-in libraries, but it’s still not a brilliant position to be in. I think .NET deserves good date/time support, and as no-one else appears to be porting Joda Time, I’m going to do it. (A few people have already volunteered to help. I don’t know how easily we’ll be able to divvy up the work, but we’ll see. I suspect the core may need to be done first, and then people can jump in to implement different chronologies etc. As a side-effect, I may try to use this project as a sort of case study in porting, managing an open source project, and properly implementing a .NET library with useful versioning etc.)

The first two problems, however, are to do with naming. First, the project name. Contenders include:

  • Joda Time.NET (sounds like it would be an absolutely direct port; while I intend to port all the tricky bits directly, it’s going to be an idiomatic port with appropriate .NET bits. It’s also a bit of a mouthful.)
  • Noda Time (as suggested in the comments and in email)
  • TonyTime (after Tony the Pony)
  • CoffeeTime
  • TeaTime
  • A progression of BreakfastTime, CoffeeTime, LunchTime, TeaTime, DinnerTime and SupperTime for different versions (not a serious contender)
  • ParsleySageRosemaryAndThyme (not a serious contender)
  • A few other silly ones too

I suspect I’m going to go for CoffeeTime, but we’ll see.

The second problem is going to prove more awkward. I want to mostly copy the names given in Joda Time – aside from anything else, it’ll make it familiar to anyone who uses Joda Time in Java (such as me). Now one of the most commonly used classes in Joda is "DateTime". Using that name in my port would be a Bad Idea. Shadowing a name in the System namespace is likely to lead to very disgruntled users who may prove hard to regruntle before they abandon the library.

So what do I do? Go for the subtly different DateAndTime? Tie it to the library with CoffeeDateTime? Change it to Instant? (It’ll derive from AbstractInstant anyway – assuming I keep the same hierarchy instead of moving to a composition model and value types.)

Obviously this is a decision which the "team" can make, when we’ve got one… but it feels like a decision which is lurking round the corner in a hostile way.

What I find interesting is that these are two very different naming problems: one is trying to name something in a relatively arbitrary way – I know I want something reasonably short and memorable for the overall name, but beyond that it doesn’t matter too much. The other is trying to nail a very specific name which really has to convey its meaning clearly… but where the obvious name is already taken. Also interestingly, neither is a particularly good example of my most common issue with naming: attempting to come up with a two or three word noun for something that actually needs a whole sentence to describe it adequately.

Oh well – we’ll see what happens. In another blog post I’ll suggest some of the goals I have in terms of what I’m hoping to learn from the project, and how I’d like it to progress. In other words, expect a work of complete fiction…

If you’re interested in helping out with the project, please mail me directly (rather than adding comments here) and as soon as I’ve set the project up, I’ll invite you to the mailing list.

UPDATE: I’ve already got a few interested names, which is great. Rather than be dictatorial about this, I’ll put it to a vote of the people who are willing to help out on it.

Revisiting randomness

Almost every Stack Overflow question which includes the words "random" and "repeated" has the same basic answer. It’s one of the most common "gotchas" in .NET, Java, and no doubt other platforms: creating a new random number generator without specifying a seed will depend on the current instant of time. The current time as measured by the computer doesn’t change very often compared with how often you can create and use a random number generator – so code which repeatedly creates a new instance of Random and uses it once will end up showing a lot of repetition.
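
A quick demonstration of the gotcha (true of the frameworks of the time, at least; more recent runtimes have improved the default seeding):

using System;

class SeedGotcha
{
    static void Main()
    {
        for (int i = 0; i < 5; i++)
        {
            // Each iteration creates a generator seeded from the current
            // time - which usually hasn't changed since the last one...
            Random rng = new Random();
            // ...so this is likely to print the same "roll" five times.
            Console.WriteLine(rng.Next(1, 7));
        }
    }
}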

One common solution is to use a static field to store a single instance of Random and reuse it. That’s okay in Java (where Random is thread-safe) but it’s not so good in .NET – if you use the same instance from multiple threads at the same time, you can corrupt its internal data structures.

A long time ago, I created a StaticRandom class in MiscUtil – essentially, it was just a bunch of static methods (to mirror the instance methods found in Random) wrapping a single instance of Random and locking appropriately. This allows you to just call StaticRandom.Next(1, 7) to roll a die, for example. However, it has a couple of problems:

  • It doesn’t scale well in a multi-threaded environment. When I originally wrote it, I benchmarked an alternative approach using [ThreadStatic] and at the time, locking won (at least on my computer, which may well have only had a single core).
  • It doesn’t provide any way of getting at an instance of Random, other than by using new Random(StaticRandom.Next()).

The latter point is mostly a problem because it encourages a style of coding where you just use StaticRandom.Next(…) any time you want a random number. This is undoubtedly convenient in some situations, but it goes against the idea of treating a source of randomness as a service or dependency. It makes it harder to get repeatability and to see what needs that dependency.

I could have just added a method generating a new instance into the existing class, but I decided to play with a different approach – going back to per-thread instances, but this time using the ThreadLocal<T> class introduced in .NET 4.0. Here’s the resulting code:

using System;
using System.Threading;

namespace RandomDemo
{
    /// <summary>
    /// Convenience class for dealing with randomness.
    /// </summary>
    public static class ThreadLocalRandom
    {
        /// <summary>
        /// Random number generator used to generate seeds,
        /// which are then used to create new random number
        /// generators on a per-thread basis.
        /// </summary>
        private static readonly Random globalRandom = new Random();
        private static readonly object globalLock = new object();

        /// <summary>
        /// Random number generator
        /// </summary>
        private static readonly ThreadLocal<Random> threadRandom = new ThreadLocal<Random>(NewRandom);

        /// <summary>
        /// Creates a new instance of Random. The seed is derived
        /// from a global (static) instance of Random, rather
        /// than time.
        /// </summary>
        public static Random NewRandom()
        {
            lock (globalLock)
            {
                return new Random(globalRandom.Next());
            }
        }

        /// <summary>
        /// Returns an instance of Random which can be used freely
        /// within the current thread.
        /// </summary>
        public static Random Instance { get { return threadRandom.Value; } }

        /// <summary>See <see cref="Random.Next()" /></summary>
        public static int Next()
        {
            return Instance.Next();
        }

        /// <summary>See <see cref="Random.Next(int)" /></summary>
        public static int Next(int maxValue)
        {
            return Instance.Next(maxValue);
        }

        /// <summary>See <see cref="Random.Next(int, int)" /></summary>
        public static int Next(int minValue, int maxValue)
        {
            return Instance.Next(minValue, maxValue);
        }

        /// <summary>See <see cref="Random.NextDouble()" /></summary>
        public static double NextDouble()
        {
            return Instance.NextDouble();
        }

        /// <summary>See <see cref="Random.NextBytes(byte[])" /></summary>
        public static void NextBytes(byte[] buffer)
        {
            Instance.NextBytes(buffer);
        }
    }
}

The user can still call the static Next(…) methods if they want, but they can also get at the thread-local instance of Random by calling ThreadLocalRandom.Instance – or easily create a new instance with ThreadLocalRandom.NewRandom(). (The fact that NewRandom uses the global instance rather than the thread-local one is an implementation detail really; it happens to be convenient from the point of view of the ThreadLocal<T> constructor. It wouldn’t be terribly hard to change this.)

Now it’s easy to write a method which needs randomness (e.g. to shuffle a deck of cards) and give it a Random parameter, then call it using the thread-local instance:

public void Shuffle(Random rng)
{
    …
}

deck.Shuffle(ThreadLocalRandom.Instance);

The Shuffle method is then easier to test and debug, and expresses its dependency explicitly.

Performance

I tested the "old" and "new" implementations in a very simple way – for varying numbers of threads, I called Next() a fixed number of times (from each thread) and timed how long it took for all the threads to finish. I’ve also tried a .NET-3.5-compatible version using ThreadStatic instead of ThreadLocal<T>, and running the original code and the ThreadStatic version on .NET 3.5 as well.

Threads  StaticRandom (4.0b2)  ThreadLocalRandom (4.0b2)  ThreadStaticRandom (4.0b2)  StaticRandom (3.5)  ThreadStaticRandom (3.5)
1        11582                 6016                       10150                       10373               16049
2        24667                 7214                       9043                        25062               17257
3        38095                 10295                      14771                       36827               25625
4        49402                 13435                      19116                       47882               34415

A few things to take away from this:

  • The numbers were slightly erratic; somehow it was quicker to do twice the work with ThreadStaticRandom on .NET 4.0b2! This isn’t the perfect benchmarking machine; we’re interested in trends rather than absolute figures.
  • Locking hasn’t changed much in performance between framework versions
  • ThreadStatic has improved massively between .NET 3.5 and 4.0
  • Even on 3.5, ThreadStatic wins over a global lock as soon as there’s contention

The only downside to the ThreadLocal<T> version is that it requires .NET 4.0 – but the ThreadStatic version works pretty well too.

It’s worth bearing in mind that of course this is testing the highly unusual situation where there’s a lot of contention in the global lock version. The performance difference in the single-threaded version where the lock is always uncontended is still present, but very small.

Update

After reading the comments and thinking further, I would indeed get rid of the static methods elsewhere. Also, for the purposes of dependency injection, I agree that it’s a good idea to have a factory interface where that’s not overkill. The factory implementation could use either the ThreadLocal or ThreadStatic implementations, or effectively use the global lock version (by having its own instance of Random and a lock). In many cases I’d regard that as overkill, however.
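
Where the factory isn’t overkill, it needn’t be more than this sort of sketch (hypothetical names, building on the class above):

using System;

namespace RandomDemo
{
    public interface IRandomSource
    {
        Random GetRandom();
    }

    // One possible implementation, delegating to ThreadLocalRandom above.
    public sealed class ThreadLocalRandomSource : IRandomSource
    {
        public Random GetRandom()
        {
            return ThreadLocalRandom.Instance;
        }
    }
}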

One other interesting option would be to create a thread-safe instance of Random to start with, which delegated to thread-local "normal" implementations. That would be very useful from a DI standpoint.
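
That last idea is only a few lines too, given that the relevant members of Random are virtual – again just a sketch:

using System;

namespace RandomDemo
{
    // A Random which is safe to share across threads: every override
    // delegates to the per-thread instance from ThreadLocalRandom.
    public sealed class ThreadSafeRandom : Random
    {
        public override int Next() { return ThreadLocalRandom.Instance.Next(); }
        public override int Next(int maxValue) { return ThreadLocalRandom.Instance.Next(maxValue); }
        public override int Next(int minValue, int maxValue) { return ThreadLocalRandom.Instance.Next(minValue, maxValue); }
        public override double NextDouble() { return ThreadLocalRandom.Instance.NextDouble(); }
        public override void NextBytes(byte[] buffer) { ThreadLocalRandom.Instance.NextBytes(buffer); }
    }
}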