Category Archives: Noda Time

Optimization and generics, part 2: lambda expressions and reference types

As with almost any performance work, your mileage may vary (in particular the 64-bit JIT may work differently) and you almost certainly shouldn’t care. Relatively few people write production code which is worth micro-optimizing. Please don’t take this post as an invitation to make code more complicated for the sake of irrelevant and possibly mythical performance changes.

It took me a surprisingly long time to find the problem described in the previous blog post, and almost no time at all to fix it. I understood why it was happening. This next problem took a while to identify at all, but even when I’d found a workaround I had no idea why it worked. Furthermore, I couldn’t reproduce it in a test case… because I was looking for the wrong set of triggers. I’ve now found at least some of the problem though.

This time the situation in Noda Time is harder to describe, although it concerns the same area of code. In various places I need to create new delegates containing parsing steps and add them to the list of steps required for a full parse. I can always use lambda expressions, but in many cases I’ve got the same logic repeatedly… so I decided to pull it out into a method. Bang – suddenly the code runs far slower. (In reality, I’d performed this refactoring first, and "unrefactored" it to speed things up.)

I think the problem comes down to method group conversions with generic methods and a type argument which is a reference type. The CLR isn’t very good at them, and the C# compiler uses them more than it needs to.

Show me the benchmark!

The complete benchmark code is available of course, but fundamentally I’m doing the same thing in each test case: creating a delegate of type Action which does nothing, and then checking that the delegate reference is non-null (just to avoid the JIT optimizing it away). In each case this is done in a generic method with a single type parameter. I call each method in two ways: once with int as the type argument, and once with string as the type argument. Here are the different cases involved:

  • Use a lambda expression: Action foo = () => {};
  • Fake what I expected the compiler to do: keep a separate generic cache class with a static variable for the delegate; populate the cache once if necessary, and thereafter use the cache field
  • Fake what the compiler is actually doing with the lambda expression: write a separate generic method and perform a method group conversion to it
  • Do what the compiler could do: write a separate non-generic method and perform a method group conversion to it
  • Use a method group conversion to a static (non-generic) method on a generic type
  • Use a method group conversion to an instance (non-generic) method on a generic type, via a generic cache class with a single field referring to an instance of the generic class

(Yes, the last one is a bit convoluted – but the line in the method itself is simple: Action foo = ClassHolder<T>.SampleInstance.NoOpInstance;)
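To make the list above more concrete, here's a condensed sketch of the kind of code involved in a few of the cases. The names are mine rather than the benchmark's, and the real code runs each case over many iterations:

// All inside one static class, with "using System;" in scope. Names are illustrative only.
static void NoOp() {}
static void NoOpGeneric<T>() {}

static class ActionCache<T>
{
    // One cached delegate per constructed type.
    internal static readonly Action CachedNoOp = NoOp;
}

static void ViaLambda<T>()
{
    Action foo = () => {};                    // lambda expression
    if (foo == null) throw new Exception();   // stop the JIT optimizing it away
}

static void ViaGenericCache<T>()
{
    Action foo = ActionCache<T>.CachedNoOp;   // what I expected the compiler to do
    if (foo == null) throw new Exception();
}

static void ViaGenericMethodGroup<T>()
{
    Action foo = NoOpGeneric<T>;              // roughly what the compiler actually does
    if (foo == null) throw new Exception();
}

static void ViaNonGenericMethodGroup<T>()
{
    Action foo = NoOp;                        // what the compiler could do
    if (foo == null) throw new Exception();
}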

Remember, we’re doing each of these in a generic method, and calling that generic method using a type argument of either int or string. (I’ve run a few tests, and the exact type isn’t important – all that matters is that int is a value type, and string is a reference type.)

Importantly, we’re not capturing any variables, and the type parameter is not involved in either the delegate type or any part of the implementation body.

Benchmark results

Again, times are in milliseconds – but this time I didn’t want to run it for 100 million iterations, as the "slow" versions would have taken far too long. I’ve run this on the x64 JIT as well and seen the same effect, but I haven’t included the figures here.

Times in milliseconds for 10 million iterations

Test TestCase<int> TestCase<string>
Lambda expression 180 29684
Generic cache class 90 288
Generic method group conversion 184 30017
Non-generic method group conversion 178 189
Static method on generic type 180 29276
Instance method on generic type 202 299

Yes, it’s about 150 times slower to create a delegate from a generic method with a reference type as the type argument than with a value type… and yet this is the first I’ve heard of this. (I wouldn’t be surprised if there were a post from the CLR team about it somewhere, but I don’t think it’s common knowledge by any means.)

Conclusion

One of the tricky things is that it’s hard to know exactly what the C# compiler is going to do with any given lambda expression. In fact, the method which was causing me grief earlier on isn’t generic, but it’s in a generic type and captures some variables which use the type parameters – so perhaps that’s causing a generic method group conversion somewhere along the way.

Noda Time is a relatively extreme case, but if you’re using delegates in any performance-critical spots, you should really be aware of this issue. I’m going to ping Microsoft (first informally, and then via a Connect report if that would be deemed useful) to see if there’s an awareness of this internally as potential "gotcha", and whether there’s anything that can be done. Normal trade-offs of work required vs benefit apply, of course. It’s possible that this really is an edge case… but with lambdas flying everywhere these days, I’m not sure that it is.

Maybe tomorrow I’ll actually be able to finish getting Noda Time moved onto the new system… all of this performance work has been a fun if surprising distraction from the main job of shipping working code…

Optimization and generics, part 1: the new() constraint (updated: now with CLR v2 results)

As with almost any performance work, your mileage may vary (in particular the 64-bit JIT may work differently) and you almost certainly shouldn’t care. Relatively few people write production code which is worth micro-optimizing. Please don’t take this post as an invitation to make code more complicated for the sake of irrelevant and possibly mythical performance changes.

I’ve been doing quite a bit of work on Noda Time recently – and have started getting my head round all the work that James Keesey has put into the parsing/formatting. I’ve been reworking it so that we can do everything without throwing any exceptions, and also to work on the idea of parsing a pattern once and building a sequence of actions from it, for both formatting and parsing. To format or parse a value, we then just need to apply the actions in turn. Simples.

Given that this is all in the name of performance (and I consider Noda Time to be worth optimizing pretty hard) I was pretty cross when I ran a complete revamp through the little benchmarking tool we use, and found that my rework had made everything much slower. Even parsing a value after parsing the pattern was slower than parsing both the value and the pattern together. Something was clearly very wrong.

In fact, it turns out that at least two things were very wrong. The first (the subject of this post) was easy to fix and actually made the code a little more flexible. The second (the subject of the next post, which may be tomorrow) is going to be harder to work around.

The new() constraint

In my SteppedPattern type, I have a generic type parameter – TBucket. It’s already constrained in terms of another type parameter, but that’s irrelevant as far as I’m aware. (After today though, I’m taking very little for granted…) The important thing is that before I try to parse a value, I want to create a new bucket. The idea is that bits of information end up in the bucket as they’re being parsed, and at the very end we put everything together. So each parse operation requires a new bucket. How can we create one in a nice generic way?

Well, we can just call its public parameterless constructor. I don’t mind the types involved having such a constructor, so all we need to do is add the new() constraint, and then we can call new TBucket():

// Somewhat simplified…
internal sealed class SteppedPattern<TBucket> : IParsePattern<TBucket>
    where TBucket : new()
{
    public ParseResult Parse(string value)
    {
        TBucket bucket = new TBucket();

        // Rest of parsing goes here
    }
}

Great! Nice and simple. Unfortunately, it turned out that that one line of code was taking 75% of the time to parse a value. Just creating an empty bucket – pretty much the simplest bit of parsing. I was amazed when I discovered that.

Fixing it with a provider

The fix is reasonably easy. We just need to tell the type how to create an instance, and we can do that with a delegate:

// Somewhat simplified…
internal sealed class SteppedPattern<TBucket> : IParsePattern<TBucket>
{
    private readonly Func<TBucket> bucketProvider;

    internal SteppedPattern(Func<TBucket> bucketProvider)
    {
        this.bucketProvider = bucketProvider;
    }

    public ParseResult Parse(string value)
    {
        TBucket bucket = bucketProvider();

        // Rest of parsing goes here
    }
}

Now I can just call new SteppedPattern<OffsetBucket>(() => new OffsetBucket()) or whatever. This also means I can keep the constructor internal, not that I care much. I could even reuse old parse buckets if that wouldn’t be a semantic problem – in other cases it could be useful. Hooray for lambda expressions – until we get to the next post, anyway.

Show me the figures!

You don’t want to have to run Noda Time’s benchmarks to see the results for yourself, so I wrote a small benchmark to time just the construction of a generic type. As a measure of how insignificant this would be for most apps, these figures are in milliseconds, performing 100 million iterations of the action in question. Unless you’re going to do this in performance-critical code, you just shouldn’t care.

Anyway, the benchmark has four custom types: two classes, and two structs – a small and a large version of each. The small version has a single int field; the large version has eight long fields. For each type, I benchmarked both approaches to initialization.
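For reference, here's roughly the shape of the comparison – the type and method names here are made up for this sketch, and the real benchmark has more types and more plumbing:

using System;
using System.Diagnostics;

class SmallClass { public int Value; }   // stand-in for one of the benchmark types

static class NewVsProvider
{
    const int Iterations = 100000000;

    // "new() constraint" column: the compiler turns new T() into a call to
    // Activator.CreateInstance (see "What's going on under the hood?" below).
    static long TimeNewConstraint<T>() where T : new()
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            T item = new T();
        }
        return sw.ElapsedMilliseconds;
    }

    // "Provider delegate" column: construction goes through a Func<T> instead.
    static long TimeProvider<T>(Func<T> provider)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            T item = provider();
        }
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        Console.WriteLine("new() constraint: {0}ms", TimeNewConstraint<SmallClass>());
        Console.WriteLine("Provider delegate: {0}ms", TimeProvider(() => new SmallClass()));
    }
}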

The results on two machines (32-bit and 64-bit) are below, for both the v2 CLR and v4. (The 64-bit machine is much faster in general – you should only compare results within one machine, as it were.)

CLR v4: 32-bit results (ms per 100 million iterations)

Test type new() constraint Provider delegate
Small struct 689 1225
Large struct 11188 7273
Small class 16307 1690
Large class 17471 3017

CLR v4: 64-bit results (ms per 100 million iterations)

Test type new() constraint Provider delegate
Small struct 473 868
Large struct 2670 2396
Small class 8366 1189
Large class 8805 1529

CLR v2: 32-bit results (ms per 100 million iterations)

Test type new() constraint Provider delegate
Small struct 703 1246
Large struct 11411 7392
Small class 143967 1791
Large class 143107 2581

CLR v2: 64-bit results (ms per 100 million iterations)

Test type new() constraint Provider delegate
Small struct 510 686
Large struct 2334 1731
Small class 81801 1539
Large class 83293 1896

(An earlier version of this post had a mistake – my original tests used structs for everything, despite the names.)

Others have reported slightly different results, including the new() constraint being better for both large and small structs.

Just in case you hadn’t spotted them, look at the results for classes. Those are the real results – it took over 2 minutes to run the test using the new() constraint on my 32-bit laptop, compared with under two seconds for the provider. Yikes. This was actually the situation I was in for Noda Time, which is built on .NET 2.0 – it’s not surprising that so much of my benchmark’s time was spent constructing classes, given results like this.

Of course you can download the benchmark program for yourself and see how it performs on your machine. It’s a pretty cheap-and-cheerful benchmark, but when the differences are this big, minor sources of inaccuracy don’t bother me too much. The simplest way to run under CLR v2 is to compile with the .NET 3.5 C# compiler to start with.

What’s going on under the hood?

As far as I’m aware, there’s no IL to give support for the new() constraint, in terms of using the parameterless constructor. (The constraint itself can be expressed in IL though.) Instead, when we write new T() in C#, the compiler emits a call to Activator.CreateInstance. Apparently, that’s slower than calling a delegate – presumably due to trying to find an accessible constructor with reflection, and invoking it. I suspect it could be optimized relatively easily – e.g. by caching the results per type it’s called with, in terms of delegates. I’m slightly surprised this hasn’t (apparently) been optimized, given how easy it is to cache values by generic type. No doubt there’s a good reason lurking there somewhere, even if it’s only the memory taken up by the cache.
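As a sketch of the kind of per-type caching I mean – this is just one way user code (or a library) could do it, not anything the framework actually does – you can build a constructor delegate once per constructed type and reuse it:

using System;
using System.Linq.Expressions;

static class Creator<T> where T : new()
{
    // Compiled once per type T, then reused for every subsequent call.
    public static readonly Func<T> Instance =
        Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();
}

// In a hot path, Creator<SomeType>.Instance() then behaves like the provider delegate
// approach, without each call site having to supply its own lambda.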

Either way, it’s easy to work around in general.

Conclusion

I wouldn’t have found this gotcha if I didn’t have before and after tests (or in this case, side-by-side tests of the old way and the new way of parsing). The real lesson of this post shouldn’t be about the new() constraint – it should be how important it is to test performance (assuming you care), and how easy it is to assume certain operations are cheap.

Next post: something much weirder.

The joys of date/time arithmetic

(Cross-posted to my main blog and the Noda Time blog, in the hope that the overall topic is still of interest to those who aren’t terribly interested in Noda Time per se.)

I’ve been looking at the "period" part of Noda Time recently, trying to redesign the API to simplify it somewhat. This part of the API is what we use to answer questions such as:

  • What will the date be in 14 days?
  • How many hours are there between now and my next birthday?
  • How many years, months and days have I been alive for?

I’ve been taking a while to get round to this because there are some tricky choices to make. Date and time arithmetic is non-trivial – not because of complicated rules which you may be unaware of, but simply because of the way calendaring systems work. As ever, time zones make life harder too. This post won’t talk very much about the Noda Time API details, but will give the results of various operations as I currently expect to implement them.

The simple case: arithmetic on the instant time line

One of the key concepts to understand when working with time is that the usual human "view" on time isn’t the only possible one. We don’t have to break time up into months, days, hours and so on. It’s entirely reasonable (in many cases, at least) to consider time as just a number which progresses linearly. In the case of Noda Time, it’s the number of ticks (there are 10 ticks in a microsecond, 10,000 ticks in a millisecond, and 10 million ticks in a second) since midnight on January 1st 1970 UTC.

Leaving relativity aside, everyone around the world can agree on an instant, even if they disagree about everything else. If you’re talking over the phone (using a magic zero-latency connection) you may think you’re in different years, using different calendar systems, in different time zones – but still both think of "now" as "634266985845407773 ticks".

That makes arithmetic really easy – but also very limited. You can only add or subtract numbers of ticks, effectively. Of course you can derive those ticks from some larger units which have a fixed duration – for example, you could convert "3 hours" into ticks – but some other concepts don’t really apply. How would you add a month? The instant time line has no concept of months, and in most calendars different months have different durations (28-31 days in the ISO calendar, for example). Even the idea of a day is somewhat dubious – it’s convenient to treat a day as 24 hours, but you need to at least be aware that when you translate an instant into a calendar that a real person would use, days don’t always last for 24 hours due to daylight savings.

Anyway, the basic message is that it’s easy to do arithmetic like this. In Noda Time we have the Instant structure for the position on the time line, and the Duration structure as a number of ticks which can be added to an Instant. This is the most appropriate pair of concepts to use to measure how much time has passed, without worrying about daylight savings and so on: ideal for things like timeouts, cache purging and so on.
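As a toy illustration of that split – deliberately not the real Noda Time types, just the concept – an instant is a point on the time line and a duration is simply a number of elapsed ticks:

// Illustrative only: both types just wrap a tick count.
struct Duration
{
    public readonly long Ticks;
    public Duration(long ticks) { Ticks = ticks; }
    public static Duration FromHours(long hours) { return new Duration(hours * 36000000000L); }
}

struct Instant
{
    public readonly long TicksSinceUnixEpoch;
    public Instant(long ticks) { TicksSinceUnixEpoch = ticks; }
    public static Instant operator +(Instant instant, Duration duration)
    {
        return new Instant(instant.TicksSinceUnixEpoch + duration.Ticks);
    }
}

// A timeout or cache expiry is naturally "now plus some duration" – no calendars,
// months or time zones involved:
// Instant expiry = now + Duration.FromHours(3);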

Things start to get messy: local dates, times and date/times

The second type of arithmetic is what humans tend to actually think in. We talk about having a meeting in a month’s time, or how many days it is until Christmas (certainly my boys do, anyway). We don’t tend to consciously bring time zones into the equation – which is a good job, as we’ll see later.

Now just to make things clear, I’m not planning on talking about recurrent events – things like "the second Tuesday and the last Wednesday of every month". I’m not planning on supporting recurrences in Noda Time, and having worked on the calendar part of Google Mobile Sync for quite a while, I can tell you that they’re not fun. But even without recurrences, life is tricky.

Introducing periods and period arithmetic

The problem is that our units are inconsistent. I mentioned before that "a month" is an ambiguous length of time… but it doesn’t just change by the month, but potentially by the year as well: February is either 28 or 29 days long depending on the year. (I’m only considering the ISO calendar for the moment; that gives enough challenges to start with.)

If we have inconsistent units, we need to keep track of those units during arithmetic, and even request that the arithmetic be performed using specific units. So, it doesn’t really make sense to ask "how long is the period between June 10th 2010 and October 13th 2010" but it does make sense to ask "how many days are there between June 10th 2010 and October 13th 2010" or "how many years, months and days are there between June 10th 2010 and October 13th 2010".

Once you’ve got a period – which I’ll describe as a collection of unit/value pairs, e.g. "0 years, 4 months and 3 days" (for the last example above) – you can still get unexpected behaviour. If you add that period to your original start date, you should get the original end date… but if you advance the start date by one day, you may not advance the end date by one day. It depends on how you handle things like "one month after January 30th 2010" – some valid options are:

  • Round down to the end of the month: February 28th
  • Round up to the start of the next month: March 1st
  • Work out how far we’ve overshot, and apply that to the next month: March 2nd
  • Throw an exception

All of these are justifiable. Currently, Noda Time will always take the first approach. I believe that JSR-310 (the successor to Joda Time) will allow the behaviour to be resolved according to a strategy provided by the user… it’s unclear to me at the moment whether we’ll want to go that far in Noda Time.
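To make the "round down" approach concrete, here's a sketch of adding months with truncation. It uses the BCL DateTime purely for illustration (DateTime.AddMonths happens to take the same approach), and only handles non-negative month counts to keep it short:

using System;

static class MonthArithmetic
{
    // Add months, clamping the day-of-month to the last valid day of the target month.
    static DateTime AddMonthsTruncating(DateTime date, int months)
    {
        int totalMonths = date.Year * 12 + (date.Month - 1) + months;
        int year = totalMonths / 12;
        int month = totalMonths % 12 + 1;
        int day = Math.Min(date.Day, DateTime.DaysInMonth(year, month));
        return new DateTime(year, month, day);
    }

    static void Main()
    {
        // January 30th 2010 + 1 month => February 28th 2010
        Console.WriteLine(AddMonthsTruncating(new DateTime(2010, 1, 30), 1));
    }
}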

Arithmetic in Noda Time is easily described, but the consequences can be subtle. When adding or subtracting a period from something like a LocalDate, we simply iterate over all of the field/value pairs in the period, starting with the most significant, and add each one in turn. When finding the difference between two LocalDate values with a given set of field types (e.g. "months and days") we get as close as we can without overshooting using the most significant field, then the next field etc.

The "without overshooting" part means that if you add the result to the original start value, the result will always either be the target end value (if sufficiently fine-grained fields are available) or somewhere between the original start and the target end value. So "June 2nd 2010 to October 1st 2010 in months" gives a result of "3 months" even though if we chose "4 months" we’d only overshoot by a tiny amount.
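Here's a sketch of that greedy, most-significant-unit-first idea for the "months and days" case, again using the BCL DateTime (whose AddMonths also truncates) rather than the real Noda Time code, and assuming the start is no later than the end:

using System;

static class PeriodSketch
{
    // Months first, then days, never overshooting the end date.
    static void MonthsAndDaysBetween(DateTime start, DateTime end)
    {
        int months = 0;
        while (start.AddMonths(months + 1) <= end)
        {
            months++;
        }
        int days = (end - start.AddMonths(months)).Days;
        Console.WriteLine("{0} months, {1} days", months, days);
    }

    static void Main()
    {
        // June 2nd 2010 to October 1st 2010 => "3 months, 29 days"
        MonthsAndDaysBetween(new DateTime(2010, 6, 2), new DateTime(2010, 10, 1));
    }
}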

Now we know what approach we’re taking, let’s look at some consequences.

Asymmetry and other oddities

It’s trivial to show some asymmetry just using a period of a single month. For example:

  • January 28th 2010 + 1 month = February 28th 2010
  • January 29th 2010 + 1 month = February 28th 2010
  • January 30th 2010 + 1 month = February 28th 2010
  • February 28th 2010 – 1 month = January 28th 2010

It gets even more confusing when we add days into the mix:

  • January 28th 2010 + 1 month + 1 day = March 1st 2010
  • January 29th 2010 + 1 month + 1 day = March 1st 2010
  • March 1st 2010 – 1 month – 1 day = January 31st 2010

And leap years:

  • March 30th 2013 – 1 year – 1 month – 10 days = February 19th 2012 (as "February 30th 2012" is truncated to February 29th 2012)
  • March 30th 2012 – 1 year – 1 month – 10 days = February 18th 2011 (as "February 30th 2011" is truncated to February 28th 2011)

Then we need to consider how rounding works when finding the difference between days… (forgive the pseudocode):

  • Between(January 31st 2010, February 28th 2010, Months & Days) = ?
  • Between(February 28th 2010, January 31st 2010, Months & Days) = -28 days

The latter case is relatively obvious – because if you take a whole month off February 28th 2010 you end up with January 28th 2010, which is an overshoot… but what about the first case?

Should we determine the number of months as "the largest number such that start + period <= end"? If so, we get a result of "1 month" – which makes sense given the first set of results in this section.

What worries me most about this situation is that I honestly don’t know offhand what the current implementation will do. I think it would be best to return "28 days" as there isn’t genuinely a complete month between the two… <tappety tappety>

Since writing the previous paragraph, I’ve tested it, and it returns 1 month and 0 days. I don’t know how hard it would be to change this behaviour or whether we want to. Whatever we do, however, we need to document it.

That’s really at the heart of this: we must make Noda Time predictable. Where there are multiple feasible results, there should be a simple way of doing the arithmetic by hand and getting the same results as Noda Time. Of course, picking the best option out of the ones available would be good – but I’d rather be consistent and predictable than "usually right" but unpredictably so.

Think it’s bad so far? It gets worse…

ZonedDateTime: send in the time zones… (well maybe next year?)

I’ve described the "instant time line" and its simplicity.

I’ve described the local date/time complexities, where there’s a calendar but there’s no time zone.

So far, the two worlds have been separate: you can’t add a Duration to a LocalDateTime (etc), and you can’t add a Period to an Instant. Unfortunately, sooner or later many applications will need ZonedDateTime.

Now, you can think of ZonedDateTime in two different ways:

  • It’s an Instant which knows about a calendar and a time zone
  • It’s a LocalDateTime which knows about a time zone and the offset from UTC

The "offset from UTC" part sounds redundant at first – but during daylight saving transitions the same LocalDateTime occurs at two different instants; the time zone is the same in both cases, but the offset is different.

The latter way of thinking is how we actually represent a ZonedDateTime internally, but it’s important to know that a ZonedDateTime still unambiguously maps to an Instant.

So, what should we be able to do with a ZonedDateTime in terms of arithmetic? I think the answer is that we should be able to add both Periods and Durations to a ZonedDateTime – but expect them to give different results.

When we add a Duration, that should work out the Instant represented by the current ZonedDateTime, advance it by the given duration, and return a new ZonedDateTime based on that result with the same calendar and time zone. In other words, this is saying, "If I were to wait for the given duration, what date/time would I see afterwards?"

When we add a Period, that should add it to the LocalDateTime represented by the ZonedDateTime, and then return a new ZonedDateTime with the result, the original time zone and calendar, and whatever offset is suitable for the new LocalDateTime. (That’s deliberately woolly – I’ll come back to it.) This is the sort of arithmetic a real person would probably perform if you asked them to tell you what time it would be "three hours from now". Most people don’t take time zones into account…

In most cases, where a period can be represented as a duration (for example "three hours") the two forms of addition will give the same result. Around daylight saving transitions, however, they won’t. Let’s consider some calculations on Sunday November 7th 2010 in the "Pacific/Los_Angeles" time zone. It had a daylight saving transition from UTC-7 to UTC-8 at 2am local time. In other words, the clock went 1:58, 1:59, 1:00. Let’s start at 12:30am (local time, offset = -7) and add a few different values:

  • 12:30am + 1 hour duration = 1:30am, offset = -7
  • 12:30am + 2 hours duration = 1:30am, offset = -8
  • 12:30am + 3 hours duration = 2:30am, offset = -8
  • 12:30am + 1 hour period = 1:30am, offset = ???
  • 12:30am + 2 hour period = 2:30am, offset = -8
  • 12:30am + 3 hour period = 3:30am, offset = -8
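For the duration-based rows, the reasoning is just instant arithmetic: convert to UTC, add the duration, convert back. You can check them with the BCL's DateTimeOffset (whose arithmetic is instant-based), picking the offsets by hand:

using System;

static class DstCheck
{
    static void Main()
    {
        // 12:30am local on November 7th 2010 at offset -7, i.e. 7:30am UTC.
        DateTimeOffset start = new DateTimeOffset(2010, 11, 7, 0, 30, 0, TimeSpan.FromHours(-7));

        // Two hours later on the instant time line is 9:30am UTC...
        DateTimeOffset later = start + TimeSpan.FromHours(2);

        // ...which, expressed at the post-transition offset of -8, is 1:30am local.
        Console.WriteLine(later.ToOffset(TimeSpan.FromHours(-8)));
    }
}

The period-based rows can't be checked that way, of course – that's exactly where it gets interesting.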

The ??? value is the most problematic one, because 1:30 occurs twice… when thinking of the time in a calendar-centric way, what should the result be? Options here:

  • Always use the earlier offset
  • Always use the later offset
  • Use the same offset as the start date/time
  • Use the offset in the direction of travel (so adding one hour from 12:30am would give 1:30am with an offset of -7, but subtracting one hour from 2:30am would give 1:30am with an offset of -8)
  • Throw an exception
  • Allow the user to pass in an argument which represents a strategy for resolving this

This is currently unimplemented in Noda Time, so I could probably choose whatever behaviour I want, but frankly none of them has much appeal.

At the other daylight saving transition, when the clocks go forward, we have the opposite problem: adding one hour to 12:30am can’t give 1:30am because that time never occurs. Options in this case include:

  • Return the first valid time after the transition (this has problems if we’re subtracting time, where we’d presumably want to return the latest valid time before the transition… but the transition has an exclusive lower bound, so there’s no such "latest valid time" really)
  • Add the offset difference, so we’d skip to 2:30am
  • Throw an exception
  • Allow the user to pass in a strategy

Again, nothing particularly appeals.

All of this is just involved in adding a period to a ZonedDateTime – then the same problems occur all over again when trying to find the period between them. What’s the difference (as a Period rather than a simple Duration) between 1:30am with an offset of -7 and 1:30am with an offset of -8? Nothing, or an hour? Again, at the moment I really don’t know the best course of action.

Conclusion

This post has ended up being longer than I’d expected, but hopefully you’ve got a flavour of the challenges we’re facing. Even without time zones getting involved, date and time arithmetic is pretty silly – and with time zones, it becomes very hard to reason about – and to work out what the "right" result to be returned by an API should be, let alone implement it.

Above all, it’s important to me that Noda Time is predictable and clearly documented. Very often, if a library doesn’t behave exactly the way you want it to, but you can tell what it’s going to do, you can work around that – but if you’re having to experiment to guess the behaviour, you’re on a hiding to nothing.

The curious case of the publicity-seeking interface and the shy abstract class

Noda Time has a guilty secret, and I’m not just talking about the fact that there’s been very little progress on it recently. (It’s not dead as a project – I have high hopes, when I can put some quality time into it.) This secret is called LocalInstant, and it’s a pain in the neck.

One of the nice things about giving talks about an API you’re currently writing is that you can see which concepts make sense to people, and which don’t – as well as seeing which concepts you’re able to explain and which you can’t. LocalInstant has been an awkward type to explain right from day 1, and I don’t think it’s improved much since then. For the purpose of this blog post, you don’t actually need to know what it means, but if you’re really interested, imagine that it’s like a time-zone-less date and time (such as "10:58 on July 2nd 2015") but also missing a calendar system, so you can’t really tell what the month is etc. The important point is that it’s not just time-zone-less, but it’s actually local – so it doesn’t represent a single instant in time. Unlike every other concept in Noda Time, I haven’t thought of any good analogy between LocalInstant and the real world.

Now, I don’t like having types I can’t describe easily, and I’d love to just get rid of it completely… but it’s actually an incredibly powerful concept to have in the library. Not for users of course, but for the implementation. It’s spattered all over the place. Okay, the next best step to removing it is to hide it away from consumers: let’s make it internal. Unfortunately, that doesn’t work either, because it’s referred to in interfaces all the time too. For example, almost every member of ICalendarSystem has LocalInstant as one of its parameters.

The rules around interfaces

Just to recap, every member of an interface – even an internal interface – is implicitly public. That causes some interesting restrictions. Firstly, every type referred to in a public interface must be public. So this would be invalid:

internal struct LocalInstant {}

// Doesn’t compile: Inconsistent accessibility
public interface ICalendarSystem
{
    LocalInstant GetLocalInstant(int year, int month, int day);
}

So far, so good. It’s entirely reasonable that a public member’s declaration shouldn’t refer to an internal type. Calling code wouldn’t understand what LocalInstant was, so how could it possibly use ICalendarSystem sensibly? But suppose we only wanted to declare the interface internally. That should be okay, right? Indeed, the compiler allows the following code:

internal struct LocalInstant {}

// Compiles with no problems
internal interface ICalendarSystem
{
    LocalInstant GetLocalInstant(int year, int month, int day);
}

But hang on… isn’t GetLocalInstant public? That’s what I said earlier, right? So we’re declaring a public member using an internal type… which we thought wasn’t allowed. Is this a compiler bug?

Well, no. My earlier claim that "a public member’s declaration shouldn’t refer to an internal type" isn’t nearly precise enough. The important aspect isn’t just whether the member is declared public – but its accessibility domain. In this case, the accessibility domain of ICalendarSystem.GetLocalInstant is only the assembly, which is why it’s a valid declaration.

However, life becomes fun when we try to implement ICalendarSystem in a public class. It’s perfectly valid for a public class to implement an internal interface, but we have some problems declaring the method implementing GetLocalInstant. We can’t make it a public method, because at that point its accessibility domain would be anything referring to the assembly, but the accessibility domain of LocalInstant itself would still only be the assembly. We can’t make it internal, because it’s implementing an interface member, which is public.

There is an alternative though: explicit interface implementation. That comes with all kinds of other interesting points, but it does at least compile:

internal struct LocalInstant {}

internal interface ICalendarSystem
{
    LocalInstant GetLocalInstant(int year, int month, int day);
}

public class GregorianCalendarSystem : ICalendarSystem
{
    // Has to be implemented explicitly
    LocalInstant ICalendarSystem.GetLocalInstant(int year, int month, int day)
    {
        // Implementation
    }
}

So, we’ve got somewhere at this point. We’ve managed to make a type used within an interface internal, but at the cost of making the interface itself internal, and requiring explicit interface implementation within any public classes implementing the interface.

That could potentially be useful in Noda Time, but it doesn’t solve our real LocalInstant / ICalendarSystem problem. We need ICalendarSystem to be public, because consumers need to be able to specify a calendar when they create an instance of ZonedDateTime or something similar. Interfaces are just too demanding in terms of publicity.

Fortunately, we have another option up our sleeves…

Abstract classes to the rescue!

I should come clean at this point and say that generally speaking, I’m an interface weenie. Whenever I need a reusable and testable abstraction, I reach for interfaces by default. I have a general bias against concrete inheritance, including abstract classes. I’m probably a little too harsh on them though… particularly as in this case they do everything I need them to.

In Noda Time, I definitely don’t need the ability to implement ICalendarSystem and derive from another concrete class… so making it a purely abstract class will be okay in those terms. Let’s see what happens when we try:

internal struct LocalInstant {} 

public abstract class CalendarSystem
{
    internal abstract LocalInstant GetLocalInstant(int year, int month, int day);
}

internal class GregorianCalendarSystem : CalendarSystem
{  
    internal override LocalInstant GetLocalInstant(int year, int month, int day)
    { 
        // Implementation
    } 
}

Hoorah! Now we’ve hidden away LocalInstant but left CalendarSystem public, just as we wanted to. We could make GregorianCalendarSystem public or not, as we felt like it. If we want to make any of CalendarSystem’s abstract methods public, then we can do so provided they don’t require any internal types. There’s one interesting point though: types outside the assembly can’t derive from CalendarSystem. It’s a little bit as if the class only provided an internal constructor, but with a little bit more of an air of mystery… you can override every method you can actually see, and still get a compile-time error message like this:

OutsideCalendar.cs(1,14): error CS0534: ‘OutsideCalendar’ does not implement inherited abstract member
        ‘CalendarSystem.GetLocalInstant(int, int, int)’

I can just imagine the author of the other assembly thinking, "But I can’t even see that method! What is it? Where is it coming from?" Certainly a case where the documentation needs to be clear. Whereas it’s impossible to create an interface which is visible to the outside world but can’t be implemented externally, that’s precisely the situation we’ve reached here.

The abstract class is a little bit like an authentication token given by a single-sign-on system. From the outside, it’s an opaque item: you don’t know what’s in it or how it does its job… all you know is that you need to obtain it, and then you can use it to do other things. On the inside, it’s much richer – full of useful data and members.

Conclusion

Until recently, I hadn’t thought of using abstract classes like this. It would possibly be nice if we could use interfaces in the same way, effectively limiting the implementation to be in the declaring assembly, but letting the interface itself (and some members) be visible externally.

A bigger question is whether this is a good idea in terms of design anyway. If I do make LocalInstant internal, there will be a lot of interfaces which go the same way… or become completely internal. For example, the whole "fields" API of Noda Time could become an implementation detail, with suitable helper methods to fetch things like "how many days are there in the given month." The fields API is an elegant overall design, but it’s quite complicated considering the very limited situations in which most callers will use it.

I suspect I will try to go for this "reduced API" for v1, knowing that we can always make things more public later on… that way we give ourselves a bit more flexibility in terms of not having to get everything right first time within those APIs, too.

Part of me still feels uncomfortable with the level of hiding involved – I know other developers I respect deeply who hide as little as possible, for maximum flexibility – but I do like the idea of an API which is really simple to browse.

Aside from the concrete use case of Noda Time, this has proved an interesting exercise in terms of revisiting accessibility and the rules on what C# allows.

You are all individuals! (I’m not…)

I’ve been meaning to post this for a while, but recently a couple of events have coincided, reminding me about the issue.

First, Joe Duffy blogged in defence of premature optimization. Second, I started reading Bill Wagner’s Effective C#, 2nd edition, which contains advice such as "make almost all your types serializable". Now, let’s be clear: I have a great deal of respect for both of these gentlemen… but in both cases I think there’s a problem: to some extent they’re assuming a certain type of development.

In some cases, you really, really want to understand the nuts and bolts of every bit of performance. If, for example, you’re writing a parallelization library to be part of the .NET framework. For Noda Time I’m pretty obsessed with performance, too – I really want it to be very fast indeed. And to be clear, Joe does give a certain amount of balance in the article… but I think it’s probably still biased due to his background working on libraries where it really, really matters. For many developers, it’s vastly preferable to have the in-house HR web app used by 100 people take a little bit more time to process each request than to take an extra few days of developer work (cumulative) making sure that every little bit of it is as fast as possible. And many of the questions I’ve seen on Stack Overflow are asking for micro-optimizations which are really, really unlikely to matter. (EDIT: Just to be clear, there’s a lot of stuff I agree with in Joe’s post, but I think enough of us find correctness hard enough to start with, without having to consider every possible performance hit of every statement. At least some of the time.)

Likewise for a certain class of development, it probably does make sense to make most types serializable. If most of your objects are modelling data, serialization really will be a major factor. For other people, it won’t be. Most of my working life has been spent writing code which really doesn’t need to serialize anything… or which uses Protocol Buffers for serialization, in order to preserve portability, compactness and flexible versioning. Very few of my types should really be serializable using the platform-default binary serialization (whether in Java or .NET). Relatively few of them need to be serializable at all.

Finally, I’ll mention another example I’ve probably been guilty of: the assumption that a "public API" really can’t be changed without major hassle. An example of this is making a "public const" in C#, and later wanting to change the value of it. "No," I hear you cry… "Make it a public static readonly field instead, to avoid callers baking the value into their compiled code." Absolutely. If you’re in a situation where you may well not know all of your callers, or can’t recompile them all on every deployment, that’s great advice. But I suspect a lot of developers work in environments where they can recompile everything – where the only code which calls their code is written within the same company, and deployed all in one go.

In short, we’re not all writing system libraries. We’re not all writing data-driven business apps. We’re not all writing the same kind of code at all. Good "one size fits all" advice is pretty rare, and "we" (the community preaching best practices etc) should take that into account more often. I absolutely include myself in that chastisement, too.

Mini-post: abstractions vs repetition; a driving analogy

Driving Tom back from a children’s party this afternoon, I was thinking about Noda Time.

I’ve been planning to rework the parsing/formatting API, so that each chronological type (ZonedDateTime, LocalDateTime, LocalDate, LocalTime) has its own formatter and parser pair. I suspect this will involve quite a bit of similar code between the various classes… but code which is easy to understand and easy to test in its simple form.

The question which is hard to answer before the first implementation is whether it will be worth trying to abstract out that similar code to avoid repetition. In my experience, quite often something that sounds like a good idea in terms of abstraction ends up becoming significantly more complex than the "just implement each class separately" approach… but we’ll see.

Around this time, I ended up stuck behind a car which was going at 20mph between speed bumps, decreasing to 10-15mph near the speed bumps, of which there were quite a few. I could have tried overtaking, but visibility wasn’t great, and I wasn’t far from home anyway. I regard overtaking as a somewhat extreme measure when I’m not on a dual carriageway. It was frustrating, but I knew I wouldn’t be losing very much. The risks involved in overtaking (combined with the possibility that I’d end up stuck behind someone else) weren’t worth the small benefit of getting past the car in front.

It struck me that the two situations were somewhat similar. I know from experience that trying to extract a general purpose abstraction to avoid repetition can be risky: corner cases where the abstraction just doesn’t work can hide themselves until quite late in the proceedings, or you may find that you end up with nearly as much repetition anyway due to something else. The repetitive code is a burden, but one which is more easily reckoned with.

I suspect I won’t try too hard to abstract out lots of the code for formatting and parsing in Noda Time. I’m sure there’ll be quite a lot which can be easily factored out, but anything which is non-obvious to start with can wait for another time. For the moment, I’ll drive slowly but steadily, without any flashy moves.

Documentation with Sandcastle – a notebook

(Posted to both my main code blog and the Noda Time blog.)

I apologise in advance if this blog post becomes hard to get good information from. It’s a record of trying to get Sandcastle to work for Noda Time; as much as anything it’s meant to be an indication of how smooth or otherwise the process of getting started with Sandcastle is. My aim is to be completely honest. If I make stupid mistakes, they’ll be documented here. If I have decisions to make, they’ll be documented here.

I should point out that I considered NDoc but ruled it out (it just didn’t make sense to use a dead project), and likewise docu (I’m not keen on the output style, and it threw an exception when I tried running it on Noda Time anyway). I didn’t try any other options as I’m unaware of them. Hmm.

Starting point and aims

My eventual aim is to include "build the docs" as a task in the build procedure for Noda Time. I don’t have much in the way of preconceived ideas of what the output should be: my guess is a CHM file and something to integrate into MSDN, as well as some static web pages. Ideally I’d like to be able to put the web pages on the Google Code project page, but I don’t know how feasible that will be. If the best way forward turns out to be something completely different, that’s fine.

(I’ve mailed Scott Hanselman and Matt Hawley about the idea of having an ASP.NET component of some form which could dynamically generate all this stuff on the fly – you’d just need to upload the .xml and .dll files, and let it do the rest. I’m not expecting that idea to be useful for Noda Time in the near future, but you never know.)

Noda Time has some documentation, but there are plenty of public members which don’t have any XML documentation at all at the moment. Obviously there’s a warning available for this so we’ll be able to make sure that eventually everything’s documented, but we also need to be able to build documentation before we reach that point.

Step 0: building the XML file

The build project doesn’t currently even create the .xml file, so that’s the first port of call – just a case of ticking a box and then changing the default filename slightly… because for some bizarre reason, Visual Studio defaults to creating a ".XML" file instead of ".xml". Why? Where else are capitals used in file extensions?

Rebuild the solution, gaze in fear at the 496 warnings generated, and we have everything we should need from Visual Studio. My belief is that I should now be able to close Visual Studio and not reopen it (with the Noda Time solution, anyway) during the course of this blog post.

Step 1: building Sandcastle

First real choice: which version of Sandcastle do I go for? There was a binary release on May 29th 2008, a source release on July 2nd 2008, and three commits to source control since then, the latest of which was in July 2009. Personally I like the idea of not having to actually install anything: tools which can just be run without installation are nicer for Open Source projects, particularly if you can check the binaries into source control with appropriate licence files. That way anyone can build after just fetching. On the other hand, I’m not sure how well the Sandcastle licence fits in with the Apache 2 licence we’re using for Noda Time. I can investigate that later.

What the heck, let’s try building it from source. It’s probably easier to go from that to the installed version than the other way round. Change set 26202 downloaded and unpacked… now how do we build it, and what do we need to build? Okay, there’s a solution file, which opens up in VS2008 (unsurprising and not a problem). Gosh, what a lot of projects (but no unit tests?) – still, everything builds with nary a warning. I’ve no idea what to do with it now, but it’s a start. It looks like it’s copied four executables and a bunch of DLLs into the ProductionTools directory, which is promising.

Shockingly, it’s only just occurred to me to check for some documentation to see whether or not I’m doing the right thing. Looking at the Sandcastle web page, it seems I’m not missing much. Well, I was aware that this might be the case.

Step 2: Sandcastle Help File Builder

I’ve heard about SHFB from a few places, and it certainly sounds like it’s the way to go – it even has a getting started guide and installation instructions! It looks like there’s a heck of a lot of documentation for something sounds like it should be simple, but hey, let’s dive in. (I know it sounds inconsistent to go from complaining about no documentation to complaining about too much – but I’m really going from complaining about no documentation to complaining about something being so complicated that it needs a lot of documentation. I’m very grateful to the SHFB team for documenting everything, even if I plan to read it on a Just-In-Time basis.)

A few notes from the requirements page:

  • It looks like I’ll need to install the HTML Help Workshop if I want CHM files; the Help 2 compiler should already be part of the VS2008 SDK which I’m sure is already installed. I have no idea where Help 3 fits into this :(
  • It looks like I need a DXROOT environment variable pointing at my Sandcastle "installation". I wonder what that means in my home-built version? I’ll assume it just means the Development directory containing ProductionTools and ProductionTransforms.
  • There’s a further set of patches available in the Sandcastle Styles project. Helpfully, this says it includes all the updates in the July 2009 source code, and can be applied to the binary installation from May 2008. It’s not clear, however, whether it can also be applied to a home-built version of Sandcastle. Given that I can get all the latest stuff in conjunction with an installed version, it sounds like it’s worth installing the binary release after all. (Done, and patches installed.)
  • It sounds like I need to install the H2 Viewer and H2Reg. (I suspect that H2Reg will be something we direct our users at rather than shipping and running ourselves; I don’t intend to have an MSI-style "installer" for Noda Time at the moment, although the recent CoApp announcement sounds interesting. It’s too early to worry about that for the moment though.)
  • We’re not documenting a web site project, so I’m not bothering with "Custom Web Code Providers". I’ve installed quite enough by this point, thank you very much. Oh, except I haven’t installed SHFB itself yet. I’d better do that now…

Step 3: creating a Help File Builder project

This feels like it could be reasonably straightforward, so long as I don’t try to do anything fancy. Let’s follow (roughly) the instructions. (I’m doing it straight to Noda Time rather than using the example project.)

Open the GUI, create a new project, add a documentation source of NodaTime.csproj… and hit Build Project. Wow, this takes quite a while – and this is a pretty beefy laptop. However, it seems to work! I have a CHM file which looks like it includes all the right stuff. Hoorah! It’s a pretty huge CHM file (just over 3MB) for a relatively small project, but never mind.

Let’s build it again, this time with all the output enabled (Help 1, Help 2, MSHelpViewer and Website).

Hmm… no MS Help 2 compiler found. Maybe I didn’t have the VS2008 SDK installed after all. After a bit of hunting, it’s here. Time to install it – and make sure it doesn’t mess up the Sandcastle installation, as the SHFB docs warned me about. Yikes – 109MB. Ah well.

Okay, so after the SDK installation, rebuild the help… which will take even longer of course, as it’s now building four different output formats. 3 minutes 18 seconds in the end… not too bad, but not something I’ll want to do after every build :)

Step 4: checking the results

  • Help 1 (CHM): looks fine, if old-fashioned :)
  • Help 2 (HxS): via H2Viewer, looks fine – I’m not sure whether I dare to integrate it with MSDN just yet though.
  • ASP.NET web site: works even in Chrome
  • Static HTML: causes Chrome to flicker, constantly reloading. Works fine in Firefox. Maybe I need to submit a bug report.

I’m not entirely sure which output option corresponds to which result here; in particular, is "Website" the static one or the ASP.NET one? What’s MSHelpViewer? It’s easy enough to find out of course – I’ll just experiment at a later date.

Step 5: building from the command line

I can’t decide whether this is crucial (as it should be part of a continuous build server) or irrelevant (as there are so many tools to install, I may never get the ability to run a CB server with everything installed). However, it would certainly be nice.

Having set SHFBROOT appropriately, running msbuild gives this error:

SHFB : error BE0067: Unable to obtain assembly name from project file ‘[…]’ using Configuration ‘Debug’, Platform ‘X64’

Using Debug is definitely correct, but X64 sounds wrong… I suspect I want AnyCPU instead. Fortunately, this can be set in the SHFB project file (it was previously just defaulting). Once that’s been fixed, the build works with one warning: BHT0001: Unable to get executing project: Unable to obtain internal reference. Supposedly this may indicate a problem in SHFB itself… I shall report it later on. It doesn’t seem to affect the ability to produce help though.

Conclusion

That wasn’t quite as painful as I’d feared. I’m nearly ready to check in the SHFB project file now – but I need to work out a few other things first, and probably create a specific "XML" configuration for the main project itself. I’m somewhat alarmed at the number of extra bits and pieces that I had to install though – and the lack of any mention of Help 3 is also a bit worrying.

I’ve just remembered one other option that I haven’t tried, too – MonoDoc. I may have another look at that at a later date, although the fact that it needs a GTK# help viewer isn’t ideal.

I still think the Open Source community for .NET has left a hole at the moment. It may be that SHFB + Sandcastle is as good as it gets, given the limitations of how much needs to be installed to build MS help files. I’d still like to see a better way of providing docs for web sites though… ideally one which doesn’t involve shipping hundreds of files around when only two are really required.

Custom value types are like buses

You wait years to write one… and then six of them come along at once.

(Cross-posted to the Noda Time blog and my coding blog as it’s relevant to both.)

When we started converting Joda Time to .NET, there was always going to be the possibility of using custom value types (structs) – an opportunity which isn’t available in Java. This has meant reducing the type hierarchy a fair amount, but that’s actually made things simpler. However, I didn’t realise quite how many we’d end up with – or how many would basically just wrap a long.

So far, we have 4 value types whose only field is a long. They are:

  • Instant: an instant on the theoretical timeline, which doesn’t know about days of the week, time zones etc. It has a single reference point – the Unix epoch – but that’s only so that we can represent it in a concrete fashion. The long represents the number of ticks since the Unix epoch.
  • LocalInstant: this is a tricky one to explain, and I’m still looking for the right way of doing so. The basic idea is that it represents a day and a time within that day, but without reference to a time zone or calendar system. So if I’m talking to someone in a different time zone who uses an Islamic calendar, we can agree on the idea of "3pm tomorrow" even if we have different ideas of what month that’s in and when it starts. A LocalInstant is effectively the instant at which that date/time would occur if you were considering it in UTC… but importantly it’s not a genuine instant, in that it doesn’t unambiguously represent a point in time.
  • Duration: a number of ticks, which can be added to an instant to get another instant. This is a pure number of ticks – again, it doesn’t really know anything about days, months etc (although you can find the duration for the number of ticks in a standard day – that’s not the same as adding one day to a date and time within a time zone though, due to daylight savings).
  • Offset: very much like a duration, but only used to represent the offset due to a time zone. This is possibly the most unusual of the value types in Noda, because of the operators using it. You can add an offset to an instant to get a local instant, or you can subtract an offset from a local instant to get an instant… but you can’t do those things the other way round.

The part about the odd nature of the operators using Offset really gets to the heart of what I like about Joda Time and what makes Noda Time even better. You see, Joda Time already has a lot of types for different concepts, where .NET only has DateTime/DateTimeOffset/TimeSpan – having these different types and limiting what you can do with them helps to lead developers into the pit of success; the type system helps you naturally get things right.
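As a toy sketch of that asymmetry – not the real Noda Time code, just the shape of the operators – Instant + Offset gives a LocalInstant, LocalInstant - Offset gives an Instant, and the compiler simply won’t let you combine them any other way:

// Illustrative only: each type wraps a long, as described above.
struct Offset
{
    public readonly long Ticks;
    public Offset(long ticks) { Ticks = ticks; }
}

struct LocalInstant
{
    public readonly long Ticks;
    public LocalInstant(long ticks) { Ticks = ticks; }

    // Local instant - offset => the instant at which that local time occurs.
    public static Instant operator -(LocalInstant local, Offset offset)
    {
        return new Instant(local.Ticks - offset.Ticks);
    }
}

struct Instant
{
    public readonly long Ticks;
    public Instant(long ticks) { Ticks = ticks; }

    // Instant + offset => the local date/time observed at that offset.
    public static LocalInstant operator +(Instant instant, Offset offset)
    {
        return new LocalInstant(instant.Ticks + offset.Ticks);
    }
}

// Instant + Instant, LocalInstant + Offset and so on simply don’t compile – which is
// exactly the kind of mistake the type system now catches at compile time.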

However, the Joda Time API uses long internally to represent all of these, presumably for the sake of performance: Java doesn’t have custom value types, so you’d have to create a whole new object every time you manipulated anything. This could be quite significant in some cases. Using the types above has made the code a lot simpler and more obviously correct – except for a couple of cases where the code I’ve been porting appears to do some very odd things, which I’ve only noticed are odd because of the type system. James Keesey, who’s been porting the time zone compiler, has had similar experiences: since the introduction of the offset type with its asymmetric operators, he’s found that he had a bug in some of his ported code – it caused a compile-time error as soon as he’d converted to using offsets.

When I first saw the C# spec, I was dubious about the value of user-defined value types and operator overloading. Indeed I still suspect that both features are overused… but when they’re used appropriately, they’re beautiful.

Noda Time is still a long way from being a useful API, but I’m extremely pleased with how it’s shaping up.

Noda Time gets its own blog

I’ve decided it’s probably not a good idea to make general Noda Time posts on my personal blog. I’ll still post anything that’s particularly interesting in a "general coding" kind of way here, even if I discover it in Noda Time, but I thought it would be good for the project to have a blog of its very own, which other team members can post to.

I still have plenty of things I want to blog about here. Next up is likely to be a request for help: I want someone to tell me why I should love the "dynamic" bit of dynamic languages. Stay tuned for more details :)

Noda Time is born

There was an amazing response to yesterday’s post – not only did readers come up with plenty of names, but lots of people volunteered to help. As a result, I’m feeling under a certain amount of pressure for this project to actually take shape.

The final name chosen is Noda Time. We now have a Google Code Project and a Google Group (/mailing list). Now we just need some code…

I figured it would be worth explaining a bit more about my vision for the project. Obviously I’m only one contributor, and I’m expecting everyone to add their own views, but this can act as a starting point.

I want this project to be more than just a way of getting better date and time handling on .NET. I want it to be a shining example of how to build, maintain and deploy an open source .NET library. As some of you know, I have a few other open source projects on the go, and they have different levels of polish. Some have downloadable binaries, some don’t. They all have just-about-enough-to-get-started documentation, but not nearly enough, really. They have widely varying levels of test coverage. Some are easier to build than others, depending on what platform you’re using.

In some ways, I’m expecting the code to be the easy part of Noda Time. After all, the implementation is there already – we’ll have plenty of interesting design decisions to make in order to marry the concepts of Joda Time with the conventions of .NET, but that shouldn’t be too hard. Here are the trickier things, which need discussion, investigation and so forth:

  • What platforms do we support? Here’s my personal suggested list:
    • .NET 4.0
    • .NET 3.5
    • .NET 2.0 SP1 (requiring the service pack for DateTimeOffset)
    • Mono (versions TBD)
    • Silverlight 2, 3 and 4
    • Compact Framework 2.0 and 3.5
  • What do we ship, and how do we handle different platforms? For example, can we somehow use Code Contracts to give developers a better experience on .NET 4.0 without making it really hard to build for other versions of .NET? Can we take advantage of the availability of TimeZoneInfo in .NET 3.5 and still build fairly easily for earlier versions? Do developers want debug or release binaries? Can we build against the client profile of .NET 3.5/4.0?
  • What should we use to build? I’ve previously used NAnt for the overall build process and MSBuild for the code building part. While this has worked quite well, I’m nervous of the dependency on NAnt-Contrib library for the <msbuild> task, and generally being dependent on a build project whose last release was a beta nearly two years ago. Are there better alternatives?
  • How should documentation be created and distributed?
    • Is Sandcastle the best way of building docs? How easy is it to get it running so that any developer can build the docs at any time? (I’ve previously tried a couple of times, and failed miserably.)
    • Would Monodoc be a better approach?
    • How should non-API documentation be handled? Is the wiki which comes with the Google Code project good enough? Do we need to somehow suck the wiki into an offline format for distribution with the binaries?
  • What do we need to do in order to work in low-trust environments, and how easily can we test that?
  • What do we do about signing? Ship with a "public" snk file which anyone can build with, but have a private version which the team uses to validate a "known good" release? Or just have the private key and use deferred signing?
  • While the library itself will support i18n for things like date/time formatting, do we need to apply it to "developer only" messages such as exceptions?
  • I’m used to testing with NUnit and Rhino.Mocks, but they’re not the last word in testing on .NET – what should we use, and why? What about coverage?
  • Do we need any dependencies (e.g. logging)? If so, how do we handle versioning of those dependencies? How are we affected by various licences?

These are all interesting topics, but they’re not really specific to Noda Time. Information about them is available all over the place, but that’s just the problem – it’s all over the place. I would like there to be some sort of documentation saying, "These are the decisions you need to think about, here are the options we chose for Noda Time, and this is why we did so." I don’t know what form that documentation will take yet, but I’m considering an ebook.

As you can tell, I’m aiming pretty high with this project – especially as I won’t even be using Google’s 20% time on it. However, there’s little urgency in it for me personally. I want to work out how to do things right rather than how to do them quickly. If it takes me a bit of time to document various decisions, and the code itself ships later, so be it… it’ll make the next project that much speedier.

I’m expecting a lot of discussion in the group, and no doubt some significant disagreements. I’m expecting to have to ask a bunch of questions on Stack Overflow, revealing just how ignorant I am on a lot of the topics above (and more). I think it’ll be worth it though. I think it’s worth setting a goal:

In one year, I want this to be a first-class project which is the natural choice for any developers wanting to do anything more than the simplest of date/time handling on .NET. In one year, I want to have a guide to developing open source class libraries on .NET which tells you everything you need to know other than how to write the code itself.

A year may seem like a long time, but I’m sure everyone who has expressed an interest in the project has significant other commitments – I know I do. Getting there in a year is going to be a stretch – but I’m expecting it to be a very enlightening journey.