New and improved JonSkeet.DemoUtil

It’s amazing how sometimes small changes can make you very happy.

This week I was looking at how DragonFruit does its entry point magic, and realized I had a great use case for the same kind of thing.

Some of my oldest code that’s still in regular use is ApplicationChooser – a simple tool for demos at conferences and user groups. Basically it allows you to write multiple classes with Main methods in a single project, and when you run it, it allows you to choose which one you actually want to run.

Until today, as well as installing the NuGet package, you had to create a Program.cs that called ApplicationChooser.Run directly, and then explicitly set that as the entry point. But no more! That can all be done via build-time targets, so now it’s very simple, and there’s no extraneous code to explain away. Here’s a quick walkthrough for anyone who would like to adopt it for their own demos.
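(For context, the manual Program.cs looked something like the sketch below. The namespace and the exact ApplicationChooser.Run signature here are assumptions for illustration, not copied from the package.)

// Hypothetical sketch of the old manual entry point; the real signature may differ.
using JonSkeet.DemoUtil;

class Program
{
    static void Main(string[] args) => ApplicationChooser.Run(args);
}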

Create a console project

It’s just a vanilla console app…

$ mkdir FunkyDemo
$ cd FunkyDemo
$ dotnet new console

Add the package, and remove the previous Program.cs

You don’t have to remove Program.cs, but you probably should. Or you could use that as an entry point if you really want.

$ dotnet add package JonSkeet.DemoUtil
$ rm Program.cs

Add demo code

For example, add two files, Demo1.cs and Demo2.cs

// Demo1.cs
using System;

namespace FunkyDemo
{
    class Demo1
    {
        static void Main() =>
            Console.WriteLine("Simple example without description");
    }
}

// Demo2.cs
using System;
using System.ComponentModel;
using System.Threading.Tasks;

namespace FunkyDemo
{
    // Optional description to display in the menu
    [Description("Second demo")]
    class Demo2
    {
        // Async entry points are supported,
        // as well as the original command line args
        static async Task Main(string[] args)
        {
            foreach (var arg in args)
            {
                Console.WriteLine(arg);
                await Task.Delay(500);
            }
        }
    }
}

Run!

$ dotnet run -- abc def ghi
0: Demo1
1: [Second demo] Demo2

Entry point to run (or hit return to quit)?

Conclusion

That’s all there is to it – it’s simple in scope, implementation and usage. Nothing earth-shattering, for sure – but if you give lots of demos with console applications, as I do, it makes life a lot simpler than having huge numbers of separate projects.

Reducing my international speaking

I’ve been immensely privileged to be invited to speak at various international developer conferences, and until now I’ve usually tried to accept the majority of those invitations. I’ve had a wonderful time, and made many dear friends – who I’ve often then caught up with at other events.

However, I’ve recently found that travelling has become increasingly disruptive for me as a human – mostly in terms of missing my family. Additionally, I’m finding it hard to justify taking so many flights when it comes to the environmental cost of flying.

I still intend to do some international speaking (assuming I’m still invited, of course) and probably more UK-based talks. Additionally, I’m very happy to work with any conferences who’d be interested in me speaking remotely via a live stream. I’m hoping that the future of developer conferences is a mixture of in-person talks (which absolutely still have their place) and remote talks which retain the element of interactivity with attendees. I’m more than happy to get up in the middle of the night to fit in with a schedule, etc.

I’ll be linking to this post if/when I decline invitations on these grounds, partly as a way of demonstrating “It’s not you, it’s me.” If you’re reading this post in that context, please understand that I wish you all the best for your conference, and I’m pretty confident that you’ll be able to find a much more interesting speaker than me anyway :)

V-Drum Explorer: MIDI interface

If this is the first blog post about V-Drum Explorer you’ve read, see the first post in this series for the background. In this post we’ll look at the MIDI interface I use in V-Drum Explorer to send and receive configuration information.

MIDI basics

(Apologies to anyone who really knows their stuff about MIDI – this section is the result of bits of reading and experience working with my drum kit. Please leave corrections in comments!)

MIDI is a protocol for communications between musical instruments, computers, synthesizers and the like. There are many different ways of connecting devices together, including dedicated MIDI cables, USB, TCP/IP and Bluetooth, but applications shouldn’t need to care much about this. They should only need to worry about the messages defined by the protocol.

MIDI is one-directional, in that APIs expose both input ports and output ports which are effectively independent even if they’re connected between the same pair of devices. Messages are sent on output ports and received on input ports.

Most electronic instruments (of all kinds, not just drums) support at least basic MIDI functionality. There are several simple messages with standard representations, including:

  • Channel Voice messages (such as “note on” and “note off”)
  • Channel Mode messages (such as “all sounds off”)
  • System Exclusive messages which are manufacturer-specific

The channel voice and channel mode messages apply to a specific MIDI channel. It’s possible for multiple devices to be daisy-chained together so that they can be controlled over a single port, but using different channels. Each of these messages has a fixed size.
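As an illustration of that fixed size, here’s a sketch (plain C#, standard MIDI rather than anything V-Drum Explorer specific) of building a three-byte “note on” message, with the channel packed into the low nibble of the status byte:

// A "note on" channel voice message is always three bytes.
static byte[] NoteOn(int channel, int note, int velocity) =>
    new byte[]
    {
        (byte) (0x90 | (channel & 0x0F)), // status byte: note on, channel 0-15
        (byte) (note & 0x7F),             // note number (7-bit)
        (byte) (velocity & 0x7F)          // velocity (7-bit); 0 conventionally means "note off"
    };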

Note: one irritating problem with the protocol is that I haven’t found a way of detecting which channel a particular device is on. This is why channel-specific operations in V-Drum Explorer (such as recording instrument sounds) need a user interface element. If there’s something I’ve missed here, I’d love to hear about it.

Only a few operations in V-Drum Explorer need to simulate or react to the channel messages. Most of the time we’re interested in the system exclusive messages which allow the configuration data to be fetched and set.

System Exclusive Messages

System Exclusive Messages – often abbreviated to SysEx – are manufacturer-specific messages for additional functionality. The size of the data isn’t fixed: a byte of 0xF0 indicates the start of the data, and a byte of 0xF7 indicates the end. Each byte within the data has to have the top bit cleared.

The first byte of data is conventionally the manufacturer ID as assigned by the MIDI Manufacturers Association, which allows a device to ignore messages it wouldn’t properly understand. For example, all Roland SysEx messages have a first byte of 0x41.
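As a rough sketch of that framing (this isn’t the actual V-Drum Explorer code), a received buffer can be validated like this:

// Check the SysEx framing: 0xF0 start, 0xF7 end, 7-bit data bytes in between,
// and Roland's manufacturer ID (0x41) as the first data byte.
static bool IsRolandSysEx(byte[] message)
{
    if (message.Length < 3 || message[0] != 0xF0 || message[message.Length - 1] != 0xF7)
    {
        return false;
    }
    for (int i = 1; i < message.Length - 1; i++)
    {
        // Every data byte must have the top bit clear.
        if ((message[i] & 0x80) != 0)
        {
            return false;
        }
    }
    return message[1] == 0x41;
}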

Protocol design note: MIDI is a relatively old protocol, dating back to 1983. I’d hope that these days we’d allow more than one byte for a manufacturer ID, and use a length-prefix instead of restricting every data byte to 7 useful bits. Oh well.

SysEx messages are divided into realtime and non-realtime messages. There are also universal SysEx messages which are non-manufacturer-specific, using “fake” manufacturer IDs of 0x7E (for non-realtime) and 0x7F (for realtime).

For the V-Drum Explorer, we’re interested in four SysEx messages:

  • We send Identity Request (universal, non-realtime)
  • We receive Identity Reply (universal, non-realtime)
  • We send Data Request (Roland, non-realtime)
  • We send and receive Data Set (Roland, non-realtime)

The identity request and identity reply messages are used to discover information about the connected devices. V-Drum Explorer uses them to check whether you have any supported devices connected (currently just the TD-17 and TD-50) and present the appropriate schema for the device.

The Data Request and Data Set messages are more interesting from the perspective of V-Drum Explorer: they’re the messages used to communicate the configuration data.

You can think of the configuration data as a bank of memory in the module. (We’ll look at the layout next time.) The Data Request message indicates “Give me X bytes starting at address Y.” The Data Set message indicates “Here are some bytes starting at address X.” Interestingly, Data Set is used both to respond to Data Request and to set new data in the module. When loading data from the device, I send a Data Request and wait for a corresponding Data Set. When copying data to the device, I just send a Data Set message. The device also sends Data Set messages when the configuration data is modified on the module itself (if a particular setting is turned on).

There are a few tricky aspects to this though, all effectively related to the protocol being two independent streams rather than being request/response in the way that HTTP is, for example:

  • Can the code rely on all the data from a single Data Request being returned in a single Data Set? (Answer: no in theory, but in practice if you ask for small enough chunks, it’s fine.)
  • How long should the code wait for after sending an Identity Request until it can assume it’s seen all the Identity Reply messages it’s going to?
  • How long should the code wait for after sending a Data Request until it can assume there’s something wrong if it doesn’t get a reply?
  • How long should the code wait between sending Data Set messages when copying data to the device?
  • What should we do with messages we didn’t initiate?

V-Drum Explorer MIDI code

In the V-Drum Explorer MIDI code, I’ve modeled Identity Request / Identity Reply using an event for “I’ve seen this device”, and I just leave half a second for all devices to respond. For Data Request / Data Set things are a little trickier. For better or worse, I expose an asynchronous API along the lines of

async Task<byte[]> RequestDataAsync(int address, int size)

I keep a collection of “address/size pairs I’m waiting to see data about”; when I receive a Data Set message, I check whether it’s one I’m expecting, and if so I complete the corresponding task. Messages I wasn’t expecting go into a buffer for diagnostic purposes.
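A sketch of that pattern looks something like the code below. (The DataSetMessage type and SendDataRequest helper are assumed names for illustration; this isn’t the real implementation.)

// Assumes using System.Collections.Concurrent and System.Threading.Tasks.
// Pending requests, keyed by the address/size pair we asked about.
private readonly ConcurrentDictionary<(int address, int size), TaskCompletionSource<byte[]>> pending =
    new ConcurrentDictionary<(int address, int size), TaskCompletionSource<byte[]>>();
private readonly ConcurrentQueue<DataSetMessage> unexpectedMessages = new ConcurrentQueue<DataSetMessage>();

public async Task<byte[]> RequestDataAsync(int address, int size)
{
    var tcs = new TaskCompletionSource<byte[]>(TaskCreationOptions.RunContinuationsAsynchronously);
    pending[(address, size)] = tcs;
    SendDataRequest(address, size); // Send the Roland Data Request SysEx message.
    return await tcs.Task;          // A real implementation would also apply a timeout here.
}

// Called for every Data Set message received on the input port.
private void HandleDataSet(DataSetMessage message)
{
    if (pending.TryRemove((message.Address, message.Data.Length), out var tcs))
    {
        tcs.TrySetResult(message.Data);
    }
    else
    {
        // Keep messages we didn't initiate, purely for diagnostics.
        unexpectedMessages.Enqueue(message);
    }
}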

The current implementation uses the Sanford.Multimedia.Midi library, which at least assembles multiple packets into SysEx messages for me. I’m trying to move to managed-midi though, as that’s more cross-platform. (In particular, it should make a Xamarin client possible, as well as running a CLI on a Raspberry Pi.) The managed-midi library is quite low-level, so I’ll need to perform the reassembly myself – but that shouldn’t be too hard.

One thing I’m really hoping is that I can reimplement the VDrumExplorer.Midi project with almost no changes to VDrumExplorer.Wpf.

Next time…

In the next post, I’ll go into more detail about the layout of the configuration data, and the annoyance of 7-bit addressing.

V-Drum Explorer: Introduction

This is the first in what I expect to be quite a long series of blog posts, meandering over a fairly wide range of topics as they crop up. There’s nothing particularly technical in this introductory post. It’s really just a starting point.

V-Drums

In July 2019, inspired/encouraged by a friend named Alice, I bought an electronic drum kit. It was a Roland TD-17KV; I now have the Roland TD-17KVX because I’m a sucker for upgrades.

Both of these are kits using the TD-17 as the module. That’s the part of an electronic drum kit that turns electrical signals from the triggers on the pads into sound and/or MIDI messages. The module is sometimes known as the “brain” of the kit. There’s quite a lot of terminology involved in electronic drum kits, and I suspect I’ve barely scratched the surface in the few months since I’ve started.

The TD-17 series is just one option within a suite of products called Roland V-Drums. The TD-17 is currently the most recent part of this suite. It’s a mid-tier product; currently the high-end kit is the TD-50 and the entry level is the TD-1. When I was choosing between the TD-1 and the TD-17 (with the TD-50 never being an option) I was basically drawn to the greater flexibility of the TD-17 module over the TD-1. I didn’t know how feasible it would be to configure it programmatically, but it at least felt like something I’d have fun investigating.

Configuring a TD-17

In order to understand why I was even considering coding for this, it’s worth having a look at what you can configure on the TD-17 module.

There are a few top-level things you rarely need to configure, such as which triggers you have attached to the module. Most of the configuration happens within a kit – and there are lots of kits. Now this is already slightly confusing because right at the start I talk about buying an “electronic drum kit”. The term is effectively overloaded, unfortunately, but most of the time in this blog series I’ll be using the term “kit” in the sense it’s used within the module.

The TD-17 supports 100 kits. There are 50 presets, and 50 user-defined kits. There’s no real difference between the presets and user-defined kits in terms of what you can do with them – you can think of it as “There are 100 kits available, but Roland got bored after creating 50 of them and left the other 50 in a default configuration.” A single kit is selected at a time, and this controls everything about how the drums sound.

At the top level of a kit, you can configure:

  • Its name and volume
  • The instruments (this is going to be another overloaded term)
  • Its MIDI configuration (which MIDI instruments are triggered)
  • Its ambience (simulations for being in a concert hall, or arena etc)
  • Its MultiFX (sound effects such as reverb)

There are 20 “instruments” to configure – one for the kick pedal, two for most other triggers (snare, toms, crash cymbals, hi-hat, and a general purpose aux), and three for the ride cymbal. There are two instruments for most triggers because you can configure the head and the edge/rim separately – as an extreme example, you could set the crash cymbal to sound like a gong when you hit the rim, and a snare drum when you hit the head. You probably wouldn’t want to, but you could.

For each of these instruments, you can set:

  • The left/right pan (primarily for playing with headphones, so everything sounds like it’s in the right place)
  • Equalizer settings (low/medium/high frequency response)
  • The instrument (this is the overloading…) – for example “13 inch maple tom”
  • The “sub-instrument” which is a second instrument layered under the first (e.g. the first preset kit has a beech kick with a sub-instrument of a deep shell kick) and how the two instruments are layered together
  • How much the ambience and sound effects affect this particular instrument

Each main/sub instrument then has its own extra tweaks in terms of tuning/size, muffling, and snare buzz – depending on the instrument.

Of course, you can just play the TD-17 with the preset kits, never going into any of this. But I saw possibilities… the user interface is designed very well considering the physical constraints, but I tend to think that complex configuration like this is more easily managed on a computer.

V-Drum Explorer

And so the idea of the V-Drum Explorer was born. Of course, it’s open source – it’s part of my demo code GitHub repository. There’s even documentation with screenshots so you can get the general idea, and a Windows installer downloadable from the GitHub releases page.

From the very start, I imagined a user interface with a tree view on the left and details on the right. It’s possible that by settling on that idea so early on, I’ve missed out on much better UI options.

This blog series is partly a diary of what I’ve done and the challenges I’ve faced, and partly a set of thoughts about how this affects professional development.

I expect to blog about:

  • The MIDI protocol used to communicate with the TD-17
  • The configuration data model
  • The challenges of 7-bit addressing
  • Enabling Hi-DPI for .NET Core 3 WinForms apps
  • Initial code vs schema considerations
  • The transition to a full schema
  • Displaying data more usefully: the logical tree
  • Working with real users: remote diagnostics
  • Immutability and cyclic references
  • Performance: WPF vs WinForms
  • Performance: field instances
  • Code signing
  • Windows installers
  • (Maybe) Xamarin and BLE MIDI
  • Moving back from schema JSON to code for ViewModels

That’s likely to take a very long time, of course. I hope it proves as interesting to read about as it has been to implement.

Options for .NET’s versioning issues

This post revisits the problem described in Versioning Limitations in .NET, based on reactions to that post and a Twitter discussion which occurred later.

Before getting onto the main topic of the post, I wanted to comment a little on that Twitter discussion. I personally found it frustrating at times, and let that frustration leak out into some of my responses. As Twitter debates go, this was relatively mild, but it was still not as constructive as it might have been, and I take my share of responsibility for that. Sorry, folks. I’m sure that everyone involved – both in that Twitter discussion and more generally in the .NET community – genuinely wants the best outcome here. I’ve attempted to frame this post with that thought at the top of mind, assuming that all opinions on the topic are held and expressed in good faith. As you’ll see, that doesn’t mean I have to agree with everyone, but it hopefully helps me respect arguments I disagree with. I’m happy to make corrections (probably with some sort of history) if I misrepresent things or miss out some crucial pros/cons. The goal of this post is to help the community weigh up options as pragmatically as possible.

Scope, terminology and running example

There are many aspects to versioning, of course. In the future I plan to blog about some interesting impacts of multi-targeting libraries, and the choices involved in writing one library to effectively augment another. But those are topics for another day.

The primary situation I want to explore in this post is the problem of breaking changes, particularly with respect to the diamond dependency problem. I’ve found it helpful to make things very, very concrete when it comes to versioning. So we’ll consider the following situation.

  • A team is building an application called Time Zone Magic. They’re using .NET Core 3.0, and everything they need to use targets .NET Standard 2.0 – so they have no problems there.
  • The team is completely in control of the application, and doesn’t need to worry about any versioning for the application itself. (Just to simplify things…)
  • The application depends on Noda Time, naturally, for all the marvellous ways in which Noda Time can help you with time zones.
  • The application also depends on DarkSkyCore1.

Now DarkSkyCore depends on NodaTime 2.4.7. But the Time Zone Magic application needs to depend on NodaTime 3.0.0 to take advantage of some of the newest functionality. (To clarify, NodaTime 3.0.0 hasn’t actually been released at the time of writing this blog post. This part is concrete but fictional, just like the application itself.) So, we have a diamond dependency problem. It’s entirely possible that DarkSkyCore depends on functionality that’s in NodaTime 2.4.7 but has been removed from 3.0.0. If that’s the case, with the current way .NET works (whether desktop or Core), an exception will occur at some point – exactly how that’s surfaced will probably vary based on a number of factors that I don’t explore in this post.

Currently, as far as I can tell, DarkSkyCore doesn’t refer to any NodaTime types in its public API. We’ll consider what difference this makes in the various options under consideration. I’ll mention a term that I learned during the Twitter conversation: type exchange. I haven’t seen a formal definition of this, but I’m assuming it means one library referring to a type from another library within its public API, e.g. as a parameter or return type, or even as a base class or implemented interface.

The rest of this post consists of some options for what could happen, instead of the current situation. These are just the options I’ve considered; I certainly don’t want to give the impression it’s exhaustive or that we (as a community) should stop trying to think of other options too.

1 I’ve never used this package, and have no opinion on it. It’s just a plausible package to use that depends on NodaTime.

Option 1: Decide to do nothing

It’s always worth including the status quo as a possible option. We can acknowledge that the current situation has problems (the errors thrown at hard-to-predict places) but we may consider that every alternative is worse, either in terms of end result or cost to implement.

It’s worth bearing in mind that .NET has been around for nearly 20 years, and while this is certainly a known annoyance, I seem to care about it more than most developers I encounter – suggesting that this problem doesn’t make all development on .NET completely infeasible.

I do believe it will hinder the community’s growth in the future though, particularly if (as I hope) the Open Source ecosystem flourishes more and more. I believe one of the reasons this hasn’t bitten the platform very hard so far is that the framework provides so much, and ASP .NET (including Core) dominates on the web framework side of things. In the future, if there are more small, “do one thing well” packages that are popular, the chances of incompatibilities will increase.

Option 2: Never make breaking changes

If we never make breaking changes, there can’t be any incompatibilities. We keep the major version at 1, and it doesn’t matter which minor version anyone depends on.

This has been the approach of the BCL team, very largely (aside from “keeping the major version at 1”) – and is probably appropriate for absolutely “system level” packages. Quite what counts as “system level” is an open question: Noda Time is relatively low level, and attempts to act as a replacement for system types, so does that mean I should never make any breaking changes either?

I could potentially commit to not making any future breaking changes – but deciding to do that right from day 1 would seriously stifle innovation. Releasing version 1.0 is scary enough as it is, without the added pressure of “you own every API mistake in here, forever.” There’s a huge cost involved in the kind of painstaking review of every API element that the BCL team goes through. That’s a cost most open source authors probably can’t bear, and it’s not going to be a good investment of time for 99.9% of libraries… but for the 0.1% that make it and become Json.NET-like in terms of ubiquity, it would be great.

Maybe open source projects should really aim for 2.infinity: version 1.x is to build momentum, and 2.x is forever. Even that leaves me pretty uncomfortable, to be honest.

There’s another wrinkle in this in terms of versioning that may be relevant: platform targeting. One of the reasons I’ve taken a major version bump for NodaTime 3.0 is that I’m dropping support for older versions of .NET. As of NodaTime 3.0, I’m just targeting .NET Standard 2.0. Now that’s a breaking change in that it stops anyone using a platform that doesn’t support .NET Standard 2.0 from taking a dependency on NodaTime 3.0, but it doesn’t have the same compatibility issues as other breaking changes. If the only thing I did for NodaTime 3.0 was to change the target framework, the diamond dependency problem would be a non-issue, I believe: any code that could run 3.0 would be compatible with code expecting 2.x.

Now in Noda Time 3.0 I also removed binary serialization, and I’d be very reluctant not to do that. Should the legacy of binary serialization haunt a library forever? Is there actually some acceptable deprecation period for things like this? I’m not sure.

Without breaking changes, type exchange should always be fine, barring code that relies on bugs in older versions.

Option 3: Put the major version in the package name

The current versioning guidance from Microsoft suggests following SemVer 2.0, but in the breaking changes guidance it states:

CONSIDER publishing a major rewrite of a library as a new NuGet package.

Now, it’s not clear to me what’s considered a “major rewrite”. I implemented a major rewrite of a lot of Noda Time functionality between 1.2 and 1.3, without breaking the API. For 2.0 there was a more significant rewrite, with some breaking changes when we moved to nanosecond precision. It’s worth at least considering the implications of interpreting that as “consider publishing a breaking change as a new NuGet package”. This is effectively putting the version in the package name, e.g. NodaTime1, NodaTime2 etc.

At this point, on a per-package basis, we have no breaking changes, and we’d keep the major version at 1 forever, aside from potentially dropping support for older target platforms, as described in option 2. The differences are:

  • The package names become pretty ugly, in my opinion – something that I’d argue is inherently part of the version number has leaked elsewhere. It’s effectively an admission that .NET and SemVer don’t play nicely together.
  • We don’t see breaking changes in the app example above, because DarkSkyCore would depend on NodaTime2 and the Time Zone Magic application would depend directly on NodaTime3.
  • Global state becomes potentially more problematic: any singleton in both NodaTime2 and NodaTime3 (such as DateTimeZoneProviders.Tzdb for NodaTime) would be a “singleton per package” but not a “global singleton”. With the example of DateTimeZoneProviders.Tzdb, that means different parts of Time Zone Magic could give different results for the same time zone ID, based on whether the data was retrieved via NodaTime2 or NodaTime3. Ouch.
  • Type exchange doesn’t work out of the box: if DarkSkyCore exposed a NodaTime2 type in its API, the Time Zone Magic code wouldn’t be able to take that result and pass it into NodaTime3 code. On the other hand, it would be feasible to create another package, NodaTime2To3 which depended on both NodaTime2 and NodaTime3 and provided conversions where feasible.
  • Having largely-the-same code twice in memory could have performance implications – twice as much JITting etc. This probably isn’t a huge deal in most scenarios, but could be painful in some cases.

No CLR changes are required for this – it’s an option that anyone can adopt right now.

One point that’s interesting to note (well, I think so, anyway!) is that in the Google Cloud Client Libraries we already have a version number in the package name: it’s the version number of the network API that the client library targets. For example, Google.Cloud.Speech.V1 targets the “Speech V1” API. This means there can be a “Speech V2” API with a different NuGet package, and the two packages can be versioned entirely independently. (And you can use both together.) That feels appropriate to me, because it’s part of “the purpose of the package” – whereas the version number of the package itself doesn’t feel right being in the package name.

Option 4: Major version isolation in the CLR

This option is most simply described as “implicit option 3, handled by tooling and the CLR”. (If you haven’t read option 3 yet, please do so now.) Imagine we kept the package name as just NodaTime, but all the tooling involved (MSBuild, NuGet etc) treated “NodaTime v2.x” and “NodaTime v3.x” as independent packages. All the benefits and drawbacks of option 3 would still apply, except the drawback of the version number leaking into the package name.

It’s possible that no CLR changes would be required for this – I don’t know. One of the interesting aspects on the Twitter thread was that AssemblyLoadContext could be used in .NET Core 3 for some of what I’d been describing, but that there were performance implications. Microsoft engineers also reported that what I’d been proposing before would be a huge amount of work and complexity. I have no reason to doubt their estimation here.

My hunch is that if 90% of this could be done in tooling, we should be able to achieve a lot without execution-time penalties. Maybe we’d need to do something like using the major version number as a suffix on the assembly filename, so that NodaTime2.dll and NodaTime3.dll could live side-by-side in the same directory. I could live with that – although I readily acknowledge that it’s a hugely disruptive change. Whatever the implementation, the lack of type exchange would be very disruptive, to the extent that maybe this should be an opt-in (on the part of the package owner) mechanism. “I want more freedom for major version coexistence, at the expense of type exchange.”

Another aspect of feedback in the Twitter thread was that the CLR has supported side-by-side assembly loading for a very long time (forever?) but that customers didn’t use it in practice. Again, I have no reason to dispute the data – but I would say that it’s not evidence that it’s a bad feature. Even great features need to be exposed well before they’ll be used… look at generic variance in the CLR, which was already present in .NET 2.0, but was effectively unused until languages (e.g. C# 4) and the framework (e.g. interfaces such as IEnumerable<T>) supported it too.

It took a long time to get from “download a zip file, copy the DLLs to a lib directory, and add a reference to that DLL” to “add a reference to a versioned NuGet package which might require its own NuGet dependencies”. I believe many aspects of the versioning story aren’t really exposed in that early xcopy-dependency approach, and so maybe we didn’t take advantage of the CLR facilities nearly as early as we should have done.

If you hadn’t already guessed, this option is the one I’d like to pursue with the most energy. I want to acknowledge that it’s easy for me to write that in a blog post, with none of the cost of fully designing, implementing and supporting such a scheme. Even the exploratory work to determine the full pros and cons, estimate implementation cost etc would be very significant. I’d love the community to help out with this work, while realizing that Microsoft has the most experience and data in this arena.

Option 5: Better error detection

When laying out the example, I noted that for the purposes of DarkSkyCore, NodaTime 2.4.7 and NodaTime 3.0 may be entirely compatible. DarkSkyCore may not need any of the members that have been removed in 3.0. More subtly, even if there are areas of incompatibility, the parts of DarkSkyCore that are accessed by the Time Zone Magic application may not trigger those incompatibilities.

One relatively simple (I believe) first step would be to have a way of determining the first kind of “compatibility despite a major version bump”. I expect that with Mono.Cecil or similar packages, it should be feasible to:

  • List every public member (class, struct, interface, method, property etc) present in NodaTime 3.0, by analyzing NodaTime.dll
  • List every public member from NodaTime 2.4.7 used within DarkSkyCore, by analyzing DarkSkyCore.dll
  • Check whether there’s anything in the second list that’s not in the first. If there isn’t, DarkSkyCore is probably compatible with NodaTime 3.0.0, and Time Zone Magic will be okay.

This ignores reflection of course, along with breaking behavioral changes, but it would at least give a good first indicator. Note that if we’re primarily interested in binary compatibility rather than source compatibility, there are lots of things we can ignore, such as parameter names.
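A rough sketch of that first step, using Mono.Cecil, is below. The file paths are illustrative, and it only looks at method calls (property and event accessors are methods too, but fields, generic instantiations and so on would need more care):

using System;
using System.Linq;
using Mono.Cecil;

class CompatibilityCheck
{
    static void Main()
    {
        // Every externally visible method NodaTime 3.0.0 actually provides.
        var provided = AssemblyDefinition.ReadAssembly("NodaTime-3.0.0.dll")
            .MainModule.GetTypes()
            .SelectMany(t => t.Methods)
            .Where(m => m.IsPublic || m.IsFamily)
            .Select(m => m.FullName)
            .ToHashSet();

        // Every NodaTime method DarkSkyCore actually calls.
        var required = AssemblyDefinition.ReadAssembly("DarkSkyCore.dll")
            .MainModule.GetTypes()
            .SelectMany(t => t.Methods)
            .Where(m => m.HasBody)
            .SelectMany(m => m.Body.Instructions)
            .Select(i => i.Operand as MethodReference)
            .Where(r => r != null && r.DeclaringType.Scope.Name.StartsWith("NodaTime"))
            .Select(r => r.FullName)
            .ToHashSet();

        var missing = required.Except(provided).ToList();
        Console.WriteLine(missing.Count == 0
            ? "DarkSkyCore is probably compatible with NodaTime 3.0.0"
            : "Potentially missing members:\n" + string.Join("\n", missing));
    }
}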

It’s very possible that this tooling already exists, and needs more publicity. Please let me know in comments if so, and I’ll edit a link in here. If it doesn’t already exist, I’ll prototype it some time soon.

If we had such a tool, and it could be made to work reliably (if conservatively), do we want to put that into our normal build procedure? What would configuration look like?

I’m a firm believer that we need a lot more tooling around versioning in general. I recently added a version compatibility detector written by a colleague into our CI scripts, and it’s been wonderful. That’s a relatively “home-grown” project (it lives in the Google Cloud client libraries repository) but something similar could certainly become a first class citizen in the .NET ecosystem.

In my previous blog post, I mentioned the idea of “private dependencies”, and I’d still like to see tooling around this, too. It doesn’t need any CLR or even NuGet support to be useful. If the DarkSkyCore authors could say “I want to depend on NodaTime, but I want to be warned if I ever expose any NodaTime types in my public API” I think that would be tremendously useful as a starting point. Again, it shouldn’t be hard to at least prototype.
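A crude prototype of that warning doesn’t even need Roslyn: a reflection-based check over the compiled assembly gets much of the way there. In this sketch the file name and dependency name are illustrative, and only member signatures are inspected (base types and generic arguments are ignored):

using System;
using System.Linq;
using System.Reflection;

class PrivateDependencyCheck
{
    static void Main()
    {
        var lib = Assembly.LoadFrom("DarkSkyCore.dll");
        const string privateDependency = "NodaTime";

        // Any public member whose signature mentions a type from the "private" dependency.
        var offenders =
            from type in lib.GetExportedTypes()
            from member in type.GetMembers(BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static)
            from exposed in SignatureTypes(member)
            where exposed.Assembly.GetName().Name == privateDependency
            select $"{type.FullName}.{member.Name} exposes {exposed.FullName}";

        foreach (var offender in offenders.Distinct())
        {
            Console.WriteLine(offender);
        }
    }

    // Types visible in a member's signature: parameters, return type, field/property types.
    static Type[] SignatureTypes(MemberInfo member) => member switch
    {
        MethodInfo m => m.GetParameters().Select(p => p.ParameterType).Append(m.ReturnType).ToArray(),
        ConstructorInfo c => c.GetParameters().Select(p => p.ParameterType).ToArray(),
        PropertyInfo p => new[] { p.PropertyType },
        FieldInfo f => new[] { f.FieldType },
        _ => Array.Empty<Type>()
    };
}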

Conclusion

As I mentioned at the start, corrections and alternative viewpoints are very welcome in comments, and I’ll assume (unless you say otherwise) that you’re happy for me to edit them into the main post in some form or other (depending on the feedback).

I want to encourage a vigorous but positive discussion about versioning in .NET. Currently I feel slightly impotent in terms of not knowing how to proceed beyond blogging and engaging on Twitter, although I’m hopeful that the .NET Foundation can have a significant role in helping with this. Suggestions for next steps are very welcome as well as technical suggestions.

Why I don’t start versions at 0.x any more

(I’m writing this post primarily so I can link to it in an internal document on Monday. There’s nothing sensitive or confidential here, so I might as well get it down in a blog post.)

SemVer is pretty clear about pre-releases. Any version with a major version of 0 is considered “initial development”, and anything can change at any time. Pre-releases – versions which have a hyphen after the regular version number – are also considered unstable.

In any project, I used to use 0.x to start with and then progress to 1.0.0-alpha01 or similar1 at some point. I’ve stopped doing this now.

For any project I start now, the first release will be 1.0.0-alpha01 or 1.0.0-beta01. The reason? Consistency. With this scheme, never releasing anything starting with “0.”, there’s a very consistent story about what the pre-releases for any given version are: they’re that version, with a hyphen after it. So for example:

  • The pre-releases for 1.0.0 are 1.0.0-alpha01, 1.0.0-alpha02, 1.0.0-beta01 etc
  • The pre-releases for 1.1.0 are 1.1.0-alpha01, 1.1.0-alpha02, 1.1.0-beta01 etc
  • The pre-releases for 2.0.0 are 2.0.0-alpha01, 2.0.0-alpha02, 2.0.0-beta01 etc

All very consistent. Whereas if you use a major version of 0 as well, just version 1.0.0 is treated specially. It gets pre-releases of 0.1, 0.2, 1.0.0-alpha01, 1.0.0-alpha02, 1.0.0-beta01 etc. I’m fine with things being inconsistent when there’s a good reason for it, but I don’t see any benefit here.

While you might argue the case for a difference between “initial development” and “first alpha release” I suspect that’s almost never really useful. It’s hard enough working out exactly when to move from alpha to beta (and documenting the reasons for that decision), without having a “pre-alpha” stage to consider.

This isn’t something I feel strongly enough to put effort into persuading the world – I’m not on an anti-0.x crusade – but if this post happens to have that effect, I’m not going to complain :)


1 SemVer would actually suggest using 1.0.0-alpha.1 instead of 1.0.0-alpha01. Dot-separated identifiers are compared for precedence, and identifiers which are only numeric are compared numerically. So 1.0.0-alpha.11 comes after 1.0.0-alpha.2, which is good. However, using 1.0.0-alpha02 and 1.0.0-alpha11 gives the same effect, without having to worry about anything that uses lexicographic ordering. There’s still a problem with lexicographic ordering when you reach version 1.11.0 versus version 1.2.0, of course. My point is that this post documents what I currently do, but you may well wish to have a different flavour of prerelease.
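(If you want to sanity-check those precedence rules, the NuGet.Versioning package implements SemVer 2.0 comparison; here’s a quick sketch.)

using System;
using NuGet.Versioning;

class Program
{
    static void Main()
    {
        // Dot-separated numeric identifiers compare numerically: alpha.2 < alpha.11.
        Compare("1.0.0-alpha.2", "1.0.0-alpha.11");
        // Zero-padded identifiers get the same effect even under string comparison.
        Compare("1.0.0-alpha02", "1.0.0-alpha11");
        // Any pre-release sorts before the corresponding release.
        Compare("1.0.0-beta01", "1.0.0");
    }

    static void Compare(string left, string right) =>
        Console.WriteLine($"{left} < {right}: {NuGetVersion.Parse(left).CompareTo(NuGetVersion.Parse(right)) < 0}");
}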

Using “git bash” from AppVeyor

Update: I don’t know whether it was partially due to this blog post or not, but AppVeyor has fixed things so that you don’t (currently, 20th October 2019) need to use the fix in this post. You may want to include it anyway, for the sake of future-proofing.


TL;DR: If your AppVeyor build starts breaking because it’s started using WSL bash, change the path in your YAML file – see the end of the post for an example.

For years now, I’ve used bash scripts for all kinds of automation in Windows projects. The version of bash I use is the one that comes with Git for Windows – I believe its origins include Cygwin, MSYS2, and MinGW-w64. (I don’t know enough about the differences between those projects or which builds on which to say more. Fortunately, I don’t need to.) This version of bash is installed by default on AppVeyor, the CI system I typically use for Windows builds, so I don’t need to do anything else.

Importantly, I don’t want to use Windows Subsystem for Linux (WSL) on Windows builds. The point of doing the build is to use the Windows tool chains. I use Travis for doing Linux builds.

On October 11th 2019, my Noda Time AppVeyor build failed with this error:

build/appveyor.sh: line 11: dotnet: command not found

It turns out this is because AppVeyor has started shipping WSL with its Visual Studio 2019 images. The bash from WSL is earlier in the path than the version of bash from git, so that one is used, and everything starts failing.

It took a little while to diagnose this, but the fix is pretty easy – you just need to put git bash earlier in your path. I chose to do this in the “install” part of appveyor.yml:

install:
  # Make sure we get the bash that comes with git, not WSL bash
  - ps: $env:Path = "C:\Program Files\Git\bin;$env:Path"

Using just that change, the build started working again. Hooray!

Versioning limitations in .NET

This is a blog post I’ve intended to write for a very long time. (Other blog posts in that category include a recipe for tiramisu ice cream, and “knights and allies”.) It’s one of those things that’s grown in my mind over time, becoming harder and harder to start. However, there have been three recent incidents that have brought it back into focus:

TL;DR: Versioning is inherently hard, but the way that .NET infrastructure is set up makes it harder than it needs to be, I suspect.

The sample code for this blog post is available on GitHub.

Refresher: SemVer

NuGet is the de facto standard for distribution of packages now, and it supports semantic versioning, also known as SemVer for short. SemVer version strings (ignoring pre-release versions) are of the form major.minor.patch.

The rules of SemVer sound straightforward from the perspective of a package producer:

  • If you make a breaking change, you need to bump the major version
  • If you make backward compatible additions, you need to bump the minor version
  • If you make backward and forward compatible changes (basically internal implementation changes or documentation changes) you bump the patch version

It also sounds straightforward from the perspective of a package consumer, considering moving from one version to another of a package:

  • If you move to a different major version, your existing code may not work (because everything can change between major versions)
  • If you move to a later minor version within the same major version, your code should still work
  • If you move to an earlier minor version within the same major version, your existing code may not work (because you may be using something that was introduced in the latest minor version)
  • If you move to a later or earlier patch version within the same major/minor version, your code should still work

Things aren’t quite as clear as they sound though. What counts as a breaking change? What kind of bug fix can go into just a patch version? If a change can be detected, it can break someone, in theory at least.

The .NET Core team has a set of rules about what’s considered breaking or not. That set of rules may not be appropriate for every project. I’d love to see:

  • Tooling to tell you what kind of changes you’ve made between two commits
  • A standard format for rules so that the tool from the first bullet can then suggest what your next version number should be; your project can then advertise that it’s following those rules
  • A standard format to record the kinds of changes made between versions
  • Tooling to check for “probable compatibility” of the new version of a library you’re consuming, given your codebase and the record of changes

With all that in place, we would all hopefully be able to follow SemVer reliably.

Importantly, this makes the version number a purely technical decision, not a marketing one. If the current version of your package is (say) 2.3.0, and you add a bunch of features in a backward-compatible way, you should release the new version as 2.4.0, even if it’s a “major” version in terms of the work you’ve put in. Use whatever other means you have to communicate marketing messages: keep the version number technical.

Even with packages that follow SemVer predictably and clearly, that’s not enough for peace and harmony in the .NET ecosystem, unfortunately.

The diamond dependency problem

The diamond dependency problem is not new to .NET, and most of the time we manage to ignore it – but it’s still real, and is likely to become more of an issue over time.

The canonical example of a diamond dependency is where an application depends on two libraries, each of which depends on a common third library, like this:

[Diagram: common diamond dependency]

(I’m using NodaTime as an example so I can refer to specific versions in a moment.)

It doesn’t actually need to be this complicated – we don’t need Lib2 here. All we need is two dependencies on the same library, and one of those can be from the application:

[Diagram: simplified diamond dependency]

Multiple dependencies on the same library are fine, so long as they depend on compatible versions. For example, from our discussion of SemVer above, it should be fine for Lib1 to depend on NodaTime 1.3.0, and App to depend on NodaTime 1.2.0. We expect the tooling to resolve all the dependencies and determine that 1.3.0 is the version to use, and the App code should be fine with that – after all, 1.3.0 is meant to be backward-compatible with 1.2.0. The same is true the other way round, if App depends on a later version than Lib1, so long as they’re using the same major version.

(Note: there are potential problems even within a minor version number – if App depends on 1.3.0 and Lib1 depends on 1.3.1 which contains a bug fix, but App has a workaround for the bug which then fails under 1.3.1 when the bug is no longer present. Things like that can definitely happen, but I’ll ignore that kind of problem for the rest of this post, and assume that everything conforms to idealized SemVer.)

Diamond dependencies become a problem under SemVer when the dependencies are two different major versions of the same library. To give a concrete example from the NodaTime package, consider the IClock interface. The 1.4.x version contains a single property, Now. The 2.0.x version has the same functionality, but as a method, GetCurrentInstant(). (This was basically a design failing on my part in v1 – I followed the BCL example of DateTime.Now without thinking clearly enough about whether it should have been a property.)

Now suppose App is built with the .NET Core SDK, and depends on NodaTime 2.0.0, and Lib1 depends on NodaTime 1.3.1 – and let’s imagine a world where that was the only breaking change in NodaTime 2.x. (It wasn’t.) When we build the application, we’d expect 2.0 to be used at execution time. If Lib1 never calls IClock.Now, all is well. Under .NET Core tooling, assembly binding redirects are handled automatically so when Lib1 “requests” NodaTime 1.3.1, it gets NodaTime 2.0.0. (The precise way in which this is done depends on the runtime executing the application. In .NET Core, there’s an App.deps.json file; in desktop .NET it’s App.exe.config. Fortunately this doesn’t matter much at the level of this blog post. It may well make a big difference to what’s viable in the future though.)

If Lib1 does call IClock.Now, the runtime will throw a MissingMethodException. Ouch. (Sample code.)
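To make the failure mode concrete (the linked sample code shows the real thing; this is just a hedged sketch with an invented helper class), code like this in Lib1 compiles happily against 1.3.1 but fails when 2.0.0 is what’s actually loaded:

// Inside Lib1, compiled against NodaTime 1.3.1. Lib1TimeHelper is invented for illustration.
public static class Lib1TimeHelper
{
    // Fine at compile time: IClock.Now exists in 1.x.
    // At execution time against NodaTime 2.0.0, get_Now no longer exists,
    // so this call fails with MissingMethodException.
    public static NodaTime.Instant CurrentTime(NodaTime.IClock clock) => clock.Now;
}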

The upshot is that if the transitive set of “package + version” tuples for your entire application contains more than one major version for the same package, it’s entirely possible that you’ll get an exception at execution time such as MissingMethodException, MissingFieldException, TypeNotFoundException or similar.

If that doesn’t sound too likely, please consider that the Newtonsoft.Json package (Json .NET) has 12 major versions as I’m writing this blog post. I suspect that James Newton-King has kept the breaking changes to an absolute minimum, but even so, it’s pretty scary.

Non-proposals

I’d like to propose some enhancements to tooling that might help to address the issue. Before we look at what I am suggesting, I’d like to mention a few options that I’m not suggesting.

Ignore the problem

I’m surprised that so few people seem as worried about versioning as I am. I’ve presented talks on versioning a couple of times, but I don’t remember seeing anyone else do so – and certainly not in a .NET-specific deep-dive way. (My talk isn’t that, either.) It’s possible that there are lots of people who are worried, and they’re just being quiet about it.

This blog post is just part of me trying to agitate the community – including but not limited to Microsoft – into taking this problem seriously. If it turns out that there are already smart people working on this, that’s great. It’s also possible that we can live on the edge of versioning anarchy forever and it will always be a potential nightmare, but only cause a small enough number of failures that we decide we can live with it. That feels like a risk we should at least take consciously though.

Build at head, globally

In 2017, Titus Winters presented C++ as a live at head language at CppCon. It’s a great talk; go watch it. (Yes, it’s an hour and a half long. It’s still worth it. It also states a bunch of what I’ve said above in a slightly different way, so it may be helpful in that sense.) The idea is for everyone to build their application based on full source code, and provide tooling to automatically update consumer code based on library changes.

To go back to the Noda Time IClock example, if I build all the code for my application locally (App, Lib1 and NodaTime) then when NodaTime changes from the IClock.Now property to IClock.GetCurrentInstant(), the code in Lib1 that uses IClock.Now can automatically be changed to use IClock.GetCurrentInstant(), and everyone is happy with the same version. The Abseil project is a library (or collection of libraries) for C++ that embraces this concept.

It’s possible that this could eventually be a good solution for .NET. I don’t know of any technical aspects that mean it could work for C++ but not for .NET. However, it’s so far from our current position that I don’t believe it’s a practical choice at the moment, and I think it makes sense to try this experiment in one language first for a few years, then let other languages consider whether it makes sense for them.

I want to make it very clear that I’m not disagreeing with anything Titus said. He’s a very smart guy, and I explicitly do agree with almost everything I’ve heard him say. If I ever decide that I disagree with some aspect and want to make a public debate about it, I’ll be a lot more specific. Vague arguments are irritating for everyone. But the .NET ecosystem does depend on binary distribution of packages at the moment, and that’s an environment Titus deliberately doesn’t try to address. If someone wants to think about all the practical implications of all the world’s .NET consumers living at head in a source-driven (rather than binary-driven) world, I’d be interested in reading the results of that thinking. It’s certainly more feasible now than it was before .NET Core. But I’m not going there right now.

Never make breaking changes in any library

If we never make any changes that will break anyone, none of this is a problem.

I gave the example of Newtonsoft.Json earlier, and that it’s on major version 12. My guess is that that means there really have been 11 sets of breaking changes, but that they’re sufficiently unlikely to cause real failure that we’ve survived.

In the NodaTime package, I know I have made real breaking changes – it’s currently at version 2.4.x, and I’m planning on a 3.0 release some time after C# 8 comes out. I’ve made (or I’m considering) breaking changes in at least three different ways:

  • Adding members to public interfaces. If you implement those interfaces yourself (which is relatively unlikely) your code will be broken. On the other hand, everyone who wants the functionality I’ve added gets to use it in a clean way.
  • Removing functionality which is either no longer desirable (binary serialization) or shouldn’t have been present to start with. If you still want that functionality, I can only recommend that you stay on old versions.
  • Refactoring existing functionality, e.g. the IClock.Now => IClock.GetCurrentInstant() change, or fixing a typo in a method name. It’s annoying for existing consumers, but better for future consumers.

I want to be able to make all of these changes. They’re all good things in the long run, I believe.

So, those are options I don’t want to take. Let’s look at a few that I think we should pursue.

Proposals

Firstly, well done and thank you for making it this far. Before any editing, we’re about 2000 words into the post at this point. A smarter person might have got this far quicker without any loss of important information, but I hope the background has been useful.

Prerequisite: multi-version support

My proposals require that the runtime support loading multiple assemblies with the same name at the same time. Obviously I want to support .NET Core, so this mustn’t require the use of multiple AppDomains. As far as I’m aware, this is already the case, and I have a small demo of this, running with both net471 and netcoreapp2.0 targets:

// Required usings for this snippet: System, System.IO, System.Reflection.
// Call SystemClock.Instance.Now in NodaTime 1.3.1
string path131 = Path.GetFullPath("NodaTime-1.3.1.dll");
Assembly nodaTime131 = Assembly.LoadFile(path131);
dynamic clock131 = nodaTime131
    .GetType("NodaTime.SystemClock")
    // Instance is a field in 1.x
    .GetField("Instance")
    .GetValue(null);
Console.WriteLine(clock131.Now);

// Call SystemClock.Instance.GetCurrentInstant() in NodaTime 2.0.0
string path200 = Path.GetFullPath("NodaTime-2.0.0.dll");
Assembly nodaTime200 = Assembly.LoadFile(path200);
dynamic clock200 = nodaTime200
    .GetType("NodaTime.SystemClock")
    // Instance is a property in 2.x
    .GetProperty("Instance")
    .GetValue(null);
Console.WriteLine(clock200.GetCurrentInstant());

I’ve used dynamic typing here to avoid having to call the Now property or GetCurrentInstant() method using hand-written reflection, but we have to obtain the clock with reflection as it’s accessed via a static member. This is in a project that doesn’t depend on Noda Time at all in a compile-time sense. It’s possible that introducing a compile-time dependency could lead to some interesting problems, but I suspect those are fixable with the rest of the work below.

On brief inspection, it looks like it’s also possible to load two independent copies of the same version of the same assembly, so long as they’re stored in different files. That may be important later on, as we’ll see.

Proposal: execute with the expected major version

The first part of my proposal underlies all the rest. We should ensure that each library ends up executing against a dependency version that has the same major version it requested. If Lib1 depends on NodaTime 1.3.1, tooling should make sure it always gets >= 1.3.1 and < 2.0.0. I’d prefer that to be the interpretation of a plain dependency of “1.3.1” rather than “>= 1.3.1” (which appears to be the default at the moment), but I don’t mind too much if I have to be explicit. The main point is that when different dependencies require different major versions, the result needs to be multiple assemblies present at execution time, rather than either a build error or the approach of “let’s just hope that Lib1 doesn’t use anything removed in 2.0”. (Of course, Lib1 should be able to declare that it is compatible with both NodaTime 1.x and NodaTime 2.x. It would be good to make that easy to validate, too.)

If the rest of the application already depends on NodaTime 1.4.0 (for example) then it should be fine to stick to the simple situation of loading a single copy of the NodaTime assembly. But if the rest of the application is using 2.0.0 but Lib1 depends on 1.3.1, we should make that work by loading both major versions 1 and 2.

This proposal then leads to other problems in terms of how libraries communicate with each other; the remaining proposals attempt to address that.

Proposal: private dependencies

When describing the diamond dependency problem, there’s one aspect I didn’t go into. Sometimes a library will take a dependency as a pure implementation detail. For example, Lib1 could use NodaTime internally, but expose an API that’s purely in terms of DateTime. On the other hand, Lib1 could expose its use of NodaTime via its public (and protected) API, using NodaTime types for some properties, method parameters, method return types, generic type arguments, base types and so on.

Both scenarios are entirely reasonable, but they have different versioning concerns. If Lib1 uses NodaTime as a “private dependency” then App shouldn’t (in an ideal world) need to care which version of NodaTime Lib1 uses.

However, if Lib1 exposes a method with an IClock parameter, the method caller really needs to know that it’s a 1.3.1 IClock. They’ll need to have a “1.3.1 IClock” to pass in. That means App needs to be aware of the version of NodaTime that Lib1 depends on.

I propose that the author of Lib1 should be able to make a decision about whether NodaTime is a “public” or “private” dependency, and express that decision within the NuGet package.

The compiler should be able to validate that a private dependency really isn’t exposed in the public API anywhere. Ideally, I’d like this to be part of the C# language eventually; I think versioning is important enough to be a language concern. It’s reasonable to assert that that ship has sailed, however, and that a Roslyn analyzer would be enough for this. Careful thought is required in terms of transitive dependencies, by the way. How should the compiler/analyzer treat a situation where Lib1 privately depends on NodaTime 1.3.1, but publicly depends on Lib2 that publicly depends on NodaTime 2.0.0? I confess I haven’t thought this through in detail; I first want to get enough people interested that the detailed work is worth doing.

Extern aliases for packages

Private dependencies are relatively simple to think about, I believe. They’re implementation details that should – modulo a bunch of caveats – not impact consumers of the library that has the private dependencies.

Public dependencies are trickier. If App wants to use NodaTime 2.0.0 for almost everything, but needs to pass in a 1.3.1 clock to a method in Lib1, then App effectively needs to depend on both 1.3.1 and 2.0.0. Currently, as far as I’m aware, there’s no way of representing this in a project file. C# as a language supports the idea of multiple assemblies exposing the same types, via extern aliases… but we’re missing a way of expressing that in project files.

There’s already a GitHub issue requesting this, so I know I’m not alone in wanting it. We might have something like:

<ProjectReference Include="NodaTime" Version="1.3.1" ExternAlias="noda1" />
<ProjectReference Include="NodaTime" Version="2.0.0" ExternAlias="noda2" />

then in the C# code you might use:

using noda2::NodaTime;
// Use NodaTime types as normal, using NodaTime 2.0.0

// Then pass a 1.3.1 clock into a Lib1 method:
TypeFromLib1.Method(noda1::NodaTime.SystemClock.Instance);

There’s an assumption here: that each package contains a single assembly. That definitely doesn’t have to be true, and a full solution would probably need to address that, allowing more complex syntax for per-assembly aliasing.

It’s worth noting that it would be feasible for library authors to provide “bridging” packages too. For example, I could provide a NodaTime.Bridging package which allowed you to convert between NodaTime 1.x and NodaTime 2.x types. Sometimes those conversions may be lossy, but they’re at least feasible. The visible immutability of almost every type in Noda Time is a big help here, admittedly – but packages like this could really help consumers.
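As a sketch of what such a bridging package might contain (using the extern aliases from the earlier example, and assuming 1.x exposes Ticks/FromTicksSinceUnixEpoch while 2.x exposes ToUnixTimeTicks/FromUnixTimeTicks; this is illustrative rather than a real package):

extern alias noda1;
extern alias noda2;

public static class NodaTimeBridge
{
    // Convert a 1.x Instant to its 2.x equivalent via the Unix epoch tick count.
    public static noda2::NodaTime.Instant ToVersion2(this noda1::NodaTime.Instant instant) =>
        noda2::NodaTime.Instant.FromUnixTimeTicks(instant.Ticks);

    // And back again. 2.x has nanosecond precision, so this direction can lose information.
    public static noda1::NodaTime.Instant ToVersion1(this noda2::NodaTime.Instant instant) =>
        noda1::NodaTime.Instant.FromTicksSinceUnixEpoch(instant.ToUnixTimeTicks());
}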

Here be dragons: shared state

So far I’ve thought of two significant problems with the above proposals, and both involve shared state – but in opposite directions.

Firstly, consider singletons that we really want to be singletons. SystemClock.Instance is a singleton in Noda Time. But if multiple assemblies are loaded, one per major version, then it’s really “singleton per major version.” For SystemClock that’s fine, but imagine if your library decided to grab a process-wide resource in its singleton, assuming that was okay because, after all, there would only ever be one of them. Maybe you’d have an ID generator which would guarantee uniqueness by incrementing a counter. That doesn’t work if there are multiple instances.
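A hypothetical sketch of that failure mode (IdGenerator isn't a real library, it's just here to show the shape of the problem):

using System.Threading;

// Assumes it's the only instance in the process... which stops being true
// when two major versions of its assembly are loaded side by side: each
// version gets its own counter, and the "unique" IDs can collide.
public sealed class IdGenerator
{
    public static IdGenerator Instance { get; } = new IdGenerator();

    private long counter;

    private IdGenerator() { }

    public long NextId() => Interlocked.Increment(ref counter);
}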

Secondly, we need to consider mutable shared state, such as some sort of service locator that code registers implementations in. Two different libraries with supposedly private dependencies on the same service locator package might each want to register the same type in the service locator. Things work fine if they depend on different major versions of the service locator package, but they start to conflict if the implementations happen to depend on the same major version, and so end up using the same assembly. Our isolation of the private dependency isn’t very isolated after all.
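Again a hypothetical sketch, just to show the shape of the problem; a real service locator would be far more sophisticated, but the shared dictionary is the important part:

using System;
using System.Collections.Generic;

public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> registrations =
        new Dictionary<Type, object>();

    // If Lib1 and Lib2 both end up using this same assembly, the second
    // registration for a given type silently replaces the first: the
    // "private" dependencies aren't isolated from each other after all.
    public static void Register<T>(T implementation) where T : class =>
        registrations[typeof(T)] = implementation;

    public static T Resolve<T>() where T : class =>
        (T) registrations[typeof(T)];
}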

While it’s reasonable to argue that we should avoid this sort of shared state as far as possible, it’s unreasonable to assume that it doesn’t exist, or that it shouldn’t be considered as part of this kind of versioning proposal. At the very least, we need to consider how users can diagnose issues stemming from this with some ease, even if I suspect it’ll always be slightly tricky.

As noted earlier, it’s possible to introduce more isolation by loading the same assembly multiple times, so potentially each private dependency could really be private. That helps in the second case above, but hurts more in the first case. It also has a performance impact in terms of duplication of code etc.

Here be unknown dragons

I’m aware that versioning is really complicated. I’ve probably thought about it more than most developers, but I know there’s a lot I’m unaware of. I don’t expect my proposals to be “ready to go” without serious amounts of detailed analysis and work. While I would like to help with that work, I suspect it will mostly be done by others.

I suspect that even this detailed analysis won’t be enough to get things right – I’d expect that when there’s a prototype, exposing it to real world dependencies will find a bunch more issues.

Conclusion

I believe the .NET ecosystem has a versioning problem that’s currently not being recognized and addressed.

The intention isn’t that these proposals are final, concrete design docs – the intention is that they help either start the ball rolling, or give an already-slightly-rolling ball a little more momentum. I want the community to openly discuss the problems we’re seeing now, so we get a better handle on the problem, and then work together to alleviate those problems as best we can, while recognizing that perfection is unlikely to be possible.

Lying to the compiler

This morning I tweeted this:

Just found a C# 8 nullable reference types warning in Noda Time. Fixing it by changing Foo(x, x?.Bar) to Foo(x, x?.Bar!) which looks really dodgy… anyone want to guess why it’s okay?

This attracted more interest than I expected, so I thought I’d blog about it.

First let’s unpack what x?.Bar! means. x?.Bar means “if x is non-null, the value of x.Bar; otherwise, the corresponding null value”. The ! operator at the end is introduced in C# 8, and it’s the damn-it operator (more properly the “null-forgiving operator”, but I expect to keep calling it the damn-it operator forever). It tells the compiler to treat the preceding expression as “definitely not null” even if the compiler isn’t sure for whatever reason. Importantly, this does not emit a null check in the IL – it’s a compile-time only change.

When talking about the damn-it operator, I’ve normally given two scenarios where it makes sense:

  • When testing argument validation
  • When you have invariants in your code which allow you to know more than the compiler does about nullability. This is a little bit like a cast: you’re saying you know more than the compiler. Remember that it’s not like a cast in terms of behaviour though; it’s not checked at execution time.

My tweet this morning wasn’t about either of these cases. It’s in production code, and I absolutely believe that it’s possible for x?.Bar to be null. I’m lying to the compiler to stop it emitting a warning. The reason is that in the case where the value is null, it won’t matter that it’s null.

The actual code is in this Noda Time commit, but the code below provides a simplified version. We have three classes:

  • Person, with a name and home address
  • Address, with some properties I haven’t bothered showing here
  • Delivery, with a recipient and an address to deliver to

using System;

public sealed class Address
{
    // Properties here
}

public sealed class Person
{
    public string Name { get; }
    public Address HomeAddress { get; }

    public Person(string name, Address homeAddress)
    {
        Name = name ??
            throw new ArgumentNullException(nameof(name));
        HomeAddress = homeAddress ??
            throw new ArgumentNullException(nameof(homeAddress));
    }
}

public sealed class Delivery
{
    public Person Recipient { get; }
    public Address Address { get; }

    public Delivery(Person recipient)
        : this(recipient, recipient?.HomeAddress!)
    {
    }

    public Delivery(Person recipient, Address address)
    {
        Recipient = recipient ??
            throw new ArgumentNullException(nameof(recipient));
        Address = address ??
            throw new ArgumentNullException(nameof(address));
    }
}

The interesting part is the Delivery(Person) constructor, that delegates to the Delivery(Person, Address) constructor.

Here’s a version that would compile with no warnings:

public Delivery(Person recipient)
    : this(recipient, recipient.HomeAddress)

However, now if recipient is null, that will throw NullReferenceException instead of the (preferred) ArgumentNullException. Remember that nullable reference checking in C# 8 is really just advisory – the compiler does nothing to stop a non-nullable reference variable from actually having a value of null. This means we need to keep all the argument validation we’ve already got.
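To illustrate just how advisory it is, here's a tiny standalone sketch: it compiles without a single warning, and then fails at execution time, because the annotations change nothing in the generated IL.

#nullable enable
using System;

class AdvisoryDemo
{
    static void Main()
    {
        // The damn-it operator silences the warning, but s really is null.
        string s = null!;
        // Throws NullReferenceException at execution time.
        Console.WriteLine(s.Length);
    }
}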

We could validate recipient before we pass it on to the other constructor:

public Delivery(Person recipient)
    : this(recipient ?? throw new ArgumentNullException(...),
           recipient.HomeAddress)

That will throw the right exception, but it’s ugly and more code than we need. We know that the constructor we’re delegating to already validates recipient – we just need to get that far. That’s where the null-conditional operator comes in. So we can write:

public Delivery(Person recipient)
    : this(recipient, recipient?.HomeAddress)

That will behave as we want it to – if recipient is null, we’ll pass null values as both arguments to the other constructor, and it will validate them. But now the compiler warns that the second argument could be null, and the parameter is meant to be non-null. The solution is to use the damn-it operator:

public Delivery(Person recipient)
    : this(recipient, recipient?.HomeAddress!)

Now we get the behaviour we want, with no redundant code, and no warnings. We’re lying to the compiler and satisfied that we’re doing so sensibly, because recipient?.HomeAddress is only null if recipient is null, and we know that that will be validated first anyway.

I’ve added a comment, as it’s pretty obscure otherwise – but part of me just enjoys the oddity of it all :)

Storing UTC is not a silver bullet

Note: this is a pretty long post. If you’re not interested in the details, the conclusion at the bottom is intended to be read in a standalone fashion. There’s also a related blog post by Lau Taarnskov – if you find this one difficult to read for whatever reason, maybe give that a try.

When I read Stack Overflow questions involving time zones, there’s almost always someone giving the advice to only ever store UTC. Convert to UTC as soon as you can, and convert back to a target time zone as late as you can, for display purposes, and you’ll never have a time zone issue again, they say.

This blog post is intended to provide a counterpoint to that advice. I’m certainly not saying storing UTC is always the wrong thing to do, but it’s not always the right thing to do either.

Note on simplifications: this blog post does not go into supporting non-Gregorian calendar systems, or leap seconds. Hopefully developers writing applications which need to support either of those are already aware of their requirements.

Background: EU time zone rule changes

The timing of this blog post is due to recent European Parliament proceedings that look like they will probably end the practice of clocks changing twice a year between “summer time” and “winter time” within EU member states. The precise details are yet to be finalized and are unimportant to the bigger point, but for the purpose of this blog post I’ll assume that each member state has to decide whether they will “spring forward” one last time on March 28th 2021, then staying in permanent “summer time”, or “fall back” one last time on October 31st 2021, then staying in permanent “winter time”. So from November 1st 2021 onwards, the UTC offset of each country will be fixed – but there may be countries which currently always have the same offset as each other, and will have different offsets from some point in 2021. (For example, France could use winter time and Germany could use summer time.)

The larger point is that time zone rules change, and that applications should expect that they will change. This isn’t a corner case, it’s the normal way things work. There are usually multiple sets of rule changes (as released by IANA) each year. At least in the European changes, we’re likely to have a long notice period. That often isn’t the case – sometimes we don’t find out about rule changes until a few days before they happen.

Application example

For the sake of making everything concrete, I’m going to imagine that we’re writing an application to help conference organizers. A conference organizer can create a conference within the application, specifying when and where it’s happening, and (amongst other things) the application will display a countdown timer of “the number of hours left before the start of the conference”. Obviously a real application would have a lot more going on than this, but that’s enough to examine the implementation options available.

To get even more concrete, we’ll assume that a conference organizer has registered a conference called “KindConf” and has said that it will start at 9am in Amsterdam, on July 10th 2022. They perform this registration on March 27th 2019, when the most recently published IANA time zone database is 2019a, which predicts that the offset observed in Amsterdam on July 10th 2022 will be UTC+2.

For the sake of this example, we’ll assume that the Netherlands decides to fall back on October 31st 2021 for one final time, leaving them on a permanent offset of UTC+1. Just to complete the picture, we’ll assume that this decision is taken on February 1st 2020, and that IANA publishes the changes on March 14th 2020, as part of release 2020c.

So, what can the application developer do? In all the options below, I have not gone into details of the database support for different date/time types. This is important, of course, but probably deserves a separate blog post in its own right, on a per-database basis. I’ll just assume we can represent the information we want to represent, somehow.

Interlude: requirements

Before we get to the implementations, I’ll just mention a topic that’s been brought up a few times in the comments and on Twitter. I’ve been assuming that the conference does still occur at 9am on July 10th 2022… in other words, that the “instant in time at which the conference starts” changes when the rules change.

It’s unlikely that this would ever show up in a requirements document. I don’t remember ever being in a meeting with a product manager where they’d done this type of contingency planning. If you’re lucky, someone would work out that there’s going to be a problem long before the rules actually change. At that point, you’d need to go through the requirements and do the implementation work. I’d argue that this isn’t a new requirement – it’s a sort of latent, undiscovered requirement you’ve always had, but you hadn’t known about before.

Now, back to the options…

Option 1: convert to UTC and just use that forever

The schema for the Conferences table in the database might look like this:

  • ID: auto-incremented integer
  • Name: string
  • Start: date/time in UTC
  • Address: string

The entry for KindConf would look like this:

  • ID: 1
  • Name: KindConf
  • Start: 2022-07-10T07:00:00Z
  • Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands

That entry is then preserved forever, without change. So what happens to our countdown timer?
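Just to make the countdown computation concrete, here's a minimal sketch, assuming NodaTime and that the stored Start column is read back as an Instant:

using System;
using NodaTime;

class CountdownDemo
{
    static void Main()
    {
        // The stored UTC start: 2022-07-10T07:00:00Z
        Instant start = Instant.FromUtc(2022, 7, 10, 7, 0);

        // Time remaining until the stored instant, right now.
        Duration remaining = start - SystemClock.Instance.GetCurrentInstant();
        Console.WriteLine($"Hours to go: {remaining.TotalHours:F1}");
    }
}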

Result

The good news is that anyone observing the timer will see it smoothly count down towards 0, with no jumps. The bad news is that when it reaches 0, the conference won’t actually start – there’ll be another hour left. This is not good.

Option 2: convert to UTC immediately, but reconvert after rule changes

The schema for the Conferences table would preserve the time zone ID. (I’m using the IANA ID for simplicity, but it could be the Windows system time zone ID, if absolutely necessary.) Alternatively, the time zone ID could be derived each time it’s required – more on that later.

  • ID: auto-incremented integer
  • Name: string
  • Start: date/time in UTC
  • Address: string
  • Time zone ID: string

The initial entry for KindConf would look like this:

  • ID: 1
  • Name: KindConf
  • Start: 2022-07-10T07:00:00Z
  • Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands
  • TimeZoneId: Europe/Amsterdam

On March 14th 2020, when the new time zone database is released, that entry could be changed to make the start time accurate again:

  • ID: 1
  • Name: KindConf
  • Start: 2022-07-10T08:00:00Z
  • Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands
  • TimeZoneId: Europe/Amsterdam

But what does that “change” procedure look like? We need to convert the UTC value back to the local time, and then convert back to UTC using different rules. So which rules were in force when that entry was created? It looks like we actually need an extra field in the schema somewhere: TimeZoneRulesVersion. This could potentially be a database-wide value, although that’s only going to be reasonable if you can update all entries and that value atomically. Allowing a value per entry (even if you usually expect all entries to be updated at roughly the same time) is likely to make things simpler.

So our original entry was actually:

  • ID: 1
  • Name: KindConf
  • Start: 2022-07-10T07:00:00Z
  • Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands
  • TimeZoneId: Europe/Amsterdam
  • TimeZoneRules: 2019a

And the modified entry is:

  • ID: 1
  • Name: KindConf
  • Start: 2022-07-10T08:00:00Z
  • Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands
  • TimeZoneId: Europe/Amsterdam
  • TimeZoneRules: 2020c

Of course, the entry could have been updated many times over the course of time, for 2019b, 2019c, …, 2020a, 2020b. Or maybe we only actually update the entry if the start time changes. Either way works.

Result

Now, anyone refreshing the countdown timer for the event will see the counter increase by an hour when the entry is updated. That may look a little odd – but it means that when the countdown timer reaches 0, the conference is ready to start. I’m assuming this is the desired behaviour.

Implementation

Let’s look at roughly what would be needed to perform this update in C# code. I’ll assume the use of Noda Time to start with, but then we’ll consider what happens if you’re not using Noda Time.

public class Conference
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public Instant Start { get; set; }
    public string TimeZoneId { get; set; }
    public string TimeZoneRules { get; set; }
}

// In other code... some parameters might be fields in the class.
public void UpdateStartTime(
    Conference conference,
    Dictionary<string, IDateTimeZoneProvider> timeZoneProvidersByVersion,
    string latestRules)
{
    // Map the start instant into the time zone using the old rules
    IDateTimeZoneProvider oldProvider = timeZoneProvidersByVersion[conference.TimeZoneRules];
    DateTimeZone oldZone = oldProvider[conference.TimeZoneId];
    ZonedDateTime oldZonedStart = conference.Start.InZone(oldZone);   

    IDateTimeZoneProvider newProvider = timeZoneProvidersByVersion[latestRules];
    DateTimeZone newZone = newProvider[conference.TimeZoneId];
    // Preserve the local time, but with the new time zone rules
    ZonedDateTime newZonedStart = oldZonedStart.LocalDateTime.InZoneLeniently(newZone);

    // Update the conference entry with the new information
    conference.Start = newZonedStart.ToInstant();
    conference.TimeZoneRules = latestRules;
}

The InZoneLeniently call is going to be a common issue – we’ll look at that later (“Ambiguous and skipped times”).

This code would work, and Noda Time would make it reasonably straightforward to build that dictionary of time zone providers, as we publish all the “NZD files” we’ve ever created from 2013 onwards on the project web site. If the code is being updated with the latest stable version of the NodaTime NuGet package, the latestRules parameter wouldn’t be required – DateTimeZoneProviders.Tzdb could be used instead. (And IDateTimeZoneProvider.VersionId could obtain the current version.)
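For the common case where only the latest rules matter, getting hold of that provider and its version string is trivial; a minimal sketch with NodaTime 2.x:

using NodaTime;

class LatestRulesDemo
{
    static void Main()
    {
        // The default TZDB provider embedded in the NodaTime package.
        IDateTimeZoneProvider provider = DateTimeZoneProviders.Tzdb;

        // Something like "TZDB: 2019a", usable as the latestRules value above.
        System.Console.WriteLine(provider.VersionId);
    }
}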

However, this approach has three important requirements:

  • The concept of “version of time zone rules” has to be available to you
  • You have to be able to load a specific version of the time zone rules
  • You have to be able to use multiple versions of the time zone rules in the same application

If you’re using C# but relying on TimeZoneInfo then… good luck with any of those three. (It’s no doubt feasible, but far from simple out of the box, and it may require an external service providing historical data.)

I can’t easily comment on other platforms in any useful way, but I suspect that dealing with multiple versions of time zone data is not something that most developers come across.

Option 3: preserve local time, using UTC as derived data to be recomputed

Spoiler alert: this is my preferred option.

In this approach, the information that the conference organizer supplied (“9am on July 10th 2022”) is preserved and never changed. There is additional information in the entry that is changed when the time zone database is updated: the converted UTC instant. We can also preserve the version of the time zone rules used for that computation, as a way of allowing the process of updating entries to be restarted after a failure without starting from scratch, but it’s not strictly required. (It’s also probably useful as diagnostic information, too.)

The UTC instant is only stored at all for convenience. Having a UTC representation makes it easier to provide total orderings of when things happen, and also to compute the time between “right now” and the given instant, for the countdown timer. Unless it’s actually useful to you, you could easily omit it entirely. (My Noda Time benchmarks suggest it’s unlikely that doing the conversion on every request would cause a bottleneck. A single local-to-UTC conversion on my not-terribly-fast benchmark machine only takes ~150ns. In most environments that’s close to noise. But for cases where it’s relevant, it’s fine to store the UTC as described below.)

So the schema would have:

  • ID: auto-incremented integer
  • Name: string
  • Local start: date/time in the specified time zone
  • Address: string
  • Time zone ID: string
  • UTC start: derived field for convenience
  • Time zone rules version: for optimization purposes

So our original entry is:

  • ID: 1
  • Name: KindConf
  • LocalStart: 2022-07-10T09:00:00
  • Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands
  • TimeZoneId: Europe/Amsterdam
  • UtcStart: 2022-07-10T07:00:00Z
  • TimeZoneRules: 2019a

On March 14th 2020, when the time zone database 2020c is released, this is modified to:

  • ID: 1
  • Name: KindConf
  • LocalStart: 2022-07-10T09:00:00
  • Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands
  • TimeZoneId: Europe/Amsterdam
  • UtcStart: 2022-07-10T08:00:00Z
  • TimeZoneRules: 2020c

Result

This is the same as option 2: after the update, there’s a jump of an hour, but when it reaches 0, the conference starts.

Implementation

This time, we don’t need to convert our old UTC value back to a local value: the “old” time zone rules version and “old” UTC start time are irrelevant. That simplifies matters significantly:

public class Conference
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
    public LocalDateTime LocalStart { get; set; }
    public string TimeZoneId { get; set; }
    public Instant UtcStart { get; set; }
    public string TimeZoneRules { get; set; }
}

// In other code... some parameters might be fields in the class.
public void UpdateUtcStart(
    Conference conference,
    IDateTimeZoneProvider latestZoneProvider)
{
    DateTimeZone newZone = latestZoneProvider[conference.TimeZoneId];
    // Preserve the local time, but with the new time zone rules
    ZonedDateTime newZonedStart = conference.LocalStart.InZoneLeniently(newZone);

    // Update the conference entry with the new information
    conference.UtcStart = newZonedStart.ToInstant();
    conference.TimeZoneRules = latestZoneProvider.VersionId;
}

As the time zone rules version is now optional, this code could be ported to use TimeZoneInfo instead. Obviously from my biased perspective the code wouldn’t be as pleasant, but it would be at least reasonable. The same is probably true on other platforms.

So I prefer option 3, but is it really so different from option 2? We’re still storing the UTC value, right? That’s true, but I believe the difference is important because the UTC value is an optimization, effectively.

Principle of preserving supplied data

For me, the key difference between the options is that in option 3, we store and never change what the conference organizer entered. The organizer told us that the event would start at the given address in Amsterdam, at 9am on July 10th 2022. That’s what we stored, and that information never needs to change (unless the organizer wants to change it, of course). The UTC value is derived from that “golden” information, but can be re-derived if the context changes – such as when time zone rules change.

In option 2, we don’t store the original information – we only store derived information (the UTC instant). We need to store information to tell us all the context about how we derived it (the old time zone rules version) and when updating the entry, we need to get back to the original information before we can re-derive the UTC instant using the new rules.

If you’re going to need the original information anyway, why not just store that? The implementation ends up being simpler, and it means it doesn’t matter whether or not we even have the old time zone rules.

Representation vs information

It’s important to note that I’m only talking about preserving the core information that the organizer entered. For the purposes of this example at least, we don’t need to care about the representation they happened to use. Did they enter it as “July 10 2022 09:00” and we then parsed that? Did they use a calendar control that provided us with “2022-07-10T09:00”? I don’t think that’s important, as it’s not part of the core information.

It’s often a useful exercise to consider what aspects of the data you’re using are “core” and which are incidental. If you’re receiving data from another system as text for example, you probably don’t want to store the complete XML or JSON, as that choice between XML and JSON isn’t relevant – the same data could be represented by an XML file and a JSON file, and it’s unlikely that anything later will need to know or care.

A possible option 4?

I’ve omitted a fourth option which could be useful here: a mixture of 2 and 3. If you store a “date/time with UTC offset” then you’ve effectively got both the local start time and the UTC instant in a single field. To show the values again, you’d start off with:

  • ID: 1
  • Name: KindConf
  • Start: 2022-07-10T09:00:00+02:00
  • Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands
  • TimeZoneId: Europe/Amsterdam
  • TimeZoneRules: 2019a

On March 14th 2020, when the time zone database 2020c is released, this is modified to:

  • ID: 1
  • Name: KindConf
  • Start: 2022-07-10T09:00:00+01:00
  • Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands
  • TimeZoneId: Europe/Amsterdam
  • TimeZoneRules: 2020c

In systems that support “date/time with UTC offset” well in both the database and the languages using it, this might be an attractive solution. It’s important to note that the time zone ID is still required (unless you derive it from the address whenever you need it) – there’s a huge difference between knowing the time zone that’s applied, and knowing the UTC offset in one specific situation.

Personally I’m not sure I’m a big fan of this option, as it combines original and derived data in a single field – the local part is the original data, and the offset is derived. I like the separation between original and derived data in option 3.
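If you did go this route with Noda Time, the update step would be a minimal variation on option 3's sketch: keep the local part of the OffsetDateTime and recompute its offset under the latest rules. A hedged sketch (the class name is just illustrative):

using NodaTime;

public static class Option4Updater
{
    // Preserve the local date/time, recompute the offset with the new rules.
    public static OffsetDateTime UpdateOffset(
        OffsetDateTime start,
        string timeZoneId,
        IDateTimeZoneProvider latestZoneProvider)
    {
        DateTimeZone zone = latestZoneProvider[timeZoneId];
        return start.LocalDateTime.InZoneLeniently(zone).ToOffsetDateTime();
    }
}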

With all those options presented, let’s look at a few of the corner cases I’ve mentioned in the course of the post.

Ambiguous and skipped times

In both of the implementations I’ve shown, I’ve used the InZoneLeniently method from Noda Time. While the mapping from UTC instant to local time is always completely unambiguous for a single time zone, the reverse mapping (from local time to UTC instant) is not always unambiguous.

As an example, let’s take the Europe/London time zone. On March 31st 2019, at 1am local time, we will “spring forward” to 2am, changing offset from UTC+0 to UTC+1. On October 27th 2019, at 2am local time, we will “fall back” to 1am, changing offset from UTC+1 to UTC+0. That means that 2019-03-31T01:30 does not happen at all in the Europe/London time zone, and 2019-10-27T01:30 occurs twice.

Now it’s reasonable to validate this when a conference organizer specifies the starting time of a conference, either prohibiting it if the given time is skipped, or asking for more information if the given time is ambiguous. I should point out that this is highly unlikely for a conference, as transitions are generally done in the middle of the night – but other scenarios (e.g. when to schedule an automated backup) may well fall into this.

That’s fine at the point of the first registration, but it’s also possible that a previously-unambiguous local time could become ambiguous under new time zone rules. InZoneLeniently handles that in a way documented in the Resolvers.LenientResolver. That may well not be the appropriate choice for any given application, and developers should consider it carefully, and write tests.
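For example, an application that wanted to handle these cases explicitly could build its own resolver rather than relying on InZoneLeniently's defaults. A minimal sketch with Noda Time, picking one plausible policy (the policy itself is just an illustration):

using NodaTime;
using NodaTime.TimeZones;

public static class StartTimeMapping
{
    // Pick the earlier of two ambiguous mappings, but refuse to guess if the
    // local time was skipped entirely; that case needs a human decision.
    private static readonly ZoneLocalMappingResolver CustomResolver =
        Resolvers.CreateMappingResolver(Resolvers.ReturnEarlier, Resolvers.ThrowWhenSkipped);

    public static ZonedDateTime Resolve(LocalDateTime localStart, DateTimeZone zone) =>
        localStart.InZone(zone, CustomResolver);
}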

Recurrent events

The example I’ve given so far is for a single event. Recurrent events – such as weekly meetings – end up being trickier still, as a change to time zone rules can change the offsets for some instances but not others. Likewise meetings may well be attended by people from more than one time zone – so while it’s vital that the recurrence has a single coordinating time zone, offsets may need to be recomputed for every time zone involved, and for every occurrence. Application developers have to think about how this can be achieved within performance requirements.

Time zone boundary changes and splits

So far we’ve only considered time zone rules changing. In options 2-4, we stored a time zone ID within the entry. That assumes that the time zone associated with the event will not change over time. That assumption may not be valid.

As far as I’m aware, time zone rules change more often than changes to which time zone any given location is in – but it’s entirely possible for things to change over time. Suppose the conference wasn’t in Amsterdam itself, but Rotterdam. Currently Rotterdam uses the Europe/Amsterdam time zone, but what if the Netherlands splits into two countries between 2019 and 2022? It’s feasible that by the time the conference occurs, there could be a Europe/Rotterdam time zone, or something equivalent.

To that end, a truly diligent application developer might treat the time zone ID as derived data based on the address of the conference. As part of checking each entry when the time zone database is updated, they might want to find the time zone ID of the address of the conference, in case that’s changed. There are multiple services that provide this information, although it may need to be a multi-step process, first converting the address into a latitude/longitude position, and then finding the time zone for that latitude/longitude.

Past vs recent past

This post has all been about future date/time values. In Twitter threads discussing time zone rule changes, there’s been a general assertion that it’s safe to only store the UTC instant related to an event in the past. I would broadly agree with that, but with one big caveat: as I mentioned earlier, sometimes governments adopt time zone rule changes with almost no notice at all. Additionally, there can be a significant delay between the changes being published and them being available within applications. (That delay can vary massively based on your platform.)

This means that while a conversion to UTC for a value more than (say) a year ago will probably stay valid, if you’re recording a date and time of “yesterday”, it’s quite possible that you’re using incorrect rules without knowing it. (Even very old rules can change, but that’s rarer in my experience.)

Do you need to account for this? That depends on your application, like so many other things. I’d at least consider the principle described above – and unless it’s much harder for you to maintain the real source information for some reason, I’d default to doing that.

Conclusion

The general advice of “just convert all local date/time data to UTC and store that” is overly broad in my view. For future and near-past events, it doesn’t take into account that time zone rules change, making the initial conversion potentially inaccurate. Part of the point of writing this blog post is to raise awareness, so that even if people do still recommend storing UTC, they can add appropriate caveats rather than treating it as a universal silver bullet.

I should explicitly bring up timestamps at this point. Machine-generated timestamps are naturally instants in time, recording “the instant at which something occurred” in an unambiguous way. Storing those in UTC is entirely reasonable – potentially with an offset or time zone if the location at which the timestamp was generated is relevant. Note that in this case the source of the data isn’t “a local time to be converted”.

That’s the bigger point, that goes beyond dates and times and time zones: choosing what information to store, and how. Any time you discard information, that should be a conscious choice. Are you happy discarding the input format that was used to enter a date? Probably – but it’s still a decision to make. Defaulting to “convert to UTC” is a default to discarding information which in some cases is valid, but not all. Make it a conscious choice, and ensure you store all the information you think may be needed later. You might also want to consider whether and how you separate “source” information from “derived” information – this is particularly relevant when it comes to archiving, when you may want to discard all the derived data to save space. That’s much easier to do if you’re already very aware of which data is derived.

My experience is that developers either don’t think about date/time details nearly enough when coding, or are aware of some of the pitfalls but decide that means it’s just too hard to contemplate. Hopefully this worked example of real life complexity shows that it can be done: it takes a certain amount of conscious thought, but it’s not rocket science.