Taking .NET MAUI for a spin

I’ve been keeping an eye on MAUI – the .NET Multi-platform App UI – for a while, but I’ve only recently actually given it a try.

MAUI is essentially the evolution of Xamarin.Forms, embracing WinUI 3 and expanding from a mobile focus to desktop apps as well. It’s still in preview at the time of writing, but only just – release candidate 1 came out on April 12th 2022.

I’ve been considering it as a long-term future for my V-Drum Explorer application, which is firmly a desktop app, just to make it available on macOS as well as Windows. However, when a friend mentioned that if only I had a mobile app for At Your Service (my church A/V system), it would open up new possibilities… well, that sounded like an opportunity to take MAUI for a spin.

This blog post is about initial impressions. It’s worth being really clear about that – please take both praise and criticism of MAUI with a pinch of salt. I’m not a mobile developer, I’m not a UI designer, I haven’t tried doing anything massively taxing with MAUI, and I may well be doing a bunch of things in the wrong way.

What’s the goal?

Besides “having fun”, the aim is to end up with a workable mobile app for At Your Service (AYS from here onwards). In an ideal world, that would work on iPhones, iPads, Android phones and Android tablets. In reality, the number of people who will ever use the app is likely to be 1 or 2 – and both of us have Android phones. So that’s all I’ve actually tested with. I may at some point try to build and test with an iPad just for kicks, but I’m not really bothered by it. As it happens, I’ve tried the Windows app version, but that hasn’t worked out for me so far – more details later.

So what does this mobile app need to do? While I dare say it may be feasible to replicate almost everything you can do with AYS, that’s not the aim here. I have no desire to create new service plans on my phone, nor to edit hymn words etc. The aim is only to use the application to “direct” a service without having to physically sit behind the A/V desk.

Crucially, there’s no sensible way that a mobile app could completely replace the desktop app, at least with our current hardware. While a lot of the equipment we use is networked (specifically the cameras and the mixer), the projector in the church building is connected directly to the desktop computer via an HDMI cable. (OBS Studio captures that output as a virtual webcam for Zoom.) Even if everything could be done with an entirely standalone mobile app, it would mean reimplementing or at least adapting a huge amount of code.

Instead, the aim is to make the mobile app an alternative control mechanism for an instance of AYS running on the church desktop in the normal way. I want it to be able to handle all the basic functionality used during a service:

  • Switch between “scenes” (where a scene in AYS is something like “a hymn” or “a reading” or “the preacher in a particular place”; switching between scenes brings up all the appropriate microphones and cameras, as well as whatever text/PowerPoint/video needs to be displayed)
  • Change pages in scenes with text content (e.g. hymns and liturgy)
  • Change slides in PowerPoint presentations
  • Play/pause for videos, along with volume control and simple “back 10 seconds” and “forward 10 seconds” buttons
  • Basic camera controls, across multiple cameras
    • Move to a preset
    • Change whether the camera is shown or not, and how it’s shown (e.g. “top right corner”)
  • Basic mixer controls
    • Mute/unmute microphones
    • Potentially change the volume for microphones – if I do this, I might want to change the volume for the Zoom output independently of the in-building output

What’s the architecture?

The desktop AYS system already has a slightly split architecture: the main application is 64-bit, but it launches a 32-bit secondary app which is a combined WPF + ASP.NET Core server to handle Zoom. (Until fairly recently, the Zoom SDK didn’t support 64-bit apps, and the 32-bit address space ended up causing problems when decoding multiple cameras.) That meant it wasn’t much of a stretch to figure out at least one possible architecture for the mobile app:

  • The main (desktop) AYS system runs an ASP.NET Core server
  • The mobile app connects to the main system via HTTP, polling for current status and making control requests such as “switch to scene 2”.

Arguably, it would be more efficient to use a gRPC stream to push updates from the desktop system to the mobile app, and at some point I might introduce gRPC into the mix, but frequent polling (about every 100ms) seems to work well enough. Sticking to just JSON and “regular” HTTP for requests and responses also makes it simple to test some aspects in a browser.
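As a rough sketch, the polling side needs very little code. (The StatusResponse DTO, the endpoint path and the callback here are invented for illustration – this isn’t the actual AYS code.)

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading;
using System.Threading.Tasks;

// Invented type for illustration; the real AYS status response is richer.
public sealed class StatusResponse
{
    public string? CurrentScene { get; set; }
}

public sealed class StatusPoller
{
    private readonly HttpClient client;
    private readonly Action<StatusResponse> onStatus;

    public StatusPoller(HttpClient client, Action<StatusResponse> onStatus) =>
        (this.client, this.onStatus) = (client, onStatus);

    public async Task RunAsync(CancellationToken token)
    {
        // Poll roughly every 100ms: frequent enough to feel responsive,
        // without needing gRPC streaming to push updates.
        using var timer = new PeriodicTimer(TimeSpan.FromMilliseconds(100));
        while (await timer.WaitForNextTickAsync(token))
        {
            var status = await client.GetFromJsonAsync<StatusResponse>("/api/status", token);
            if (status is not null)
            {
                onStatus(status);
            }
        }
    }
}
```

Because it’s plain JSON over HTTP, the same status endpoint can be poked in a browser for quick manual checks.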

One quirk of both of the servers is that although they receive the requests on threadpool threads, almost all of them use the WPF dispatcher for execution. This means I don’t need to worry about (say) a status request seeing half the information from before a scene change and half the information after a scene change. It also means that the rest of the AYS desktop code can still assume that anything that affects the UI will happen on the dispatcher thread.
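The shape of that marshalling, with an IUiDispatcher abstraction standing in for the WPF Dispatcher (all names here are invented for illustration), is roughly:

```csharp
using System;
using System.Threading.Tasks;

// Stand-in abstraction for the WPF Dispatcher, whose InvokeAsync
// does the real work in the actual application.
public interface IUiDispatcher
{
    Task<T> InvokeAsync<T>(Func<T> func);
}

public sealed class StatusCommand
{
    private readonly IUiDispatcher dispatcher;
    public StatusCommand(IUiDispatcher dispatcher) => this.dispatcher = dispatcher;

    public string CurrentScene { get; set; } = "none";

    // Called on a threadpool thread by the server, but the snapshot is
    // built on the dispatcher thread, so it can never observe state
    // that's half-way through a scene change.
    public Task<string> HandleAsync() =>
        dispatcher.InvokeAsync(() => $"scene={CurrentScene}");
}

// A trivially synchronous implementation, useful for testing.
public sealed class SyncDispatcher : IUiDispatcher
{
    public Task<T> InvokeAsync<T>(Func<T> func) => Task.FromResult(func());
}
```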

Even without using gRPC, I’ve made a potentially-silly choice of effectively rolling my own request handlers instead of using Web API. There’s a certain amount of wheel reinvention going on, and I may well refactor that away at some point. It does allow for some neatness though: there’s a shared project containing the requests and responses, and each request is decorated (via an attribute) with the path on which it should be served. The “commands” (request handlers) on the server are generic in the request/response types, and an abstract base class registers that command with the path on the request type. Likewise when making a request, a simple wrapper class around HttpClient can interrogate the request type to determine the path to use. If I do refactor towards Web API at some point, I’ll aim to keep that approach – avoiding duplication of path information – while doing rather less wheel reinvention than at the moment.
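The core of the path-from-attribute trick might look something like this (the attribute and request names are invented for illustration, not the actual AYS types):

```csharp
using System;
using System.Reflection;

// Lives in the shared project: each request type declares its own path.
[AttributeUsage(AttributeTargets.Class)]
public sealed class RequestPathAttribute : Attribute
{
    public string Path { get; }
    public RequestPathAttribute(string path) => Path = path;
}

[RequestPath("/api/switch-scene")]
public sealed class SwitchSceneRequest
{
    public int SceneIndex { get; set; }
}

public static class RequestPaths
{
    // Used by the server to register a command against its path, and by
    // the HttpClient wrapper to decide where to send each request.
    public static string GetPath(Type requestType) =>
        requestType.GetCustomAttribute<RequestPathAttribute>()?.Path
        ?? throw new InvalidOperationException($"No path on {requestType.Name}");
}
```

Either side only ever mentions the request type; the path lives in exactly one place.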

What does the UI look like? (And how does it work?)

When I first started doing a bit of research into how to create a MAUI app, there was a pleasant coincidence: I’d expected a tabbed UI, with one tab for each part of the functionality listed above. As it happens, that’s made particularly easy in MAUI with the Shell page. Fortunately, I found the documentation for that before going down the route of a more manually-crafted arrangement of tabs. The shell automatically removes the tab indicator if only one tab is visible, and generally handles things simply and sensibly.

The binding knowledge I’ve gradually gained from building WPF apps (specifically V-Drum Explorer and AYS) was almost immediately applicable – fortunately I saw documentation noting that the DataContext in WPF is BindingContext in MAUI, and from there it was pretty straightforward. The code is “mostly-MVVM” in a style that I’ve found to be pretty pragmatic when writing AYS: I’m not dogmatic about the views being completely code-free, but almost everything is in view models. I’ve always found command binding to be more trouble than it’s worth, so there are plenty of event handlers in the views that just delegate directly to the view model.

There’s a separate view model for each tab, and an additional “home” tab (and corresponding view model) which is just about choosing a system to connect to. (I haven’t yet implemented any sort of discovery broadcast. I don’t even have app settings – it’s just a manually-curated set of URLs to connect to.) The “home” view model contains a reference to each of the other view models, and they all have two features (not yet via an interface, although that could come soon):

  • Update the view model based on a status polling response
  • A property to determine whether the tab for the view model should be visible. (If there’s no text being displayed, we don’t need to display the text tab, etc.)
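Expressed as an interface – which, as noted, doesn’t exist yet, so these names are purely illustrative – those two features might look like:

```csharp
using System.Collections.Generic;

// Illustrative stand-in for the real status polling response.
public sealed class StatusResponse
{
    public List<string>? TextPages { get; set; }
}

public interface IStatusDrivenViewModel
{
    // Update the view model based on a status polling response.
    void UpdateFromStatus(StatusResponse status);

    // Whether the tab for this view model should currently be shown.
    bool IsTabVisible { get; }
}

public sealed class TextViewModel : IStatusDrivenViewModel
{
    public IReadOnlyList<string> Pages { get; private set; } = new List<string>();

    public void UpdateFromStatus(StatusResponse status) =>
        Pages = status.TextPages ?? new List<string>();

    // If there's no text being displayed, we don't need the text tab.
    public bool IsTabVisible => Pages.Count > 0;
}
```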

I’m not using any frameworks for MVVM: I have a pretty simplistic ViewModelBase which makes it easy enough to raise property-changed events, including automatically raising events for related properties that are indicated by attributes. I know that at some point I should probably investigate C# source generators to remove the boilerplate, but it’s low down my priority list.
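A sketch of that base class follows; the RelatedProperties attribute name is my invention, but the mechanism – raise for the changed property, then for any properties flagged as related – is as described above.

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Reflection;
using System.Runtime.CompilerServices;

// Marks properties whose change notifications should be raised whenever
// the annotated property changes.
[AttributeUsage(AttributeTargets.Property)]
public sealed class RelatedPropertiesAttribute : Attribute
{
    public string[] Names { get; }
    public RelatedPropertiesAttribute(params string[] names) => Names = names;
}

public abstract class ViewModelBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler? PropertyChanged;

    protected void SetProperty<T>(ref T field, T value, [CallerMemberName] string name = "")
    {
        if (EqualityComparer<T>.Default.Equals(field, value))
        {
            return;
        }
        field = value;
        RaisePropertyChanged(name);
        // Automatically raise events for related properties indicated by attributes.
        var related = GetType().GetProperty(name)
            ?.GetCustomAttribute<RelatedPropertiesAttribute>()?.Names
            ?? Array.Empty<string>();
        foreach (var relatedName in related)
        {
            RaisePropertyChanged(relatedName);
        }
    }

    protected void RaisePropertyChanged(string name) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}

// Example usage: changing PageText also notifies HasText.
public sealed class SampleViewModel : ViewModelBase
{
    private string pageText = "";

    [RelatedProperties(nameof(HasText))]
    public string PageText
    {
        get => pageText;
        set => SetProperty(ref pageText, value);
    }

    public bool HasText => PageText.Length > 0;
}
```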

MAUI supports dependency injection, and at some point when investigating navigating between different tabs (which initially didn’t work for reasons I still don’t understand) I moved to using DI for the view models, and it’s worked well. The program entry point is very readable (partly due to a trivial ConfigureServices extension method which I expect to be provided out-of-the-box at some point):

public static MauiApp CreateMauiApp() => MauiApp
    .CreateBuilder()
    .UseMauiApp<App>()
    .ConfigureFonts(fonts => fonts
        .AddFont("OpenSans-Regular.ttf", "OpenSansRegular")
        .AddFont("OpenSans-Semibold.ttf", "OpenSansSemibold"))
    .ConfigureServices(services => services
        .AddSingleton<AppShell>()
        .AddSingleton<HomeViewModel>()
        .AddSingleton<MixerViewModel>()
        .AddSingleton<MultiCameraViewModel>()
        .AddSingleton<TextViewModel>()
        .AddSingleton<MediaViewModel>()
        .AddSingleton<PowerPointViewModel>()
        .AddSingleton<ScenesViewModel>()
        .AddSingleton<ApiClient>())
    .Build();

I’ve had to tweak the default style very slightly: the default “unselected tab” colour is almost unreadably faint, and for my use case I really need to be able to see which tabs are available at any given time. Fortunately the styling is pretty clear – it didn’t take much experimentation to get the effect I wanted. Likewise I added extra styles for the next/previous buttons for the PowerPoint and text tabs.

Sources of truth

One aspect I always find interesting in this sort of UI is what the source of truth is. As an example, what should happen when I select a different text page to display? Obviously I need to send a request to the main application to make the change, but what should the UI do? Should it immediately update, expecting that the request will be successful? Or should it only update when we next get a status polling response that indicates the change?

I’ve ended up going for the latter approach, after initially using the former. The main reason for this is to make the UI smoother. It’s easy to end up with race conditions when there’s no one source of truth. For example, here’s a situation I’ve seen happen:

  • T=0: Make status request
  • T=1: Status response: text page 3 is selected
  • T=2: Start status request
  • T=3: User clicks on page 4
  • T=4: Start “move to page 4” request
  • T=5: Status response: text page 3 is selected
  • T=6: Page change response: OK
  • T=7: Start status request
  • T=8: Status response: text page 4 is selected

(These aren’t meant to be times in seconds or anything – just a sequence of instants in time.)

If the UI changes at T=3 to show page 4 as the selected one, then it ends up bouncing back to page 3 at T=5, then back to page 4 at T=8. That looks really messy.

If instead we say that the only thing that can change the UI displaying the selected page is a status response, then we stay with page 3 selected from T=1 until T=8. The user needs to wait a little longer to see the result, but it doesn’t bounce between two sources of truth. As I’m polling every ~100ms, it doesn’t actually take very long to register. This also has the additional benefit that if the “change page” request fails, the UI still reflects the reality of the system state.

If this all sounds familiar from another blog post, that’s because it is. When originally writing about controlling a digital mixer using OSC and an X-Touch Mini, I observed the same thing. I’m sure there are plenty of cases where this approach doesn’t apply, but it seems to be working for me at the moment. It does affect how binding is used – effectively I don’t want to “allow” a list item to be selected, instead reacting to the intent to select it.
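In code, the pattern works out something like this sketch (names invented; the real view model obviously does rather more):

```csharp
using System;

public sealed class TextPagesViewModel
{
    private readonly Action<int> sendChangePageRequest;

    public TextPagesViewModel(Action<int> sendChangePageRequest) =>
        this.sendChangePageRequest = sendChangePageRequest;

    // The single source of truth: only ever set from a status response.
    public int SelectedPage { get; private set; }

    // Called when the user taps a page: send the request, but don't
    // update SelectedPage optimistically - that would bounce if an
    // in-flight status response still reports the old page.
    public void RequestPage(int page) => sendChangePageRequest(page);

    // Called for each status polling response.
    public void UpdateFromStatus(int currentPage) => SelectedPage = currentPage;
}
```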

Screenshots

This section shows the tabs available, without very much explanation. I really wanted to include two of my favourite features: PowerPoint slide previews (little thumbnail images of the slides) and camera snapshots (so the user can see what a camera is currently pointing at, even if that camera isn’t being displayed on-screen at the moment). Unfortunately, images seem to be somewhat-broken in RC-1 at the moment. I can get the PowerPoint slides to display in my ListView if I just use an ImageCell, but that’s too restrictive. I can’t get the camera preview to display at all. I think it’s due to this issue but I’m not entirely sure.

With that caveat out of the way, let’s have a little tour of the app.

Home tab

On starting the app, it’s not connected to any system, so the user has to select one from the drop-down and connect. Notice how there are no tabs shown at this point.

Home tab (disconnected)

After connecting, the app shows the currently-loaded service (if there is one). If there’s a service loaded that contains any scenes at all, the Scenes tab is visible. The Mixer and Cameras tabs will always be visible when connected to a system (unless that system has no sound channels or no cameras, which seems unlikely).

In the screenshot below, the Text tab is also visible, because it so happens that the current scene contains text.

Home tab (connected)

Scenes tab

The Scenes tab shows the list of scenes, indicating which one is currently “active” (if any). If there is an active scene, the “Stop Scene” button is visible. (I’m considering having it just disabled if there’s no active scene, rather than it appearing and disappearing.)

Tapping on a scene launches it – and if that scene has text, PowerPoint or a video, it automatically navigates to that tab (as the next thing the user will probably want to do is interact with that tab).

Scenes tab

Text tab

The text tab shows the various pages of the text being displayed. Even though AYS supports styling (different colours of text, bold, italic etc) the preview is just plain text. It’s deliberately set at about 3 1/2 lines of text, which makes it obvious when there’s more being displayed than just what’s in the preview.

The user can select different pages by tapping on them – or just keep using the “next” button in the top right. The selected page is scrolled into view when there are more pages available than can be shown at a time.

Text tab

PowerPoint tab

The PowerPoint tab is like the text tab, but for PowerPoint slides. The screenshot below looks pretty awful due to the image display bug mentioned earlier. When preview images are working, they appear on the right hand side of the list view. (The slide numbers are still displayed.)

PowerPoint tab

Media tab

The media tab is used for videos, audio, and pictures. (There’s nothing that can usefully be done with a single picture; at some point I want to create the idea of a “multi-picture media item” as an alternative to creating a PowerPoint presentation where each slide is just an image.)

As noted before, simple controls are available:

  • Play/pause
  • Back/forward (10 seconds)
  • Volume up/down (in increments of 10 – a slider would be feasible, but not terribly useful)

One thing I’ve found very useful in AYS in general is the indicator for the current position and the total length of the media item. The screenshot below shows that the media filename is shown in this tab – whereas it’s not in the PowerPoint tab at the moment (nor the title of the text item in the Text tab). I could potentially move the title to become the title of the tab, and put it in all three places… I’m really not sure at the moment.

Media tab

Mixer tab

The mixer tab shows which microphones are muted (toggled off) or unmuted (toggled on) as well as their current output gain within the church building (the numbers on the left hand side, in dB). At the moment, the only interaction is to mute and unmute channels; I’m not sure whether I’ll ever implement tweaking the volume. The intention is that this app is only for basic control – I’d still expect the user to be in the church building and able to access the computer for fine-grained control where necessary.

Mixer tab

Cameras tab

The cameras tab starts off with nothing useful: you have to select a camera in order to interact with it. At that point you can:

  • Change its window position
  • Change the “corner size” – when a camera position is top-left, top-right, bottom-left, bottom-right you can change that to be 1/2, 1/3, 1/4 or 1/5 of the size of the window
  • Move it to a different preset
  • Take a preview snapshot (currently not working)

As you can see from the screenshot below (taken from the church configuration) we have quite a few presets. Note that unlike the Scene/Text/PowerPoint tabs, there’s no concept of a “currently selected” preset, at least at the moment. Once the camera has moved to a preset, it can be moved separately on the desktop system, with a good deal more fine-tuning available. (That’s how new presets are created: pan/tilt/zoom to the right spot, then save that as a new preset.) That fine-tuning isn’t available at all on the mobile app. At some point I could add “up a bit”, “down a bit” etc, but anything more than that would require a degree of responsiveness that I just don’t think I’d get with the current architecture. But again, I think that’s fine for the use cases I’m actually interested in.

Cameras tab

Conclusion

So that’s the app. There are two big questions, of course:

  • Is it useful?
  • What’s MAUI like to work with?

The answer to the first is an unqualified “yes” – more so than I’d expected. Just a couple of days ago, on Maundy Thursday, we had a communion service with everyone seated around a table. A couple of weeks earlier, I would have had to be sat apart from the rest of the congregation, at the A/V desk. That would definitely have disrupted the sense of fellowship, at least for me – and I suspect it would have made others feel slightly awkward too. With the mobile app, I was able to control it all discreetly from my place around the table.

In the future, I’m expecting to use the app mostly at the start of a service, if I have other stewarding duties that might involve me being up at the lectern to give verbal notices, for example. I still expect that for most services I’ll use the desktop AYS interface, complete with Stream Deck and X-Touch Mini… but it’s really nice to have the mobile app as an option.

In terms of MAUI – my feelings vary massively from minute to minute.

Let’s start off with the good: two weeks ago, this application didn’t exist at all. I literally started it on April 5th, and I used it to control almost every aspect of the A/V on April 10th. That’s despite me never having used either MAUI or Xamarin.Forms before, hardly doing any mobile development before, MAUI not being fully released yet, and all of the development only taking place in spare time. (I don’t know exactly how long I spent in those five days, but it can’t have been more than 8-12 hours.)

Despite being fully functional (and genuinely useful), the app required relatively little code to implement, and will be easy to maintain. Most of the time, debugging worked well through either the emulator or my physical device: UI changes could be applied without restarting (though this was variable), and regular debugger operations (stepping through code) worked far better than it feels they have any right to, given the multiple layers involved.

It’s not all sunshine and roses though:

  • The lack of a designer isn’t a huge problem, but it did make everything that bit harder when getting started.
  • Various bugs existed in the MAUI version I was using last week, some of which have now been fixed… but at the same time, other bugs have been introduced, such as the image one mentioned above.
  • I’ve seen various crashes that are pretty much infeasible for me to diagnose and debug, given my lack of knowledge of the underlying system:
    • One is an IllegalStateException with a message of “The specified child already has a parent. You must call removeView() on the child’s parent first.”
    • One is a NullPointerException for Object.toString()
    • I don’t know how to reproduce either of them right now.
  • Even when images were working, getting the layout and scaling right for them was very much a matter of trial and error. Various other aspects of layout have been surprising as well – I don’t know whether my expectations are incorrect, or whether these were bugs. I’m used to layouts sometimes being a bit of a mystery, but these were very odd.
  • The Windows app should provide an easy way of prototyping functionality without needing an emulator… and the home tab appears to work fine. Unfortunately the other tabs don’t magically appear (as they do on Android) after connecting, which makes it hard to make any further progress.
  • Sometimes the emulator seems to get stuck, and I can’t deploy to it. Unsticking it seems to be hit and miss. I don’t know whether this is an issue in the emulator itself, or how VS and MAUI are interacting with it.

In short, it’s very promising – but this doesn’t really feel like it’s release-candidate-ready yet. Maybe my stability expectations are too high, or maybe I’ve just been unlucky with the bugs I happen to have hit, but it doesn’t feel like I’ve been doing anything particularly unusual. I’m hopeful that things will continue to improve though, and maybe it’ll all be rock solid in 6 months or so.

I can see myself using MAUI for some desktop apps in the future – but I suspect that for anything that doesn’t naturally feel like it would just fit into a mobile app (with my limited design skills) I expect to keep using WPF. Now that I’ve got a bit of experience with MAUI, I can’t see myself porting V-Drum Explorer to it any time soon – it very much feels like “a mobile app framework that lets you run those mobile apps on the desktop”. That’s not a criticism as such; I suspect it’s an entirely valid product direction choice, it just happens not to be what I’m looking for.

All the problems aside, I’m still frankly astonished at getting a working, useful mobile app built in less than a week (and then polished a bit over the following week). Hats off to the MAUI team, and I look forward to seeing the rough edges become smoother in future releases.

What’s up with TimeZoneInfo on .NET 6? (Part 2)

In part 1, we ended up with a lot of test data specified in a text file, but without working tests – and with a conundrum as to how we’d test the .NET Core 3.1 data which requires additional information about the “hidden” AdjustmentRule.BaseUtcOffsetDelta property.

As with the previous blog post, this one is fairly stream-of-consciousness – you’ll see me changing my mind and spotting earlier mistakes as I go along. It’s not about trying to give advice – it’s intended for anyone who is interested in my thought process. If that’s not the kind of material you enjoy, I suggest you skip this post.

Abstracting platform differences

Well, time does wonders, and an answer to making most of the code agnostic to whether it’s running under .NET 6 or not now seems obvious: I can introduce my own class which is closer to the .NET 6 AdjustmentRule class, in terms of accessible properties. As it happens, a property of StandardOffset for the rule makes my code simpler than adding TimeZoneInfo.BaseUtcOffset and AdjustmentRule.BaseUtcOffsetDelta together every time I need it. But fundamentally I can put all the information I need into one class, and populate that class appropriately (without even needing a TimeZoneInfo) for tests, and use the TimeZoneInfo where necessary in the production code. (We have plenty of tests that use the actual TimeZoneInfo – but using just data from adjustment rules makes it easy to experiment with the Unix representation while on Windows.)

That means adding some derived data to our text file for .NET Core 3.1 – basically working out what the AdjustmentRule.BaseUtcOffsetDelta would be for that rule. We can do that by finding one instant in time within the rule, asking the TimeZoneInfo whether that instant is in daylight savings, and then comparing the prediction of “zone base UTC offset and maybe rule daylight savings” with the actual UTC offset.
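The arithmetic of that derivation is simple enough to sketch in isolation (the method shape and names here are mine, not Noda Time’s; the two boolean/offset inputs would come from TimeZoneInfo.IsDaylightSavingTime and TimeZoneInfo.GetUtcOffset for the probed instant):

```csharp
using System;

public static class AdjustmentRuleProbe
{
    public static TimeSpan DeriveBaseUtcOffsetDelta(
        TimeSpan zoneBaseUtcOffset,
        TimeSpan ruleDaylightDelta,
        bool instantIsDaylight,     // from TimeZoneInfo.IsDaylightSavingTime(probe)
        TimeSpan observedUtcOffset) // from TimeZoneInfo.GetUtcOffset(probe)
    {
        // Prediction without any delta: zone base offset, plus the
        // daylight delta if the probed instant is in daylight time.
        var predicted = zoneBaseUtcOffset
            + (instantIsDaylight ? ruleDaylightDelta : TimeSpan.Zero);
        // Whatever is left over is the hidden per-rule delta.
        return observedUtcOffset - predicted;
    }
}
```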

With that in place, we get most of the way to passing tests – at least for the fixed data we’re loading from the text file.

Hidden data

There’s one problem though: Europe/Dublin in 1960. We have this adjustment rule in both .NET Core 3.1 and .NET 6:

1960-04-10 - 1960-10-02: Daylight delta: +00; DST starts April 10 at 03:00:00 and ends October 02 at 02:59:59.999

Now I know that should actually be treated as “daylight savings of 1 hour, standard offset of UTC+0”. The TimeZoneInfo knows that as well – if you ask for the UTC offset at (say) June 1st 1960, you get the correct answer of UTC+1, and if you ask the TimeZoneInfo whether it’s in daylight savings or not, it returns true… but in most rules, a “daylight delta” of 0 means “this isn’t in daylight time”.

I believe this is basically a problem with the conversion from the internal rules to the publicly-available rules which loses some “hidden” bits of information. But basically it means that when I create my standardized adjustment rule, I need to provide some extra information. That’s annoying in terms of specifying yet more data in the text file, but it’s doable.

Given that the Unix rules and the Windows rules are really quite different, and I’ve already got separate paths for them (and everything still seems to be working on Windows), at this point I think it’s worth only using the “enhanced” adjustment rule code for Unix. That has the helpful property that we don’t need different ways of identifying the first instant in an adjustment rule: for Unix, you always use the daylight saving transition time of day; for Windows you never do.

At this point, I’m actually rethinking my strategy of “introduce a new type”. It’s got so much more than I really wanted it to have, I think I’m just going to split what was originally one method into two:

// Hard to test: needs a time zone
internal static BclAdjustmentRule FromUnixAdjustmentRule(TimeZoneInfo zone, TimeZoneInfo.AdjustmentRule rule)

... becomes ...

// Same signature as before, but now it just extracts appropriate information and calls the one below.
internal static BclAdjustmentRule FromUnixAdjustmentRule(TimeZoneInfo zone, TimeZoneInfo.AdjustmentRule rule)

// Easier to test with data from a text file
internal static BclAdjustmentRule FromUnixAdjustmentRule(TimeZoneInfo.AdjustmentRule rule,
    string standardName, string daylightName, TimeSpan zoneStandardOffset, TimeSpan ruleStandardOffset,
    bool forceDaylightSavings)

The unit tests that I’m trying to get to pass with just rule data just need to call into the second method. The production code (tested in other unit tests, but only on the “right” system) will call the first method.

(In the end, it turns out it’s easier to make the second method return a ZoneInterval rather than a BclAdjustmentRule, but the idea is the same.)

Are we nearly there yet?

At this point, I wish I’d paid slightly more attention while changing the code… because the code that did work for America/Sao_Paulo in 2018 is now failing for .NET 6. I can see why it’s failing now – I’m not quite so sure why it worked a few hours before.

The problem is in this pair of adjustment rules:

2017-10-15 - 2017-12-31: Daylight delta: +01; DST starts October 15 at 00:00:00 and ends December 31 at 23:59:59.999
2018-01-01 - 2018-02-17: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends February 17 at 23:59:59.999

These should effectively be merged: the end of the first rule should be the start of the second rule. In most rules, we can treat the rule as starting at the combination of “start date and start transition time” with a UTC offset of “the base offset of the zone” (not the rule). We then treat the rule as ending as the combination of “end date and end transition time” with a UTC offset of “the base offset of the zone + daylight delta of the rule”. But that doesn’t work in the example above: we end up with a gap of an hour between the two rules.

There’s a horrible hack that might fix this: if the end date is on December 31st with an end transition time of 23:59:59 (with any subsecond value), we could ignore daylight savings.
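The hack itself is easy enough to express (names invented for illustration; the real code works with adjustment rule data):

```csharp
using System;

public static class RuleEndHack
{
    // "The end date is on December 31st with an end transition time of
    // 23:59:59" - any subsecond value counts, so only compare to seconds.
    public static bool EndsAtYearEnd(DateTime endDate, TimeSpan endTransitionTime) =>
        endDate.Month == 12
        && endDate.Day == 31
        && endTransitionTime.Hours == 23
        && endTransitionTime.Minutes == 59
        && endTransitionTime.Seconds == 59;

    public static TimeSpan EffectiveEndOffset(
        TimeSpan zoneBaseOffset, TimeSpan daylightDelta,
        DateTime endDate, TimeSpan endTransitionTime) =>
        EndsAtYearEnd(endDate, endTransitionTime)
            ? zoneBaseOffset                     // the hack: ignore daylight savings
            : zoneBaseOffset + daylightDelta;    // normal rule end
}
```

For the America/Sao_Paulo pair above, that makes the first rule end at the same instant the second starts, closing the one-hour gap.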

In implementing that, I found a commented out piece of code which did it, which was effectively abandoned in the refactoring described above – so that explains my confusion about why the code had only just stopped working.

With that in place, the data-based unit tests are green.

Now to run the main set of unit tests…

TimeZoneInfo-based unit tests

Just to reiterate, I have two types of tests for this code:

  • Unit tests based on rules described in a text file. These can be run on any implementation, and always represent “the rule data we’d expect to see on Unix”. There are currently 16 tests here – specific periods of history for specific time zones.
  • Unit tests based on the TimeZoneInfo objects exposed by the BCL on the system we’re running on. These include checking every time zone, and ensuring that every transition between 1950 and either 2037 or 2050 (depending on the system, for reasons I won’t go into now) is the same between the TimeZoneInfo representation, and the Noda Time representation of the TimeZoneInfo.

The first set of tests is what we’ve been checking so far – I now need to get the second set of tests working in at least four contexts: .NET Core 3.1 and .NET 6, on both Windows and Linux.

When I started this journey, the tests were working on Windows (both .NET Core 3.1 and .NET 6) and on Linux on .NET Core 3.1. It was only .NET 6 that was failing. Let’s start by checking that we haven’t broken anything on Windows… yes, everything is still working there. That’s pretty unsurprising, given that I’ve aimed to keep the Windows code path exactly the same as it was before. (There’s a potential optimization I can apply using the new BaseUtcOffsetDelta property on .NET 6, but I can apply that later.)

Next is testing .NET Core 3.1 on Linux – this used to work, but I wouldn’t be surprised to see problems introduced by the changes… and indeed, the final change I made broke lots of time zones due to trying to add daylight savings to DateTime.MaxValue. That’s easily fixed… and with that fix in place there are two errors. That’s fine – I’ll check those later and add text-file-data-based tests for those two time zones. Let’s check .NET 6 first though, which had large numbers of problems before… now we have 14. Definite progress! Those 14 failures seem to fall into two categories, so I’ll address those first.

Adding more test data and fixing one problem

First, I’m going to commit the code I’ve got. It definitely needs changing, but if I try something that doesn’t help, I want to be able to get back to where I was.

Next, let’s improve the exception messages thrown by the .NET 6 code. There are two exceptions, which currently look like this:

(For Pacific/Wallis and others)
System.InvalidOperationException : Zone recurrence rules have identical transitions. This time zone is broken.

(For America/Creston and others)
System.ArgumentException : The start Instant must be less than the end Instant (Parameter 'start')

Both have stack traces, of course, but those don’t tell me what the invalid values are, which means I can’t easily find the relevant rules to work on.

After a few minutes of work, this is fixed and the output is much more informative:

(For Pacific/Wallis and others)
System.InvalidOperationException : Zone recurrence rules have identical transitions. This time zone is broken. Transition time: 0002-12-31T11:45:00Z

(For America/Creston and others)
System.ArgumentException : The start Instant must be less than the end Instant. start: 1944-01-01T07:00:00Z; end: 1944-01-01T06:01:00Z (Parameter 'start')

The broken rule for Pacific/Wallis is particularly interesting – year 2 AD! So let’s see what the rules look like in textual form. First, let’s look at Pacific/Wallis.

Pacific/Wallis

.NET Core 3.1:
Base offset = 12
0001-01-01 - 1900-12-31: Base UTC offset delta: +00:15; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends December 31 at 23:44:39
1900-12-31 - 2038-01-19: Daylight delta: +00; DST starts December 31 at 23:44:40 and ends January 19 at 15:14:06
2038-01-19 - 9999-12-31: Daylight delta: +00; DST starts January 19 at 15:14:07 and ends December 31 at 23:59:59

.NET 6.0:
Base offset = 12
0001-01-01 - 0001-12-31: Base UTC offset delta: +00:15; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends December 31 at 23:59:59.999
0002-01-01 - 1899-12-31: Base UTC offset delta: +00:15; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends December 31 at 23:59:59.999
1900-01-01 - 1900-12-31: Base UTC offset delta: +00:15; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends December 31 at 23:44:39.999

Noda Time zone intervals:
0001-01-01T00:00:00Z - 1900-12-31T11:44:40Z, +12:15:20, +0
1900-12-31T11:44:40Z - 9999-12-31T23:59:59Z, +12, +0

The first Noda Time zone interval extends from the start of time, and the second one extends to the end of time. I haven’t yet decided whether I’ll actually try to represent all of this in the “regular” kind of test. The offsets shown as +00:15 should actually be +00:15:20, but it looks like .NET doesn’t handle sub-minute offsets. That’s interesting… I can easily round the expected Noda Time offsets to +12:15 to match, of course.
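For the test data, that means reducing the real offset to the granularity TimeZoneInfo can represent. Whether it truncates or rounds to the nearest minute isn’t clear from this one example (20 seconds goes down either way), but a nearest-minute sketch looks like this – a hypothetical helper, not the real Noda Time code:

```csharp
using System;

// Hypothetical helper: reduce an offset (in seconds) to whole minutes,
// matching the granularity TimeZoneInfo appears to use for base offsets.
static int RoundOffsetToMinute(int offsetSeconds) =>
    (int) Math.Round(offsetSeconds / 60.0, MidpointRounding.AwayFromZero) * 60;

// +12:15:20 is 44120 seconds; to the nearest minute that's +12:15 (44100 seconds).
Console.WriteLine(RoundOffsetToMinute(44120));
```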

Both .NET Core 3.1 and .NET 6 have pretty “interesting” representations here:

  • Why does .NET Core 3.1 have a new rule in 2038? It’s no coincidence that the instant being represented is 2³¹ seconds after the Unix epoch, I’m sure… but there’s no need for a new rule.
  • Why does .NET 6 have one rule for year 1AD and a separate rule for years 2 to 1899 inclusive?
  • Why use an offset rounded to the nearest minute, but keep the time zone transition at 1900-12-31T11:44:40Z?

It’s not clear to me just from inspection why this would cause the Noda Time conversion to fail, admittedly. But that’ll be fun to dig into. Before we do, let’s find the test data for America/Creston, around 1944:

America/Creston

.NET Core 3.1:
Base offset = -7
1942-02-09 - 1944-01-01: Daylight delta: +01; DST starts February 09 at 02:00:00 and ends January 01 at 00:00:59
1943-12-31 - 1944-04-01: Daylight delta: +00; DST starts December 31 at 23:01:00 and ends April 01 at 00:00:59
1944-04-01 - 1944-10-01: Daylight delta: +01; DST starts April 01 at 00:01:00 and ends October 01 at 00:00:59
1944-09-30 - 1967-04-30: Daylight delta: +00; DST starts September 30 at 23:01:00 and ends April 30 at 01:59:59

.NET 6.0:
Base offset = -7
1942-02-09 - 1942-12-31: Daylight delta: +01; DST starts February 09 at 02:00:00 and ends December 31 at 23:59:59.999
1943-01-01 - 1943-12-31: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends December 31 at 23:59:59.999
1944-01-01 - 1944-01-01: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends January 01 at 00:00:59.999
1944-04-01 - 1944-10-01: Daylight delta: +01; DST starts April 01 at 00:01:00 and ends October 01 at 00:00:59.999
1967-04-30 - 1967-10-29: Daylight delta: +01; DST starts April 30 at 02:00:00 and ends October 29 at 01:59:59.999

Noda Time zone intervals:

1942-02-09T09:00:00Z - 1944-01-01T06:01:00Z, -7, +1
1944-01-01T06:01:00Z - 1944-04-01T07:01:00Z, -7, +0
1944-04-01T07:01:00Z - 1944-10-01T06:01:00Z, -7, +1
1944-10-01T06:01:00Z - 1967-04-30T09:00:00Z, -7, +0

Well those are certainly “interesting” rules – and I can immediately see why Noda Time has rejected the .NET 6 rules. The third rule starts at 1944-01-01T07:00:00Z (assuming that the 1944-01-01T00:00:00 is in zone standard time of UTC-7) and finishes at 1944-01-01T06:01:00Z (assuming that the 1944-01-01T00:00:59.999 is in daylight time).

Part of the problem is that we’ve said before that if a rule ends on December 31st at 23:59:59, we’ll interpret that as being in standard time instead of being in daylight time… which means that the second rule would finish at 1944-01-01T07:00:00Z – but we genuinely want it to finish at 06:00:00Z, and maybe understand the third rule to mean 1944-01-01T06:00:00Z to 1944-01-01T06:01:00Z for that last minute of daylight time before we observe standard time until April 1st.

We could do that by adding special rules:

  • If a rule appears to end before it starts, assume that the start should be treated as daylight time instead of standard time. (That would make the third rule valid, covering 1944-01-01T06:00:00Z to 1944-01-01T06:01:00Z.)
  • If one rule ends after the next one starts, change its end point to the start of the next one. I currently have the reverse logic to this, changing the start point of the next one instead. That wouldn’t help us here. I can’t remember exactly why I’ve got the current logic: I need to add some comments on this bit of code…
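In sketch form, the second of those rules amounts to the following. This uses a hypothetical `Rule` record purely for illustration – the real code works with the converted adjustment rules:

```csharp
using System;
using System.Collections.Generic;

var rules = new List<Rule>
{
    // The first rule ends after the second one starts (as with America/Creston in 1944).
    new(new DateTime(1943, 1, 1), new DateTime(1944, 1, 1, 7, 0, 0)),
    new(new DateTime(1944, 1, 1, 6, 0, 0), new DateTime(1944, 4, 1)),
};
FixUpOverlappingRules(rules);
Console.WriteLine(rules[0]); // The first rule's end is now clamped to the second rule's start.

// If one rule ends after the next one starts, clamp its end to the next rule's start.
// (My current code changes the start of the *next* rule instead, which doesn't help here.)
static void FixUpOverlappingRules(IList<Rule> rules)
{
    for (int i = 1; i < rules.Count; i++)
    {
        if (rules[i - 1].End > rules[i].Start)
        {
            rules[i - 1] = rules[i - 1] with { End = rules[i].Start };
        }
    }
}

// Hypothetical minimal stand-in for the converted rule type, for illustration only.
record Rule(DateTime Start, DateTime End);
```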

Astonishingly, this works, getting us down to 8 errors on .NET 6. Of these, 6 are the same kind of error as Pacific/Wallis, but 2 are unfortunately of the form “you created a time zone successfully, but it doesn’t give the same results as the original one”. Hmm.

Still, let’s commit this change and move on to Pacific/Wallis.

Handling Pacific/Wallis

Once I’d added the Pacific/Wallis data, those tests passed – which means the problem must lie somewhere in how the results of the rules are interpreted in order to build up a DateTimeZone from the converted rules. That logic’s already in a BuildMap method, which I just need to make internal instead of private. That also contains some code which we’re essentially duplicating in the test code (around inserting standard zone intervals if they’re missing, and coalescing some zone intervals together). At some point I want to refactor both the production and test code to remove the duplication – but I want to get to working code first.

I’ll add a new test, just for Pacific/Wallis (as that’s the only test case we’ve got which is complete from the start to the end of time), and which just constructs the map. I expect it will throw an exception, so I’m not actually going to assert anything about the result yet.

Hmm. It doesn’t throw. That’s weird. Let’s rerun the full time zone tests to make sure we still have a problem at all… yes, it’s still failing.

At this point, my suspicion is that some of the code that is “duplicated” between production and test code really isn’t quite duplicated at all. Debugging the code on Linux is going to be annoying, so let’s go about this the old-fashioned way: logging.

I’d expected to be able to log the zone interval for each part of the map… but PartialZoneIntervalMap.GetZoneInterval fails, which is really, really weird. What’s even weirder is that the stack trace includes StandardDaylightAlternatingMap – which is only used in the Windows rules.

All my unit tests assume we’ve recognized that the adjustment rules are from Unix… but the ones for Pacific/Wallis actually look like Windows ones: on .NET 6, they start on January 1st, and end on December 31st.

Let’s add a grotty hack: I happen to know that Windows time zone data never has any “really old” rules other than ones that start at the beginning of time – if we see anything with a start year that’s not 1 and isn’t after (say) 1600, that’s going to be a Unix rule.
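The extra condition is tiny – something like this (a sketch; the real classification code may be shaped differently):

```csharp
using System;

// Grotty hack: Windows rules either start at the beginning of time (year 1)
// or are relatively modern, so anything else must have come from Unix data.
static bool LooksLikeUnixRule(int ruleStartYear) =>
    ruleStartYear != 1 && ruleStartYear <= 1600;

Console.WriteLine(LooksLikeUnixRule(2));    // True – e.g. the Pacific/Wallis rule starting in 2 AD
Console.WriteLine(LooksLikeUnixRule(1));    // False – starts at the beginning of time
Console.WriteLine(LooksLikeUnixRule(2038)); // False – modern rule
```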

Put that extra condition in, and lo and behold, Pacific/Wallis starts working – hooray!

Let’s rerun everything…

So, running all the tests on every framework/platform pair that I can easily test, we get:

  • Linux, .NET 6: 2 failures – Australia/Broken_Hill and Antarctica/Macquarie
  • Linux, .NET Core 3.1: 3 failures – Asia/Dhaka, Australia/Broken_Hill and Antarctica/Macquarie
  • Windows, .NET 6: all tests pass
  • Windows, .NET Core 3.1: all tests pass

All the remaining failures are of the form “the offset is wrong at instant X”.

So, joy of joys, we’re back to collecting more data and adding more test cases.

First though, I’ll undo making a few things internal that didn’t actually help in the end. I might redo them later, but I don’t need them now. Basically the extra test to create the full map didn’t give me any more insight. (There’s a bunch of refactoring to do when I’ve got all the tests passing, but I might as well avoid having more changes than are known to be helpful.)

Going for green

At this point, I should reveal that I have a hunch. One bit of code I deleted when rewriting the “Unix rule to zone interval” conversion code handled the opposite of the issue I described earlier (rules which are in daylight savings, but have a DaylightDelta of zero). The code I deleted basically said “if the time zone says it’s not in daylight savings, but DaylightDelta is non-zero, then treat it as zero anyway.” So I’m hoping that’s the issue, but I want to get the test data first. I’m hoping that it’s the same for all three time zones that are having problems though. We’ll start with Australia/Broken_Hill, which is failing during 1999. Dumping the rules in .NET 6 and .NET Core 3.1 under Linux, and looking at the Noda Time tzvalidate page, I get:

Base UTC offset: 09:30

.NET Core 3.1:
1998-10-25 - 1999-03-28: Daylight delta: +01; DST starts October 25 at 02:00:00 and ends March 28 at 02:59:59
1999-03-28 - 1999-10-31: Daylight delta: +00; DST starts March 28 at 02:00:00 and ends October 31 at 01:59:59
1999-10-31 - 1999-12-31: Daylight delta: +01; DST starts October 31 at 02:00:00 and ends December 31 at 23:59:59
1999-12-31 - 2000-03-26: Daylight delta: +01; DST starts December 31 at 23:00:00 and ends March 26 at 02:59:59

.NET 6:

1999-01-01 - 1999-03-28: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends March 28 at 02:59:59.999
1999-10-31 - 1999-12-31: Daylight delta: +01; DST starts October 31 at 02:00:00 and ends December 31 at 23:59:59.999
1999-12-31 - 1999-12-31: Daylight delta: +01; DST starts December 31 at 23:00:00 and ends December 31 at 23:59:59.999
2000-01-01 - 2000-03-26: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends March 26 at 02:59:59.999

Noda Time:
1998-10-24T16:30:00Z - 1999-03-27T16:30:00Z, +09:30, +01
1999-03-27T16:30:00Z - 1999-10-30T16:30:00Z, +09:30, +00
1999-10-30T16:30:00Z - 2000-03-25T16:30:00Z, +10:30, +01

Annoyingly, my test data parser doesn’t handle partial-hour base UTC offsets at the moment, so I’m going to move on to Antarctica/Macquarie – I’ll put Broken Hill back in later if I need to. The text format of Broken Hill would be:

Australia/Broken_Hill
Base offset = 09:30
---
1998-10-25 - 1999-03-28: Daylight delta: +01; DST starts October 25 at 02:00:00 and ends March 28 at 02:59:59
1999-03-28 - 1999-10-31: Daylight delta: +00; DST starts March 28 at 02:00:00 and ends October 31 at 01:59:59
1999-10-31 - 1999-12-31: Daylight delta: +01; DST starts October 31 at 02:00:00 and ends December 31 at 23:59:59
1999-12-31 - 2000-03-26: Daylight delta: +01; DST starts December 31 at 23:00:00 and ends March 26 at 02:59:59
---
1999-01-01 - 1999-03-28: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends March 28 at 02:59:59.999
1999-10-31 - 1999-12-31: Daylight delta: +01; DST starts October 31 at 02:00:00 and ends December 31 at 23:59:59.999
1999-12-31 - 1999-12-31: Daylight delta: +01; DST starts December 31 at 23:00:00 and ends December 31 at 23:59:59.999
2000-01-01 - 2000-03-26: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends March 26 at 02:59:59.999
---
1998-10-24T16:30:00Z - 1999-03-27T16:30:00Z, +09:30, +01
1999-03-27T16:30:00Z - 1999-10-30T16:30:00Z, +09:30, +00
1999-10-30T16:30:00Z - 2000-03-25T16:30:00Z, +10:30, +01

However, let’s have a look at Antarctica/Macquarie, which is broken in 2009. Here’s the data:

Antarctica/Macquarie
Base UTC offset: +10

.NET Core 3.1:
2008-10-05 - 2009-04-05: Daylight delta: +01; DST starts October 05 at 02:00:00 and ends April 05 at 02:59:59
2009-04-05 - 2009-10-04: Daylight delta: +00; DST starts April 05 at 02:00:00 and ends October 04 at 01:59:59
2009-10-04 - 2009-12-31: Daylight delta: +01; DST starts October 04 at 02:00:00 and ends December 31 at 23:59:59
2009-12-31 - 2011-04-03: Daylight delta: +01; DST starts December 31 at 23:00:00 and ends April 03 at 02:59:59

.NET 6.0:
2008-10-05 - 2008-12-31: Daylight delta: +01; DST starts October 05 at 02:00:00 and ends December 31 at 23:59:59.999
2009-01-01 - 2009-04-05: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends April 05 at 02:59:59.999
2009-10-04 - 2009-12-31: Daylight delta: +01; DST starts October 04 at 02:00:00 and ends December 31 at 23:59:59.999
2009-12-31 - 2009-12-31: Daylight delta: +01; DST starts December 31 at 23:00:00 and ends December 31 at 23:59:59.999
2010-01-01 - 2010-12-31: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends December 31 at 23:59:59.999
2011-01-01 - 2011-04-03: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends April 03 at 02:59:59.999

Noda Time:
2008-10-04T16:00:00Z - 2009-04-04T16:00:00Z +10, +01
2009-04-04T16:00:00Z - 2009-10-03T16:00:00Z +10, +00
2009-10-03T16:00:00Z - 2011-04-02T16:00:00Z +10, +01

(The .NET 6 data needs to go as far as 2011 in order to include all the zone intervals for 2009, because there were no changes in 2010.)

Good news! The test fails… but only in .NET Core 3.1.

(A few days later… this blog post is being written in sporadic bits of spare time.)

Okay, let’s check I can still reproduce this – in .NET 6 on Linux, the BclDateTimeZone test that converts a TimeZoneInfo to a BclDateTimeZone fails for Antarctica/Macquarie because it gives the wrong offset at 2009-10-04T00:00:00Z – the TimeZoneInfo reports +11, and the BclDateTimeZone reports +10. But the unit test for the .NET 6 data apparently gives the correct ZoneInterval. Odd.

Again, this is tricky to debug, so I’ll add some logging. Just a simple “new” test that logs all of the zone intervals in the relevant period. The results are:

Australian Eastern Standard Time: [2008-04-05T16:00:00Z, 2008-10-04T16:00:00Z) +10 (+00)
Australian Eastern Daylight Time: [2008-10-04T16:00:00Z, 2009-04-04T16:00:00Z) +11 (+01)
Australian Eastern Standard Time: [2009-04-04T16:00:00Z, 2009-12-31T13:00:00Z) +10 (+00)
Australian Eastern Daylight Time: [2009-12-31T13:00:00Z, 2011-04-02T16:00:00Z) +11 (+01)
Australian Eastern Standard Time: [2011-04-02T16:00:00Z, 2011-10-01T16:00:00Z) +10 (+00)
Australian Eastern Daylight Time: [2011-10-01T16:00:00Z, 2012-03-31T16:00:00Z) +11 (+01)

It looks like we’re not adding in the implicit standard time zone interval between April and October 2009. This is code that I’m aiming to deduplicate between the production code and the rule-data-oriented unit tests – it looks like the unit test code is doing the right thing, but the production code isn’t.

(In the process of doing this, I’ve decided to suppress warning CA1303 – it’s completely useless for me, and actively hinders simple debug-via-console-logging.)

Adding extra logging to the BuildMap method, it looks like we’ve already lost the information by the time we get there: there’s no sign of the October 2009 date anywhere. Better look further back in the code…

… and immediately spot the problem. This code, intended to handle overlapping rules:

convertedRules[i - 1] = convertedRules[i].WithEnd(convertedRules[i].Start);

… should be:

convertedRules[i - 1] = convertedRules[i - 1].WithEnd(convertedRules[i].Start);

Again, that’s code which isn’t covered by the rule-data-oriented unit tests. I’m really looking forward to removing that duplication. Anyway, let’s see what that fix leaves us with… oh. It hasn’t actually fixed it. Hmm.

Ha. I fixed it in this blog post but not in the actual code. So it’s not exactly a surprise that the tests were still broken!

Having actually fixed it, now the only failing test in .NET 6 on Linux is the one testing the .NET Core 3.1 data for Antarctica/Macquarie. Hooray! Running the tests for .NET Core 3.1, that’s also the only failing test. The real time zone seems to be okay. That’s odd… and suggests that my test data was incorrectly transcribed. Time to check it again… no, it really is that test data. Hmm. Maybe there’s another bug in the code that’s intended to be duplicated between production and tests – but this time with the bug in the test code.

Aha… the .NET Core 3.1 test code didn’t have the “first fix up overlapping rules” code that’s in the .NET 6 tests. The circumstances in which that fix-up is needed happen much more rarely when using the .NET Core 3.1 rules – this is the first time we’d needed it for the rule-data-oriented tests, but it was happening unconditionally in the production code. So that makes sense.

Copy that code (which now occurs three times!) and the tests pass.

All the tests are green, across Windows and Linux, .NET Core 3.1 and .NET 6.0 – wahoo!

Time for refactoring

First things first: commit the code that’s working. I’m pretty reasonable at refactoring, but I wouldn’t put it past myself to mess things up.

Okay, let’s try to remove as much of the code in the tests as possible. They should really be pretty simple. It’s pretty easy to extract the code that fixes up overlapping adjustment rules – with a TODO comment that says I should really add a test to make sure the overlap is an expected kind (i.e. by exactly the amount of daylight savings, which should be the same for both rules). I’ll add that later.

The part about coalescing adjacent intervals is trickier though – that’s part of a process creating a “full” zone interval map, extending to the start and end of time. It’s useful to do that, but it leaves us with a couple of awkward aspects of the existing test data. Sometimes we have DST adjustment rules at the start or end of the .NET 6 data just so that the desired standard time rule can be generated.

In the tests we just committed, we accounted for that separately by removing those “extra” rules before validating the results. It’s harder to do that in the new tests, and it goes against the aim of making the tests as simple as possible. Additionally, if the first or last zone interval is a standard time one, the “create full map” code will extend those intervals to the start of time, which isn’t what we want either.

Rather than adding more code to handle this, I’ll just document that all test data must start and end with a daylight zone interval – or the start/end of time. Most of the test data already complies with this – I just need to add a bit more information for the others.

Interestingly, while doing this, I found that there’s an odd discrepancy between data sources for Europe/Prague in 1945 in terms of when daylight savings started:

  • TimeZoneInfo says April 2nd in one rule, and then May 8th in another
  • Noda Time (and tzvalidate, and zdump) says April 2nd
  • timeanddate.com says April 8th

Fortunately, specifying both of the rules from TimeZoneInfo ends up with the tests passing, so I’ll take that.

With those data changes in, everything’s nicer and green – so let’s commit again.

Next up, there’s a bit of rearranging of the test code itself. Literally just moving code around, mostly moving it into the two nested helper classes.

Again, run all the tests – still green, so let’s commit again. (I’m planning on squashing all of these commits together by the end, of course.)

Next there’s a simplification option which I’d noted before but never found time to implement – just removing a bit of redundancy. While fixing that, I’ve noticed we’ve got IZoneIntervalMap and IZoneIntervalMapWithMinMax… if we could add min/max offset properties to IZoneIntervalMap, we could simplify things further. I’ve just added a TODO for this though, as it’s a larger change than I want to include immediately. Run tests, commit on green.

When comments become more than comments

Now for some more comments. The code I’ve got works, but the code to handle the corner cases in BclAdjustmentRule.ConvertUnixRuleToBclAdjustmentRule isn’t commented clearly. For every case, I’m going to have a comment that:

  • Explains what the data looks like
  • Explains what it’s intended to mean
  • Gives a sample zone + framework (ideally with a date) so I can look at the data again later on

Additionally, the code that goes over a set of converted Unix rules and fixes up the end instant for overlapping rules needs:

  • More comments including an example
  • Validation to ensure the fix-up only occurs in an expected scenario
  • A test for that validation (with deliberately broken rules)

When trying to do this, I’ve found it hard to justify some of the code. It’s all just a bit too hacky. With tests in place that can be green, I’ve tried to improve things – in particular, there was some code to handle a rule ending at the very end of the year, which changed the end point from daylight time to standard time. That feels like it’s better handled in the “fix-up” code… but even that’s really hard to justify.

What I can do is leave the code there with some examples of what fails. I’m still hoping I can simplify it more later though.

A new fly in the ointment

In the course of doing this, I’ve discovered one additional tricky aspect: in .NET 6, the last adjustment rule can be a genuine alternating one rather than a fixed one. For example, Europe/London finishes with:

2037-10-25 - 9999-12-31: Daylight delta: +01; DST starts Last Sunday of March; 01:00:00 and ends Last Sunday of October; 02:00:00

Currently we don’t take that into account, and that will make life trickier. Sigh. That rule isn’t too hard to convert, but it means I’m unlikely to get there today after all.

It shouldn’t be too hard to test this though: we currently have this line of code in BclDateTimeZoneTest:

// Currently .NET Core doesn't expose the information we need to determine any DST recurrence
// after the final tzif rule. For the moment, limit how far we check.
// See https://github.com/dotnet/corefx/issues/17117
int endYear = TestHelper.IsRunningOnDotNetCoreUnix ? 2037 : 2050;

That now needs to be “on .NET Core 3.1, use 2037; on .NET 6 use 2050”. With that change in place, I expect the tests to fail. I’ve decided I won’t actually implement that in the first PR; let’s get all the existing tests working first, then extend them later.
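In sketch form, the updated condition might look like this (hypothetical – the real test helper may detect the runtime version differently):

```csharp
// Currently .NET Core 3.1 doesn't expose the information we need to determine any DST
// recurrence after the final tzif rule, but .NET 6 does. Hypothetical sketch of the change:
int endYear = TestHelper.IsRunningOnDotNetCoreUnix && Environment.Version.Major < 6
    ? 2037
    : 2050;
```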

Let’s get it merged…

Even though there’s more work to do, this is much better than it was.

It’s time to get it merged, and then finish up the leftover work. I might also file an issue asking for Microsoft to improve the documentation and see if they’re able to provide sample code that makes sense of all of this…

Diagnosing an ASP.NET Core hard crash

As part of my church A/V system (At Your Service), I run a separate local web server to interact with the Zoom SDK. Initially this was because the Zoom SDK would only run in 32-bit processes and I needed a 64-bit process to handle the memory requirements for the rest of the app. However, it’s also proven useful in terms of keeping the Zoom meeting for a church service alive if At Your Service crashes. Obviously I try hard to avoid that happening, but when interoperating with a lot of native code (LibVLC, NDI, the Stream Deck, PowerPoint via COM) there are quite a few avenues for crashes. The web server runs ASP.NET Core within a WPF application to make it easy to interact with logs while it’s running, and to give the Zoom SDK a normal event dispatcher.

Yesterday when trying to change my error handling code significantly, I found that the web server was crashing hard, with no obvious trace of what’s going on. I’ve already spent a little time trying to figure out what’s going on, but I couldn’t get to the bottom of it. I know the immediate cause of the crash, and I’ve fixed that fairly easily – but I want to harden the web server against any further bugs I might introduce. I figured it would be useful to blog about that process as I went along.

What I know so far

The immediate crash was due to an exception being thrown in an async void method.

Relevant threading aspects:

  • I start the ASP.NET Core app in a separate thread (although that’s probably unnecessary anyway, now that I think about it) calling IHost.Start
  • I have handlers for Dispatcher.UnhandledException and TaskScheduler.UnobservedTaskException
  • I execute all Zoom-specific code on the WPF Dispatcher thread

The immediate error came from code like the following. You can ignore the way this effectively reproduces Web API to some extent… it’s the method body that’s important.

public abstract class CommandBase<TRequest, TResponse> : CommandBase
{
    public override async Task HandleRequest(HttpContext context)
    {
        var reader = new StreamReader(context.Request.Body);
        var text = await reader.ReadToEndAsync();
        var request = JsonUtilities.Parse<TRequest>(text);

        var dispatcher = Application.Current.Dispatcher;
        try
        {
            var response = await dispatcher.Invoke(() => ExecuteAsync(request));
            var responseJson = JsonUtilities.ToJson(response);
            await context.Response.WriteAsync(responseJson);
        }
        catch (ZoomSdkException ex)
        {
            SetExceptionResponse(new ZoomExceptionResponse { /* More info here */ });
        }

        async void SetExceptionResponse(ZoomExceptionResponse response)
        {
            var responseJson = JsonUtilities.ToJson(response);
            await context.Response.WriteAsync(responseJson);
            context.Response.StatusCode = 500;
        }
    }

    public abstract Task<TResponse> ExecuteAsync(TRequest request);
}

There are at least three problems here:

  • I’m trying to set HttpResponse.StatusCode after writing the body
  • The SetExceptionResponse method is async void (generally a bad idea)
  • I’m not awaiting the call to SetExceptionResponse (which I can’t, due to it returning void)

(It’s also a bit pointless having a local method there. This code could do with being rewritten when I don’t have Covid brain fog, but hey…)

The first of these causes an InvalidOperationException to be thrown. The second and third, between them, cause the app to crash. The debugger has been no help here in working out what’s going on.
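For reference, a less fragile shape for that method might look like this – a sketch only, reusing the types from the snippet above: set the status code before writing anything to the body, and drop the `async void` local method entirely so the error path is awaited like everything else.

```csharp
public override async Task HandleRequest(HttpContext context)
{
    var reader = new StreamReader(context.Request.Body);
    var text = await reader.ReadToEndAsync();
    var request = JsonUtilities.Parse<TRequest>(text);

    var dispatcher = Application.Current.Dispatcher;
    try
    {
        var response = await dispatcher.Invoke(() => ExecuteAsync(request));
        var responseJson = JsonUtilities.ToJson(response);
        await context.Response.WriteAsync(responseJson);
    }
    catch (ZoomSdkException)
    {
        // Set the status code first: it can't be changed once the body has started.
        context.Response.StatusCode = 500;
        var responseJson = JsonUtilities.ToJson(new ZoomExceptionResponse { /* More info here */ });
        await context.Response.WriteAsync(responseJson);
    }
}
```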

Create with a console app to start ASP.NET Core

It feels like this should be really easy to demonstrate in a simple console app that does nothing but start a web server which fails in this particular way.

At this stage I should say how much I love the new top-level statements in C# 10. They make simple complete examples an absolute joy. So let’s create a console app, change the Sdk attribute in the project file to Microsoft.NET.Sdk.Web, and see what we can do. I’m aware that with ASP.NET Core 6 there are probably even simpler ways of starting the server, but this will do for now:

using System.Net;

var host = Host.CreateDefaultBuilder()
    .ConfigureWebHostDefaults(builder => builder
        .ConfigureKestrel((context, options) => options.Listen(IPAddress.Loopback, 8080))
        .Configure(application => application.Run(HandleRequest)))
    .Build();
host.Start();
host.WaitForShutdown();

async Task HandleRequest(HttpContext context)
{
    await context.Response.WriteAsync("Testing");
}
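For completeness, the minimal hosting model in ASP.NET Core 6 would make this even shorter – something like the following (an untested sketch; I’m sticking with the ConfigureWebHostDefaults version for the rest of this post):

```csharp
using System.Net;

var builder = WebApplication.CreateBuilder();
builder.WebHost.ConfigureKestrel(options => options.Listen(IPAddress.Loopback, 8080));

var app = builder.Build();
// Terminal middleware: handle every request the same way.
app.Run(async context => await context.Response.WriteAsync("Testing"));

app.Run();
```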

Trying to run that initially brought up prompts about IIS Express and trusting SSL certificates – all very normal for a regular web app, but not what I want here. After editing launchSettings.json to a simpler set of settings:

{
  "profiles": {
    "AspNetCoreCrash": {
      "commandName": "Project"
    }
  }
}

… I can now start the debugger, then open up localhost:8080 and get the testing page. Great.

Reproduce the exception

Next step: make sure I can throw the InvalidOperationException in the same way as the original code. This is easy – just replace the body of the HandleRequest method:

async Task HandleRequest(HttpContext context)
{
    await context.Response.WriteAsync("Testing");
    context.Response.StatusCode = 500;
}

Sure enough the console logs show that it’s failed as expected:

System.InvalidOperationException: StatusCode cannot be set because the response has already started.
  at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ThrowResponseAlreadyStartedException(String value)
  at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.set_StatusCode(Int32 value)
  at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.Microsoft.AspNetCore.Http.Features.IHttpResponseFeature.set_StatusCode(Int32 value)
  at Microsoft.AspNetCore.Http.DefaultHttpResponse.set_StatusCode(Int32 value)
  at Program.<<Main>$>g__HandleRequest|0_1(HttpContext context) in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\Program.cs:line 19
  at Microsoft.WebTools.BrowserLink.Net.BrowserLinkMiddleware.ExecuteWithFilterAsync(IHttpSocketAdapter injectScriptSocket, String requestId, HttpContext httpContext)
  at Microsoft.AspNetCore.Watch.BrowserRefresh.BrowserRefreshMiddleware.InvokeAsync(HttpContext context)
  at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)

… but (again, as expected) the server is still running. It’s interesting that BrowserLink occurs in the stack trace – I suspect that wouldn’t be the case in the real app.

Let’s try making the failure occur in the same way as in At Your Service:

async Task HandleRequest(HttpContext context)
{
    // In AYS we await executing code in the dispatcher;
    // Task.Yield should take us off the synchronous path.
    await Task.Yield();
    WriteError();

    async void WriteError()
    {
        await context.Response.WriteAsync("Testing");
        context.Response.StatusCode = 500;
    }
}

This time we get a longer stack trace, and the process quits, just like in AYS:

System.InvalidOperationException: StatusCode cannot be set because the response has already started.
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ThrowResponseAlreadyStartedException(String value)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.set_StatusCode(Int32 value)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.Microsoft.AspNetCore.Http.Features.IHttpResponseFeature.set_StatusCode(Int32 value)
   at Microsoft.AspNetCore.Http.DefaultHttpResponse.set_StatusCode(Int32 value)
   at Program.c__DisplayClass0_0.<<<Main>$>g__WriteError|4>d.MoveNext() in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\Program.cs:line 19
--- End of stack trace from previous location ---
   at System.Threading.Tasks.Task.c.b__128_1(Object state)
   at System.Threading.QueueUserWorkItemCallback.c.b__6_0(QueueUserWorkItemCallback quwi)
   at System.Threading.ExecutionContext.RunForThreadPoolUnsafe[TState](ExecutionContext executionContext, Action`1 callback, TState& state)
   at System.Threading.QueueUserWorkItemCallback.Execute()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
   at System.Threading.Thread.StartCallback()

This happens in both the debugger and when running from the command line.

Setting a break point in the WriteError method shows a stack trace like this:

   at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine)
   at Program.c__DisplayClass0_0.<<Main>$>g__WriteError|4()
   at Program.<<Main>$>g__HandleRequest|0_1(HttpContext context) in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\Program.cs:line 14
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.ExecutionContextCallback(Object s)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.MoveNext(Thread threadPoolThread)
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.MoveNext()
   at System.Runtime.CompilerServices.YieldAwaitable.YieldAwaiter.<>c.<OutputCorrelationEtwEvent>b__6_0(Action innerContinuation, Task continuationIdTask)
   at System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke()
   at System.Runtime.CompilerServices.YieldAwaitable.YieldAwaiter.RunAction(Object state)
   at System.Threading.QueueUserWorkItemCallbackDefaultContext.Execute()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
   at System.Threading.Thread.StartCallback()

There’s nothing about ASP.NET Core in there at all… so I wonder if we can take that out of the equation too?

Reproducing the crash in a pure console app

To recap, I’m expecting at this stage that to reproduce the crash I should:

  • Write an async void method that throws an exception
  • Call that method from a regular async method

Let’s try:

#pragma warning disable CS1998 // Async method lacks 'await' operators and will run synchronously
await NormalAsyncMethod();
Console.WriteLine("Done");

async Task NormalAsyncMethod()
{
    await Task.Yield();
    Console.WriteLine("Start of NormalAsyncMethod");
    BrokenAsyncMethod();
    Console.WriteLine("End of NormalAsyncMethod");
}

async void BrokenAsyncMethod()
{
    await Task.Yield();
    throw new Exception("Bang");
}

Hmm. That exits normally:

$ dotnet run
Start of NormalAsyncMethod
End of NormalAsyncMethod
Done

But maybe there’s a race condition between the main thread finishing and the problem crashing the process? Let’s add a simple sleep:

#pragma warning disable CS1998 // Async method lacks 'await' operators and will run synchronously
await NormalAsyncMethod();
Thread.Sleep(1000);
Console.WriteLine("Done");
// Remainder of code as before

Yup, this time it crashes hard:

Start of NormalAsyncMethod
End of NormalAsyncMethod
Unhandled exception. System.Exception: Bang
   at Program.<<Main>$>g__BrokenAsyncMethod|0_1() in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\ConsoleCrash\Program.cs:line 15
   at System.Threading.Tasks.Task.<>c.<ThrowAsync>b__128_1(Object state)
   at System.Threading.QueueUserWorkItemCallbackDefaultContext.Execute()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
   at System.Threading.Thread.StartCallback()

Interlude: what about async Task?

At this point I’m remembering some of what I’ve learned about how async void methods handle exceptions. What happens if we turn it into an async Task method instead? At that point, the Task returned by the method (which we ignore) will have the exception, and as by default unobserved task exceptions no longer crash the process, maybe we’ll be okay. So just changing BrokenAsyncMethod to:

async Task BrokenAsyncMethod()
{
    throw new Exception("Bang");
}

(and ignoring the warning at the call site)… the program no longer crashes. (I could subscribe to TaskScheduler.UnobservedTaskException but I’m not that bothered… I’m pretty convinced it would fire, at least eventually.)
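For what it's worth, a minimal sketch of that subscription might look like this (my own code, not anything from AYS). Note that the event is only raised when the abandoned Task is finalized, hence the explicit GC nudge – the exact timing isn't guaranteed:

```csharp
// Track whether the event fired, and subscribe before any tasks are abandoned.
bool observed = false;
TaskScheduler.UnobservedTaskException += (sender, args) =>
{
    observed = true;
    Console.WriteLine($"Unobserved: {args.Exception.InnerException?.Message}");
    args.SetObserved(); // mark it as handled
};

// An async Task method whose returned task we deliberately discard.
Func<Task> brokenAsyncMethod = async () =>
{
    await Task.Yield();
    throw new Exception("Bang");
};
_ = brokenAsyncMethod();

// Let the task fault, then force finalization of the abandoned Task object -
// finalization is what eventually raises UnobservedTaskException.
Thread.Sleep(100);
GC.Collect();
GC.WaitForPendingFinalizers();
Console.WriteLine("Done");
```

Run like that, the process survives and prints the exception message before "Done" – the "it would fire, at least eventually" behaviour, just hurried along by the forced collection.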

Do all ThreadPool exceptions crash the app?

We don’t need to use async methods to execute code on the thread pool. What happens if we just write a method which throws an exception, and call that from the thread pool?

ThreadPool.QueueUserWorkItem(ThrowException);
Thread.Sleep(1000);
Console.WriteLine("Done");

void ThrowException(object? state)
{
    throw new Exception("Bang!");
}

Yup, that crashes:

Unhandled exception. System.Exception: Bang!
   at Program.<<Main>$>g__ThrowException|0_0(Object state) in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\ConsoleCrash\Program.cs:line 7
   at System.Threading.QueueUserWorkItemCallbackDefaultContext.Execute()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
   at System.Threading.Thread.StartCallback()

At this point some readers (if there still are any…) may be surprised that this is a surprise to me. It’s been a long time since I’ve interacted with the thread pool directly, and taking down the process like this feels a little harsh to me. (There are pros and cons, certainly. I’m not trying to argue that Microsoft made the wrong decision here.)

Can we change the ThreadPool behaviour?

Given that we have things like TaskScheduler.UnobservedTaskException, I’d expect there to be something similar for the thread pool… but I can’t see anything. It looks like this is behaviour that changed with .NET 2.0 – back in 1.x, thread pool exceptions didn’t tear down the application.

After a bit more research, I found AppDomain.UnhandledException. This allows us to react to an exception that’s about to take down the application, but it doesn’t let us mark it as “handled”.

Here’s an example:

AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
    Console.WriteLine($"Unhandled exception: {((Exception)args.ExceptionObject).Message}");
ThreadPool.QueueUserWorkItem(ThrowException);
Thread.Sleep(1000);
Console.WriteLine("Done");

void ThrowException(object? state) =>
    throw new Exception("Bang!");

Running this code a few times, I always get output like this:

Unhandled exception: Bang!
Unhandled exception. System.Exception: Bang!
   at Program.<<Main>$>g__ThrowException|0_1(Object state) in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\ConsoleCrash\Program.cs:line 8
   at System.Threading.QueueUserWorkItemCallbackDefaultContext.Execute()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
   at System.Threading.Thread.StartCallback()

… but sometimes “Done” is printed too. So I guess there’s some uncertainty about how quickly the AppDomain is torn down.

Hardening At Your Service

Given what I know now, I don’t think I can easily stop the web server for Zoom interactions from terminating if I have a bug – but I can make it easier to find that bug afterwards. I already normally write the log out to a text file when the app exits, but that only happens for an orderly shutdown.

Fortunately, it looks like the AppDomain.UnhandledException handler is given enough time to write the log out before the process terminates. Temporarily reverting to the broken code allows me to test that – and yes, I get a log with an appropriate critical error.

Conclusion

I knew that async void methods were generally bad, but I hadn’t quite appreciated how dangerous they are, particularly when executed from a thread pool thread.

While I’m not thrilled with the final result, I at least understand it now, and can find similar errors more easily in the future. The “not understanding” part was the main motivation for this blog post – given that I’d already found the immediate bug, I could have just fixed it and ignored the worrying lack of diagnostic information… but I always find it tremendously unsettling when I can’t explain significant behaviour. It’s not always worth investigating immediately but it’s generally useful to come back to it later on and keep diving deeper until you’ve got to the bottom of it.

I haven’t put the source code for this blog post in GitHub as there are so many iterations – and because it’s all in the post itself. Shout if you would find it valuable, and I’ll happily add it.

Displaying NDI sources on Stream Decks

In the course of my work on our local church A/V system, I’ve spent quite a lot of time playing with Elgato Stream Decks and NDI cameras. It only occurred to me a week or so ago that it would be fun to combine them.

The Stream Deck screens are remarkably capable – they’re 96×96 or 72×72 pixels (depending on the model) and appear to have a decent refresh rate. I tend to think of them just in terms of displaying simple icons or text, but I’d already seen a video of a Stream Deck displaying a video so it wasn’t too much of a leap to display live camera outputs instead.

I should make it clear that I had absolutely no practical reason for doing this – although as it happens, the local tweaks I’ve made to the C# NDI SDK have proved useful in At Your Service for enabling a preview camera view without opening a second NDI network stream.

Tweaking the NDI SDK

The NDI SDK comes with a demo C# sample library. It’s not clear how much this is intended to be for production use, but we’ve been using it in At Your Service for quite a while with very few issues. (We’re pinned at a slightly old version of the runtime due to some smoothness issues with later versions, which I’m hoping the NDI folks will sort out at some point – I’ve given them a diagnostic program to show the issues.)

The SDK comes with a WPF “receive view” which displays a camera in a WPF control, as you’d expect. Now I don’t know enough about WPF to try to use the Stream Deck as an output device for WPF directly. It may be feasible, but it’s beyond my current skillset. However, I was able to reuse parts of the receive view to create a new “display-agnostic” class, effectively just dealing with the fiddly bits of the NDI interop layer and providing video frames (already marshalled to the dispatcher thread) as raw data via an event handler. Multiple consumers can then subscribe to that event and process the frames as they wish – in this case, drawing them to the Stream Deck, but potentially delivering them to multiple displays at the same time.

The simplest image processing imaginable

So, the frames are being delivered to my app in a raw 4-bytes-per-pixel format. How do we draw them on the Stream Deck? I’ve been using StreamDeckSharp, which has been pretty simple and reliable. It has multiple ways of drawing on buttons, but the one we’re interested in here is passing in raw byte arrays in a 3-bytes-per-pixel format. (As it happens, the Stream Deck SDK then needs to encode each frame on each button as a JPEG. It’s a pity that the data can’t be transferred to the Stream Deck in raw form, but hey.) All we need to do is scale the image appropriately.

Again, there may be better ways of doing this, such as asking WPF to write the full frame to a Graphics and performing appropriate resizing along the way. But I figured I’d start off with something simpler – and it turned out to be perfectly adequate. I’m not massively concerned with obtaining the absolute best quality image possible, so rather than using all the pixels and blending them appropriately, I’m just taking a sampling approach: if I’m displaying the original 1920×1080 image on a 96×96 button, I’ll just take 96×96 pixel values, using the simplest code I could come up with.

In the end, I decided to stick to integer scaling factors and just center the image. So to take the above example, 1080/96 is 11.25, so I take samples that are 11 pixels apart in the original image, trimming the top/bottom borders a little bit (because of the truncation from 11.25 to 11) and the left/right borders a lot (because the buttons are square, and it’s better to crop the image than to letter-box it). What happens if you’ve got an image which is just white dots that are 11 pixels apart, and black everywhere else? You end up with a white image (assuming the dots are aligned with the sampling) – but in a camera image, that doesn’t really happen.
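That sampling arithmetic can be sketched as follows – my own reconstruction with made-up names, not the actual app code:

```csharp
// Choose an integer sampling step, then centre the sampled square in the frame.
static (int Step, int XOffset, int YOffset) ComputeSampling(
    int sourceWidth, int sourceHeight, int buttonSize)
{
    // Base the step on the smaller dimension (the height, for 16:9 video) so
    // the square button crops the image rather than letter-boxing it.
    int step = Math.Min(sourceWidth, sourceHeight) / buttonSize;
    // Centre the sampled region by splitting the leftover pixels evenly.
    int xOffset = (sourceWidth - step * buttonSize) / 2;
    int yOffset = (sourceHeight - step * buttonSize) / 2;
    return (step, xOffset, yOffset);
}

var (step, xOffset, yOffset) = ComputeSampling(1920, 1080, 96);
Console.WriteLine($"step={step}, xOffset={xOffset}, yOffset={yOffset}");
```

For a 1920×1080 frame on a 96×96 button this gives a step of 11, trimming 12 pixels from top and bottom and 432 from each side – the “a little” and “a lot” from the description above.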

The maths ends up being slightly more fiddly when displaying a frame across several buttons. There are gaps between the buttons, and while we could just ignore them, it looks very weird if you do – if a straight line crosses multiple buttons, it ends up looking “broken” due to the gap, for example. Instead, if we take the gaps into account as if we were drawing them, it looks fine. The arithmetic here isn’t actually hard in any sense of the word – but it still took me a few goes to get right.
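One way of sketching that coordinate mapping (again my own reconstruction, and the gap size here is a made-up number, not a measured one): treat the buttons plus the gaps as one virtual canvas, and discard any sample that lands in a gap:

```csharp
// Map an x coordinate on the virtual canvas (buttons plus gaps) to a button
// index and an offset within that button; a gap pixel maps to null.
static (int Button, int Offset)? MapToButton(int x, int buttonSize, int gap)
{
    int cell = buttonSize + gap;
    (int button, int offset) = (x / cell, x % cell);
    return offset < buttonSize ? (button, offset) : null;
}

Console.WriteLine(MapToButton(95, 96, 40));   // last pixel of button 0
Console.WriteLine(MapToButton(100, 96, 40));  // lands in a gap, so discarded
Console.WriteLine(MapToButton(136, 96, 40));  // first pixel of button 1
```

The same mapping applies vertically, so a line crossing several buttons stays straight: the pixels “behind” the gaps are sampled from the source frame but never drawn.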

Putting it together

Of course, Stream Deck buttons aren’t just screens: they’re pressable buttons as well. I haven’t gone to town on the user interface here: the left-most column of buttons just displays the NDI sources, and pressing one of those buttons displays that source on the rest of the buttons. That part was really simple – as you’d probably expect it to be.

The initial prototype took about two or three hours to write (bearing in mind I already had experience with both the NDI code and the Stream Deck SDK), and then it took another couple of hours to polish it up a bit and get it ready for publication. The application code is all available on my DemoCode GitHub repository; unfortunately even if you happen to have an NDI camera and a Stream Deck, you won’t be able to use it without my tweaks to the NDI C# code. (If the C# part of the NDI SDK is ever made open source, I’ll happily fork it and include my tweaks there.)

I’ve put up a YouTube video of the results – hope you enjoy it. I’ve had a lot of fun with this project, despite the lack of practical applications for it.

Diagnosing a VISCA camera issue

As I have mentioned before, I’ve been spending a lot of time over the last two years writing code for my local church’s A/V system. (Indeed, I’ve been giving quite a few user group talks recently about the fun I’ve had doing so.) That new A/V system is called “At Your Service”, or AYS for short. I only mention that because it’s simpler to refer to AYS for the rest of this post than “the A/V app”.

Our church now uses three cameras from PTZOptics. They’re all effectively the same model: 30x optical Zoom, Power over Ethernet, with NDI support (which I use for the actual video capture – more about that in a subsequent blog post). I control the cameras using VISCA over TCP/IP, as I’ve blogged about before. At home I have one PTZOptics camera (again, basically the same camera) and a Minrray UV5120A – also 30x optical zoom with NDI support.

Very occasionally, the cameras run into problems: sometimes the VISCA interface comes up, but there’s no NDI stream; sometimes the camera just doesn’t stop panning after I move it. In the early days, when I would only send a VISCA command when I wanted to actually do something, I believe the TCP/IP connection was closed automatically after being idle for an hour.

For these reasons, I now have two connections per camera:

  • One for the main control, which includes a “heartbeat” command of “get current pan/tilt/zoom” issued every three minutes.
  • One for frequent status checking, so that we can tell if the camera itself is online even if the main control connection ends up causing problems. This sends a “get power status” every 5 seconds.

If anything goes wrong, the user can remotely reboot the camera (which creates a whole new TCP/IP connection just in case the existing ones are broken; obviously this doesn’t help if the camera is completely failing in network terms).

This all works pretty well – but my logs sometimes show an error response from the camera – equivalent to an HTTP 400, basically saying “the VISCA command you sent has incorrect syntax”. Yesterday I decided to dedicate some time to working out what was going on, and that’s the topic of this blog post. The actual problem isn’t terribly important – I’d be surprised if more than a handful of readers (if any!) faced the same issue. But I’m always interested in the diagnostic process.

Hack AYS to add more stress

I started off by modifying AYS to make it send more commands to the camera, in two different ways:

  • I modified the frequency of the status check and heartbeat
  • I made each of those also send 100 commands instead of just one
    • First sending the 100 commands one at a time
    • Then sending them as 100 tasks started at the same time, awaited together

This increased the rate of error – but not in any nicely reproducible way.

Looking back, I probably shouldn’t have started here. AYS does a lot of other stuff, including quite a lot of initialization on startup (VLC, OBS, Zoom) which makes the feedback loop a lot slower when experimenting. I needed something that was dedicated to just provoking the problem.

Create a dedicated stress test app

It’s worth noting at this point that I’ve got a library (source here) that I’m using to handle the TCP connections and send the commands. This keeps all the knowledge of exactly how VISCA works away from AYS, and makes it reasonably easy to write test code separately from the app. However, it does mean that when something goes wrong, my first expectation is that this is a problem in the library. I always assume my own code is broken before placing the blame elsewhere… and usually that assumption is well-founded.

Despite the changes to AYS not provoking things as much as I’d expected, I still thought concurrency would be the key. How could we overload the camera?

I wrote a simple stress test app to send lots of simple “get power status” requests at the camera, from multiple tasks, and record the number of successful commands vs ones which failed for various different reasons (camera responded with an error; camera violated VISCA itself; command timed out). Initially, I found that some tasks had issued far more requests than others – this turned out to be due to the way that the thread pool (used by tasks in a console app) starts small and gradually expands. A significant chunk of the stress test app is given over to getting all the tasks “ready to go” and releasing them all at the same time.
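That “ready to go” part boils down to parking every task on a single gate task and completing it once they’re all waiting – a simplified sketch of the idea, not the actual stress test code:

```csharp
// Hold all the workers at a gate, then release them simultaneously, so the
// thread pool's gradual ramp-up doesn't skew the per-task request counts.
var gate = new TaskCompletionSource(TaskCreationOptions.RunContinuationsAsynchronously);
var counts = new int[8];

var tasks = Enumerable.Range(0, counts.Length)
    .Select(async i =>
    {
        await gate.Task;        // every task parks here until released
        for (int j = 0; j < 1000; j++)
        {
            counts[i]++;        // stand-in for "send one request and record it"
        }
    })
    .ToArray();

gate.SetResult();               // release them all at once
await Task.WhenAll(tasks);
Console.WriteLine(string.Join(", ", counts));
```

Using RunContinuationsAsynchronously means the released tasks are queued to the thread pool rather than all running synchronously on the thread that completes the gate.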

The stress test app is configurable in a number of ways:

  • The camera IP address and VISCA port
  • Whether to use a single controller object (which means a single TCP connection) or one per task. (The library queues concurrent requests, so any given TCP connection only has a single active command at a time.)
  • How long each task should delay between requests
  • How long each command should wait before timing out
  • How long the test should last overall

At first, the results seemed all over the place. With a single controller, and enough of a delay (or few enough tasks) to avoid commands timing out due to the queuing, I received no errors. With multiple controllers, I saw all kinds of results – some tasks never managing a single request, others with just a few errors, and lots of timeouts.

Oh, and if I overwhelmed the camera too much, it would just reset itself during the test (which involves it tilting up to the ceiling then back down to horizontal). Not quite “halt and catch fire”, but a very physically visible indication of an error. I’m pretty confident I haven’t actually damaged the camera though, and it seems fairly reasonable for it to reset itself in the face of a sort of DoS attack like this stress test.

However, I did spot that I never had a completely clean test with more than one task. I wondered: how low could I take the load and still see problems?

Even with just two tasks, and a half second delay between attempts, I’d reliably see one task have a single error response, and the other task have a command timeout. Aha – we’re getting somewhere.

On a hunch (so often the case with diagnostics) I added console output in the catch blocks to include the time at which the exception was thrown. As I’d started to suspect, it was the very first command that failed, and it failed for both tasks. Now we’re getting somewhere.

What’s special about the first command? We’ve carefully set the system up to start all the tasks sending commands at exactly the same time (as near as the scheduler will allow, anyway) – they may well get out of sync after the first command, as different responses take slightly different lengths of time to come back, but the first commands should be sent pretty much simultaneously.

At this point, I broke out Wireshark – an excellent network protocol analyzer, allowing me to see individual TCP packets. Aside from anything else, this could help to rule out client-side issues: if the TCP packets I sent looked correct, the code sending them was irrelevant.

Sure enough, I saw:

  • Two connections being established
  • The same 5 bytes (the VISCA command: 81 09 04 00 ff) being sent on each connection in two packets about 20 microseconds apart.
  • A 4 byte response (a VISCA error: 90 60 02 ff) being sent down the first connection 10ms later
  • No data being sent down the second connection

That explained the “error on one, timeout on the other” behaviour I’d observed at the client side.

Test the other camera

Now that I could reliably reproduce the bug, I wondered – what does the Minrray camera do?

I expected it to have the same problem. I’m aware that there are a number of makes of camera that use very similar hardware – PTZOptics, Minrray, SMTAV, Zowietek for example. While they all have slightly different firmware and user interfaces, I expected that the core functionality of the camera would be common code.

I was wrong: I have yet to reproduce the issue with the Minrray camera – even throwing 20 tasks at it, with only a millisecond delay between commands, I’ve seen no errors.

Report the bug

So, it looks like this may be a bug in the PTZOptics firmware. At first I was reluctant to go to the trouble of reporting the bug, given that it’s a real edge case. (I suspect relatively few people using these cameras connect more than one TCP stream to them over VISCA.) However, I received encouragement on Twitter, and I’ve had really good support experiences with PTZOptics, so I thought it worth having a go.

While my stress test app is much simpler than AYS, it’s definitely not a minimal example to demonstrate the problem. It’s 100 lines long and requires a separate library. Fortunately, all of the code for recording different kinds of failures, and starting tasks to loop for a while, etc – that’s all unnecessary when trying to just demonstrate the problem. Likewise I don’t need to worry about queuing commands on a single connection if I’m only sending a single command down each; I don’t need any of the abstraction that my library contains.

Instead, I boiled it down to this – 30 lines (of which several are comments or whitespace) that just sends two commands “nearly simultaneously” on separate connections and shows the response returned on the first connection.

using System.Net.Sockets;

string host = "192.168.1.45";
int port = 5678;

byte[] getPowerCommand = { 0x81, 0x09, 0x04, 0x00, 0xff };

// Create two clients - these will connect immediately
var client1 = new TcpClient(host, port);
client1.NoDelay = true;
var stream1 = client1.GetStream();

var client2 = new TcpClient(host, port);
client2.NoDelay = true;
var stream2 = client2.GetStream();

// Write the "get power" command to both sockets as close to simultaneously as we can
stream1.Write(getPowerCommand);
stream2.Write(getPowerCommand);

// Read the response from the first client
var buffer = new byte[10];
int bytesRead = stream1.Read(buffer);
// Print it out in hex.
// This is an error: 90-60-02-FF
// This is success: 90-50-02-FF
Console.WriteLine(BitConverter.ToString(buffer[..bytesRead]));

// Note: this sample doesn't read from stream2, but basically when the bug strikes,
// there's nothing to read: the camera doesn't respond.

Simple but effective – it reliably reproduces the error on the PTZOptics camera, and shows no problems on the Minrray.

I included that with the bug report, and have received a response from PTZOptics already, saying they’re looking into it. I don’t expect a new firmware version with a fix any time soon – but I hope it might be in their next firmware update… and at least now I can test that easily.

Conclusion

This was classic diagnostic work:

  • Go from complex code to simple code
  • Try lots of configurations to try to make sense of random-looking data
  • Follow hunches and get more information
  • Use all the tools at your disposal (Wireshark in this case) to isolate the problem as far as possible
  • Once you understand the problem (even if you can’t fix it), write code designed specifically to reproduce it simply

Hope you found this as interesting as I did! Next (this weekend, hopefully): displaying the video output of these cameras on a Stream Deck…

What’s up with TimeZoneInfo on .NET 6? (Part 1)

.NET 6 was released in November 2021, and includes two new types which are of interest to date/time folks: DateOnly and TimeOnly. (Please don’t add comments saying you don’t like the names.) We want to support these types in Noda Time, with conversions between DateOnly and LocalDate, and TimeOnly and LocalTime. To do so, we’ll need a .NET-6-specific target.

Even as a starting point, this is slightly frustrating – we had conditional code differentiating between .NET Framework and PCLs for years, and we finally removed it in 2018. Now we’re having to reintroduce some. Never mind – it can’t be helped, and this is at least simple conditional code.

Targeting .NET 6 requires the .NET 6 SDK of course – upgrading to that was overdue anyway. That wasn’t particularly hard, although it revealed some additional nullable reference type warnings that needed fixing, almost all in tests (or in IComparable.Compare implementations).

Once everything was working locally, and I’d updated CI to use .NET 6 as well, I figured I’d be ready to start on the DateOnly and TimeOnly support. I was wrong. The pull request intending just to support .NET 6 with absolutely minimal changes failed its unit tests in CI running on Linux. There were 419 failures out of a total of 19334. Ouch! It looked like all of them were in BclDateTimeZoneTest – and fixing those issues is what this post is all about.

Yesterday (at the time of writing – by the time this post is finished it may be in the more distant past) I started trying to look into what was going on. After a little while, I decided that this would be worth blogging about – so most of this post is actually written as I discover more information. (I’m hoping that folks find my diagnostic process interesting, basically.)

Time zones in Noda Time and .NET

Let’s start with a bit of background about how time zones are represented in .NET and Noda Time.

In .NET, time zones are represented by the TimeZoneInfo class (ignoring the legacy TimeZone class). The data used to perform the underlying calculation of “what’s the UTC offset at a given instant in this time zone” are exposed via the GetAdjustmentRules() method, returning an array of the nested TimeZoneInfo.AdjustmentRule class. TimeZoneInfo instances are usually acquired via either TimeZoneInfo.FindSystemTimeZoneById(), TimeZoneInfo.GetSystemTimeZones(), or the TimeZoneInfo.Local static property. On Windows the information is populated from the Windows time zone database (which I believe is in the registry); on Linux it’s populated from files, typically in the /usr/share/zoneinfo directory. For example, the file /usr/share/zoneinfo/Europe/London contains information about the time zone with the ID “Europe/London”.

In Noda Time, we separate time zones from their providers. There’s an abstract DateTimeZone class, with one public derived class (BclDateTimeZone) and various internal derived classes (FixedDateTimeZone, CachedDateTimeZone, PrecalculatedDateTimeZone) in the main NodaTime package. There are also two public implementations in the NodaTime.Testing package. Most code shouldn’t need to use anything other than DateTimeZone – the only reason BclDateTimeZone is public is to allow users to obtain the original TimeZoneInfo instance that any given BclDateTimeZone was created from.

Separately, there’s an IDateTimeZoneProvider interface. This only has a single implementation normally: DateTimeZoneCache. That cache makes that underlying provider code simpler, as it only has to implement IDateTimeZoneSource (which most users never need to touch). There are two implementations of IDateTimeZoneSource: BclDateTimeZoneSource and TzdbDateTimeZoneSource. The BCL source is for interop with .NET: it uses TimeZoneInfo as a data source, and basically adapts it into a Noda Time representation. The TZDB source implements the IANA time zone database – there’s a “default” set of data built into Noda Time, but you can also load specific data should you need to. (Noda Time uses the term “TZDB” everywhere for historical reasons – when the project started in 2009, IANA wasn’t involved at all. In retrospect, it would have been good to change the name immediately when IANA did get involved in 2011 – that was before the 1.0 release in 2012.)

This post is all about how BclDateTimeZone handles the adjustment rules in TimeZoneInfo. Unfortunately the details of TimeZoneInfo.AdjustmentRule have never been very clearly documented (although it’s better now – see later), and I’ve blogged before about their strange behaviour. The source code for BclDateTimeZone has quite a few comments explaining “unusual” code that basically tries to make up for this. Over the course of writing this post, I’ll be adding some more.

Announced TimeZoneInfo changes in .NET 6

I was already aware that there might be some trouble brewing in .NET 6 when it came to Noda Time, due to enhancements announced when .NET 6 was released. To be clear, I’m not complaining about these enhancements. They’re great for the vast majority of users: you can call TimeZoneInfo.FindSystemTimeZoneById with either an IANA time zone ID (e.g. “Europe/London”) or a Windows time zone ID (e.g. “GMT Standard Time” for the UK, even when it’s not on standard time) and it will return you the “right” time zone, converting the ID if necessary. I already knew I’d need to check what Noda Time was doing and exactly how .NET 6 behaved, to avoid problems.
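As a quick illustration of the enhancement (a sketch – whether each lookup succeeds depends on the time zone data installed on the machine):

```csharp
// On .NET 6, FindSystemTimeZoneById accepts both IANA and Windows IDs,
// converting between them as necessary. On earlier versions, whichever
// family the OS doesn't use natively would throw TimeZoneNotFoundException.
var viaIana = TimeZoneInfo.FindSystemTimeZoneById("Europe/London");
var viaWindows = TimeZoneInfo.FindSystemTimeZoneById("GMT Standard Time");

// Both lookups find the same underlying zone; the Id property reflects
// whichever form the platform uses natively.
Console.WriteLine(viaIana.HasSameRules(viaWindows));
```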

I suspect that the subject of this post is actually caused by this change though:

Two other minor improvements were made to how adjustment rules are populated from IANA data internally on non-Windows operating systems. They don’t affect external behavior, other than to ensure correctness in some edge cases.

Ensure correctness, eh? They don’t affect external behavior? Hmm. Given what I’ve already seen, I’m pretty sure I’m going to disagree with that assessment. Still, let’s plough on.

Getting started

The test errors in CI (via GitHub actions) seemed to fall into two main buckets, on a very brief inspection:

  • Failure to convert a TimeZoneInfo into a BclDateTimeZone at all (BclDateTimeZone.FromTimeZoneInfo() throwing an exception)
  • Incorrect results when using a BclDateTimeZone that has been converted. (We validate that BclDateTimeZone gives the same UTC offsets as TimeZoneInfo around all the transitions that we’ve detected, and we check once a week for about 100 years as well, just in case we missed any transitions.)

The number of failures didn’t bother me – this is the sort of thing where a one-line change can fix hundreds of tests. But without being confident of where the problem was, I didn’t want to start a “debugging via CI” cycle – that’s just awful.

I do have a machine that can dual boot into Linux, but it’s only accessible when I’m in my shed (as opposed to my living room or kitchen), making it slightly less convenient for debugging than my laptop. But that’s not the only option – there’s WSL 2 which I hadn’t previously looked at. This seemed like the perfect opportunity.

Installing WSL 2 was a breeze, including getting the .NET 6 SDK installed. There’s one choice I’ve made which may or may not be the right one: I’ve cloned the Noda Time repo within Linux, so that when I’m running the tests there it’s as close to being on a “regular” Linux system as possible. I can still use Visual Studio to edit the files (via the WSL mount point of \\wsl.localhost), but it’ll be slightly fiddly to manage. The alternative would be to avoid cloning any of the source code within the Linux file system, instead running the tests from WSL against the source code on the Windows file system. I may change my mind over the best approach half way through…

First, the good news: running the tests against the netcoreapp3.1 target within WSL 2, everything passed first time. Hooray!

Now the bad news: I didn’t get the same errors in WSL 2 that I’d seen in CI. Instead of 419, there were 1688! Yikes. They were still all within BclDateTimeZoneTest though, so I didn’t investigate that discrepancy any further – it may well be a difference in terms of precise SDK versions, or Linux versions. We clearly want everything to work on WSL 2, so let’s get that working first and see what happens in CI. (Part of me does want to understand the differences, to avoid a situation where the tests could pass in CI but not in WSL 2. I may come back to that later, when I understand everything more.)

First issue: abutting maps

The first exception reported in WSL 2 – accounting for the majority of errors – was a conversion failure:

NodaTime.Utility.DebugPreconditionException : Maps must abut (parameter name: maps)

The “map” in question is a PartialZoneIntervalMap, which maps instants to offsets over some interval of time. A time zone (at least for BclDateTimeZone) is created from a sequence of PartialZoneIntervalMaps, where the end of map n is the start of map n+1. The sequence has to cover the whole of time.
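The abutting invariant is easy to state in miniature. In this sketch a "map" is reduced to a hypothetical (start, end) pair rather than a real PartialZoneIntervalMap, but the check is the same shape as the one throwing above:

```python
def check_maps_abut(maps):
    """Verify that the end of map n equals the start of map n+1.
    Each map is assumed to be a (start, end) pair of instants."""
    for prev, nxt in zip(maps, maps[1:]):
        if prev[1] != nxt[0]:
            raise ValueError(f"Maps must abut: {prev[1]} != {nxt[0]}")

check_maps_abut([(0, 10), (10, 25), (25, 40)])  # fine: each end meets the next start
```

A gap or an overlap between adjacent maps both violate the invariant; as we'll see, the failures here turn out to be tiny overlaps.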

As it happens, by the time I’m writing this, I know what the immediate problem is here (because I fixed it last night, before starting to write this blog post) but in the interests of simplicity I’m going to effectively ignore what I did last night, beyond this simplified list:

  • I filtered the tests down to a single time zone (to get a smaller log)
  • I added more information to the exception (showing the exact start/end that were expected to be the same)
  • I added Console.WriteLine logging to BclDateTimeZone construction to dump a view of the adjustment rules
  • I observed and worked around an oddity that we’ll look at shortly

Looking at this now, the fact that it’s a DebugPreconditionException makes me wonder whether this is the difference between CI and local failures: for CI, we run in release mode. Let’s try running the tests in release mode… and yes, we’re down to 419 failures, the same as for CI! That’s encouraging, although it suggests that I might want CI to run tests in debug as well as release mode – at least when the main branch has been updated.

Even before the above list of steps, it seemed likely that the problems would be due to changes in the adjustment rule representation in TimeZoneInfo. So at this point, let’s take a step back and look at what’s meant to be in an adjustment rule, and what we observe in both .NET Core 3.1 and .NET 6.

What’s in an AdjustmentRule?

An adjustment rule covers an interval of time, and describes how the time zone behaves during that interval. (A bit like the PartialZoneIntervalMap mentioned above.)

Let’s start with some good news: it looks like the documentation for TimeZoneInfo.AdjustmentRule has been improved since I last looked at it. It has 6 properties:

  • BaseUtcOffsetDelta: this is only present in .NET 6, and indicates the difference between “the UTC offset of Standard Time returned by TimeZoneInfo.BaseUtcOffset” and “the UTC offset of Standard Time when this adjustment rule is active”. Effectively this makes up for Windows time zones historically not being able to represent the concept of a zone’s standard time changing.
  • DateStart/DateEnd: the date interval during which the rule applies.
  • DaylightDelta: the delta between standard time and daylight time during this rule. This is typically one hour.
  • DaylightTransitionStart/DaylightTransitionEnd: the information about when the time zone starts and ends daylight saving time (DST) while this rule is in force.

Before we go into the details of DST, there are two “interesting” aspects to DateStart/DateEnd:

Firstly, the documentation doesn’t say whether the rule applies between those UTC dates, or those local dates. I believe they’re local – but that’s an awkward way of specifying things, as local date/time values can be skipped or ambiguous. I really wish this had been set to UTC, and documented as such. Additionally, although you’d expect the transition from one rule to the next to be at midnight (given that the start/end are only dates), the comments in my existing BclDateTimeZone code suggest that it’s actually at a time of day that depends on the DST transition times. (It’s very possible that my code is wrong, of course. We’ll look at that in a bit.)

Secondly, the documentation includes this interesting warning (with an example which I’ve snipped out):

Unless there is a compelling reason to do otherwise, you should define the adjustment rule’s start date to occur within the time interval during which the time zone observes standard time. Unless there is a compelling reason to do so, you should not define the adjustment rule’s start date to occur within the time interval during which the time zone observes daylight saving time.

Why? What is likely to go wrong if you violate this? This sort of “here be dragons, but only vaguely specified ones” documentation always feels unhelpful to me. (And yes, I’ve probably written things like that too…)

Anyway, let’s look at the TimeZoneInfo.TransitionTime struct, which is the type of DaylightTransitionStart and DaylightTransitionEnd. The intention is to be able to represent ideas like “3am on February 25th” or “2am on the third Sunday in October”. The first of these is a fixed date rule; the second is a floating date rule (because the day-of-month of “the third Sunday in October” depends on the year). TransitionTime is a struct with 6 properties:

  • IsFixedDateRule: true for fixed date rules; false for floating date rules
  • Day (only relevant for fixed date rules): the day-of-month on which the transition occurs
  • DayOfWeek (only relevant for floating date rules): the day-of-week on which the transition occurs
  • Week (only relevant for floating date rules): confusingly, this isn’t really “the week of the month” on which the transition occurs; it’s “the occurrence of DayOfWeek on which the transition occurs”. (The idea of a “Monday to Sunday” or “Sunday to Saturday” week is irrelevant here; it’s just “the first Sunday” or “the second Sunday” etc.) If this has a value of 5, it means “last” regardless of whether that’s the fourth or fifth occurrence.
  • Month: the month of year in which the transition occurs
  • TimeOfDay: the local time of day prior to the transition, at which the transition occurs. (So for a transition that skips forward from 1am to 2am for example, this would be 1am. For a transition that skips back from 2am to 1am, this would be 2am.)
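Resolving a floating date rule to a concrete date can be sketched as follows. This is a Python illustration, not the BCL implementation: it uses Python's Monday=0 weekday convention rather than .NET's DayOfWeek, and `floating_transition_date` is a made-up name:

```python
import calendar
from datetime import date

def floating_transition_date(year, month, day_of_week, week):
    """Resolve a floating date rule: the `week`-th occurrence of
    `day_of_week` (Monday=0..Sunday=6) in `month`, where week 5
    means 'last occurrence' even if that's really the fourth."""
    days_in_month = calendar.monthrange(year, month)[1]
    occurrences = [d for d in range(1, days_in_month + 1)
                   if date(year, month, d).weekday() == day_of_week]
    index = min(week, len(occurrences)) - 1  # week 5 clamps to the last occurrence
    return date(year, month, occurrences[index])

# "Last Sunday of October" in 2022 (Sunday=6, week=5):
print(floating_transition_date(2022, 10, 6, 5))  # 2022-10-30
```

That matches the 2022-10-30 transition visible in the Europe/London data later in this post.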

Let’s look at the data

From here on, I’m writing and debugging at the same time – any stupid mistakes I make along the way will be documented. (I may go back to indicate that it turned out an idea was stupid at the start of that idea, just to avoid anyone else following it.)

Rather than trying to get bogged down in what the existing Noda Time implementation does, I think it would be useful to compare the data for the same time zone in Windows and Linux, .NET Core 3.1 and .NET 6.

Aha! It looks like I’ve had this idea before! The tool already exists as NodaTime.Tools.DumpTimeZoneInfo. I just need to target it for .NET 6 as well, and add the .NET-6-only BaseUtcOffsetDelta property, for completeness.

Interlude: WSL 2 root file issues

Urgh. For some reason, something (I suspect it’s Visual Studio or a background process launched by it, but I’m not sure) keeps on creating files (or modifying existing files) so they’re owned by the root user on the Linux file system. Rather than spending ages investigating this, I’m just going to switch to the alternative mode: use my existing git repo on the Windows file system, and run the code that’s there from WSL when I need to.

(I’m sure this is all configurable and feasible; I just don’t have the energy right now.)

Back to the data…

I’m going to use London as my test time zone, mostly because that’s the time zone I live in, but also because I know it has an interesting oddity between 1968 and 1971, where the UK was on “British Standard Time” – an offset of UTC+1, like “British Summer Time” usually is, but this was “permanent standard time”. In other words, for a few years, our standard UTC offset changed. I’m expecting that to show up in the BaseUtcOffsetDelta property.
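If BaseUtcOffsetDelta behaves as documented, the standard offset in force while a rule is active is just the sum of the zone's base offset and the rule's delta. A trivial sketch of that expectation (`effective_standard_offset` is a hypothetical helper, not Noda Time code):

```python
from datetime import timedelta

def effective_standard_offset(base_utc_offset, base_utc_offset_delta):
    """The standard offset while a rule is active is the zone's
    BaseUtcOffset plus that rule's BaseUtcOffsetDelta (zero if absent)."""
    return base_utc_offset + base_utc_offset_delta

# Europe/London: base offset +0; the 1968-1971 rules should carry a
# delta of +1, giving "permanent standard time" of UTC+1.
print(effective_standard_offset(timedelta(0), timedelta(hours=1)))  # 1:00:00
```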

So, let’s dump some of the data for the Europe/London time zone, with both .NET Core 3.1 and .NET 6. The full data is very long (due to how the data is represented in the IANA binary format) but here are interesting portions of it, including the start, the British Standard Time experiment, this year (2022) and the last few lines:

.NET Core 3.1:

Zone ID: Europe/London
Display name: (UTC+00:00) GMT
Standard name: GMT
Daylight name: GMT+01:00
Base offset: 00:00:00
Supports DST: True
Rules:
0001-01-01 - 1847-12-01: Daylight delta: +00; DST starts January 01 at 00:00:00 and ends December 01 at 00:01:14
1847-12-01 - 1916-05-21: Daylight delta: +00; DST starts December 01 at 00:01:15 and ends May 21 at 01:59:59
1916-05-21 - 1916-10-01: Daylight delta: +01; DST starts May 21 at 02:00:00 and ends October 01 at 02:59:59
1916-10-01 - 1917-04-08: Daylight delta: +00; DST starts October 01 at 02:00:00 and ends April 08 at 01:59:59
...
1967-03-19 - 1967-10-29: Daylight delta: +01; DST starts March 19 at 02:00:00 and ends October 29 at 02:59:59
1967-10-29 - 1968-02-18: Daylight delta: +00; DST starts October 29 at 02:00:00 and ends February 18 at 01:59:59
1968-02-18 - 1968-10-26: Daylight delta: +01; DST starts February 18 at 02:00:00 and ends October 26 at 23:59:59
1968-10-26 - 1971-10-31: Daylight delta: +00; DST starts October 26 at 23:00:00 and ends October 31 at 01:59:59
1971-10-31 - 1972-03-19: Daylight delta: +00; DST starts October 31 at 02:00:00 and ends March 19 at 01:59:59
1972-03-19 - 1972-10-29: Daylight delta: +01; DST starts March 19 at 02:00:00 and ends October 29 at 02:59:59
1972-10-29 - 1973-03-18: Daylight delta: +00; DST starts October 29 at 02:00:00 and ends March 18 at 01:59:59
...
2022-03-27 - 2022-10-30: Daylight delta: +01; DST starts March 27 at 01:00:00 and ends October 30 at 01:59:59
2022-10-30 - 2023-03-26: Daylight delta: +00; DST starts October 30 at 01:00:00 and ends March 26 at 00:59:59
...
2036-03-30 - 2036-10-26: Daylight delta: +01; DST starts March 30 at 01:00:00 and ends October 26 at 01:59:59
2036-10-26 - 2037-03-29: Daylight delta: +00; DST starts October 26 at 01:00:00 and ends March 29 at 00:59:59
2037-03-29 - 2037-10-25: Daylight delta: +01; DST starts March 29 at 01:00:00 and ends October 25 at 01:59:59
2037-10-25 - 9999-12-31: Daylight delta: +01; DST starts October 25 at 01:00:00 and ends December 31 at 23:59:59

.NET 6:

Zone ID: Europe/London
Display name: (UTC+00:00) United Kingdom Time
Standard name: Greenwich Mean Time
Daylight name: British Summer Time
Base offset: 00:00:00
Supports DST: True
Rules:
0001-01-01 - 0001-12-31: Base UTC offset delta: -00:01; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends December 31 at 23:59:59.999
0002-01-01 - 1846-12-31: Base UTC offset delta: -00:01; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends December 31 at 23:59:59.999
1847-01-01 - 1847-12-01: Base UTC offset delta: -00:01; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends December 01 at 00:01:14.999
1916-05-21 - 1916-10-01: Daylight delta: +01; DST starts May 21 at 02:00:00 and ends October 01 at 02:59:59.999
1917-04-08 - 1917-09-17: Daylight delta: +01; DST starts April 08 at 02:00:00 and ends September 17 at 02:59:59.999
1918-03-24 - 1918-09-30: Daylight delta: +01; DST starts March 24 at 02:00:00 and ends September 30 at 02:59:59.999
...
1967-03-19 - 1967-10-29: Daylight delta: +01; DST starts March 19 at 02:00:00 and ends October 29 at 02:59:59.999
1968-02-18 - 1968-10-26: Daylight delta: +01; DST starts February 18 at 02:00:00 and ends October 26 at 23:59:59.999
1968-10-26 - 1968-12-31: Base UTC offset delta: +01; Daylight delta: +00; DST starts October 26 at 23:00:00 and ends December 31 at 23:59:59.999
1969-01-01 - 1970-12-31: Base UTC offset delta: +01; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends December 31 at 23:59:59.999
1971-01-01 - 1971-10-31: Base UTC offset delta: +01; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends October 31 at 01:59:59.999
1972-03-19 - 1972-10-29: Daylight delta: +01; DST starts March 19 at 02:00:00 and ends October 29 at 02:59:59.999
1973-03-18 - 1973-10-28: Daylight delta: +01; DST starts March 18 at 02:00:00 and ends October 28 at 02:59:59.999
...
2022-03-27 - 2022-10-30: Daylight delta: +01; DST starts March 27 at 01:00:00 and ends October 30 at 01:59:59.999
...
2037-03-29 - 2037-10-25: Daylight delta: +01; DST starts March 29 at 01:00:00 and ends October 25 at 01:59:59.999
2037-10-25 - 9999-12-31: Daylight delta: +01; DST starts Last Sunday of March; 01:00:00 and ends Last Sunday of October; 02:00:00

Wow… that’s quite a difference. Let’s see:

  • The names (display/standard/daylight) are all different – definitely better in .NET 6.
  • .NET 6 appears to have one rule for the year 1, and then another (but identical) for years 2 to 1846
  • .NET 6 doesn’t have any rules between 1847 and 1916
  • .NET 6 only uses one rule per year, starting and ending at the DST boundaries; .NET Core 3.1 had one rule for each transition
  • The .NET Core 3.1 rules end at 59 minutes past the hour (e.g. 01:59:59) whereas the .NET 6 rules finish 999 milliseconds later

Fixing the code

So my task is to “interpret” all of this rule data in Noda Time, bearing in mind that:

  • It needs to work with Windows data as well (which has its own quirks)
  • It probably shouldn’t change logic based on which target framework it was built against, as I suspect it’s entirely possible for the DLL targeting .NET Standard 2.0 to end up running in .NET 6.

We do already have code that behaves differently based on whether it believes the rule data comes from Windows or Unix – Windows rules always start on January 1st and end on December 31st, so if all the rules in a zone follow that pattern, we assume we’re dealing with Windows data. That makes it slightly easier.

Likewise, we already have code that assumes any gaps between rules are in standard time – so actually the fact that .NET 6 only reports half as many rules probably won’t cause a problem.

Let’s start by handling the difference of transitions finishing at x:59:59 vs x:59:59.999. The existing code always adds 1 second to the end time, to account for x:59:59. It’s easy enough to adjust that to add either 1 second or 1 millisecond. This error was what caused our maps to have problems, I suspect. (We’d have a very weird situation in a few cases where one map started after the previous one ended.)

// This is added to the declared end time, so that it becomes an exclusive upper bound.
var endTimeCompensation = Duration.FromSeconds(1) - Duration.FromMilliseconds(bclLocalEnd.Millisecond);
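Translated into a hypothetical Python sketch (using `microsecond` where the C# uses `Millisecond`), the compensation makes both representations produce the same exclusive boundary:

```python
from datetime import datetime, timedelta

def exclusive_end(declared_end):
    """Turn a declared inclusive end time (either xx:59:59 or
    xx:59:59.999) into an exclusive upper bound, by adding one
    second minus the declared millisecond component."""
    millis = declared_end.microsecond // 1000
    compensation = timedelta(seconds=1) - timedelta(milliseconds=millis)
    return declared_end + compensation

# .NET Core 3.1 style (ends at :59) and .NET 6 style (ends at :59.999)
# both land on the same exclusive boundary:
print(exclusive_end(datetime(2022, 10, 30, 1, 59, 59)))          # 2022-10-30 02:00:00
print(exclusive_end(datetime(2022, 10, 30, 1, 59, 59, 999000)))  # 2022-10-30 02:00:00
```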

Let’s try it: dotnet test -f net6.0

Good grief. Everything passed. Better try it with 3.1 as well: dotnet test -f netcoreapp3.1

Yup, everything passed there, too. And on Windows, although that didn’t surprise me much, given that we have separate paths.

This surprises me for two reasons:

  • Last night, when just experimenting, I made a change to just subtract bclLocalEnd.Millisecond milliseconds from bclLocalEnd (i.e. truncate it down). That helped a lot, but didn’t fix everything.
  • The data has changed really quite substantially, so I’m surprised that there aren’t extra issues. Do we get the “standard offset” correct during the British Standard Time experiment, for example?

I’m somewhat suspicious of the first bullet point… so I’m going to stash the fix, and try to reproduce last night.

Testing an earlier partial fix (or not…)

First, I remember that I did something I definitely wanted to keep last night. When adjacent maps don’t abut, let’s throw a better exception.

So before I do anything else, let’s reproduce the original errors: dotnet test -f net6.0

Ah. It still passes. Doh! When I thought I was running the .NET 6 tests under Linux, it turned out I was still in a Windows tab in Windows Terminal. (I use bash in all my terminals, so there’s not quite as much distinction as you might expect.) Well, that at least explains why the small fix worked rather better than expected. Sigh.

Okay, let’s rerun the tests… and they fail as expected. Now let’s add more details to the exception before reapplying the fix… done.

The resulting exception is clearer, and makes it obvious that the error is due to the 999ms discrepancy:

NodaTime.Utility.DebugPreconditionException : Maps must abut: 0002-01-01T00:00:00.999 != 0002-01-01T00:00:00

Let’s reapply the fix from earlier, which we expect to solve that problem but not everything. Retest… and we’re down to 109 failures rather than 1688. Much better, but not great.

Let’s understand one new error

We’re still getting errors of non-abutting maps, but now they’re (mostly) an hour out, rather than 999ms. Here’s one from Europe/Prague:

NodaTime.Utility.DebugPreconditionException : Maps must abut: 1947-01-01T00:00:00 != 1946-12-31T23:00:00

Most errors are in the 20th century, although there are some in 2038 and 2088, which is odd. Let’s have a look at the raw data for Prague around the time that’s causing problems, and we can see whether fixing just Prague helps with anything else.

.NET 6 data:

1944-04-03 - 1944-10-02: Daylight delta: +01; DST starts April 03 at 02:00:00 and ends October 02 at 02:59:59.999
1945-04-02 - 1945-05-08: Daylight delta: +01; DST starts April 02 at 02:00:00 and ends May 08 at 23:59:59.999
1945-05-08 - 1945-10-01: Daylight delta: +01; DST starts May 08 at 23:00:00 and ends October 01 at 02:59:59.999
1946-05-06 - 1946-10-06: Daylight delta: +01; DST starts May 06 at 02:00:00 and ends October 06 at 02:59:59.999
1946-12-01 - 1946-12-31: Daylight delta: -01; DST starts December 01 at 03:00:00 and ends December 31 at 23:59:59.999
1947-01-01 - 1947-02-23: Daylight delta: -01; DST starts January 01 at 00:00:00 and ends February 23 at 01:59:59.999
1947-04-20 - 1947-10-05: Daylight delta: +01; DST starts April 20 at 02:00:00 and ends October 05 at 02:59:59.999
1948-04-18 - 1948-10-03: Daylight delta: +01; DST starts April 18 at 02:00:00 and ends October 03 at 02:59:59.999
1949-04-09 - 1949-10-02: Daylight delta: +01; DST starts April 09 at 02:00:00 and ends October 02 at 02:59:59.999
1979-04-01 - 1979-09-30: Daylight delta: +01; DST starts April 01 at 02:00:00 and ends September 30 at 02:59:59.999

This is interesting – most years have just one rule, but the three years of 1945-1947 have two rules each.

Let’s look at the .NET Core 3.1 representation – which comes from the same underlying file, as far as I’m aware:

1944-10-02 - 1945-04-02: Daylight delta: +00; DST starts October 02 at 02:00:00 and ends April 02 at 01:59:59
1945-04-02 - 1945-05-08: Daylight delta: +01; DST starts April 02 at 02:00:00 and ends May 08 at 23:59:59
1945-05-08 - 1945-10-01: Daylight delta: +01; DST starts May 08 at 23:00:00 and ends October 01 at 02:59:59
1945-10-01 - 1946-05-06: Daylight delta: +00; DST starts October 01 at 02:00:00 and ends May 06 at 01:59:59
1946-05-06 - 1946-10-06: Daylight delta: +01; DST starts May 06 at 02:00:00 and ends October 06 at 02:59:59
1946-10-06 - 1946-12-01: Daylight delta: +00; DST starts October 06 at 02:00:00 and ends December 01 at 02:59:59
1946-12-01 - 1947-02-23: Daylight delta: -01; DST starts December 01 at 03:00:00 and ends February 23 at 01:59:59
1947-02-23 - 1947-04-20: Daylight delta: +00; DST starts February 23 at 03:00:00 and ends April 20 at 01:59:59
1947-04-20 - 1947-10-05: Daylight delta: +01; DST starts April 20 at 02:00:00 and ends October 05 at 02:59:59
1947-10-05 - 1948-04-18: Daylight delta: +00; DST starts October 05 at 02:00:00 and ends April 18 at 01:59:59
1948-04-18 - 1948-10-03: Daylight delta: +01; DST starts April 18 at 02:00:00 and ends October 03 at 02:59:59
1948-10-03 - 1949-04-09: Daylight delta: +00; DST starts October 03 at 02:00:00 and ends April 09 at 01:59:59
1949-04-09 - 1949-10-02: Daylight delta: +01; DST starts April 09 at 02:00:00 and ends October 02 at 02:59:59
1949-10-02 - 1978-12-31: Daylight delta: +00; DST starts October 02 at 02:00:00 and ends December 31 at 23:59:59
1979-01-01 - 1979-04-01: Daylight delta: +00; DST starts January 01 at 00:00:00 and ends April 01 at 01:59:59

Okay, so that makes a certain amount of sense – it definitely shows that there was something unusual happening in the Europe/Prague time zone. Just as one extra point of data, let’s look at the nodatime.org tzvalidate results – this shows all transitions. (tzvalidate is a format designed to allow authors of time zone library code to validate that they’re interpreting the IANA data the same way as each other.)

Europe/Prague
Initially:           +01:00:00 standard CET
1944-04-03 01:00:00Z +02:00:00 daylight CEST
1944-10-02 01:00:00Z +01:00:00 standard CET
1945-04-02 01:00:00Z +02:00:00 daylight CEST
1945-10-01 01:00:00Z +01:00:00 standard CET
1946-05-06 01:00:00Z +02:00:00 daylight CEST
1946-10-06 01:00:00Z +01:00:00 standard CET
1946-12-01 02:00:00Z +00:00:00 daylight GMT
1947-02-23 02:00:00Z +01:00:00 standard CET
1947-04-20 01:00:00Z +02:00:00 daylight CEST
1947-10-05 01:00:00Z +01:00:00 standard CET
1948-04-18 01:00:00Z +02:00:00 daylight CEST
1948-10-03 01:00:00Z +01:00:00 standard CET
1949-04-09 01:00:00Z +02:00:00 daylight CEST
1949-10-02 01:00:00Z +01:00:00 standard CET

Again there’s that odd period from December 1946 to near the end of February 1947 where there’s daylight savings of -1 hour. I’m not interested in the history of that right now – I’m interested in why the code is failing.

In this particular case, it looks like the problem is we’ve got two adjacent rules in .NET 6 (one at the end of 1946 and the other at the start of 1947) which both just describe periods of daylight saving.

If we can construct the maps to give the right results, Noda Time already has code in place to work out “that’s okay, there’s no transition at the end of 1946”. But we need to get the maps right to start with.

Unfortunately, BclDateTimeZone already has complicated code to handle the previously-known corner cases. That makes the whole thing feel quite precarious – I could easily end up breaking other things by trying to fix this one specific aspect. Still, that’s what unit tests are for.

Looking at the code, I suspect the problem is with the start time of the first rule of 1947, which I’d expect to start at 1947-01-01T00:00:00Z, but is actually deemed to start at 1946-12-31T23:00:00Z. (In the course of writing that out, I notice that my improved-abutting-error exception doesn’t include the “Z”. Fix that now…)

Ah… but the UTC start of the rule is currently expected to be “the start date + the transition start time – base UTC offset”. That does give 1946-12-31T23:00:00Z. We want to apply the daylight savings (of -1 hour) in this case, because the start of the rule is during daylight savings. Again, there’s no documentation to say exactly what is meant by “start date” for the rule, and hopefully you can see why it’s really frustrating to have to try to reverse-engineer this in a version-agnostic way. Hmm.
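The arithmetic for that Prague rule, sketched in Python with plain datetime values standing in for instants (the variable names are mine, not Noda Time's):

```python
from datetime import datetime, timedelta

base_offset = timedelta(hours=1)       # Europe/Prague's standard offset
daylight_delta = timedelta(hours=-1)   # this rule's negative savings
local_start = datetime(1947, 1, 1)     # the rule's DateStart + TimeOfDay

# Current behaviour: subtract only the base offset - an hour too early
# for a rule that starts during (negative) daylight savings.
print(local_start - base_offset)                     # 1946-12-31 23:00:00

# What we want here: subtract the full wall offset, base + delta.
print(local_start - (base_offset + daylight_delta))  # 1947-01-01 00:00:00
```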

Seeking an unambiguous and independent interpretation of AdjustmentRule

It’s relatively easy to avoid the “maps don’t abut” issue if we don’t care about really doing the job properly. After converting each AdjustmentRule to its Noda Time equivalent, we can look at each pair of adjacent rules in the sequence: if the start of the “next” rule is earlier than the end of the “previous” rule, we can just adjust the start point. But that’s really just brushing the issue under the carpet – and as it happens, it just moves the exception to a different point.

That approach also requires knowledge of surrounding adjustment rules in order to completely understand one adjustment rule. That really doesn’t feel right to me. We should be able to understand the adjustment rule purely from the data exposed by that rule and the properties for the TimeZoneInfo itself. The code is already slightly grubby by calling TimeZoneInfo.IsDaylightSavingTime(). If I could work out how to remove that call too, that would be great. (It may prove infeasible to remove it for .NET Core 3.1, but feasible in 6. That’s not too bad. Interesting question: if the “grubby” code still works in .NET 6, is it better to use conditional code so that only the “clean” code is used in .NET 6, or avoid the conditional code? Hmm. We’ll see.)

Given that the rules in both .NET Core 3.1 and .NET 6 effectively mean that the start and end points are exactly the start and end points of DST (or other) transitions, I should be able to gather a number of examples of source data and expected results, and try to work out rules from that. In particular, this source data should include:

  • “Simple” situations (partly as a warm-up…)
  • Negative standard time offset (e.g. US time zones)
  • Negative savings (e.g. Prague above, and Europe/Dublin from 1971 onwards)
  • DST periods that cross year boundaries (primarily the southern hemisphere, e.g. America/Sao_Paulo)
  • Zero savings, but still in DST (Europe/Dublin before 1968)
  • Standard UTC offset changes (e.g. Europe/London 1968-1971, Europe/Moscow from March 2011 to October 2014)
  • All of the above for both .NET Core 3.1 and .NET 6, including the rules which represent standard time in .NET Core 3.1 but which are omitted in .NET 6

It looks like daylight periods which cross year boundaries are represented as single rules in .NET Core 3.1 and dual rules in .NET 6, so we’ll need to take that into account. In those cases we’ll need to map to two Noda Time rules, and we don’t mind where the transition between them is, so long as they abut. In general, working out the zone intervals that are relevant for a single year may require multiple lines of data from each source. (But we must be able to infer some of that from gaps, and other parts from individual source rules.)
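One plausible way to handle that split is to coalesce abutting intervals that share the same offsets after conversion. This `coalesce` function is my sketch of the idea, not Noda Time's actual code; intervals are simplified to (start, end, standard, savings) tuples:

```python
def coalesce(intervals):
    """Merge abutting intervals that share the same (standard, savings)
    offsets - as needed when .NET 6 splits one DST period into two
    rules at a year boundary. Each interval is (start, end, standard, savings)."""
    result = []
    for interval in intervals:
        if result and result[-1][1] == interval[0] and result[-1][2:] == interval[2:]:
            prev = result.pop()
            interval = (prev[0], interval[1], interval[2], interval[3])
        result.append(interval)
    return result

# Sao Paulo-style: one DST period split at 2018-01-01 merges back into one.
merged = coalesce([("2017-10-15", "2018-01-01", -3, 1),
                   ("2018-01-01", "2018-02-17", -3, 1)])
print(merged)  # [('2017-10-15', '2018-02-17', -3, 1)]
```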

Fortunately we’re not trying to construct “full rules” within Noda Time – just ZoneInterval values, effectively. All we need to be able to determine is:

  • Start instant
  • End instant
  • Standard offset
  • Daylight savings (if any)

When gathering the data, I’m going to assume that using the existing Noda Time interpretation of the IANA data is okay. That could be dangerous if either .NET interprets the data incorrectly, or if the Linux data isn’t the same as the IANA 2021e data I’m working from. There are ways to mitigate those risks, but they would be longwinded and I don’t think the risk justifies the extra work.

What’s absolutely vital is that the data is gathered carefully. If I mess this up (looking at the wrong time zone, or the wrong year, or running some code on Windows that I meant to run on Linux – like the earlier tests) it could cost several hours of work. This will be tedious.

Let’s gather some data…

Europe/Paris in 2020:
.NET Core 3.1:
Base offset = 1
2019-10-27 - 2020-03-29: Daylight delta: +00; DST starts October 27 at 02:00:00 and ends March 29 at 01:59:59
2020-03-29 - 2020-10-25: Daylight delta: +01; DST starts March 29 at 02:00:00 and ends October 25 at 02:59:59
2020-10-25 - 2021-03-28: Daylight delta: +00; DST starts October 25 at 02:00:00 and ends March 28 at 01:59:59

.NET 6:
Base offset = 1
2019-03-31 - 2019-10-27: Daylight delta: +01; DST starts March 31 at 02:00:00 and ends October 27 at 02:59:59.999
2020-03-29 - 2020-10-25: Daylight delta: +01; DST starts March 29 at 02:00:00 and ends October 25 at 02:59:59.999
2021-03-28 - 2021-10-31: Daylight delta: +01; DST starts March 28 at 02:00:00 and ends October 31 at 02:59:59.999

Noda Time zone intervals (start - end, standard, savings):
2019-10-27T01:00:00Z - 2020-03-29T01:00:00Z, +1, +0
2020-03-29T01:00:00Z - 2020-10-25T01:00:00Z, +1, +1
2020-10-25T01:00:00Z - 2021-03-28T01:00:00Z, +1, +0


America/Los_Angeles in 2020:

.NET Core 3.1:
Base offset = -8
2019-11-03 - 2020-03-08: Daylight delta: +00; DST starts November 03 at 01:00:00 and ends March 08 at 01:59:59
2020-03-08 - 2020-11-01: Daylight delta: +01; DST starts March 08 at 02:00:00 and ends November 01 at 01:59:59
2020-11-01 - 2021-03-14: Daylight delta: +00; DST starts November 01 at 01:00:00 and ends March 14 at 01:59:59

.NET 6:
Base offset = -8
2019-03-10 - 2019-11-03: Daylight delta: +01; DST starts March 10 at 02:00:00 and ends November 03 at 01:59:59.999
2020-03-08 - 2020-11-01: Daylight delta: +01; DST starts March 08 at 02:00:00 and ends November 01 at 01:59:59.999
2021-03-14 - 2021-11-07: Daylight delta: +01; DST starts March 14 at 02:00:00 and ends November 07 at 01:59:59.999

Noda Time zone intervals:
2019-11-03T09:00:00Z - 2020-03-08T10:00:00Z, -8, +0
2020-03-08T10:00:00Z - 2020-11-01T09:00:00Z, -8, +1
2020-11-01T09:00:00Z - 2021-03-14T10:00:00Z, -8, +0


Europe/Prague in 1946/1947:

.NET Core 3.1:
Base offset = 1
1945-10-01 - 1946-05-06: Daylight delta: +00; DST starts October 01 at 02:00:00 and ends May 06 at 01:59:59
1946-05-06 - 1946-10-06: Daylight delta: +01; DST starts May 06 at 02:00:00 and ends October 06 at 02:59:59
1946-10-06 - 1946-12-01: Daylight delta: +00; DST starts October 06 at 02:00:00 and ends December 01 at 02:59:59
1946-12-01 - 1947-02-23: Daylight delta: -01; DST starts December 01 at 03:00:00 and ends February 23 at 01:59:59
1947-02-23 - 1947-04-20: Daylight delta: +00; DST starts February 23 at 03:00:00 and ends April 20 at 01:59:59
1947-04-20 - 1947-10-05: Daylight delta: +01; DST starts April 20 at 02:00:00 and ends October 05 at 02:59:59
1947-10-05 - 1948-04-18: Daylight delta: +00; DST starts October 05 at 02:00:00 and ends April 18 at 01:59:59
1948-04-18 - 1948-10-03: Daylight delta: +01; DST starts April 18 at 02:00:00 and ends October 03 at 02:59:59

.NET 6:
Base offset = 1
1945-05-08 - 1945-10-01: Daylight delta: +01; DST starts May 08 at 23:00:00 and ends October 01 at 02:59:59.999
1946-05-06 - 1946-10-06: Daylight delta: +01; DST starts May 06 at 02:00:00 and ends October 06 at 02:59:59.999
1946-12-01 - 1946-12-31: Daylight delta: -01; DST starts December 01 at 03:00:00 and ends December 31 at 23:59:59.999
1947-01-01 - 1947-02-23: Daylight delta: -01; DST starts January 01 at 00:00:00 and ends February 23 at 01:59:59.999
1947-04-20 - 1947-10-05: Daylight delta: +01; DST starts April 20 at 02:00:00 and ends October 05 at 02:59:59.999
1948-04-18 - 1948-10-03: Daylight delta: +01; DST starts April 18 at 02:00:00 and ends October 03 at 02:59:59.999

Noda Time zone intervals:
1945-10-01T01:00:00Z - 1946-05-06T01:00:00Z, +1, +0
1946-05-06T01:00:00Z - 1946-10-06T01:00:00Z, +1, +1
1946-10-06T01:00:00Z - 1946-12-01T02:00:00Z, +1, +0
1946-12-01T02:00:00Z - 1947-02-23T02:00:00Z, +1, -1
1947-02-23T02:00:00Z - 1947-04-20T01:00:00Z, +1, +0
1947-04-20T01:00:00Z - 1947-10-05T01:00:00Z, +1, +1
1947-10-05T01:00:00Z - 1948-04-18T01:00:00Z, +1, +0


Europe/Dublin in 2020:

.NET Core 3.1:
Base offset = 1
2019-10-27 - 2020-03-29: Daylight delta: -01; DST starts October 27 at 02:00:00 and ends March 29 at 00:59:59
2020-03-29 - 2020-10-25: Daylight delta: +00; DST starts March 29 at 02:00:00 and ends October 25 at 01:59:59
2020-10-25 - 2021-03-28: Daylight delta: -01; DST starts October 25 at 02:00:00 and ends March 28 at 00:59:59

.NET 6.0:
Base offset = 1
2019-10-27 - 2019-12-31: Daylight delta: -01; DST starts October 27 at 02:00:00 and ends December 31 at 23:59:59.999
2020-01-01 - 2020-03-29: Daylight delta: -01; DST starts January 01 at 00:00:00 and ends March 29 at 00:59:59.999
2020-10-25 - 2020-12-31: Daylight delta: -01; DST starts October 25 at 02:00:00 and ends December 31 at 23:59:59.999
2021-01-01 - 2021-03-28: Daylight delta: -01; DST starts January 01 at 00:00:00 and ends March 28 at 00:59:59.999

Noda Time zone intervals:
2019-10-27T01:00:00Z - 2020-03-29T01:00:00Z, +1, -1
2020-03-29T01:00:00Z - 2020-10-25T01:00:00Z, +1, +0
2020-10-25T01:00:00Z - 2021-03-28T01:00:00Z, +1, -1


Europe/Dublin in 1960:

.NET Core 3.1:
Base offset = 1
1959-10-04 - 1960-04-10: Daylight delta: +00; DST starts October 04 at 03:00:00 and ends April 10 at 02:59:59
1960-04-10 - 1960-10-02: Daylight delta: +00; DST starts April 10 at 03:00:00 and ends October 02 at 02:59:59

.NET 6.0:
Base offset = 1
1959-10-04 - 1959-12-31: Base UTC offset delta: -01; Daylight delta: +00; DST starts October 04 at 03:00:00 and ends December 31 at 23:59:59.999
1960-01-01 - 1960-04-10: Base UTC offset delta: -01; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends April 10 at 02:59:59.999
1960-04-10 - 1960-10-02: Daylight delta: +00; DST starts April 10 at 03:00:00 and ends October 02 at 02:59:59.999
1960-10-02 - 1960-12-31: Base UTC offset delta: -01; Daylight delta: +00; DST starts October 02 at 03:00:00 and ends December 31 at 23:59:59.999
1961-01-01 - 1961-03-26: Base UTC offset delta: -01; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends March 26 at 02:59:59.999

Noda Time zone intervals:
1959-10-04T02:00:00Z - 1960-04-10T02:00:00Z, +0, +0
1960-04-10T02:00:00Z - 1960-10-02T02:00:00Z, +0, +1
1960-10-02T02:00:00Z - 1961-03-26T02:00:00Z, +0, +0


America/Sao_Paulo in 2018 (not 2020, as Brazil stopped observing daylight savings in 2019):

.NET Core 3.1:
Base offset = -3
2017-10-15 - 2018-02-17: Daylight delta: +01; DST starts October 15 at 00:00:00 and ends February 17 at 23:59:59
2018-02-17 - 2018-11-03: Daylight delta: +00; DST starts February 17 at 23:00:00 and ends November 03 at 23:59:59
2018-11-04 - 2019-02-16: Daylight delta: +01; DST starts November 04 at 00:00:00 and ends February 16 at 23:59:59

.NET 6.0:
Base offset = -3
2017-10-15 - 2017-12-31: Daylight delta: +01; DST starts October 15 at 00:00:00 and ends December 31 at 23:59:59.999
2018-01-01 - 2018-02-17: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends February 17 at 23:59:59.999
2018-11-04 - 2018-12-31: Daylight delta: +01; DST starts November 04 at 00:00:00 and ends December 31 at 23:59:59.999
2019-01-01 - 2019-02-16: Daylight delta: +01; DST starts January 01 at 00:00:00 and ends February 16 at 23:59:59.999

Noda Time zone intervals:
2017-10-15T03:00:00Z - 2018-02-18T02:00:00Z, -3, +1
2018-02-18T02:00:00Z - 2018-11-04T03:00:00Z, -3, +0
2018-11-04T03:00:00Z - 2019-02-17T02:00:00Z, -3, +1


Europe/London in 1968-1971:

.NET Core 3.1:
Base offset = 0
1968-10-26 - 1971-10-31: Daylight delta: +00; DST starts October 26 at 23:00:00 and ends October 31 at 01:59:59

.NET 6:
Base offset = 0
1968-10-26 - 1968-12-31: Base UTC offset delta: +01; Daylight delta: +00; DST starts October 26 at 23:00:00 and ends December 31 at 23:59:59.999
1969-01-01 - 1970-12-31: Base UTC offset delta: +01; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends December 31 at 23:59:59.999
1971-01-01 - 1971-10-31: Base UTC offset delta: +01; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends October 31 at 01:59:59.999

Noda Time zone intervals:
1968-10-26T23:00:00Z - 1971-10-31T02:00:00Z, +1, +0


Europe/Moscow in 2011-2014:

.NET Core 3.1:
Base offset = 3
2011-03-27 - 2014-10-26: Daylight delta: +00; DST starts March 27 at 02:00:00 and ends October 26 at 00:59:59

.NET 6:
Base offset = 3
2011-03-27 - 2011-12-31: Base UTC offset delta: +01; Daylight delta: +00; DST starts March 27 at 02:00:00 and ends December 31 at 23:59:59.999
2012-01-01 - 2013-12-31: Base UTC offset delta: +01; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends December 31 at 23:59:59.999
2014-01-01 - 2014-10-26: Base UTC offset delta: +01; Daylight delta: +00; DST starts January 01 at 00:00:00 and ends October 26 at 00:59:59.999

Noda Time zone intervals:
2011-03-26T23:00:00Z - 2014-10-25T22:00:00Z, +4, +0

I think that forcing myself to collect these small bits of data and write them down will be a bit of a game-changer. Previously I’ve taken handwritten notes for individual issues, relying on the “global” unit tests (check every transition in every time zone) to catch any problems after I’d implemented them. But with the data above, I can write unit tests. And those unit tests don’t need to depend on whether we’re running on Windows or Linux, which will make the whole thing much simpler. We’re not testing an actual time zone – we’re testing “adjustment rule to Noda Time representation” with adjustment rules as they would show up on Linux.

There’s one slightly fiddly bit: I suspect that detecting “base UTC offset delta” for .NET Core 3.1 will require the time zone itself (as we can’t get to the rule data). I might get all the rest of the unit tests working first (and even the non-zero-delta ones for .NET 6) and come back to that.

That’s all for now…

I’ve now implemented the above test data in uncommitted code. After starting to include strings directly into the code, I’ve decided to put all the test data in a text file, pretty much as it’s specified above (just with very minor formatting changes). This is going to be really handy in terms of having readable test cases; I’m already glad I’ve put the effort into it.
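For what it’s worth, the “Noda Time zone intervals” lines are regular enough that parsing them into test cases takes very little code. Here’s a sketch of the kind of thing I mean – the record and member names are hypothetical, and I’m using plain BCL types rather than Noda Time’s Instant/Offset just to keep it self-contained:

```csharp
using System;
using System.Globalization;

// Hypothetical representation of one expected-data line, such as
// "2019-10-27T01:00:00Z - 2020-03-29T01:00:00Z, +1, -1"
public sealed record ZoneIntervalData(
    DateTime StartUtc, DateTime EndUtc, TimeSpan StandardOffset, TimeSpan Savings)
{
    public static ZoneIntervalData Parse(string line)
    {
        // Split "start - end, standard, savings" into its four parts.
        string[] parts = line.Split(new[] { " - ", ", " }, StringSplitOptions.None);
        return new ZoneIntervalData(
            ParseInstant(parts[0]),
            ParseInstant(parts[1]),
            ParseOffset(parts[2]),
            ParseOffset(parts[3]));
    }

    private static DateTime ParseInstant(string text) =>
        DateTime.Parse(text, CultureInfo.InvariantCulture,
            DateTimeStyles.AdjustToUniversal | DateTimeStyles.AssumeUniversal);

    // Offsets appear as "+1", "+0", "-1" - whole hours in the data above.
    private static TimeSpan ParseOffset(string text) =>
        TimeSpan.FromHours(int.Parse(text, CultureInfo.InvariantCulture));
}
```

A test case can then compare these parsed expectations against whatever the “adjustment rule to Noda Time representation” conversion produces.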

However, I’ve discovered that it’s incomplete, as we need test cases for offset changes across the international date line (in both directions). It’s also possible that the choice of America/Sao_Paulo is unfortunate, as Brazil changed clocks at midnight. We might want an example in Australia as well. (Potentially even two: one with whole hour offsets and one with half hour offsets.)

Even without those additional tests, there are issues. I can get all but “Europe/Dublin in 1960” to work in .NET 6. I haven’t yet worked out how to handle changing standard offsets in .NET Core 3.1 in a testable way. Even the fact that standard offsets can change is a pain, in terms of working out the transition times in .NET 6, as it appears to be something like “Assume the start is in standard time and the end is in daylight time, except don’t take any standard time deltas into account when calculating that” – which is very weird indeed. (And I don’t understand how the Europe/Dublin data in .NET 6 is meant to convey the expected data. It’s very odd.)

This post is quite long enough though, so I’m going to post it now and take a break from time zones for a bit. Hopefully I’ll post a “part 2” when I’ve actually got everything working.

Just as a reminder, these changes in .NET 6 supposedly “don’t affect external behavior, other than to ensure correctness in some edge cases”. Mmm. Really.

Book updates for July 2021

Just a quick post with some updates around books and related events…

Software Mistakes and Tradeoffs: MEAP update

In June, I posted about the book that Tomasz Lelek and I are writing. (Well, Tomasz is doing the bulk of the work – only two of the thirteen chapters are by me, but I’ll take any credit I can get.)

I’m pleased to say the MEAP (Manning Early Access Program) of the book has been updated to include all chapters. It isn’t finished yet, but it means the first draft of the chapter I’ve written on versioning (chapter 12) is now included.

Tomasz and I are still working hard on the book… it’s great to see it getting into more people’s hands as we get closer to entering the production phase.

Forewords

I’m delighted to have been asked to write forewords for two books:

I can heartily recommend both books.

Manning API Conference

The Manning API Conference is coming up soon, and I’ll be talking about network API versioning strategies. It’s a free virtual conference with a bunch of great speakers, so please sign up and I hope to see you in the chat.

New book: Software Mistakes and Tradeoffs

I’m delighted to announce that I’ve been hard at work contributing to a new book.

The book is called “Software Mistakes and Tradeoffs: How to make good programming decisions” and the principal author is Tomasz Lelek. The book was Tomasz’s idea, and he’s written the vast majority of the material, but I’ve contributed a chapter on handling date/time information, and another one around versioning (for libraries, network APIs and storage).

The aim of the book isn’t to provide answers: it’s to help you think carefully in your specific context, and ask the right questions. Tomasz and I have both made plenty of mistakes over the course of our careers – or been adjacent to other engineers making those mistakes. The choices that have been mistakes for us might not be a mistake for you – but it’s better to go into those choices with your eyes open to the trade-offs involved, and where they can lead in different situations.

This isn’t a book about a specific technology, although of course it demonstrates the ideas using examples which are specific. Almost all of the examples are in Java, but if you’re not a Java developer that really shouldn’t put you off: the ideas are easily transferrable to other environments. (In particular, if you understand C# it’s very unlikely that Java syntax will faze you.)

We’ve just launched the book into MEAP (Manning Early Access Program), with an estimated publication date of “fall 2021” (which means I really need to get on with polishing up my versioning chapter). The first seven chapters are available in the MEAP right now, which includes my date/time chapter.

What about C# in Depth?

You may be wondering where that leaves C# in Depth. The 4th edition of C# in Depth covers C# up to version 7, with a chapter looking ahead to C# 8 (which wasn’t finalized at the time of publication). That means I’m already two versions behind. So, what am I going to do about that?

The short answer is: nothing just yet. I haven’t started a 5th edition.

The longer answer is: yes, I definitely want to write a new edition at some point. However, I suspect the structure will need to change entirely (from version-based to topic-based) and I expect it to take a long time to write. Additionally, I have an idea around a diagnostics book which has morphed several times, but which I’m still keen on… and if I can get traction for that, it will probably take priority over C# in Depth, at least for a while.

So yes, one day… but probably sufficiently far in the future that it’s not worth asking any more until I announce something.

Playing with an X-Touch Mini controller using C#

Background

As I wrote in my earlier blog post about using OSC to control a Behringer XR16, I’m working on code to make our A/V system at church much easier to work with. From an audio side, I’ve effectively accomplished two goals already:

  • Remove the intimidating hardware mixer with about 150 physical knobs/buttons
  • Allow software to control both audio and visual aspects at the same time (for example, switching from “displaying the preacher with their microphone on” to “displaying hymn words with all microphones muted”)

I have a third goal, however: to accommodate those who are willing to work the A/V system but would really prefer to hardly touch a computer. This requires bringing hardware back into the system. Having removed a mixer, I want to introduce something a bit like a mixer again, but keeping it much simpler (and still with the ability for software to do most of the heavy lifting). While I haven’t blogged about it (yet), I’m expecting most of the “work” during a service to be performed via a Stream Deck XL. The technical aspects of that aren’t terribly interesting, thanks to the highly competent StreamDeckSharp library. But there are plenty of interesting UI design decisions – some of which may be simple to those who know about UI design, but which I find challenging due to a lack of existing UI skills.

The Stream Deck only provides press buttons though – there’s no form of analog control, which is typically what you want to adjust an audio level. Also, I’m not expecting the Stream Deck to be used in every situation. If someone is hosting a meeting with a speaker who needs a microphone, but the meeting isn’t being shared on Zoom, and there are no slides (or just one slide deck for the whole meeting), then there’s little benefit in going for the full experience. You just want a simple way of controlling the audio system.

For those who are happy using a computer, I’m providing a very simple audio mixer app written in WPF for this – a slider and a checkbox per channel, basically. I’ve been looking for the best way to provide a similar experience for those who would prefer to use something physical, but without adding significant complexity or financial cost.

Physical control surfaces

I’ve been looking at all kinds of control surfaces for this purpose, and my previous expectation was that I’d use a Loupedeck Live. I’m currently somewhat blocked by the lack of an SDK for it (which will hopefully go public in the summer) but I’m now not sure I’ll use it in the church building anyway. (I’m sure I’ll find fun uses for it at home though. I don’t regret purchasing one.) My other investigations for control surfaces found the Monogram modular system which looks amazing, but which is extremely expensive. In an ideal world, I would like a control surface which has the following properties, in roughly descending order of priority:

  1. Easy to interact with from software (e.g. an SDK, network protocol, MIDI or similar – with plenty of documentation)
  2. Provides analog, fine-grained control so that levels can be adjusted easily
  3. Provides visual output for the state of the system
  4. Modular (so I can have just the controls we need, to keep it simple and unintimidating) or at least available with “roughly the set of controls we need and not much more”
  5. Has small displays (like the Stream Deck) so channel labels (etc) could be updated dynamically in software

Point 3 is an interesting one, and is where most options fall down. The two physical form factors that are common for adjusting levels are rotary knobs and faders (aka sliders). Faders are frankly a little easier to use than knobs, but both are acceptable. The simple version of both of these assumes that it has complete control over the value being adjusted. A fader’s value is simply its vertical position. Simple knobs often have a line or other indication of the current value, and hard stops at the ends of the range (i.e. if you turn them to the maximum or minimum value, you can’t physically turn them any further). Likewise, simple buttons used for muting are usually pressed or not, on a toggle basis.

All of these simple controls are inappropriate for the system I want to build, because changes from other parts of the system (e.g. my audio mixer app, or the full service presentation app, or the X-Air application that comes with XR mixers) couldn’t be shown on the physical control surface: anyone looking at the control surface would get the wrong impression of the current state.

However, there are more flexible versions of each control:

  • There are knobs which physically allow you to keep turning them forever, but which have software-controlled rings of lights around them to show the current logical position.
  • There are motorized faders, whose position can be adjusted by software.
  • There are buttons that always come back up (like keys on a keyboard) but which have lights to indicate whether they’re logically “on” or “off”.

If I had unlimited budget, I’d probably go with motorized faders (although it’s possible that they’d be a bit disconcerting at first, moving around on their own). They tend to be only available on expensive control surfaces though – often on systems which do far more than I actually want them to, with rather more controls than I want. The X-Touch Compact is probably the closest I’ve found, but it’s overkill in terms of the number of controls, and costs more than I want to spend on this part of the system.

Just to be clear: I have nothing against control surfaces which don’t meet my criteria. What I’m building is absolutely not the typical use case. I’m sure all the products I’ve looked at are highly suitable for their target audiences. I suspect that most people using audio mixers either as a hobby or professionally are tech savvy and don’t mind ignoring controls they don’t happen to need right now. If you’re already using a DAW (Digital Audio Workstation), the hardware complexity we’re talking about is no big deal. But the target audience for my system is very, very different.

Enter the X-Touch Mini

Last week, I found the X-Touch Mini – also by Behringer, but I want to stress that this is pretty much coincidental. (I could use the X-Touch Mini to control non-Behringer mixers, or a non-Behringer controller for the XR16/XR18.) It’s not quite perfect for our needs, but it’s very close.

It consists of the following controls:

  • 8 knobs without hard stops, and with light ring displays. These can also be pressed/released.
  • A top row of 8 unlabeled buttons
  • A bottom row of labeled buttons (MC, rewind, fast forward, loop, stop, play and record)
  • One unmotorized fader
  • Two “layer” buttons

My intention is to use these as follows:

  • Knobs will control the level of each individual channel, with the level being indicated by the light ring
  • Unlabeled buttons will control muting, with the buttons for active (unmuted) channels lit
  • The fader will control the main output volume (which should usually be at 0 dB)

That leaves the following aspects unused:

  • The bottom row of buttons
  • Pressing/releasing knobs
  • The layer buttons

The fact that the fader isn’t motorized is a minor inconvenience, but the fact that it won’t represent the state of the system (unless the fader was the last thing to change it) is relatively insignificant compared with the state of the channels. We tend to tweak individual channels much more than the main volume… and if anyone does set the main volume from the X-Touch Mini, it’s likely that they’ll do so consistently for the whole event, so any “jump” in volume would only happen once.

So, that’s the physical nature of the device… how do we control it?

Standard mode and Mackie Control mode

One of the recurring themes I’ve found with audio equipment is that there are some really useful protocols that are woefully under-documented. That’s often because different physical devices will interpret the protocol in slightly different ways to account for their control layout etc. I completely understand why it’s tricky (and that writing documentation isn’t particularly fun – I’m as guilty as anyone else of putting it off) but it’s still frustrating. This also goes back to me not being a typical user, of course. I suspect that the vast majority of users can plug the X-Touch Mini into their computers, fire up their DAW and get straight to work with it, potentially configuring it within the DAW itself.

Still, between the user manual (which is generally okay) and useful pages scattered around the web (particularly this Stack Overflow answer) I’ve worked things out a lot more. The interface is entirely through MIDI messages (over USB). Fortunately, I already have a fair amount of experience in working with MIDI from C#, via my V-Drum Explorer project. The controller acts as both a MIDI output (for button presses etc) and a MIDI input (to receive light control messages and the like).

The X-Touch Mini has two different modes: “standard” mode, and Mackie Control mode. That’s what the MC on the bottom row of buttons means; that button is used to switch modes while it’s starting up, but can be used for other things once it’s running. The Mackie Control protocol (also known as Mackie Control Universal or MCU) is one of the somewhat-undocumented protocols in the audio world. (It may well be documented exhaustively across the web, but it’s not like there’s one obviously-authoritative source with all the details you might like which comes up with a simple search.)

In standard mode, the X-Touch Mini expects to be primarily in charge of the “display” aspect of things. While you can change the button and knob lights through software, next time you do anything with that control it will reset itself. That’s probably great for simple integrations, but makes it harder to use as a “blank canvas” in the way that I want to. Standard mode is also where the layer buttons have meaning: there are two layers (layer A and layer B), effectively doubling the number of knobs/buttons, so you could handle 16 channels: channels 1-8 on layer A and channels 9-16 on layer B.

In Mackie Control mode, the software controls everything. The hardware doesn’t even keep a notional track of the position of a knob – the messages are things like “knob 1 moved clockwise with speed 5” etc. Very slightly annoyingly, although there are 13 lights in the light ring around each knob, only 11 are accessible within Mackie Control Mode – due to limitations of the protocol, as I understand it. But other than that, it’s pretty much exactly what I want: direct control of everything, without the X-Touch Mini getting in the way by thinking it knows what I want it to do.
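To give a flavour of what Mackie Control mode looks like at the MIDI level – and I want to stress that the controller numbers and bit layouts below are my assumptions, pieced together from scattered third-party write-ups of the under-documented protocol – a knob turn arrives as a Control Change message whose value encodes direction and speed, and a ring light is set by sending a Control Change back with the light position in the low bits:

```csharp
using System;

// Sketch of Mackie Control V-Pot handling. The controller numbers (0x10-0x17
// for turns, 0x30-0x37 for ring lights) and value encodings are assumptions
// drawn from unofficial MCU documentation, not from any authoritative source.
public static class MackieControl
{
    // Knob turns arrive as Control Change on controllers 16-23 (0x10-0x17):
    // bit 6 of the value gives the direction, the low bits give the speed.
    public static (int Knob, int Delta) DecodeKnobTurn(byte controller, byte value)
    {
        if (controller < 0x10 || controller > 0x17)
        {
            throw new ArgumentException("Not a V-Pot controller", nameof(controller));
        }
        int knob = controller - 0x10;             // 0-7
        int speed = value & 0x0f;                 // typically 1-7
        bool counterClockwise = (value & 0x40) != 0;
        return (knob, counterClockwise ? -speed : speed);
    }

    // Ring lights are set via Control Change on controllers 48-55 (0x30-0x37);
    // the value selects which of the 11 addressable lights is lit (1-11),
    // with 0 meaning "all off".
    public static (byte Controller, byte Value) EncodeRingLight(int knob, double level)
    {
        if (level < 0 || level > 1)
        {
            throw new ArgumentOutOfRangeException(nameof(level));
        }
        int position = (int) Math.Round(level * 11); // 0 = off, 1-11 = lit light
        return ((byte) (0x30 + knob), (byte) position);
    }
}
```

The nice property of this “relative deltas in, absolute positions out” model is that the software never has to reconcile its state with the hardware’s idea of a knob position – there isn’t one.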

I’ve created a library which allows you to use the X-Touch Mini in both modes, in a reasonably straightforward way. It doesn’t try to abstract away the differences between the two modes, beyond the fact that both allow you to observe button presses, knob presses, and knob turns as events. There’s potentially a little more I could do to push commonality up the stack a bit, but I suspect it would rarely be useful – I’d expect most apps to work in one mode or the other, but not both.

Interfacing with the XR16/XR18

This part was the easy bit. The audio mixer WPF app has a model of “a channel” which allows you to send an update request, and provides information about the channel name, fader position, mute state, and even current audio level. All I had to do was translate MIDI output from the X-Touch Mini into changes to the channel model, and translate property changes on the channel model into changes to the light rings and button lights. The code for this, admittedly without any tests and very few comments, is under 200 lines in total (including using directives etc).

It’s not always easy to imagine what this looks like in reality, so I’ve recorded a short demo video on YouTube. It shows the X-Touch Mini, along with X-Air and the WPF audio mixer app, all synchronized and working together beautifully. (I don’t actually demonstrate the main volume fader on the video, but I promise it works… admittedly the values on the physical fader don’t all align perfectly with the values on the mixer, but they’re not far off… and the important 0 dB level does line up.)

One thing I show in the demo is how channels 3 and 4 form a stereo pair in the mixer. The X-Touch Mini code doesn’t have any configuration telling it that at all, and yet the lights all work as you’d want them to. This is a pleasant quirk of the way that the lighting code is hooked up purely to the information provided by the mixer. When you press a button to unmute a channel, for example, that code sends a request to the mixer, but does not light the button. The light only comes on because the mixer then notifies everything that the channel has been unmuted. When you do anything with channels 3 or 4, the mixer notifies all listening applications about changes to both 3 and 4, and the X-Touch Mini just reacts accordingly to update the light ring or button. It makes things a lot simpler than having to try to keep an independent model of what the X-Touch Mini “thinks” the mixer state is.
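The shape of that feedback loop is simple enough to sketch. The types below are hypothetical stand-ins for my actual channel model, but the flow – “a button press only sends a request; the light only changes when the mixer reports the new state” – is the point:

```csharp
using System;

// Hypothetical stand-ins for the real channel model and hardware binding,
// illustrating the "mixer is the source of truth" flow described above.
public sealed class ChannelModel
{
    public event Action<bool> MuteChanged;
    public bool Muted { get; private set; }

    // Called when the mixer notifies us of a state change, whatever caused it
    // (this app, X-Air, the full presentation app...).
    public void NotifyMuteFromMixer(bool value)
    {
        Muted = value;
        MuteChanged?.Invoke(value);
    }
}

public sealed class MuteButtonBinding
{
    private readonly ChannelModel channel;
    private readonly Action<bool> sendMuteRequest;

    // The button light for an *unmuted* (active) channel is lit.
    public bool LightOn { get; private set; }

    public MuteButtonBinding(ChannelModel channel, Action<bool> sendMuteRequest)
    {
        this.channel = channel;
        this.sendMuteRequest = sendMuteRequest;
        // The light tracks the mixer's reported state, not our requests.
        channel.MuteChanged += value => LightOn = !value;
    }

    // Pressing the button only *requests* a change; it doesn't touch the light.
    public void OnButtonPressed() => sendMuteRequest(!channel.Muted);
}
```

A stereo pair “just works” under this scheme for free: the mixer notifies about both channels, and both bindings react, without the hardware code knowing anything about pairing.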

I was slightly concerned to start with that this aspect of the design would make it unresponsive when turning a knob: several MIDI events are generated, and if the latency between “send request to mixer” and “mixer notifies apps of change” was longer than the gap between the MIDI events, that would cause problems. Fortunately, that doesn’t seem to be the case – the mixer responds very quickly, before the follow-up MIDI requests from the X-Touch for continued knob turning are sent.

Show me the code!

All the code for this is in my GitHub DemoCode repo, in the XTouchMini directory.

Unless you happen to have an X-Touch Mini, it probably won’t be much use to you… but you may want to have a look at it anyway. I don’t in any way promise that it’s rock-solid, or particularly elegant… but it’s a reasonable start, I think.

That’s all for now… but I’m having so much fun with hardware integration projects that I wouldn’t be surprised to find I’m writing more posts like this over the summer.

OSC mixer control in C#

In some senses, this is a follow on from my post on VISCA camera control in C#. It’s about another piece of hardware I’ve bought for my local church, and which I want to control via software. This time, it’s an audio mixer.

Audio mixers: from hardware controls to software controls

The audio mixer we’ve got in the church building at the moment is a Mackie CFX 12. We’ve had it for a while, and it does the job really well. I have no complaints about its capabilities – but it’s really intimidating for non-techie folks, with about 150 buttons/knobs/faders, most of which never need to be touched (and indeed shouldn’t be touched).

I would like to get to a situation where the church stewards can use something incredibly simple that reflects the semantic change they want (“we’re singing a hymn”, “someone is reading a Bible passage”, “the preacher is starting the sermon” etc) and takes care of adjusting what’s being projected onto the screen, what’s happening with the sound, what the camera is pointing at, and what’s being transmitted via Zoom.

I can’t do that with the Mackie CFX 12 – I can’t control it via software.

Enter the Behringer XR16 – a digital audio mixer. (There are plenty of other options available. This had good reviews, and at least signs of documentation.) Physically, this is just a bunch of inputs and outputs. The only controls on it are a headphone volume knob, and the power switch. Everything else is done via software. The X-Air application can control everything from a desktop, iOS or Android device, which is a good start… but that’s still much too complicated. (Indeed, I find it rather intimidating myself.)

Open Sound Control

Fortunately, the XR16 (along with its siblings, the XR12 and XR18, and the product it was derived from, the X32) implement the Open Sound Control protocol, or OSC. They implement this over UDP, and once you’ve found some documentation, it’s reasonably straightforward. Hat tip at this point to Patrick-Gilles Maillot for not only producing a mass of documentation and code for the X32, but also responding to an email asking whether he had any documentation for the X-Air series (XR-12/16/18)… the document he sent me was invaluable. (Behringer themselves responded to a tech support ticket with a brief but useful document too, which was encouraging.)

OSC consists of packets, each of which has an address such as “/ch/01/mix/on” (the address for muting or unmuting the first input channel) and potentially parameters. For example, to find out whether channel 1 is currently muted, you send a packet consisting of just the address mentioned before. The mixer will respond with a packet with the same address, and a parameter value of 0 if the channel is muted, or 1 if it’s not. If you want to change the value, you send a packet with the parameter. (This is a little like the Roland MIDI protocol for V-Drums – the same command is used to report state as to change state.)

You can also send a packet with an address of “/xremote” to request that for the next 10 seconds, the mixer sends any data changes (e.g. made by other applications, or even by the application that sent the /xremote request). Subscribing to volume meters is slightly trickier – there are indexed meter addresses (“/meters/0”, “/meters/1” etc) which mean different things on different devices, and each response has a blob of data with multiple values in it. (This is for efficiency: there are many, many meters to monitor, and you wouldn’t want each of them sending a separate packet at 50ms intervals.)
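Because the /xremote subscription expires after 10 seconds, the client has to renew it periodically. A minimal keepalive sketch, assuming a UdpClient already connected to the mixer’s OSC port (as far as I can tell the XR series listens on UDP port 10024, but treat that as an assumption to verify against the documentation):

```csharp
using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public static class XRemoteKeepalive
{
    // "/xremote" is 8 characters; with the required null terminator the
    // address pads out to 12 bytes. No arguments are sent - the mixer
    // appears happy with the bare padded address.
    public static byte[] BuildPacket()
    {
        byte[] packet = new byte[12];
        Encoding.ASCII.GetBytes("/xremote").CopyTo(packet, 0);
        return packet;
    }

    // Renew the subscription comfortably inside the 10-second window.
    // Assumes 'client' is connected to the mixer; cancellation surfaces
    // as an OperationCanceledException from Task.Delay.
    public static async Task RunAsync(UdpClient client, CancellationToken token)
    {
        byte[] packet = BuildPacket();
        while (!token.IsCancellationRequested)
        {
            await client.SendAsync(packet, packet.Length);
            await Task.Delay(TimeSpan.FromSeconds(8), token);
        }
    }
}
```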

OSC in .NET

The OscCore .NET package provided everything I needed in terms of parsing and formatting OSC packets, so it didn’t take too long to write a prototype experimentation app in WPF.

The screenshot below shows effectively two halves of the UI: one for sending OSC packets manually and logging any packets received, and the other for putting together a crude user interface for more reasonable control. This shows just five inputs on the top, then six aux (mono) outputs and the main stereo output on the bottom.

This is the sort of thing a church steward would need, although the “per aux output” volume control is probably unnecessary – along with the VU meters. I still need to work out exactly what the final application will need (bearing in mind that I’m hoping tweaks will be rare – most of the time the “main” control aspect of the app will do everything), but it’s easier to come up with designs when there’s a working prototype.

OSC mixer app

One interesting aspect of this architecturally is that when a slider is changed in the app, the code currently just sends the command to change the value to the mixer. It doesn’t update the in-memory value… it waits for the mixer to send back a “this value has changed” packet, and that updates the in-memory value (which then updates the position of the slider on the screen). That obviously introduces a bit of lag – but the network and mixer latency is small enough that it isn’t actually noticeable. I’m still not entirely sure it’s the right decision, but it does give me more confidence that the change in value has actually made it to the mixer.

Conclusion

There’s definitely more work to do in terms of design – I’d quite like to move all the Mixer and Channel model code into the “core” library, and I’ll probably do that before creating any “production” applications… but for now, it’s at least good enough to put on GitHub. So it’s available in my DemoCode repo. It’s probably no use at all if you don’t have an XR12/XR16/XR18 (although you could probably tweak it pretty easily for an X32).

But arguably the point of this post isn’t to reach the one or two people who might find the code useful – it’s to try to get across the joy of playing with a hobby project. So if you’ve got a fun project that you haven’t made time for recently, why not dust it off and see what you want to do with it next?