Tracking down a performance hit

I’ve been following the progress of .NET Core with a lot of interest, and trying to make the Noda Time master branch keep up with it. The aim is that when Noda Time 2.0 eventually ships (apologies for the delays…) it will be compatible with .NET Core from the start. (I’d expected to be able to support netstandard1.0, but that appears to have too much missing from it. It looks like netstandard1.3 will be the actual target.)

I’ve been particularly looking forward to being able to run the Noda Time benchmarks (now using BenchmarkDotNet) to compare .NET Core on Linux with the same code on Windows. In order to make that a fair comparison, I now have two Intel NUCs, both sporting an i5-5250U and 8GB of memory.

As it happens, I haven’t got as far as running the benchmarks under .NET Core – but I am now able to run all the unit tests on both Linux and Windows, using both the net451 TFM and netcoreapp1.0.

When I did that recently, I was pretty shocked to see that (depending on which tests I ran) the tests were 6-10 times slower on Linux than on Windows, using netcoreapp1.0 in both cases. This post is a brief log of what I did to track down the problem.

Step 1: Check that there’s really a problem

Thought: Is this actually just a matter of not running the tests in a release configuration, or something similar?

Verification: I ran the tests several times, specifying -c Release on the command line to use the release build of both NodaTime.Test.dll and NodaTime.dll. Running under a debugger definitely wasn’t an issue, as this was all just done from the shell.

Additionally, I ran the tests in two ways – firstly, running the whole test suite, and secondly running with --where=cat!=Slow to avoid the few tests I’ve got which are known to be really pretty slow. They’re typically tests which compare the answers the BCL gives with the answers Noda Time gives, across the whole of history for a particular calendar system or time zone. I’m pleased to report that the bottleneck in these tests is almost always the BCL, but that doesn’t help to speed them up. If only the “slow” tests had been much slower on Linux, that might have pointed to the problems being in BCL calendar or time zone code.

The ratios vary, but there was enough of a problem under both circumstances for it to be worth looking further.

Step 2: Find a problematic test

I didn’t have very strong expectations one way or another about whether this would come down to some general problem in the JIT on Linux, or whether there might be one piece of code causing problems in some tests but not others. Knowing that there are significant differences in handling of some culture and time zone code between the Linux and Windows implementations, I wanted to find a test which used the BCL as little as possible – but which was also slow enough for the differences in timing to be pronounced and not easily explicable by the problems of measuring small amounts of time.

Fortunately, NUnit produces a TestResult.xml file which is easy to parse with LINQ to XML, so I could easily transform the results from Windows and Linux into a list of tests, ordered by duration (descending), and spot the right kind of test.
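
The query is only a few lines. Something like this does the job (a sketch rather than the exact code I used, assuming the NUnit 3 result format, where each test-case element carries fullname and duration attributes):

using System;
using System.Linq;
using System.Xml.Linq;

class SlowTestFinder
{
    static void Main()
    {
        // Load the NUnit results and order the test cases by duration, descending.
        var doc = XDocument.Load("TestResult.xml");
        var testsByDuration = doc.Descendants("test-case")
            .Select(tc => new
            {
                Name = (string) tc.Attribute("fullname"),
                Duration = (double) tc.Attribute("duration")
            })
            .OrderByDescending(t => t.Duration);
        foreach (var test in testsByDuration.Take(20))
        {
            Console.WriteLine($"{test.Duration:0.000}s {test.Name}");
        }
    }
}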

I found my answer in UmAlQuraYearMonthDayCalculatorTest.GetYearMonthDay_DaysSinceEpoch, which effectively tests the Um Al Qura calendar for self-consistency, by iterating over every day in the supported time period and checking that we can convert from “days since Unix epoch” to the expected “year, month, day”. In particular, this test doesn’t rely on the Windows implementation of the calendar, nor does it use any time zones, cultures or anything similar. It’s nicely self-contained.

This test took 2051ms on Linux and 295ms on Windows. It’s possible that those figures were from a debug build, but I repeated the tests using a release build and confirmed that the difference was still similar.

Step 3: Find the bottleneck

At this point, my aim was to try to remove bits of the test at a time, until the difference went away. I expected to find something quite obscure causing the difference – something like different CPU cache behaviour. I knew that the next step would be to isolate the problem to a small piece of code, but I expected that it would involve a reasonable chunk of Noda Time – at least a few types.

I was really lucky here – the first and most obvious call to remove made a big difference: the equality assertion. Assertions are usually the first thing to remove in tests, because everything else typically builds something that you use in the assertions… if you’re making a call without either using the result later or asserting something about the result, presumably you’re only interested in side effects.

As soon as I removed the call to Assert.AreEqual(expected, actual), the execution time dropped massively on Linux, but hardly moved on Windows: they were effectively on a par.

I wondered whether the problem was with the fact that I was asserting equality between custom structs, and so tried replacing the real assertions with assertions of equality of strings, then of integers. No significant difference – they all showed the same discrepancy between Windows and Linux.

Step 4: Remove Noda Time

Once I’d identified the assertions as the cause of the problem, it was trivial to start a new test project with no dependency on Noda Time, consisting of a test like this:

[Test]
public void Foo()
{
    for (int i = 0; i < 1000000; i++)
    {
        var x = 10;
        var y = 10;
        Assert.AreEqual(x, y);
    }
}

This still demonstrated the problem consistently, and allowed simpler experimentation with different assertions.

Step 5: Dig into NUnit

For once in my life, I was glad that a lot of implementation details of a framework were exposed publicly. I was able to try lots of different “bits” of asserting equality, in order to pin down the problem. Things I tried:

  • Assert.AreEqual(x, y): slow
  • Assert.That(x, Is.EqualTo(y)): slow
  • Constructing an NUnitEqualityComparer: fast
  • Calling NUnitEqualityComparer.AreEqual: fast. (Here the construction occurred before the loop, and the comparisons were in the loop.)
  • Calling Is.EqualTo(y): slow

The last two bullets were surprising. I’d been tipped off that NUnitEqualityComparer uses reflection, which could easily differ in performance between Windows and Linux… but checking for equality seemed to be fast, and just constructing the constraint was slow. In poking around the NUnit source code (thank goodness for Open Source!) it’s obvious why Assert.AreEqual(x, y) and Assert.That(y, Is.EqualTo(x)) behave the same way – the former just calls the latter.

So, why is Is.EqualTo(y) slow (on Linux)? The method itself is simple – it just creates an instance of EqualConstraint. The EqualConstraint constructor body doesn’t do much… so I proved that it’s not EqualConstraint causing the problem by deriving my own constraint with a no-op implementation of ApplyTo… sure enough, just constructing that is slow.

That leaves the constructor of the Constraint abstract base class:

protected Constraint(params object[] args)
{
    Arguments = args;

    DisplayName = this.GetType().Name;
    if (DisplayName.EndsWith("`1") || DisplayName.EndsWith("`2"))
        DisplayName = DisplayName.Substring(0, DisplayName.Length - 2);
    if (DisplayName.EndsWith("Constraint"))
        DisplayName = DisplayName.Substring(0, DisplayName.Length - 10);
}

That looks innocuous enough… but maybe calling GetType().Name is expensive on Linux. So test that… nope, it’s fast.

At this point I’m beginning to wonder whether we’ll ever get to the bottom of it, but let’s just try…

private const int Iterations = 1000000;

[Test]
public void EndsWith()
{
    string text = "abcdefg";
    for (int i = 0; i < Iterations; i++)
    {
        text.EndsWith("123");
    }
}

… and sure enough, it’s fast on Windows and slow on Linux. Wow. Looks like we have a culprit.

Step 6: Remove NUnit

At this point, it’s relatively plain sailing. We can reproduce the issue in a simple console app. I won’t list the code here, but it’s in the GitHub issue. It just times calling EndsWith once (to get it JIT compiled) and then a million times. Is it the most rigorous benchmark in the world? Absolutely not… but when the difference is between 5.3s on Linux and 0.16s on Windows, on the same hardware, I’m not worried about inaccuracy of a few milliseconds here or there.
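
In outline, it looks something like this – a sketch along the same lines as the app in the issue, rather than the code itself:

using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        string text = "abcdefg";
        text.EndsWith("123"); // Call once to get it JIT compiled

        // Then time a million calls.
        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < 1000000; i++)
        {
            text.EndsWith("123");
        }
        stopwatch.Stop();
        Console.WriteLine(stopwatch.Elapsed);
    }
}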

Step 7: File a CoreCLR issue

So, I filed a bug on GitHub. I’d like to think it was a pretty good bug report:

  • Details of the environment
  • Short but complete console app ready to copy/paste/compile/run
  • Results

Exactly the kind of thing I’d have put into a Stack Overflow question – when I ask for a minimal, complete example on Stack Overflow, this is what I mean.

Anyway, about 20 minutes later (!!!), Stephen Toub had basically worked out the nub of it: it’s a culture issue. Initially, he couldn’t reproduce it – he saw the same results on Windows and Linux. But after changing his culture to en-GB, he saw what I was seeing. I then confirmed the opposite – when I ran the code having set LANG=en-US, the problem went away for me. Stephen pulled Matt Ellis in, who gave more details as to what was going wrong behind the scenes.

Step 8: File an NUnit issue

Matt Ellis suggested filing an issue against NUnit, as there’s no reason this code should be culture-sensitive. By specifying the string comparison as Ordinal, we can go through an even faster path than using the US culture. So

if (DisplayName.EndsWith("Constraint"))

becomes

if (DisplayName.EndsWith("Constraint", StringComparison.Ordinal))

… and the equivalent for the other two calls.

I pointed out in the issue that it was also a little bit odd that this was being worked out in every Constraint constructor call, when of course it’s going to give the same result for every instance of the same type. When “every Constraint constructor call” becomes “every assertion in an entire test run”, it’s a pretty performance-critical piece of code. While unit tests aren’t important in terms of performance in the same way that production code is, anything which adds friction is bad news.
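
For example, the name computation could be done once per type and cached – just a sketch of the sort of thing I mean, not NUnit’s actual code:

using System;
using System.Collections.Concurrent;

public abstract class Constraint
{
    // Compute the display name once per constraint type, rather than
    // repeating the string manipulation on every construction.
    private static readonly ConcurrentDictionary<Type, string> displayNames =
        new ConcurrentDictionary<Type, string>();

    public object[] Arguments { get; }
    public string DisplayName { get; }

    protected Constraint(params object[] args)
    {
        Arguments = args;
        DisplayName = displayNames.GetOrAdd(GetType(), ComputeDisplayName);
    }

    private static string ComputeDisplayName(Type type)
    {
        string name = type.Name;
        if (name.EndsWith("`1", StringComparison.Ordinal) ||
            name.EndsWith("`2", StringComparison.Ordinal))
        {
            name = name.Substring(0, name.Length - 2);
        }
        if (name.EndsWith("Constraint", StringComparison.Ordinal))
        {
            name = name.Substring(0, name.Length - 10);
        }
        return name;
    }
}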

Hopefully the NUnit team will apply the simple improvement for the next release, and then the CoreCLR team can attack the tougher underlying problem over time.

Step 9: Blog about it

Open up StackEdit, start typing: “I’ve been following the progress”… :)

Conclusion

None of the steps I’ve listed here is particularly tricky. Diagnosing problems is often more a matter of determination and being unwilling to admit defeat than cleverness. (I’m not denying that there’s a certain art to being able to find the right seam to split the problem in two, admittedly.)

I hope this has been useful as a “start to finish” example of what a diagnostic session can look and feel like. It wasn’t one physical session, of course – I found bits of time to investigate it over the course of a day or so – but it would have been the same steps either way.

Smug, satisfied smile…

Versioning conundrum for Noda Time – help requested

Obviously I’d normally ask developer questions on Stack Overflow but in this case, it feels like the answers may be at least somewhat opinion-based. If it turns out that it’s sufficiently straightforward that a Stack Overflow question and answer would be useful, I can always repost it there later.

The Facts

Noda Time 1.x exists “in production”, and the latest version is 1.3.1. This targets .NET 3.5 Client Profile, .NET 4.0, and PCL Profile 328 (in a directory of lib\portable-net4+sl5+netcore45+wpa81+wp8+MonoAndroid1+MonoTouch1+XamariniOS1).

Noda Time currently includes the IANA time zone data (“TZDB”) – each released version of Noda Time contains the TZDB version that was “most recent” at the time that the Noda Time release was built. This gets out of date quite quickly, as there are multiple releases of TZDB every year. Those releases are named 2016a, 2016b etc. Noda Time also provides the ability to read .nzd files (Noda Zone Data – a custom format) and every time there’s a new release of TZDB, I build a .nzd file and upload it to nodatime.org, updating http://nodatime.org/tzdb/latest.txt to point to the latest version.
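
For example, a client can already pick up the latest data at execution time with something like this (a sketch with error handling and caching omitted, using the real TzdbDateTimeZoneSource.FromStream method):

using System;
using System.IO;
using System.Net;
using NodaTime.TimeZones;

class LatestTzdbFetcher
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // latest.txt contains the URL of the most recent .nzd file.
            string nzdUrl = client.DownloadString("http://nodatime.org/tzdb/latest.txt").Trim();
            using (var stream = new MemoryStream(client.DownloadData(nzdUrl)))
            {
                var source = TzdbDateTimeZoneSource.FromStream(stream);
                Console.WriteLine($"Loaded TZDB version {source.TzdbVersion}");
            }
        }
    }
}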

Noda Time 2.0 has not been released yet. When I do release it, I expect to target .NET 4.5 and netstandard1.0.

Each Noda Time 1.x release has an AssemblyVersion just based on major/minor, i.e. 1.0, 1.1, 1.2 etc. Based on this blog post, this may have been a mistake – it should quite possibly have been 1.0 for all versions. Obviously I can’t fix that now, but I can make the 2.x releases use 2.0 everywhere.
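
In other words, the 2.x assembly attributes would look something like this, with made-up numbers for a hypothetical 2.1.3 release:

using System.Reflection;

[assembly: AssemblyVersion("2.0")]           // Fixed for the whole 2.x line
[assembly: AssemblyFileVersion("2.1.3.0")]   // The actual release
[assembly: AssemblyInformationalVersion("2.1.3")] // Human-readable version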

When 2.0 is “pretty much ready” we’re going to cut a 1.4 release which deprecates things that are removed in 2.0 and provides the new approaches as far as possible. For example, the IClock.Now property from 1.x is removed in 2.0, and replaced by IClock.GetCurrentInstant(). We’ll deprecate the Now property and introduce a GetCurrentInstant() extension method which delegates to it. This shouldn’t break any 1.x users, but should allow them to move over to the new API as far as possible before upgrading to 2.0. The intention is that users wouldn’t stay on 1.4 for very long. (Obviously they could do so, but there’s not a lot of benefit. 1.4 won’t have new features – it’s really just a transition version.)

So far, that’s just the way of the world. Now I want to make it easier for users to stay up-to-date with TZDB – including if nodatime.org goes down. (That’s considerably more likely than nuget.org going down, for example.)

The plan is to introduce a new nearly-data-only assembly, packaged as NodaTime.Tzdb. The aim is to allow users to update their data dependency at build time, in a controlled fashion. If you only want to specify an exact version to depend on, you can do so. If you want to pick up the latest version every time you build, that should be possible too.

The tricky bits come in terms of the versioning.

Some options

Firstly, there’s the versioning scheme for the package itself, ignoring everything else. I plan to use something like this:

  • 2016a => 1.2016.1
  • 2016b => 1.2016.2
  • 2016c => 1.2016.3
  • 2017a => 1.2017.1

This should make it reasonably easy to tell the TZDB version just from the package version.
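
To make the mapping concrete, here it is as a hypothetical helper – the trailing letter just becomes the patch number (a=1, b=2 and so on):

using System;

class TzdbVersionScheme
{
    static string ToPackageVersion(string tzdbRelease)
    {
        // "2016c" => year 2016, release letter 'c' => patch 3.
        int year = int.Parse(tzdbRelease.Substring(0, 4));
        int patch = tzdbRelease[4] - 'a' + 1;
        return $"1.{year}.{patch}";
    }

    static void Main()
    {
        Console.WriteLine(ToPackageVersion("2016c")); // 1.2016.3
    }
}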

However, I’m considering a few options within this idea:

  • I could create a single package per TZDB release, targeting .NET 3.5 client profile, .NET 4.0, the Profile 328 PCL, .NET 4.5, and .NET Standard 1.0. The first four of these could depend on Noda Time 1.1, and the last one would have to depend on Noda Time 2.0.
  • I could do the above, but depend on 1.3.1 instead of 1.1.
  • I could create one package with two versions per TZDB release – a 1.x depending on Noda Time 1.1, and a 2.x depending on Noda Time 2.0. For example, when TZDB 2016d is released, I could create 1.2016.4 and 2.2016.4.
  • I could create one package version depending on 1.1, one depending on 1.2, one depending on 1.3, one depending on 1.4 (when that exists) and one depending on 2.0.
  • I could create two separate packages, i.e. include the Noda Time major version number in the package name. I don’t like this idea, but it’s on the table.

Some concerns and questions

There are various aspects to this which cause me a few worries. I’m not sure how well I can really structure or segregate those, so I’ll just list them.

  • Can a non-prerelease package depend on a prerelease package for some frameworks? If not, that possibly blows the “single version” idea out of the water, as I can’t depend on NodaTime v2.0 yet – it’s not out.
  • Even if that’s feasible, is it sane to depend on different major versions of the NodaTime package from within a single version of the NodaTime.Tzdb package, or is that going to cause massive confusion?
  • Should I depend on NodaTime v1.1 or v1.3.1? They have different AssemblyVersion numbers, which I believe means an assembly binding redirect will be required if I depend on 1.1 but users depend on 1.3.1. To be clear, I don’t expect many users to still be on versions older than 1.3.1.
  • Likewise, is it going to cause issues for .NET 4.5 users who use NodaTime 2.0 (eventually) if they depend on a version of NodaTime.Tzdb that depends on NodaTime 1.3.1? Again, presumably assembly binding redirects are needed.
  • If I go with the “two-version” scheme (i.e. 1.2016.4 and 2.2016.4 etc) how careful would NodaTime 1.3.1 users have to be? I wouldn’t want them to accidentally get upgraded to NodaTime 2.0 when that’s released, by accidentally taking the 2.x line of NodaTime.Tzdb.
  • Does the dotnet CLI support the NuGet allowedVersions feature at all? I haven’t found any support for it in DNX, but really it’s vital for this scheme to work at all – basically I’d expect a NodaTime 1.3.1 user to specify an allowed version range for NodaTime.Tzdb of [1,2). (See the packages.config sketch after this list.)
  • Is my scheme of 1.2016.4 (etc) sensible? It’s somewhat abusing major/minor/patch, in that there’s no real difference in meaning between a minor version bump (“it’s the new year”) and a patch bump (“there’s been another release in the same year”). Neither kind of change will be breaking (unless you depend on specific time zones behaving in specific ways, of course), and it’s handy to be able to give a simple mapping between TZDB version and package version, but there may be consequences I’m unaware of.
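
For reference, this is the existing packages.config syntax I have in mind for that allowed-versions point – whether the dotnet CLI will honour it is exactly what I don’t know:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- Accept updates within the 1.x line of NodaTime.Tzdb, but never 2.x. -->
  <package id="NodaTime.Tzdb" version="1.2016.4" allowedVersions="[1,2)" />
</packages>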

Please feel free to ask clarifying questions in the comments. I’ll look forward to getting some answers :)

Ultimate Man Cave: voice automation for my shed

Source code for everything is on GitHub. It probably won’t be useful to you unless you’ve got very similar hardware to mine, but you may want to just have a look.

Background

Near the end of 2015, we had a new shed built at the back of our garden. The term “shed” is downplaying it somewhat – it’s a garden building, about 7m x 2.5m, with heating, lighting and an ethernet connection from the house.

It’s divided in half, with one half being a normal shed (lawnmower, wheelbarrow, tools etc) and one half being my office for working from home. Both sides are also used for general storage – we have a lot of stuff to sort out from a loft conversion a few years ago.

The shed

It only took about three days of using the shed for me to work out that I wanted remote-controlled lighting. If I’m going out there at 6.30am in winter, it’s pretty dark – so it’s really useful to be able to turn the lights on from the house first, so I can negotiate the muddier bits of the garden, see the keyhole to unlock it etc.

After a little research, this turned out to be pretty easy: MiLight is simple and relatively cheap. The equivalent of $100 got me four lights and a wifi controller box. It only took me a few minutes to configure it to talk to my wifi and install the Light Controller Android app, and then I could easily turn my lights on and off from my phone while still in the house, before stepping outside. Yay. First steps to home automation.

I won’t go into all the details of the rest of the tech in my shed, but the parts that matter for the purposes of this post are the Sonos unit, the Onkyo amplifier/receiver, an Intel NUC, and a Raspberry Pi 3.

Command-line automation

Sometimes, I’m too lazy to reach for my phone when I want to turn on the lights. Very much a first world problem, I realize. And not so much a problem, as an opportunity to see what’s feasible.

So, I looked around the net for code related to MiLight / EasyBulb, and found (amongst other things) Andy Scott’s MiLight.NET library on GitHub. A small amount of tweaking, and I had a short console app allowing me to run “lights on” or “lights off” which did the obvious thing. Copying this onto an Intel NUC allowed me to turn the lights off via remote desktop when Holly messaged me at the (Google) office to tell me that I’d left them on. It also meant I could schedule a task to turn the lights off at 10.30pm automatically, in case I forgot when I came in.

For a few months, that kept me satisfied… but it was never going to be the final solution.

The next step was to look at other aspects I could automate, and both the amplifier/receiver and the Sonos unit were obvious targets. I knew both had network support, as I already had apps for both on my phone, but I had no idea what the protocols involved were. The amplifier lives in an A/V cabinet, and I normally keep the doors of that shut – so just turning it on, setting the source, and changing the volume either involved getting the phone out or opening the cabinet. Again, could do better.

Sonos supports UPnP/SOAP for control. An old blog post got me started, and then I used Intel Device Spy to work out what else I could easily do. (I don’t have very demanding requirements – just play/pause, set volume, next/previous track is fine.)
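
To give a flavour of what’s involved, pausing the Sonos is a single SOAP request to its AVTransport service. This is a hand-written sketch rather than my actual code, with a made-up IP address:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SonosSketch
{
    static async Task Main()
    {
        // Sonos units listen for UPnP control requests on port 1400.
        var uri = "http://192.168.1.50:1400/MediaRenderer/AVTransport/Control";
        var body =
            @"<s:Envelope xmlns:s=""http://schemas.xmlsoap.org/soap/envelope/""
                          s:encodingStyle=""http://schemas.xmlsoap.org/soap/encoding/"">
                <s:Body>
                  <u:Pause xmlns:u=""urn:schemas-upnp-org:service:AVTransport:1"">
                    <InstanceID>0</InstanceID>
                  </u:Pause>
                </s:Body>
              </s:Envelope>";
        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Post, uri)
            {
                Content = new StringContent(body, Encoding.UTF8, "text/xml")
            };
            // The SOAPACTION header tells the device which operation to perform.
            request.Headers.Add("SOAPACTION",
                "\"urn:schemas-upnp-org:service:AVTransport:1#Pause\"");
            var response = await client.SendAsync(request);
            Console.WriteLine(response.StatusCode);
        }
    }
}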

It turns out that Onkyo has its own protocol called ISCP (Integra Serial Control Protocol) which has a network binding called eISCP. There’s remarkably good documentation in the form of an Excel spreadsheet, providing more information than I’m ever likely to need.
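
The framing is simple enough to sketch here – treat the details as approximate rather than gospel. Each message such as PWR01 (“power on”) is wrapped in a small binary header and sent over TCP, typically to port 60128:

using System;
using System.Net.Sockets;
using System.Text;

class EiscpSketch
{
    static void Main()
    {
        // "PWR01" = power on; volume, input selection etc work the same way.
        byte[] packet = BuildPacket("PWR01");
        using (var client = new TcpClient("192.168.1.51", 60128))
        {
            client.GetStream().Write(packet, 0, packet.Length);
        }
    }

    static byte[] BuildPacket(string command)
    {
        // Payload is "!1" + command + "\r"; the 16-byte header holds the
        // ASCII magic "ISCP", the header size, the payload size, and a version.
        byte[] data = Encoding.ASCII.GetBytes("!1" + command + "\r");
        byte[] packet = new byte[16 + data.Length];
        Encoding.ASCII.GetBytes("ISCP").CopyTo(packet, 0);
        WriteBigEndian(packet, 4, 16);          // Header size
        WriteBigEndian(packet, 8, data.Length); // Payload size
        packet[12] = 1;                         // Protocol version
        data.CopyTo(packet, 16);
        return packet;
    }

    static void WriteBigEndian(byte[] buffer, int offset, int value)
    {
        buffer[offset] = (byte) (value >> 24);
        buffer[offset + 1] = (byte) (value >> 16);
        buffer[offset + 2] = (byte) (value >> 8);
        buffer[offset + 3] = (byte) value;
    }
}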

Implementing both of these was slightly faffy. The eISCP code didn’t work for some time, then started working – presumably with some minor tweak, but it wasn’t clear to me which of the many tweaks I made actually fixed it. The Sonos code worked fairly soon, but was very inelegant for quite a while.

Initially, this was all driven from the command line. I introduced a very simple sort of discovery, separating out controllers from their commands:

public interface IController
{
    string Name { get; }
    IImmutableList<ICommand> Commands { get; }
}

public interface ICommand
{
    string Name { get; }
    string Description { get; }

    void Execute(params string[] arguments);
}

There’s then a Factory class with a static AllControllers property. (I’m not keen on the naming here, but we’ll come to that later.)

The fact that Execute takes a string array is indicative of its use for a command line application – although looking at it now, I might have made it IEnumerable<string>, given that I’ll always be skipping the first actual argument, which identifies the controller.

Anyway, this allows a very simple command line app which doesn’t know anything about lights, music etc – it just offers you the controllers and commands it finds.

There’s only actually one implementation of IController, called ReflectiveController. You pass it the real controller to wrap, which can be any instance of a type with a description and with public methods which also have descriptions. These descriptions are provided with an attribute. The arguments passed to Execute are then converted to the method parameter types using Convert.ChangeType. Crude but effective.
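
In simplified form, the reflective dispatch looks something like this (a sketch of the idea rather than the code as it stands):

using System;
using System.Linq;
using System.Reflection;

class ReflectiveCommand
{
    private readonly object target;
    private readonly MethodInfo method;

    public ReflectiveCommand(object target, MethodInfo method)
    {
        this.target = target;
        this.method = method;
    }

    public string Name => method.Name;

    public void Execute(params string[] arguments)
    {
        // Convert each string argument to the corresponding parameter type.
        object[] converted = method.GetParameters()
            .Zip(arguments, (parameter, argument) =>
                Convert.ChangeType(argument, parameter.ParameterType))
            .ToArray();
        method.Invoke(target, converted);
    }
}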

With this in place, adding a new command to an existing controller is just a matter of adding a public method. Adding a new controller is just a matter of creating a new class with a description, and adding it to the list of controllers in Factory. It’s all really, really simple.

Deploy to the Pi!

This was the aim all along, of course – I’ve been wanting to try out Windows IoT edition, and put my Raspberry Pi to good use, and try out Windows UAP to get a feeling for it. (In particular, I want to learn about some of the constraints I’ll run into with Noda Time 2.0.) This project was a fantastic excuse to do all three.

I started off by building the application just on my laptop. This is one of the lovely benefits of universal apps – you can get them working in a convenient environment first, then deploy elsewhere when you’re ready.

In fact, the very first version of the app didn’t have any speech recognition – it just had buttons to turn the lights on or off. I checked that this worked on both my laptop and the Raspberry Pi – it was nice to see that Windows IoT still supports a UI over HDMI, and it all worked fine, first time. A few years ago, this would have been absolutely stunning in itself – but I think we’re starting to take portability for granted.

Voice automation

On to the final steps: adding speech recognition.

I had a bit of a false start, as there are multiple approaches to speech recognition in Windows UAP. Initially I tried using Cortana, but never got that to work. Instead, I went with the Windows.Media.SpeechRecognition library, which worked pretty much immediately. Again, my initial attempt was more complicated than it needed to be, using an SRGS grammar file. This worked, but it was fiddly. When I discovered the SpeechRecognitionListConstraint class, it was beautiful… it’s literally just a list of strings, and the speech recognizer raises an event when any of those strings is recognized.

The code required to start the speech recognition is trivial:

private async void RegisterVoiceActivation(object sender, RoutedEventArgs e)
{
    recognizer = new SpeechRecognizer
    {
        Constraints = { new SpeechRecognitionListConstraint(handlers.Keys) }
    };
    recognizer.ContinuousRecognitionSession.ResultGenerated += HandleVoiceCommand;
    recognizer.StateChanged += HandleStateChange;

    SpeechRecognitionCompilationResult compilationResult = await recognizer.CompileConstraintsAsync();

    if (compilationResult.Status == SpeechRecognitionResultStatus.Success)
    {
        await recognizer.ContinuousRecognitionSession.StartAsync();
    }
    else
    {
        await Dispatcher.RunIdleAsync(_ => lastState.Text = $"Compilation failed: {compilationResult.Status}");
    }
}

Given the way we’re compiling the constraints, I’d be reasonably happy not checking the compilation result, but I just never took that code away after using it for SRGS (where it was very much required).

The HandleVoiceCommand method just checks whether the recognition confidence is above a certain threshold (0.6 at the moment, but I may tweak it down a bit), and if so, it consults a dictionary to find a delegate to invoke. It also updates the UI for diagnostic purposes. The dictionary itself is the only code that knows about the shed controllers, relying on C# 6’s using static feature to avoid having Factory. everywhere:

private const string Prefix = "shed ";

private static readonly Dictionary<string, Action> handlers = new Dictionary<string, Action>
{
    { "lights on", Lighting.On },
    { "lights off", Lighting.Off },
    { "music play", Sonos.Play },
    { "music pause", Sonos.Pause },
    { "music mute", () => Sonos.SetVolume(0) },
    { "music quiet", () => Sonos.SetVolume(30) },
    { "music medium", () => Sonos.SetVolume(60) },
    { "music loud", () => Sonos.SetVolume(90) },
    { "music next", Sonos.Next },
    { "music previous", Sonos.Previous },
    { "music restart", Sonos.Restart },
    { "amplifier on", Amplifier.On },
    { "amplifier off", Amplifier.Off },
    { "amplifier mute", () => Amplifier.SetVolume(0) },
    { "amplifier quiet", () => Amplifier.SetVolume(30) },
    { "amplifier medium", () => Amplifier.SetVolume(50) },
    { "amplifier loud", () => Amplifier.SetVolume(60) },
    { "amplifier source pie", () => Amplifier.Source("pi") },
    { "amplifier source sonos", () => Amplifier.Source("sonos") },
    { "amplifier source playstation", () => Amplifier.Source("ps4") }
}.WithKeyPrefix(Prefix);

Here, WithKeyPrefix is just a small extension method to create a new dictionary with a specified prefix to each key.
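
It really is as small as it sounds – the whole thing amounts to something like:

using System.Collections.Generic;
using System.Linq;

static class DictionaryExtensions
{
    // Returns a new dictionary in which every key gains the given prefix.
    public static Dictionary<string, TValue> WithKeyPrefix<TValue>(
        this Dictionary<string, TValue> dictionary, string prefix) =>
        dictionary.ToDictionary(pair => prefix + pair.Key, pair => pair.Value);
}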

Just like with the command line version, adding a command is now simply a matter of adding a single entry in this dictionary.

Deploy that on my Raspberry Pi, and as if by magic, I can say “shed lights on” and the lights come on, etc. Admittedly after saying “shed music play” it can be quite tricky to launch further actions, as the music interferes with the speech recognition for obvious reasons.

Simple code for the win

I’d like to take a few moments to talk about the code. At this point, you may want to have GitHub open in another tab to follow along.

There are lots of things about the code which I’d deem pretty unacceptable at work:

  • It uses the service locator pattern instead of dependency injection. I’m not a fan of this in general.
  • I really hate the name Factory – but I haven’t found anything significantly better, yet. (ControllerProvider? I’d call it just Controllers, but that’s the final part of the namespace name…)
  • There are no tests. At all. Not even a test project.
  • There are only a few comments.
  • The IP addresses are hard-coded into Factory. No config files, no discovery, not even names – just IP addresses.
  • There’s no abstraction beyond IController and ICommand. I could potentially have an IVolumeController, IMusicController, ISourceController etc.

None of these bother me, even though the code is “in production” and I’m expecting to use it for a long time. It’s never going to grow large enough for the service locator pattern to be a problem. With so few types involved, a few non-ideal names isn’t going to cause much of a problem. The only tests that matter are the ones involving me saying “shed amplifier on” and the amplifier either turning on or not… there’s very little code here that’s really testable anyway. My device IP addresses are all fixed by my router, so I’d only have to change them if I change that – and I’d still end up changing it in just one place. Extra abstraction wouldn’t actually give me any benefits at the moment.

So yes, basically I’m happy with the code now. It provides me value, and it’s easy to maintain. In particular, adding extra controllers or commands is trivial. I guess what I’m saying is that this is a reminder that not all code is “enterprise software” and even “best practice” rules such as writing no code without tests have their limitations. Context is king.

What next?

My Raspberry Pi 3 has a small touchscreen display on it, which uses the Raspberry Pi SPI for communication. I haven’t yet managed to get this working, but obviously that would be a lovely next step. It’s a bit of a pain changing from DisplayPort to HDMI to see the UI and check what phrases have been recognized, for example. The display part will definitely be useful – I might use the touch part just for a very few key commands, such as “stop the music, you can’t hear me any more!”

The device I’d most like to control next is the heater. I keep leaving the heating on accidentally, then having to put my shoes on again to go out and just turn the heating off. If the heater plugged in via a regular socket, it would be easy enough to sort out – but unfortunately the power cable goes straight into a box in the wall. I may try to sort this out at some point, but it’s going to be a pain.

The other thing I’d like to do is add the ability to switch monitor inputs using DDC/CI. That could be tricky in terms of getting access to such a low-level API, and also it requires a permanent “live” connection to the monitor – whereas both my HDMI and DisplayPort connections are switched (by the Onkyo for HDMI, and a KVM for DisplayPort). I’m still thinking about that one. I could potentially have a secondary output from the NUC to a DVI input on the monitor, then make the NUC listen as a server that the Pi could talk to…

Conclusion

Home automation is fun and simple – but it really, really helps to have a project which will actually be useful to you. I’ve had a few Raspberry Pis sitting around for ages waiting to be used. They’ve always been fun to play with, but now there’s a purpose, and that makes a huge difference…

To base() or not to base(), that is the question

Today I’ve been reviewing the ECMA-334 C# specification, and in particular the section about class instance constructors.

I was struck by this piece in a clause about default constructors:

If a class contains no instance constructor declarations, a default instance constructor is automatically provided. That default constructor simply invokes the parameterless constructor of the direct base class.

I believe this to be incorrect, and indeed it is, as shown here (in C# 6 code for brevity, despite this being the C# 5 spec that I’m reviewing; that’s irrelevant in this case):

using System;

class Base
{
    public int Foo { get; }

    public Base(int foo = 5)
    {
        Foo = foo;
    }
}

class Derived : Base
{    
}

class Test
{
    static void Main()
    {
        var d = new Derived();
        Console.WriteLine(d.Foo); // Prints 5
    }    
}

Here the default constructor in Derived clearly doesn’t execute a parameterless constructor in Base because there is no parameterless constructor in Base. Instead, it executes the parameterized constructor, providing the default argument value.

So, I considered whether we could reword the standard to something like:

If a class contains no instance constructor declarations, a default instance constructor is automatically provided. That default constructor simply invokes a constructor of the direct base class as if the default constructor contained a constructor initializer of base().

But is that always the case? It turns out it’s not – at least not in Roslyn. There are more interesting optional parameters we can use than just int foo = 5. Let’s have a look:

using System;
using System.Runtime.CompilerServices;

class Base
{
    public string Origin { get; }

    public Base([CallerMemberName] string name = "Unspecified",
                [CallerFilePath] string source = "Unspecified",                
                [CallerLineNumber] int line = -1)
    {
        Origin = $"{name} - {source}:{line}";
    }
}

class Derived1 : Base {}
class Derived2 : Base
{
    public Derived2() {}
}
class Derived3 : Base
{
    public Derived3() : base() {}
}

class Test
{
    static void Main()
    {
        Console.WriteLine(new Derived1().Origin);
        Console.WriteLine(new Derived2().Origin);
        Console.WriteLine(new Derived3().Origin);
    }    
}

The result is:

Unspecified - Unspecified:-1
Unspecified - Unspecified:-1
.ctor - c:\Users\Jon\Test\Test.cs:23

When base() is explicitly specified, that source location is treated as the “caller” for caller member info attributes. When it’s implicit (including when there’s a default constructor), no source location is made available to the Base constructor.

This is somewhat compiler-specific – and I can imagine different results where the default constructor could specify a name but not source file or line number, and the declared constructor with an implicit call could specify the name and source file but no line number.

I would never suggest using this little tidbit of Roslyn implementation trivia, but it’s fun nonetheless…

“Sideways overriding” with partial methods

First note: this blog post is very much tongue in cheek. I’m not actually planning on using the idea. But it was too fun not to share.

As anyone following my activity on GitHub may be aware, I’ve been doing quite a lot of work on Protocol Buffers recently – in particular, a mostly-new port for proto3. I’ve recently been looking at JSON support, and thinking about how to implement “overriding” ToString() for a few well-known types. I generate partial classes, so that gives me a hook to provide extra functionality. Indeed, I’m planning on using this to provide conversion methods for Timestamp and Duration, for example. However, you can’t really override anything in partial methods.

Refresher on partial methods

While partial classes were introduced in C# 2, partial methods were introduced in C# 3. The idea is that one source file (usually the generated one) can provide a partial method signature, and another source file (usually the manually-written one) can provide an implementation if it wants to. Any part of the source can call the method, and the call will be removed at compile-time if nothing provides an implementation. The fact that the method may not be there leads to some limitations:

  • Partial methods are implicitly private, but you can’t specify an access modifier explicitly
  • Partial methods are always void – they can’t return any values
  • Partial methods cannot have out parameters

(Interestingly, a partial method implementation can be an async method – but with a return type of void, which is never a nice situation to be in.)

There’s more in the spec, but the last two bullets are the important part.

So, suppose I want to override ToString() in the generated code, but provide a mechanism for that override to be “further overridden” effectively, in the manual code for the same class? How do I get the value from an “extra override”? How do I even detect whether or not it’s there?

Side effects to the rescue!

(Now there’s a phrase you never thought you’d hear from me.)

I mentioned before that if a partial method is called but no implementation is provided, the call is removed. That includes all aspects of the call – including the evaluation of the arguments. So if evaluating the argument has a side-effect… we can spot that side effect.

Next, we have to work out how to get a value back from a method. We can’t use the return value, and we can’t use an out parameter. There are two options here: we could either pass a wrapper (e.g. an array with a single element) and allow the “extra override” to populate the wrapper… or we can use a ref parameter. The latter feels ever-so-slightly cleaner to me.

And so the ugly hack is born. The code generator can always generate code like this:

partial void ToStringOverride(bool ignored, ref string value);

public override string ToString()
{
    string value = null;
    bool overridden = false;
    ToStringOverride(overridden = true, ref value);
    return overridden ? value : "Original";
}

For any partial class where the ToStringOverride method isn’t implemented, overridden will still be false, so we’ll fall back to returning "Original". (I would hope that any decent JIT would remove the overridden and value local variables entirely at that point.) Otherwise, we’ll return whatever the method has changed value to.

Here’s a short but complete example:

using System;

// Generated code
partial class UglyHack1
{
    partial void ToStringOverride(bool ignored, ref string value);

    public override string ToString()
    {
        string value = null;
        bool overridden = false;
        ToStringOverride(overridden = true, ref value);
        return overridden ? value : "Original";
    }
}

// Generated code
partial class UglyHack2
{
    partial void ToStringOverride(bool ignored, ref string value);

    public override string ToString()
    {
        string value = null;
        bool overridden = false;
        ToStringOverride(overridden = true, ref value);
        return overridden ? value : "Original";        
    }    
}

// Manual code
partial class UglyHack2
{
    partial void ToStringOverride(bool ignored, ref string value)
    {
        value = "Different!";
    }
}

class Test
{
    static void Main()
    {
        var g1 = new UglyHack1();
        var g2 = new UglyHack2();

        Console.WriteLine(g1);
        Console.WriteLine(g2);
    }
}

Horribly ugly, but it works…

Alternatives?

Obviously this isn’t really pleasant. Some alternatives:

  • Derive from the generated class in order to override ToString again. Doesn’t work with sealed classes, and will only work if clients create instances of the derived class.
  • Introduce a new interface, and allow manual code to implement it on the partial class. The ToString method can then check whether the object implements IMyOtherToString or whatever, and call it appropriately. This introduces another virtual call for no great reason, and exposes the interface to the outside world, which we may not want to do.
  • Don’t override ToString in the generated code at all. Not good if you normally want to override it.
  • Introduce an abstract base class which the generated class derives from. Override ToString() in that base class, possibly calling an abstract member which is then provided in the generated class – but allowing the manual code to override ToString() again.

Conclusion

Ugly hacks are fun. But it’s much better to keep them where they belong: in a blog post, not in production code.

Backwards compatibility is (still) hard

At the moment, I’m spending a fair amount of time thinking about a new version of the C# API and codegen for Protocol Buffers, as well as other APIs for interacting with Google services. While that’s the context for this post, I want to make it very clear that this is still a personal post, and should in no way be taken to be “Google’s opinion” on anything. The underlying issue could apply in many other situations, but it’s easiest to describe a concrete scenario.

Context and current state: the builder pattern

The problem I’ve been trying to address is the relative pain of initializing a protobuf message. Protocol buffer messages are declared in a separate schema file (.proto) and then code is generated. The schema declares fields, each of which has a name, a type and a number associated with it. The generated message types are immutable, with builder classes associated with them. So for example, we might start off with a message like this:

message Person {
  string first_name = 1;
  string last_name = 3;
}

And construct a Person object in C# like this:

var person = new Person.Builder { FirstName = "Jon", LastName = "Skeet" }.Build();
// Now person.FirstName and person.LastName are readonly properties

That’s not awful, but it’s not the cleanest code in the world. We can make it slightly simpler using an implicit conversion from the builder type to the message type:

Person person = new Person.Builder { FirstName = "Jon", LastName = "Skeet" };

It’s still not really clean though. Let’s revisit why the builder pattern is useful:

  • We can specify just the properties we want.
  • By deferring the “build” step until after we’ve specified everything, we get mutability without building a lot of intermediate objects.

If only there were another language construct allowing that…

Optional parameters to the rescue!

If we provided a constructor with an optional parameter for each property, we can specify just what we want. So something like:

public Person(string firstName = null, string lastName = null)
...
var person = new Person(firstName: "Jon", lastName: "Skeet");

Hooray! That looks much nicer:

  • We can use var (if we want to) because there are no implicit conversions to confuse things.
  • We don’t need to mention a builder at all.
  • Every piece of text in the statement is something we want to express, and we only express it once.

That last point is a lovely place to be in terms of API design – while you still need to worry about naming, ordering and how the syntax fits into bigger expressions, you’ve achieved some sense of “as simple as possible, but no simpler”.

So, that’s all great – except for versioning.

Let’s just add a field at the end…

One of the aims of protocol buffers is to support an evolving schema. (The limitations are different for proto2 and proto3, but that’s a slightly different matter.) So what happens if we add a new field to the message?

message Person {
  string first_name = 1;
  string last_name = 3;
  string title = 4; // Mr, Mrs etc
}

Now we end up with the following constructor:

public Person(string firstName = null, string lastName = null, string title = null)

The code still compiles – but if we try to run our old client code against the new version of the library, it will fail – because the method it refers to no longer exists. So we have source compatibility, but not binary compatibility.

Let’s just add a field in the middle…

You may have noticed that I don’t have a field with tag 2 – this is not an accident. Suppose we now add it, for the obvious middle_name field:

message Person {
  string first_name = 1;
  string middle_name = 2;
  string last_name = 3;
  string title = 4; // Mr, Mrs etc
}

Regenerate the code, and we end up with a constructor with 4 parameters:

public Person(
    string firstName = null,
    string middleName = null,
    string lastName = null,
    string title = null)

Just to be clear, this change is entirely fine in protocol buffers – while normally fields are assigned incrementally, it shouldn’t be a breaking change to add a new field “between” existing ones.

Let’s take a look at our client code again:

var person = new Person(firstName: "Jon", lastName: "Skeet");

Yup, that still works – we need to recompile, but we still end up with a Person with the right properties. But that’s not the only code we could have started with. Suppose we’d actually had:

var person = new Person("Jon", "Skeet");

Until this last change that would have been fine – even after we’d added the optional title parameter, the two arguments would still have mapped to firstName and lastName respectively.

Having added the middle_name field, however, the code would still compile with no errors or warnings, but the meaning of the second argument would have changed – it would now map onto the middleName parameter instead of lastName.

Basically, we’d like to stop this code (using positional arguments) from compiling in the first place.

Feature requests and a workaround

The two features we really want from C# here are:

  • Some way of asking the generated code to perform dynamic overload resolution at execution time… not based on dynamic values, but on the basis that the code we’re compiling against may have changed since we compiled. This resolution only needs to be performed once, on first execution (or class load, or whatever) as by the time we’re executing, everything is fixed (the parameter names and types, and the argument names and types). It could be efficient.
  • Some way of forcing any call sites to use named arguments for any optional parameters. (Even though in our case all the parameters are optional, I can easily imagine a case where there are a few required parameters and then the optional ones. Using positional arguments for those required parameters is fine.)

It’s hard (without forking Roslyn :) to implement these feature requests ourselves, but for the second one we can at least have a workaround. Consider the following struct:

public struct DoNotCallThisMethodWithPositionalArguments {}

… and now imagine our generated constructor had been:

public Person(
    DoNotCallThisMethodWithPositionalArguments ignoreMe =
        default(DoNotCallThisMethodWithPositionalArguments),
    string firstName = null,
    string middleName = null,
    string lastName = null,
    string title = null)

Now our constructor call using positional arguments will fail, because there’s no conversion from string to the crazily-named struct. The only “nice” way you can call it is to use named arguments, which is what we wanted. You could call it using positional arguments like this:

var person = new Person(
    new DoNotCallThisMethodWithPositionalArguments(),
    "Jon",
    "Skeet");

(or using default(...) like the constructor declaration) – but at this point the code looks broken, so it’s your own fault if you decide to use it.

The reason for making it a struct rather than a class is to avoid null being convertible to it. Annoyingly, it wouldn’t be hard to make a class that you could never create an actual instance of, but you can’t prevent anyone from creating a value of a struct. Basically, what we really want is a type such that there is no valid expression which is convertible to that type – but aside from static classes (which can’t be used as parameter types) I don’t know of any way of doing that. (I don’t know what would happen if you compiled the Person class using a non-static class as the first parameter, then made that class static and recompiled it. Confusion on the part of the C# compiler, I should think.)

Another option (as mentioned in the comments) is to have a “poor man’s” version of the compiler enforcement via a Roslyn Code Diagnostic – add an attribute to any method call where you want “All optional parameters must be specified with named arguments” to apply, and then make the code diagnostic complain if you disobey that. That diagnostic could ship with the Protocol Buffers NuGet package, which would make for a pretty nice experience. Not quite as good as a language feature though :)
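
The attribute half of that would be trivial – a hypothetical marker like the one below. The analyzer that enforces it is the interesting part, which I’m waving my hands over here:

using System;

// Hypothetical marker: an analyzer would report a diagnostic for any call
// site that passes the annotated member's optional parameters positionally.
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Constructor)]
public sealed class UseNamedArgumentsAttribute : Attribute
{
}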

Conclusion

Default parameters are a pain in terms of compatibility. For internal company code, it’s often reasonable to only care about source compatibility as you can recompile all calling code before deployment – but for open source projects, binary compatibility within the same major version is the norm.

How useful and common do I think these features would be? Probably not common enough to meet the bar – unless there’s encouragement within comments here, in which case I’m happy to file feature requests on GitHub, of course.

As it happens, I’m currently looking at radical changes to the C# implementation of Protocol Buffers, regretfully losing the immutability aspect due to it raising the barrier to entry. It’s not quite a done deal yet, but assuming that goes ahead, all of this will be mostly irrelevant – for Protocol Buffers. There are plenty of other places where code generation could be more robustly backward-compatible through judicious use of optional-but-please-use-named-arguments parameters though…