Category Archives: Diagnostics

Election 2029: The Impossible Exception – Solved

Shortly after writing my previous post, a colleague pinged me to say she’d figured out what was wrong – at least at the most immediate level, i.e. the exception itself. Nothing is wrong with the ordering code – it’s just that the exception message is too easy to misread. She’s absolutely right, and I’m kicking myself.

Here’s the exception message again:

Incorrect ordering for PredictionSets: mic-01 should occur before focaldata-01

and the code creating that:

string currentText = selector(current);
string nextText = selector(next);
if (StringComparer.Ordinal.Compare(currentText, nextText) >= 0)
{
    throw new InvalidOperationException(
        $"Incorrect ordering for {message}: {currentText} should occur before {nextText}");
}

In my previous post, I then claimed:

The exception message implies that at the time of the exception, the value of currentText was "focaldata-01", and the value of nextText was "mic-01".

No, it doesn’t! It implies the exact opposite, that the value of currentText was "mic-01", and the value of nextText was "focaldata-01"… in other words, that the data was genuinely wrong.

Sigh. Even while constantly thinking “when my code misbehaves, it’s almost always my fault” I’m still not capable of really taking a step back and double-checking my logic properly.

But this is odd, right? In the previous post I claimed that the data which had been invalid (20:15:57) magically “became” valid later (20:26:22). I should have looked at the logs more carefully… a new instance was started at 20:22:58. That new instance loaded the data correctly, so reloading the already-valid data was fine.

What was actually wrong?

I’ve started writing this post before actually fixing the code, but I’m now sure that the problem is with “partial” reload – adding a new prediction set to the database and then reloading the data from a storage system that already has the existing data in its cache. This should be relatively easy to test:

First, it’s worth fixing that message. Instead of talking about what “should occur” let’s say what actually is the case, with the index into the collection at which things go wrong:

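// Zip pairs each element with its successor; Index() (added to LINQ in
// .NET 9) then attaches the position of each pair, for use in the message.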
foreach (var (index, (current, next)) in source.Zip(source.Skip(1)).Index())
{
    string currentText = selector(current);
    string nextText = selector(next);
    if (StringComparer.Ordinal.Compare(currentText, nextText) >= 0)
    {
        throw new InvalidOperationException($"Incorrect ordering: {message}[{index}]={currentText}; {message}[{index + 1}]={nextText}");
    }
}

Next, let’s add another level of checking when uploading new data: as well as reloading twice from a clean start, let’s add a “before then after” reload. The code for this isn’t interesting (although it was fiddly for dependency-injection reasons). Then just test adding a “definitely first” prediction set with an ID of “aaaa”…

Hooray, I’ve reproduced the problem!

Incorrect ordering: PredictionSets[4]=name-length; PredictionSets[5]=aaaa

After that, it didn’t take very long (with a bit more logging) to find the problem. Once I’d found it, it was really easy to fix. Without going into too much unnecessary detail, I was corrupting my internal mappings from “hash to full data” when combining new and old mappings.

var predictionSetsByHash = newHashes.Concat(currentHashes)
    .Zip(currentPredictionSets.Concat(newPredictionSets))
    .ToOrdinalDictionary(pair => pair.First, pair => pair.Second);

should have been:

var predictionSetsByHash = newHashes.Concat(currentHashes)
    .Zip(newPredictionSets.Concat(currentPredictionSets))
    .ToOrdinalDictionary(pair => pair.First, pair => pair.Second);

This would only ever be a problem when reloading a context to pick up a new prediction set while the storage cache already held an existing one.
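
To see the corruption concretely, here’s a standalone sketch with toy data. (The variable names and the use of the standard ToDictionary are mine; the real code uses ToOrdinalDictionary and real prediction set objects.)

using System;
using System.Linq;

string[] currentHashes = { "hash-mic" };
string[] currentPredictionSets = { "mic-01" };
string[] newHashes = { "hash-focaldata" };
string[] newPredictionSets = { "focaldata-01" };

// Buggy: the hashes are ordered new-then-current, but the values are
// ordered current-then-new, so each hash is paired with the wrong value.
var buggy = newHashes.Concat(currentHashes)
    .Zip(currentPredictionSets.Concat(newPredictionSets))
    .ToDictionary(pair => pair.First, pair => pair.Second);
Console.WriteLine(buggy["hash-focaldata"]); // Prints "mic-01" - corrupted!

// Fixed: both sequences are ordered new-then-current.
var corrected = newHashes.Concat(currentHashes)
    .Zip(newPredictionSets.Concat(currentPredictionSets))
    .ToDictionary(pair => pair.First, pair => pair.Second);
Console.WriteLine(corrected["hash-focaldata"]); // Prints "focaldata-01"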

This is where my election site’s lack of automated tests (these would probably have had to be integration tests rather than unit tests) let me down… although it’s one of the few times that’s been the case, to be fair to myself.

It’s probably time to start writing some more tests – especially as in this case, the whole context storage system had been rewritten in the early hours of the morning.

Conclusion

So, a few lessons learned:

  • Yup, when my code misbehaves, it’s almost always my fault. Even when I stare at it and think I’ve somehow found something really weird.
  • I should write more tests.
  • It’s really important to make exception messages as unambiguous as possible.
  • I should always listen to Amanda.

Election 2029: An Impossible Exception

I really thought I’d already written a first blog post about my Election 2029 site (https://election2029.uk) but I appear to be further behind on my blogging than I’d thought. This is therefore a slightly odd first post in the series, but never mind. To some extent it isn’t particularly related to the election site, except that a) this is relatively modern C# compared with most of my codebases; b) it will explain the two strings in question.

I cannot stress enough: when my code misbehaves, it’s almost always my fault. (I’ve blogged about this before.)

But on Thursday, I saw an exception I can’t explain. This post will give a bit of background, show you the code involved and the exception message, and describe my next (pretty weak) step. The purpose of this post is threefold:

  • It’s a sort of act of humility: if I’m going to post when I’m pleased with myself for diagnosing a tricky problem, I should be equally content to post when I’m stumped.
  • It’s just possible that someone will have some insight into what’s going on, and add a comment.
  • If I ever do work out what happened, it’ll be good to have this post written at the time of the original problem to refer to.

When my code misbehaves, it’s almost always my fault.

First, a bit of background on how the site stores data.

Election data

I’ll go into a lot more detail about the general architecture of the site in future posts, but for the purposes of this post, you only need to know:

  • The data is stored in Firestore
  • Normal page requests don’t hit the database at all. Instead, it’s all held in memory, and occasionally (and atomically, per instance) updated. The type containing all the data (in a relatively “raw” form) is called ElectionContext, and is fully immutable.
  • During development, there’s a manual “reload current data” page that I use after I know I’ve updated the database. (I’ll explain more about my reloading plans in another post.)
  • When the ElectionContext is reloaded, it performs some validation – if it’s not valid, the reload operation fails and the previously-loaded data continues to be used. (This doesn’t help if a new instance is started, of course.) Part of the validation is checking that certain collections are in an expected order.

Two of the collections in the context are data providers and prediction sets. On Thursday I added a new data provider (Focaldata) and their first prediction set.

I always add data first to my local test environment (which uses both Firestore and a file-based version), then to my staging environment, and finally to production. When I update the ElectionContext, I validate it before storing it, then fetch it from scratch twice, validating it both times and then checking that a) the first fetched context is equal to the context that I stored; b) the second fetched context is equal to the first fetched context.
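
In sketch form, that store-and-verify flow looks something like this – a toy version with an in-memory store standing in for the real storage layer, and all the names being stand-ins rather than the site’s actual code:

using System;
using System.Threading.Tasks;

var store = new InMemoryStore();
await UploadAndVerifyAsync(new ElectionContext("prediction sets etc"));
Console.WriteLine("Upload verified");

async Task UploadAndVerifyAsync(ElectionContext context)
{
    context.Validate();
    await store.StoreAsync(context);

    var first = await store.FetchFromScratchAsync();
    first.Validate();
    if (!first.Equals(context))
        throw new InvalidOperationException("First fetch differs from stored context");

    var second = await store.FetchFromScratchAsync();
    second.Validate();
    if (!second.Equals(first))
        throw new InvalidOperationException("Second fetch differs from first fetch");
}

// Toy stand-ins for the real types.
sealed record ElectionContext(string Data)
{
    public void Validate() { /* Ordering checks etc, elided. */ }
}

sealed class InMemoryStore
{
    private ElectionContext? stored;
    public Task StoreAsync(ElectionContext context)
    {
        stored = context;
        return Task.CompletedTask;
    }
    public Task<ElectionContext> FetchFromScratchAsync() => Task.FromResult(stored!);
}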

When my code misbehaves, it’s almost always my fault.

The code and exception

This is the code used to check the order of a collection within the ElectionContext.Validate() method:

void ValidateOrdering<T>(
    IEnumerable<T> source,
    Func<T, string> selector,
    [CallerArgumentExpression(nameof(source))] string? message = null)
{
    foreach (var (current, next) in source.Zip(source.Skip(1)))
    {
        string currentText = selector(current);
        string nextText = selector(next);
        if (StringComparer.Ordinal.Compare(currentText, nextText) >= 0)
        {
            throw new InvalidOperationException(
                $"Incorrect ordering for {message}: {currentText} should occur before {nextText}");
        }
    }
}

And specifically for prediction sets:

ValidateOrdering(PredictionSets, ps => ps.Id);
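
As an aside, message ends up as “PredictionSets” because of [CallerArgumentExpression]: the compiler fills in the parameter with the literal source text of the corresponding argument. Here’s a tiny standalone demo of the mechanism (not code from the site):

using System;
using System.Runtime.CompilerServices;

int[] PredictionSets = { 1, 2, 3 };
Describe(PredictionSets.Length); // Prints "PredictionSets.Length = 3"

void Describe(int value,
    [CallerArgumentExpression(nameof(value))] string? expression = null) =>
    Console.WriteLine($"{expression} = {value}");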

The PredictionSets property is an ImmutableList<PredictionSet> and PredictionSet is a record:

public sealed record PredictionSet(string Id, /* many other properties */)

So basically the code is checking that the IDs of all the prediction sets are in order, when sorted with an ordinal string comparison. (My initial thought after seeing the exception was that there was some bizarre culture involved and that I was performing a culture-specific ordering check – but no, it’s ordinal.)

When I pushed the new data, both test and staging environments were fine. When I hit the reload page in production, the reload failed, and this exception was in the logs:

System.InvalidOperationException:
   Incorrect ordering for PredictionSets: mic-01 should occur before focaldata-01
   at Election2029.Models.ElectionContext.<Validate>g__ValidateOrdering [...]

Before we look in more detail at the exception itself, it’s worth noting that the logs show five requests to the reload page:

  • At 20:15:38, the new data hadn’t yet been stored: the reload page found there was no new data, and succeeded
  • At 20:15:45, 20:15:49 and 20:15:57 the reload operation failed (with the exception above each time)
  • At 20:26:22, the reload operation succeeded

Interestingly, the storage component of the reload operation looks for new data and returns its cached version if there isn’t any; it’s the component one level up that performs validation. (This cached version isn’t the “serving” context – it’s just in the storage component. So during this period the site kept serving the previous data with only a single prediction set.) In this case, the logs show:

  • 20:15:38: No new data, so cache returned – then validated successfully
  • 20:15:45: New data loaded, then failed validation
  • 20:15:49: No new data, so cached context returned, then failed validation
  • 20:15:57: No new data, so cached context returned, then failed validation
  • 20:26:22: No new data, so cached context returned – then validated successfully

In other words, the same data that failed validation three times was then validated successfully later. The log entries are quite explicit about not loading anything – so it doesn’t appear to be a problem in the storage component, where the data was loaded incorrectly first, then loaded correctly later. I say “doesn’t appear to be” because when my code misbehaves, it’s almost always my fault. (And appearances can be deceptive. Basically, I’m suspicious of any deductions until I’ve got a way of reproducing the problem.)

Let’s look at the code and exception again.

What do we “know” / suspect?

In the exception, mic-01 and focaldata-01 are the IDs of the two prediction sets. In production, these are the only two prediction sets, and those are the correct IDs. The exception message implies that at the time of the exception, the value of currentText was "focaldata-01", and the value of nextText was "mic-01". Those values make sense in that I would have expected "focaldata-01" to come before "mic-01".

In other words, it looks like the data was right, but the check was wrong: with the arguments substituted with the actual (apparent) values, it looks like this expression evaluated to true:

// How could this possibly be true?
StringComparer.Ordinal.Compare("focaldata-01", "mic-01") >= 0

To cover some normal bases, even if they wouldn’t easily explain the exception:

  • The use of source.Zip(source.Skip(1)) might look worrying in terms of the collection changing, but source is an ImmutableList<PredictionSet>.
  • The type of PredictionSet.Id is string – so that’s immutable.
  • PredictionSet itself is immutable, so Id can’t change over time.
  • Most importantly, currentText and nextText are both local variables in the loop.

When my code misbehaves, it’s almost always my fault. But in this case I really, really can’t understand how it could be.

I’m left thinking that it’s one of two options:

  • The ordinal string comparer has a bug which makes the result non-deterministic (don’t forget this same check passed on the same machine minutes later).
  • The JIT compiler has a bug which meant that the arguments weren’t evaluated properly – either they were passed in the wrong order, or perhaps either the second argument or both arguments evaluated to null for some reason, but then were properly evaluated for the string formatting in the exception.

Neither of these seems particularly likely, to be honest. The second seems a little more likely given that I’m already aware of a JIT compiler issue in .NET 9 which has affected some customers from my Google Cloud client library work. I don’t understand the linked issue well enough to judge whether it would explain the exception though.

What’s next?

I can’t reproduce the problem in any environment. The only thing I can think of to improve the diagnostics a bit – ruling out non-printable characters – is to log the length of each string as part of the exception:

throw new InvalidOperationException(
    $"Incorrect ordering for {message}: {currentText} (length {currentText.Length}) should occur before {nextText} (length {nextText.Length})");

It’s not much, but it’s all I’ve got at the moment.

My guess is that I’ll never know what happened here. That I’ll never see the exception again, but also never be able to reproduce it in a “before” state in order to know that it’s been fixed. All somewhat unsatisfying – but at least interesting. Oh, and I absolutely still have faith that when my code misbehaves, it’s almost always my fault.

Update – solved!

Thanks to a smarter colleague, this mystery has now been solved. While she didn’t have enough context to know where the problem actually was, she was able to pick up where my reasoning went wrong above.

No, the bug is in your code (and mine)

It’s entirely possible that I’ve posted something on this topic before. I know I’ve posted about it on social media before.

Every so often – thankfully not too often – I see a post on Stack Overflow containing something like this:

  • “This looks like a bug in VS.NET”
  • “I’m 100% sure my code is correct”
  • “This seems like a glaring bug.”
  • “Is this a bug in the compiler?”

The last of these is at least phrased as a question, but usually the surrounding text makes it clear that the poster expects that the answer is “yes, it’s a bug in the compiler.”

Sometimes, the bug really is in the library you’re using, or in the JIT compiler, or the C# or Java compiler, or whatever. I’ve reported plenty of bugs myself, including some fun ones I’ve written about previously to do with camera firmware or a unit test that only failed on Linux. But I try to stay in the following mindset:

When my code doesn’t behave how I expect it to, my first assumption is that I’ve gone wrong somewhere.

Usually, that assumption is correct.

So my first steps when diagnosing a problem are always to try to make sure I can actually reproduce the problem reliably, then reproduce it easily (e.g. without having to launch a mobile app or run unit tests on CI), then reproduce it briefly (with as little code as possible). If the problem is in my code, these steps help me find it. If the problem is genuinely in the compiler/library/framework then by the time I’ve taken all those steps, I’m in a much better place to report it.

But hold on: just because I’ve managed to create a minimal way of reproducing the problem doesn’t mean I’ve definitely found a bug. The fault still probably lies with me. At this point, the bug isn’t likely to be in my code in the most common sense (at least for me) of “I meant to do X, but my code clearly does Y.” Instead, it’s more likely that the library I’m using behaves differently to my expectations by design, or the language I’m using doesn’t work the way I expect, even though the compiler’s behaving as specified1.

So the next thing I do is consult the documentation: if I’ve managed to isolate it to a single method not behaving as expected, I’ll read the whole documentation for that method multiple times, making sure there isn’t some extra note or disclaimer that explains what I’m seeing. I’ll look for points of ambiguity where I’ve made assumptions. If it’s a compiler not behaving as expected, I’ll try to isolate the one specific line or expression that confounds my expectation, dig out the specification and look at every nook and cranny. I may well take notes during this stage, if there’s more to it than can easily fit in my head at one time.

At this point, if I still don’t understand the behaviour I’m seeing, it may genuinely be a bug in someone else’s code. But at this point, I’ve not only got a minimal example to post, but I’ve also got a rationale for why I believe the code should behave differently. Then, and only then, I feel ready to report a bug – and I can do so in a way which makes the maintainer’s job as easy as possible.

But most of the time, it doesn’t end up that way – because most of the time, the bug is in my code, or at least in my understanding. The mindset of expecting that the bug is in my code usually helps me find that bug much more quickly than if my expectation is a compiler bug.

There’s one remaining problem: communicating that message without sounding patronising. If I tell someone that the bug is probably in their code, I’m aware it sounds like I’m saying that because I think I’m better at writing code than they are. That’s not it at all – if I see unexpected behaviour, that’s probably a bug in my code. That’s one of the reasons for writing this post: I’m hoping that by linking to this in Stack Overflow comments, I’ll be able to convey the message a little more positively.


1 This still absolutely happens with C# – and I’ve stopped feeling bad about it. I convene the ECMA task group for standardizing C#. This includes folks whose knowledge of C# goes way deeper than mine, including Mads Torgersen, Eric Lippert, Neal Gafter and Bill Wagner. Even so, in many of our monthly meetings, we find some behaviour that surprises one or all of us. Or we just can’t agree on what the standard says the compiler should be doing, even with the standard right in front of us. It’s simultaneously humbling, exhilarating and hilarious.

Diagnosing an ASP.NET Core hard crash

As part of my church A/V system (At Your Service), I run a separate local web server to interact with the Zoom SDK. Initially this was because the Zoom SDK would only run in 32-bit processes and I needed a 64-bit process to handle the memory requirements for the rest of the app. However, it’s also proven useful in terms of keeping the Zoom meeting for a church service alive if At Your Service crashes. Obviously I try hard to avoid that happening, but when interoperating with a lot of native code (LibVLC, NDI, the Stream Deck, PowerPoint via COM) there are quite a few avenues for crashes. The web server runs ASP.NET Core within a WPF application to make it easy to interact with logs while it’s running, and to give the Zoom SDK a normal event dispatcher.

Yesterday, when trying to change my error handling code significantly, I found that the web server was crashing hard, with no obvious trace of the cause. I’ve already spent a little time trying to figure it out, but I couldn’t get to the bottom of it. I know the immediate cause of the crash, and I’ve fixed that fairly easily – but I want to harden the web server against any further bugs I might introduce. I figured it would be useful to blog about that process as I went along.

What I know so far

The immediate crash was due to an exception being thrown in an async void method.

Relevant threading aspects:

  • I start the ASP.NET Core app in a separate thread (although that’s probably unnecessary anyway, now that I think about it), calling IHost.Start
  • I have handlers for Dispatcher.UnhandledException and TaskScheduler.UnobservedTaskException (wired up roughly as in the sketch after this list)
  • I execute all Zoom-specific code on the WPF Dispatcher thread
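
For reference, the wiring for those handlers looks something like this – a sketch rather than the exact AYS code, with representative logging:

// Inside the WPF app's startup code.
Application.Current.Dispatcher.UnhandledException += (sender, args) =>
{
    Console.WriteLine($"Dispatcher exception: {args.Exception}");
    args.Handled = true; // Keep the app alive.
};

TaskScheduler.UnobservedTaskException += (sender, args) =>
{
    Console.WriteLine($"Unobserved task exception: {args.Exception}");
    args.SetObserved(); // Prevent any escalation.
};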

The immediate error came from code like the following. You can ignore the way this effectively reproduces Web API to some extent… it’s the method body that’s important.

public abstract class CommandBase<TRequest, TResponse> : CommandBase
{
    public override async Task HandleRequest(HttpContext context)
    {
        var reader = new StreamReader(context.Request.Body);
        var text = await reader.ReadToEndAsync();
        var request = JsonUtilities.Parse<TRequest>(text);

        var dispatcher = Application.Current.Dispatcher;
        try
        {
            var response = await dispatcher.Invoke(() => ExecuteAsync(request));
            var responseJson = JsonUtilities.ToJson(response);
            await context.Response.WriteAsync(responseJson);
        }
        catch (ZoomSdkException ex)
        {
            SetExceptionResponse(new ZoomExceptionResponse { /* More info here */ });
        }

        async void SetExceptionResponse(ZoomExceptionResponse response)
        {
            var responseJson = JsonUtilities.ToJson(response);
            await context.Response.WriteAsync(responseJson);
            context.Response.StatusCode = 500;
        }
    }

    public abstract Task<TResponse> ExecuteAsync(TRequest request);
}

There are at least three problems here:

  • I’m trying to set HttpResponse.StatusCode after writing the body
  • The SetExceptionResponse method is async void (generally a bad idea)
  • I’m not awaiting the call to SetExceptionResponse (which I can’t, due to it returning void)

(It’s also a bit pointless having a local method there. This code could do with being rewritten when I don’t have Covid brain fog, but hey…)

The first of these causes an InvalidOperationException to be thrown. The second and third, between them, cause the app to crash. The debugger has been no help here in working out what’s going on.

Create a console app to start ASP.NET Core

It feels like this should be really easy to demonstrate in a simple console app that does nothing but start a web server which fails in this particular way.

At this stage I should say how much I love the new top-level statements in C# 10. They make simple complete examples an absolute joy. So let’s create a console app, change the Sdk attribute in the project file to Microsoft.NET.Sdk.Web, and see what we can do. I’m aware that with ASP.NET Core 6 there are probably even simpler ways of starting the server, but this will do for now:

using System.Net;

var host = Host.CreateDefaultBuilder()
    .ConfigureWebHostDefaults(builder => builder
        .ConfigureKestrel((context, options) => options.Listen(IPAddress.Loopback, 8080))
        .Configure(application => application.Run(HandleRequest)))
    .Build();
host.Start();
host.WaitForShutdown();

async Task HandleRequest(HttpContext context)
{
    await context.Response.WriteAsync("Testing");
}

Trying to run that initially brought up prompts about IIS Express and trusting SSL certificates – all very normal for a regular web app, but not what I want here. After editing launchSettings.json to a simpler set of settings:

{
  "profiles": {
    "AspNetCoreCrash": {
      "commandName": "Project"
    }
  }
}

… I can now start the debugger, then open up localhost:8080 and get the testing page. Great.

Reproduce the exception

Next step: make sure I can throw the InvalidOperationException in the same way as the original code. This is easy, just replacing the body of the HandleRequest method:

async Task HandleRequest(HttpContext context)
{
    await context.Response.WriteAsync("Testing");
    context.Response.StatusCode = 500;
}

Sure enough the console logs show that it’s failed as expected:

System.InvalidOperationException: StatusCode cannot be set because the response has already started.
  at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ThrowResponseAlreadyStartedException(String value)
  at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.set_StatusCode(Int32 value)
  at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.Microsoft.AspNetCore.Http.Features.IHttpResponseFeature.set_StatusCode(Int32 value)
  at Microsoft.AspNetCore.Http.DefaultHttpResponse.set_StatusCode(Int32 value)
  at Program.<<Main>$>g__HandleRequest|0_1(HttpContext context) in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\Program.cs:line 19
  at Microsoft.WebTools.BrowserLink.Net.BrowserLinkMiddleware.ExecuteWithFilterAsync(IHttpSocketAdapter injectScriptSocket, String requestId, HttpContext httpContext)
  at Microsoft.AspNetCore.Watch.BrowserRefresh.BrowserRefreshMiddleware.InvokeAsync(HttpContext context)
  at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)

… but (again, as expected) the server is still running. It’s interesting that BrowserLink occurs in the stack trace – I suspect that wouldn’t be the case in my real app.

Let’s try making the failure occur in the same way as in At Your Service:

async Task HandleRequest(HttpContext context)
{
    // In AYS we await executing code in the dispatcher;
    // Task.Yield should take us off the synchronous path.
    await Task.Yield();
    WriteError();

    async void WriteError()
    {
        await context.Response.WriteAsync("Testing");
        context.Response.StatusCode = 500;
    }
}

This time we get a longer stack trace, and the process quits, just like in AYS:

System.InvalidOperationException: StatusCode cannot be set because the response has already started.
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ThrowResponseAlreadyStartedException(String value)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.set_StatusCode(Int32 value)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.Microsoft.AspNetCore.Http.Features.IHttpResponseFeature.set_StatusCode(Int32 value)
   at Microsoft.AspNetCore.Http.DefaultHttpResponse.set_StatusCode(Int32 value)
   at Program.<>c__DisplayClass0_0.<<<Main>$>g__WriteError|4>d.MoveNext() in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\Program.cs:line 19
--- End of stack trace from previous location ---
   at System.Threading.Tasks.Task.<>c.<ThrowAsync>b__128_1(Object state)
   at System.Threading.QueueUserWorkItemCallback.<>c.<.cctor>b__6_0(QueueUserWorkItemCallback quwi)
   at System.Threading.ExecutionContext.RunForThreadPoolUnsafe[TState](ExecutionContext executionContext, Action`1 callback, TState& state)
   at System.Threading.QueueUserWorkItemCallback.Execute()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
   at System.Threading.Thread.StartCallback()

This happens in both the debugger and when running from the command line.

Setting a break point in the WriteError method shows a stack trace like this:

   at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine)
   at Program.<>c__DisplayClass0_0.<<Main>$>g__WriteError|4()
   at Program.<<Main>$>g__HandleRequest|0_1(HttpContext context) in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\Program.cs:line 14
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.ExecutionContextCallback(Object s)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.MoveNext(Thread threadPoolThread)
   at System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1.AsyncStateMachineBox`1.MoveNext()
   at System.Runtime.CompilerServices.YieldAwaitable.YieldAwaiter.<>c.b__6_0(Action innerContinuation, Task continuationIdTask)
   at System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke()
   at System.Runtime.CompilerServices.YieldAwaitable.YieldAwaiter.RunAction(Object state)
   at System.Threading.QueueUserWorkItemCallbackDefaultContext.Execute()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
   at System.Threading.Thread.StartCallback()

There’s nothing about ASP.NET Core in there at all… so I wonder if we can take that out of the equation too?

Reproducing the crash in a pure console app

To recap, I’m expecting at this stage that to reproduce the crash I should:

  • Write an async void method that throws an exception
  • Call that method from a regular async method

Let’s try:

#pragma warning disable CS1998 // Async method lacks 'await' operators and will run synchronously
await NormalAsyncMethod();
Console.WriteLine("Done");

async Task NormalAsyncMethod()
{
    await Task.Yield();
    Console.WriteLine("Start of NormalAsyncMethod");
    BrokenAsyncMethod();
    Console.WriteLine("End of NormalAsyncMethod");
}

async void BrokenAsyncMethod()
{
    await Task.Yield();
    throw new Exception("Bang");
}

Hmm. That exits normally:

$ dotnet run
Start of NormalAsyncMethod
End of NormalAsyncMethod
Done

But maybe there’s a race condition between the main thread finishing and the problem crashing the process? Let’s add a simple sleep:

#pragma warning disable CS1998 // Async method lacks 'await' operators and will run synchronously
await NormalAsyncMethod();
Thread.Sleep(1000);
Console.WriteLine("Done");
// Remainder of code as before

Yup, this time it crashes hard:

Start of NormalAsyncMethod
End of NormalAsyncMethod
Unhandled exception. System.Exception: Bang
   at Program.<<Main>$>g__BrokenAsyncMethod|0_1() in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\ConsoleCrash\Program.cs:line 15
   at System.Threading.Tasks.Task.<>c.<ThrowAsync>b__128_1(Object state)
   at System.Threading.QueueUserWorkItemCallbackDefaultContext.Execute()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
   at System.Threading.Thread.StartCallback()

Interlude: what about async Task?

At this point I’m remembering some of what I’ve learned about how async void methods handle exceptions. What happens if we turn it into an async Task method instead? At that point, the Task returned by the method (which we ignore) will have the exception, and as by default unobserved task exceptions no longer crash the process, maybe we’ll be okay. So just changing BrokenAsyncMethod to:

async Task BrokenAsyncMethod()
{
    throw new Exception("Bang");
}

(and ignoring the warning at the call site)… the program no longer crashes. (I could subscribe to TaskScheduler.UnobservedTaskException but I’m not that bothered… I’m pretty convinced it would fire, at least eventually.)
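
Just to check, here’s a quick standalone sketch of that subscription. Forcing garbage collection makes “eventually” happen deterministically, as the event is raised when the faulted task is finalized without its exception having been observed. (The CreateFaultedTask helper is mine, not from the code above.)

TaskScheduler.UnobservedTaskException += (sender, args) =>
    Console.WriteLine($"Unobserved: {args.Exception.InnerException?.Message}");

CreateFaultedTask();
Thread.Sleep(1000); // Let the task fault.

// The faulted task is now unreachable; finalizing it raises the event.
GC.Collect();
GC.WaitForPendingFinalizers();
Console.WriteLine("Done");

// Creating the task in a separate method ensures no local keeps it alive.
void CreateFaultedTask() => _ = BrokenAsyncMethod();

async Task BrokenAsyncMethod()
{
    await Task.Yield();
    throw new Exception("Bang");
}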

Do all ThreadPool exceptions crash the app?

We don’t need to use async methods to execute code on the thread pool. What happens if we just write a method which throws an exception, and call that from the thread pool?

ThreadPool.QueueUserWorkItem(ThrowException);
Thread.Sleep(1000);
Console.WriteLine("Done");

void ThrowException(object? state)
{
    throw new Exception("Bang!");
}

Yup, that crashes:

Unhandled exception. System.Exception: Bang!
   at Program.<<Main>$>g__ThrowException|0_0(Object state) in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\ConsoleCrash\Program.cs:line 7
   at System.Threading.QueueUserWorkItemCallbackDefaultContext.Execute()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
   at System.Threading.Thread.StartCallback()

At this point some readers (if there still are any…) may be surprised that this is a surprise to me. It’s been a long time since I’ve interacted with the thread pool directly, and taking down the process like this feels a little harsh to me. (There are pros and cons, certainly. I’m not trying to argue that Microsoft made the wrong decision here.)

Can we change the ThreadPool behaviour?

Given that we have things like TaskScheduler.UnobservedTaskException, I’d expect there to be something similar for the thread pool… but I can’t see anything. It looks like this is behaviour that changed with .NET 2.0 – back in 1.x, thread pool exceptions didn’t tear down the application.

After a bit more research, I found AppDomain.UnhandledException. This allows us to react to an exception that’s about to take down the application, but it doesn’t let us mark it as “handled”.

Here’s an example:

AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
    Console.WriteLine($"Unhandled exception: {((Exception)args.ExceptionObject).Message}");
ThreadPool.QueueUserWorkItem(ThrowException);
Thread.Sleep(1000);
Console.WriteLine("Done");

void ThrowException(object? state) =>
    throw new Exception("Bang!");

Running this code a few times, I always get output like this:

Unhandled exception: Bang!
Unhandled exception. System.Exception: Bang!
   at Program.<<Main>$>g__ThrowException|0_1(Object state) in C:\users\skeet\GitHub\jskeet\DemoCode\AspNetCoreCrash\ConsoleCrash\Program.cs:line 8
   at System.Threading.QueueUserWorkItemCallbackDefaultContext.Execute()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()
   at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
   at System.Threading.Thread.StartCallback()

… but sometimes “Done” is printed too. So I guess there’s some uncertainty about how quickly the AppDomain is torn down.

Hardening At Your Service

Given what I know now, I don’t think I can easily stop the web server for Zoom interactions from terminating if I have a bug – but I can make it easier to find that bug afterwards. I already normally write the log out to a text file when the app exits, but that only happens for an orderly shutdown.

Fortunately, it looks like the AppDomain.UnhandledException is given enough time to write the log out before the process terminates. Temporarily reverting to the broken code allows me to test that – and yes, I get a log with an appropriate critical error.
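
In console-app sketch form, the hardening amounts to something like this (the in-memory log and file name here are illustrative):

using System.Collections.Concurrent;

var log = new ConcurrentQueue<string>();

AppDomain.CurrentDomain.UnhandledException += (sender, args) =>
{
    log.Enqueue($"CRITICAL: {args.ExceptionObject}");
    // Flush the in-memory log to disk before the process is torn down.
    File.WriteAllLines("crash.log", log);
};

log.Enqueue("Normal operation...");
ThreadPool.QueueUserWorkItem(_ => throw new Exception("Bang!"));
Thread.Sleep(1000);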

Conclusion

I knew that async void methods were generally bad, but I hadn’t quite appreciated how dangerous they are, particularly when executed from a thread pool thread.

While I’m not thrilled with the final result, I at least understand it now, and can find similar errors more easily in the future. The “not understanding” part was the main motivation for this blog post – given that I’d already found the immediate bug, I could have just fixed it and ignored the worrying lack of diagnostic information… but I always find it tremendously unsettling when I can’t explain significant behaviour. It’s not always worth investigating immediately but it’s generally useful to come back to it later on and keep diving deeper until you’ve got to the bottom of it.

I haven’t put the source code for this blog post in GitHub as there are so many iterations – and because it’s all in the post itself. Shout if you’d find it valuable, and I’ll happily add it after all.

Diagnosing a VISCA camera issue

As I have mentioned before, I’ve been spending a lot of time over the last two years writing code for my local church’s A/V system. (Indeed, I’ve been giving quite a few user group talks recently about the fun I’ve had doing so.) That new A/V system is called “At Your Service”, or AYS for short. I only mention that because it’s simpler to refer to AYS for the rest of this post than “the A/V app”.

Our church now uses three cameras from PTZOptics. They’re all effectively the same model: 30x optical zoom, Power over Ethernet, with NDI support (which I use for the actual video capture – more about that in a subsequent blog post). I control the cameras using VISCA over TCP/IP, as I’ve blogged about before. At home I have one PTZOptics camera (again, basically the same camera) and a Minrray UV5120A – also 30x optical zoom with NDI support.

Very occasionally, the cameras run into problems: sometimes the VISCA interface comes up, but there’s no NDI stream; sometimes the camera just doesn’t stop panning after I move it. In the early days, when I would only send a VISCA command when I wanted to actually do something, I believe the TCP/IP connection was closed automatically after being idle for an hour.

For these reasons, I now have two connections per camera:

  • One for the main control, which includes a “heartbeat” command of “get current pan/tilt/zoom” issued every three minutes.
  • One for frequent status checking, so that we can tell if the camera itself is online even if the main control connection ends up causing problems. This sends a “get power status” command every 5 seconds. (Both of these boil down to the periodic send loop sketched after this list.)
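
In sketch form, each of those loops looks something like this, where SendCommandAsync is a hypothetical stand-in for the real library call:

// The heartbeat on the main control connection; the status check is the
// same shape, with a different command and period.
var heartbeat = new PeriodicTimer(TimeSpan.FromMinutes(3));
while (await heartbeat.WaitForNextTickAsync())
{
    await SendCommandAsync("get-pan-tilt-zoom");
}

Task SendCommandAsync(string command) => Task.CompletedTask; // Hypothetical stub.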

If anything goes wrong, the user can remotely reboot the camera (which creates a whole new TCP/IP connection just in case the existing ones are broken; obviously this doesn’t help if the camera is completely failing in network terms).

This all works pretty well – but my logs sometimes show an error response from the camera – equivalent to an HTTP 400, basically saying “the VISCA command you sent has incorrect syntax”. Yesterday I decided to dedicate some time to working out what was going on, and that’s the topic of this blog post. The actual problem isn’t terribly important – I’d be surprised if more than a handful of readers (if any!) faced the same issue. But I’m always interested in the diagnostic process.

Hack AYS to add more stress

I started off by modifying AYS to make it send more commands to the camera, in two different ways:

  • I modified the frequency of the status check and heartbeat
  • I made each of those also send 100 commands instead of just one
    • First sending the 100 commands one at a time
    • Then changed to sending them as 100 tasks started at the same time and then awaited as a bunch

This increased the rate of error – but not in any nicely reproducible way.

Looking back, I probably shouldn’t have started here. AYS does a lot of other stuff, including quite a lot of initialization on startup (VLC, OBS, Zoom) which makes the feedback loop a lot slower when experimenting. I needed something that was dedicated to just provoking the problem.

Create a dedicated stress test app

It’s worth noting at this point that I’ve got a library (source here) that I’m using to handle the TCP connections and send the commands. This keeps all the knowledge of exactly how VISCA works away from AYS, and makes it reasonably easy to write test code separately from the app. However, it does mean that when something goes wrong, my first expectation is that this is a problem in the library. I always assume my own code is broken before placing the blame elsewhere… and usually that assumption is well-founded.

Despite the changes to AYS not provoking things as much as I’d expected, I still thought concurrency would be the key. How could we overload the camera?

I wrote a simple stress test app to send lots of simple “get power status” requests at the camera, from multiple tasks, and record the number of successful commands vs ones which failed for various different reasons (camera responded with an error; camera violated VISCA itself; command timed out). Initially, I found that some tasks had issued far more requests than others – this turned out to be due to the way that the thread pool (used by tasks in a console app) starts small and gradually expands. A significant chunk of the stress test app is given over to getting all the tasks “ready to go” and releasing them all at the same time.
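
The “release them all at once” pattern looks roughly like this (the names are mine, and SendGetPowerStatusAsync is a stand-in for the real library call):

// RunContinuationsAsynchronously stops the releasing thread from running
// the continuations inline, so the tasks really do start together.
var startSignal = new TaskCompletionSource(TaskCreationOptions.RunContinuationsAsynchronously);
var tasks = Enumerable.Range(0, 20)
    .Select(async _ =>
    {
        // Wait until every task has been created and scheduled...
        await startSignal.Task;
        for (int i = 0; i < 100; i++)
        {
            await SendGetPowerStatusAsync();
        }
    })
    .ToList();

// ...then release them all at the same time.
startSignal.SetResult();
await Task.WhenAll(tasks);

Task SendGetPowerStatusAsync() => Task.Delay(10); // Stand-in for the real command.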

The stress test app is configurable in a number of ways:

  • The camera IP address and VISCA port
  • Whether to use a single controller object (which means a single TCP connection) or one per task. (The library queues concurrent requests, so any given TCP connection only has a single active command at a time – see the sketch after this list.)
  • How long each task should delay between requests
  • How long each command should wait before timing out
  • How long the test should last overall
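
For illustration, that per-connection queuing can be sketched with a semaphore (the real library is rather more elaborate):

var gate = new SemaphoreSlim(1, 1);

async Task<byte[]> SendCommandAsync(byte[] command)
{
    // Only one command is ever in flight on this connection.
    await gate.WaitAsync();
    try
    {
        // Write the command and read the response here (elided).
        await Task.Delay(10); // Stand-in for the actual network I/O.
        return Array.Empty<byte>();
    }
    finally
    {
        gate.Release();
    }
}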

At first, the results seemed all over the place. With a single controller, and enough of a delay (or few enough tasks) to avoid commands timing out due to the queuing, I received no errors. With multiple controllers, I saw all kinds of results – some tasks never managing a single request, others with just a few errors, and lots of timeouts.

Oh, and if I overwhelmed the camera too much, it would just reset itself during the test (which involves it tilting up to the ceiling then back down to horizontal). Not quite “halt and catch fire”, but a very physically visible indication of an error. I’m pretty confident I haven’t actually damaged the camera though, and it seems fairly reasonable for it to reset itself in the face of a sort of DoS attack like this stress test.

However, I did spot that I never had a completely clean test with more than one task. I wondered: how low could I take the load and still see problems?

Even with just two tasks, and a half second delay between attempts, I’d reliably see one task have a single error response, and the other task have a command timeout. Aha – we’re getting somewhere.

On a hunch (so often the case with diagnostics) I added console output in the catch blocks to include the time at which the exception was thrown. As I’d started to suspect, it was the very first command that failed, and it failed for both tasks. Now we’re getting somewhere.

What’s special about the first command? We’ve carefully set the system up to start all the tasks sending commands at exactly the same time (as near as the scheduler will allow, anyway) – they may well get out of sync after the first command, as different responses take slightly different lengths of time to come back, but the first commands should be sent pretty much simultaneously.

At this point, I broke out Wireshark – an excellent network protocol analyzer, allowing me to see individual TCP packets. Aside from anything else, this could help to rule out client-side issues: if the TCP packets I sent looked correct, the code sending them was irrelevant.

Sure enough, I saw:

  • Two connections being established
  • The same 5 bytes (the VISCA command: 81 09 04 00 ff) being sent on each connection in two packets about 20 microseconds apart.
  • A 4 byte response (a VISCA error: 90 60 02 ff) being sent down the first connection 10ms later
  • No data being sent down the second connection

That explained the “error on one, timeout on the other” behaviour I’d observed at the client side.

Test the other camera

Now that I could reliably reproduce the bug, I wondered – what does the Minrray camera do?

I expected it to have the same problem. I’m aware that there are a number of makes of camera that use very similar hardware – PTZOptics, Minrray, SMTAV, Zowietek for example. While they all have slightly different firmware and user interfaces, I expected that the core functionality of the camera would be common code.

I was wrong: I have yet to reproduce the issue with the Minrray camera – even throwing 20 tasks at it, with only a millisecond delay between commands, I’ve seen no errors.

Report the bug

So, it looks like this may be a bug in the PTZOptics firmware. At first I was reluctant to go to the trouble of reporting the bug, given that it’s a real edge case. (I suspect relatively few people using these cameras connect more than one TCP stream to them over VISCA.) However, I received encouragement on Twitter, and I’ve had really good support experiences with PTZOptics, so I thought it worth having a go.

While my stress test app is much simpler than AYS, it’s definitely not a minimal example to demonstrate the problem. It’s 100 lines long and requires a separate library. Fortunately, all of the code for recording different kinds of failures, and starting tasks to loop for a while, etc – that’s all unnecessary when trying to just demonstrate the problem. Likewise I don’t need to worry about queuing commands on a single connection if I’m only sending a single command down each; I don’t need any of the abstraction that my library contains.

Instead, I boiled it down to this – 30 lines (of which several are comments or whitespace) which just send two commands “nearly simultaneously” on separate connections and show the response returned on the first connection.

using System.Net.Sockets;

string host = "192.168.1.45";
int port = 5678;

byte[] getPowerCommand = { 0x81, 0x09, 0x04, 0x00, 0xff };

// Create two clients - these will connect immediately
var client1 = new TcpClient(host, port);
client1.NoDelay = true;
var stream1 = client1.GetStream();

var client2 = new TcpClient(host, port);
client2.NoDelay = true;
var stream2 = client2.GetStream();

// Write the "get power" command to both sockets as close to simultaneously as we can
stream1.Write(getPowerCommand);
stream2.Write(getPowerCommand);

// Read the response from the first client
var buffer = new byte[10];
int bytesRead = stream1.Read(buffer);
// Print it out in hex.
// This is an error: 90-60-02-FF
// This is success: 90-50-02-FF
Console.WriteLine(BitConverter.ToString(buffer[..bytesRead]));

// Note: this sample doesn't read from stream2, but basically when the bug strikes,
// there's nothing to read: the camera doesn't respond.

Simple but effective – it reliably reproduces the error on the PTZOptics camera, and shows no problems on the Minrray.

I included that with the bug report, and have received a response from PTZOptics already, saying they’re looking into it. I don’t expect a new firmware version with a fix in any time soon – but I hope it might be in their next firmware update… and at least now I can test that easily.

Conclusion

This was classic diagnostic work:

  • Go from complex code to simple code
  • Try lots of configurations to try to make sense of random-looking data
  • Follow hunches and get more information
  • Use all the tools at your disposal (Wireshark in this case) to isolate the problem as far as possible
  • Once you understand the problem (even if you can’t fix it), write code designed specifically to reproduce it simply

Hope you found this as interesting as I did! Next (this weekend, hopefully): displaying the video output of these cameras on a Stream Deck…

Travis logs and .NET Core console output

This is a blog post rather than a bug report, partly because I really don’t know what’s at fault. Others with more knowledge of how the console works in .NET Core, or exactly what the Travis log does, might be able to dig deeper.

TL;DR: If you’re running jobs using .NET Core 3.1 on Travis and you care about the console output, you might want to set the TERM environment variable to avoid information being swallowed.

Much of my time is spent in the Google Cloud Libraries for .NET repository. That single repository hosts a lot of libraries, and many of the pull requests are from autogenerated code where the impact on the public API surface may not be immediately obvious. (It would be easy to miss one breaking change within dozens of comment changes, for example.) Our Travis build includes a job to work out the public API changes, which is fantastically useful. (Example)

When we updated our .NET Core SDK to 3.1 – or at least around that time; it may have been coincidence – we noticed that some of the log lines in our Travis jobs seemed to be missing. They were actually missing from all the jobs, but it was particularly noticeable for that “change detection” job because the output can often be small, but should always contain a “Diff level” line. It’s really obvious when that line is missing.

I spent rather longer trying to diagnose what was going wrong than I should have done. A colleague noted that clicking on “Raw log” showed that we were getting all the output – it’s just that Travis was swallowing some of it, due to control characters being emitted. This blog post is a distillation of what I learned when trying to work out what was going on.

A simple set of Travis jobs

In my DemoCode repository I’ve created a Travis setup for the sake of this post.

Here are the various files involved:

.travis.yml:

dist: xenial  

language: csharp  
mono: none  
dotnet: 3.1.301  

jobs:  
  include:  
    - name: "Default terminal, no-op program"  
      script: TravisConsole/run-dotnet.sh 0  

    - name: "Default terminal, write two lines"  
      script: TravisConsole/run-dotnet.sh 2  

    - name: "Mystery terminal, no-op program"  
      env: TERM=mystery  
      script: TravisConsole/run-dotnet.sh 0  

    - name: "Mystery terminal, write two lines"  
      env: TERM=mystery  
      script: TravisConsole/run-dotnet.sh 2  

    - name: "Mystery terminal, write two lines, no logo"  
      env: TERM=mystery DOTNET_NOLOGO=true  
      script: TravisConsole/run-dotnet.sh 2

TravisConsole/run-dotnet.sh:

#!/bin/bash  

set -e  

cd $(readlink -f $(dirname ${BASH_SOURCE}))  

echo "Before dotnet run (first)"  
dotnet run -- $1  
echo "After dotnet run (first)"  

echo "Before dotnet run (second)"  
dotnet run -- $1  
echo "After dotnet run (second)"

TravisConsole/Program.cs:

using System;  

class Program  
{  
    static void Main(string[] args)  
    {  
        int count = int.Parse(args[0]);  
        for (int i = 1; i <= count; i++)  
        {  
             Console.WriteLine($"Line {i}");  
        }  
    }  
}

So each job runs the same .NET Core console application twice with the same command line argument – either 0 (in which case nothing is printed out) or 2 (in which case it prints out “Line 1” then “Line 2”). The shell script also logs before and after executing the console application. The only other differences are in terms of the environment variables:

  • Some jobs use TERM=mystery instead of the default
  • The final job uses DOTNET_NOLOGO=true

I’ll come back to the final job right at the end – we’ll concentrate on the impact of the TERM environment variable first, as that’s the main point of the post. Next we’ll look at the output of the jobs – in each case showing it in the “pretty” log first, then in the “raw” log. The pretty log has colour, and I haven’t tried to reproduce that. I’ve also only shown the relevant bit – the call to run-dotnet.sh.

You can see all of the output shown here in the Travis UI, of course.

Job 1: Default terminal, no-op program

Pretty log

$ TravisConsole/run-dotnet.sh 0
Before dotnet run (first)
Welcome to .NET Core 3.1!
---------------------
SDK Version: 3.1.301
----------------
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
--------------------------------------------------------------------------------------
Before dotnet run (second)
The command "TravisConsole/run-dotnet.sh 0" exited with 0.

Note the lack of After dotnet run in each case.

Raw log

[0K$ TravisConsole/run-dotnet.sh 0
Before dotnet run (first)

Welcome to .NET Core 3.1!

---------------------

SDK Version: 3.1.301

----------------

Explore documentation: https://aka.ms/dotnet-docs

Report issues and find source on GitHub: https://github.com/dotnet/core

Find out what's new: https://aka.ms/dotnet-whats-new

Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https

Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs

Write your first app: https://aka.ms/first-net-core-app

--------------------------------------------------------------------------------------
[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=After dotnet run (first)
Before dotnet run (second)
[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=After dotnet run (second)
travis_time:end:18aa556c:start=1595144448336834755,finish=1595144452475616837,duration=4138782082,event=script
[0K[32;1mThe command "TravisConsole/run-dotnet.sh 0" exited with 0.[0m

In the raw log, we can see that After dotnet run is present each time, but with [?1h=[?1h=[?1h=[?1h=[?1h=[?1h=[?1h= before it. Let’s see what happens when our console application actually writes to the console.

Job 2: Default terminal, write two lines

Pretty log

$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)
Welcome to .NET Core 3.1!
---------------------
SDK Version: 3.1.301
----------------
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
--------------------------------------------------------------------------------------
Line 2
Before dotnet run (second)
Line 2
The command "TravisConsole/run-dotnet.sh 2" exited with 0.

This time we don’t have After dotnet run – and we don’t have Line 1 either. As expected, they are present in the raw log, but with control characters before them:

[0K$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)

Welcome to .NET Core 3.1!

---------------------

SDK Version: 3.1.301

----------------

Explore documentation: https://aka.ms/dotnet-docs

Report issues and find source on GitHub: https://github.com/dotnet/core

Find out what's new: https://aka.ms/dotnet-whats-new

Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https

Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs

Write your first app: https://aka.ms/first-net-core-app

--------------------------------------------------------------------------------------
[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=Line 1
Line 2
[?1h=After dotnet run (first)
Before dotnet run (second)
[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=Line 1
Line 2
[?1h=After dotnet run (second)
travis_time:end:00729828:start=1595144445905196926,finish=1595144450121508733,duration=4216311807,event=script
[0K[32;1mThe command "TravisConsole/run-dotnet.sh 2" exited with 0.[0m

Now let’s try with the TERM environment variable set.

Job 3: Mystery terminal, no-op program

$ TravisConsole/run-dotnet.sh 0
Before dotnet run (first)
Welcome to .NET Core 3.1!
---------------------
SDK Version: 3.1.301
----------------
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
--------------------------------------------------------------------------------------
After dotnet run (first)
Before dotnet run (second)
After dotnet run (second)
The command "TravisConsole/run-dotnet.sh 0" exited with 0.

That’s more like it! This time the raw log doesn’t contain any control characters within the script execution itself. (There are still blank lines in the “logo” part, admittedly. Not sure why, but we’ll get rid of that later anyway.)

[0K$ TravisConsole/run-dotnet.sh 0
Before dotnet run (first)

Welcome to .NET Core 3.1!

---------------------

SDK Version: 3.1.301

----------------

Explore documentation: https://aka.ms/dotnet-docs

Report issues and find source on GitHub: https://github.com/dotnet/core

Find out what's new: https://aka.ms/dotnet-whats-new

Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https

Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs

Write your first app: https://aka.ms/first-net-core-app

--------------------------------------------------------------------------------------
After dotnet run (first)
Before dotnet run (second)
After dotnet run (second)
travis_time:end:11222e41:start=1595144449188901003,finish=1595144453242229433,duration=4053328430,event=script
[0K[32;1mThe command "TravisConsole/run-dotnet.sh 0" exited with 0.[0m

Let’s just check that it still works with actual output:

Job 4: Mystery terminal, write two lines

Pretty log

$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)
Welcome to .NET Core 3.1!
---------------------
SDK Version: 3.1.301
----------------
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
--------------------------------------------------------------------------------------
Line 1
Line 2
After dotnet run (first)
Before dotnet run (second)
Line 1
Line 2
After dotnet run (second)
The command "TravisConsole/run-dotnet.sh 2" exited with 0.

Exactly what we’d expect from inspection. The raw log doesn’t hold any surprises either.

Raw log

[0K$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)

Welcome to .NET Core 3.1!

---------------------

SDK Version: 3.1.301

----------------

Explore documentation: https://aka.ms/dotnet-docs

Report issues and find source on GitHub: https://github.com/dotnet/core

Find out what's new: https://aka.ms/dotnet-whats-new

Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https

Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs

Write your first app: https://aka.ms/first-net-core-app

--------------------------------------------------------------------------------------
Line 1
Line 2
After dotnet run (first)
Before dotnet run (second)
Line 1
Line 2
After dotnet run (second)
travis_time:end:0203f787:start=1595144444502091825,finish=1595144448950945977,duration=4448854152,event=script
[0K[32;1mThe command "TravisConsole/run-dotnet.sh 2" exited with 0.[0m

Job 5: Mystery terminal, write two lines, no logo

While job 4 is almost exactly what we want, it’s still got the annoying “Welcome to .NET Core 3.1!” section. That’s a friendly welcome for users in an interactive context, but pointless for continuous integration. Fortunately it’s now easy to turn off by setting DOTNET_NOLOGO=true. We now have exactly the log we’d want:

Pretty log

$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)
Line 1
Line 2
After dotnet run (first)
Before dotnet run (second)
Line 1
Line 2
After dotnet run (second)
The command "TravisConsole/run-dotnet.sh 2" exited with 0.

Raw log

[0K$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)
Line 1
Line 2
After dotnet run (first)
Before dotnet run (second)
Line 1
Line 2
After dotnet run (second)
travis_time:end:0bb5a6d4:start=1595144448986411002,finish=1595144453476210113,duration=4489799111,event=script
[0K[32;1mThe command "TravisConsole/run-dotnet.sh 2" exited with 0.[0m

Conclusion

The use of mystery as the value of the TERM environment variable isn’t special, other than “not being a terminal that either Travis or .NET Core will have any fixed expectations about”. I expect that .NET Core is trying to be clever with its output based on the TERM environment variable, and that Travis isn’t handling the control characters in quite the way that .NET Core expects it to. Which one is right, and which one is wrong? It doesn’t really matter to me, so long as I can fix it.

This does potentially have a cost, of course. Anything which would actually produce prettier output based on the TERM environment variable is being hampered by this change. But so far we haven’t seen any problems. (It certainly isn’t stopping our Travis logs from using colour, for example.)

I discovered the DOTNET_NOLOGO environment variable – introduced in .NET Core 3.1.301, I think – incidentally while researching this problem. It’s not strictly related to the core problem, but it is related to the matter of “making CI logs readable” so I thought I’d include it here.

I was rather surprised not to see complaints about this all over the place. As you can see from the code above, it’s not like I’m doing anything particularly “special” – just writing lines out to the console. Are other developers not having the same problem, or just not noticing it? Either way, I hope this post either helps the .NET Core team dive deeper, find out what’s going on and fix it (talking to the Travis team if appropriate), or at least raises awareness of the issue so that others can apply the same workaround.

Using “git bash” from AppVeyor

Update: I don’t know whether it was partially due to this blog post or not, but AppVeyor has fixed things so that you don’t (currently, 20th October 2019) need to use the fix in this post. You may want to include it anyway, for the sake of future-proofing.


TL;DR: If your AppVeyor build starts breaking because it’s started using WSL bash, change the path in your YAML file – see the end of the post for an example.

For years now, I’ve used bash scripts for all kinds of automation in Windows projects. The version of bash I use is the one that comes with Git for Windows – I believe its origins include Cygwin, MSYS2, and MinGW-w64. (I don’t know enough about the differences between those projects, or which builds on which, to say more. Fortunately, I don’t need to.) This version of bash is installed by default on AppVeyor, the CI system I typically use for Windows builds, so I don’t need to do anything else.

Importantly, I don’t want to use Windows Subsystem for Linux (WSL) on Windows builds. The point of doing the build is to use the Windows tool chains. I use Travis for doing Linux builds.

On October 11th 2019, my Noda Time AppVeyor build failed with this error:

build/appveyor.sh: line 11: dotnet: command not found

It turns out this is because AppVeyor has started shipping WSL with its Visual Studio 2019 images. The bash from WSL is earlier in the path than the version of bash from git, so that one is used, and everything starts failing.

It took a little while to diagnose this, but the fix is pretty easy – you just need to put git bash earlier in your path. I chose to do this in the “install” part of appveyor.yml:

install:
  # Make sure we get the bash that comes with git, not WSL bash
  - ps: $env:Path = "C:\Program Files\Git\bin;$env:Path"

Using just that change, the build started working again. Hooray!

Hosting ASP.NET Core behind https in Google Kubernetes Engine

Side-note: this may be one of the clumsiest titles I’ve ever written for a blog post. But it does what it says on the tin. Oh, and the space after “ASP” in “ASP .NET Core” everywhere is to avoid auto-linking. While I could use a different dot or a zero-width non-breaking space to avoid it, I’m not sure I trust WordPress to do the right thing with those…

Background

Over the past few weeks, I’ve moved nodatime.org, csharpindepth.com and jonskeet.uk over to Google Kubernetes Engine. (They all used to be hosted on Azure.)

I’ve done this for a few reasons:

  • As my job is primarily making .NET developers more productive on Google Cloud Platform, it feels natural to run my own code there. I want to see where there are friction points, so I can help fix them.
  • I wanted more redundancy, particularly for nodatime.org; Kubernetes felt like a simple way of managing that at a reasonable cost.
  • HTTPS certificate management (via Let’s Encrypt) has been a bit painful for me on Azure; I could have automated more, but that would have taken extra time I don’t have. (It may also have improved since I last looked.)

The first of these is the most important, by a long way. But the HTTPS management part – and then the knock-on effects – is what I’m looking at in this blog post.

Basic hosting

Hosting an ASP .NET Core application in Google Kubernetes Engine (GKE from now on) is really simple, at least once you’ve understood the Kubernetes concepts. For each site, I have a Kubernetes deployment running the application and a service in front of it, with an ingress (backed by a Google Cloud load balancer) routing requests to the services.

In each case, the ASP .NET Core application is built with a vanilla Dockerfile which would not look unusual to anyone who’s hosted ASP .NET Core in Docker anywhere else.

I happen to use Google Cloud Build to build the Docker images, and Google Container Registry to host the images, but neither of those are required. (For csharpindepth.com and jonskeet.uk there are simple triggers in Google Cloud Build to build and deploy on GitHub pushes; for nodatime.org it’s a bit more complicated as the documentation build currently has some Windows dependencies. I have a machine at home that polls GitHub every half hour, and pushes the built application to Google Cloud Build for packaging when necessary.)

So, that gets HTTP hosting sorted. I dare say there are some aspects I’ve not configured as well as I could have done, but it was reasonably straightforward to get going.

HTTPS with Google-managed certificates

With HTTP working, it’s time to shoot for HTTPS. It’s important to note that the apps I’m talking about are all hobby projects, not commercial ones – I’m already paying for hosting, so I don’t want to have to pay for SSL certificates as well. Enter Let’s Encrypt, of course.

A while ago I used Let’s Encrypt to set up HTTPS on Azure, and while it was free and I didn’t have to write any code, it wasn’t exactly painless. I followed two guides at the same time, because neither of them exactly matched the Azure portal I was looking at. There were lots of bits of information to grab from various different bits of the portal, and it took a couple of attempts to get right… but I got there. I also set up a web job to renew the certificates, but didn’t go through the hoops required to run those web jobs periodically. (There were instructions, but it looked like they’d take a while to work through compared with just manually running the web job every couple of months or so. I decided to take the pragmatic approach, knowing that I was expecting to move to GKE anyway. If Azure had been the expected permanent home for the apps, I’d have gone through the steps and I’m sure they’d have worked fine.) I don’t know which guide I worked through at the time, but if I were starting today I’d probably try Scott Hanselman’s guide.

So, what can I do on Google Cloud Platform instead? I decided to terminate the SSL connection at the load balancer, using Google-managed certificates. To be really clear, these are currently in beta, but have worked well for me so far. Terminating the SSL connection at the load balancer means that the load balancer forwards the request to the Kubernetes service as an HTTP request, not HTTPS. The ASP .NET Core app itself only exposes an HTTP port, so it doesn’t need to know any details of certificates.

The steps to achieve this are simple, assuming you have the Google Cloud SDK (gcloud) installed already:

  • Create the certificate, e.g.
    gcloud beta compute ssl-certificates create nodatime-org --domains nodatime.org
  • Attach the certificate to the load balancer, via the Kubernetes ingress in my case, with an annotation in the ingress metadata:
    ingress.gcp.kubernetes.io/pre-shared-cert: "nodatime-org"
  • Apply the modifications to the ingress:
    kubectl apply -f ingress.yaml
  • Wait for the certificate to become valid (the provisioning procedure takes a little while, and I’ve seen some interesting errors while that’s taking place)
  • Enjoy HTTPS support, with auto-renewing certificates!

There are only two downsides to this that I’ve experienced so far:

  • Currently each certificate can only be associated with a single domain. For example, I have different certificates for nodatime.org, www.nodatime.org and test.nodatime.org. (More about the last of these later.) This is a minor annoyance, but the ingress supports multiple pre-shared certificates, so it has no practical implications for me.
  • I had to accept some downtime on HTTPS when transferring from Azure to GKE, while the certificate was provisioning after I’d transferred the DNS entry. This was a one-time issue of course, and one that wouldn’t affect most users.

Beyond the basics

At this point I had working HTTPS URLs – but any visitor using HTTP would stay that way. (At the time of writing this is still true for csharpindepth.com and jonskeet.uk.) Obviously I’d like to encourage secure browsing, so I’d like to use the two pieces of functionality provided by ASP .NET Core:

  • Redirection of HTTP requests via app.UseHttpsRedirection()
  • HSTS support via app.UseHsts()

I should note here that the Microsoft documentation was fabulously useful throughout. It didn’t quite get me all the way, but it was really close.

Now, I could have just added those calls into the code and deployed straight to production. Local testing would have worked – it would have redirected from localhost:5000 on HTTP to localhost:5001 on HTTPS with no problems. It would also have failed massively for reasons we’ll look at in a minute. Just for a change, I happened to do the right thing…

For hosting changes, always use a test deployment first

In Azure, I had a separate AppService I could deploy to, called nodatimetest. It didn’t have a fancy URL, but it worked okay. That’s where I tested Azure-specific changes before deploying to the real AppService. Unfortunately, it wouldn’t have helped in this situation, as it didn’t have a certificate.

Fortunately, creating a new service in Kubernetes, adding it to the ingress, and creating a managed certificate is so easy that I did do this for the new hosting – and I’m so glad I did. I use a small script to publish the local ASP .NET Core build to Google Cloud Build, which does the Docker packaging, pushes the image to Google Container Registry and updates the Kubernetes deployment. As part of that script, I add a small text file containing the current timestamp so I can check that I’m really looking at the deployment I expect. It takes just under two minutes to build, push, package and deploy – not a tight loop you’d want for everyday development, but pretty good for the kind of change that can’t be tested locally.

So, I made the changes to use HTTPS redirection and HSTS, deployed, and… there was no obvious change.

Issue 1: No HTTPS port to redirect to

Remember how the ASP .NET Core app in Kubernetes is only listening on HTTP? That means it doesn’t know which port to redirect users to for HTTPS. Oops. While I guess it would be reasonable to guess 443 if it didn’t know any better, the default of “don’t redirect if you haven’t been told a port” means that your application doesn’t stop working if you get things wrong – it just doesn’t redirect.

This is easily fixed in ConfigureServices:

services.AddHttpsRedirection(options => options.HttpsPort = 443);

… but I’ve added conditional code so it doesn’t do that in the development environment, as otherwise it would try to redirect from localhost:5000 to localhost:443, which wouldn’t work. This is a bit hacky at the moment, which is a common theme – I want to clean up all the configuration at some point quite soon (moving things into appsettings.json as far as possible) but it’s just about hanging together for now.
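To give an idea of the shape of that conditional code, here’s a sketch – assuming the hosting environment is injected into Startup, which isn’t necessarily how the real code does it:

public class Startup
{
    private readonly IHostingEnvironment environment;

    public Startup(IHostingEnvironment environment) => this.environment = environment;

    public void ConfigureServices(IServiceCollection services)
    {
        // Locally, the default behaviour (redirecting to localhost:5001) is fine.
        // In GKE the app only listens on HTTP, so the port must be set explicitly.
        if (!environment.IsDevelopment())
        {
            services.AddHttpsRedirection(options => options.HttpsPort = 443);
        }
        // ... other service configuration
    }

    // ... Configure etc.
}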

So, make the change, redeploy to test, and… I observed infinite redirection in the browser. What?

Issue 2: Forwarding proxied headers

Remember again how the ASP .NET Core app is only listening on HTTP? We want it to behave differently depending on whether the end user made a request to the load balancer on HTTP or HTTPS. That means using headers forwarded from the proxy (in our case the load balancer) to determine the original request scheme. Fortunately, again there’s documentation on hand for this.

There are two parts to configuring this:

  • Configuring the ForwardedHeadersOptions in ConfigureServices
  • Calling app.UseForwardedHeaders() in Configure

(At least, that’s the way that’s documented. I’m sure there are myriad alternatives, but my experience level of ASP .NET Core is such that I’m still in “follow the examples verbatim, changing as little as possible” at the moment.)

I won’t go into the gory details of exactly how many times I messed up the forwarded headers options, but I will say:

  • The example which just changes options.ForwardedHeaders is probably fine if your proxy server is local to the application, but otherwise you will need to do extra work
  • The troubleshooting part of the documentation is spectacularly useful
  • There are warnings logged if you get things wrong, and those logs will help you – but they’re at a debug log level, so you may need to update your logging settings. (I only realized this after I’d fixed the problem, partly thanks to Twitter.)

Lesson to learn: when debugging a problem, turn on debugging logs. Who’d have thought?
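One way of turning on that extra logging in code (rather than in appsettings.json) is a filter in the web host setup. This is just a sketch, assuming the conventional Program.cs shape – and I believe the forwarded headers middleware logs under the Microsoft.AspNetCore.HttpOverrides category, but check your own logs rather than taking my word for it:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        // Surface the middleware's warnings, which are logged at debug level.
        .ConfigureLogging(logging =>
            logging.AddFilter("Microsoft.AspNetCore.HttpOverrides", LogLevel.Debug))
        .UseStartup<Startup>();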

Configuring this properly is an area where you really need to understand your deployment and how a request reaches you. In my case, the steps are:

  • The user’s HTTPS request is terminated by the load balancer
  • The load balancer makes a request to the Kubernetes service
  • The Kubernetes service makes a request to the application running on one of the suitable nodes

This leads to a relatively complex configuration, as there are two networks to trust (Google Cloud load balancers, and my internal Kubernetes network) and we need to allow two “hops” of proxying. So my configuration code looks like this:

services.Configure<ForwardedHeadersOptions>(options =>
{
    options.KnownNetworks.Clear();
    // Google Cloud Platform load balancers
    options.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("130.211.0.0"), 22));
    options.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("35.191.0.0"), 16));
    // GKE service which proxies the request as well.
    options.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("10.0.0.0"), 8));
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    options.ForwardLimit = 2;
});

(The call to KnownNetworks.Clear() probably isn’t necessary. The default is to include the loopback, which is safe enough to leave in the list.)

Okay, deploy that to the test environment. Everything will work now, right? Well, sort of…

Issue 3: make sure health checks are healthy!

As it happens, when I’d finished fixing issue 2, I needed to help at a birthday party for a family we’re friends with. Still, I went to the party happily knowing everything was fine.

I then came home and found the test deployment was broken. Really broken. “502 Bad Gateway” broken. For both HTTP and HTTPS. This is not good.

I tried adding more logging, but it looked like none of my requests were getting through to the application. I could see in the logs (thank you, Stackdriver!) that requests were being made, always to just “/” on HTTP. They were all being redirected to HTTPS via a 307, as I’d expect.

This confused me for a while. I honestly can’t remember what gave me the lightbulb moment of “Ah, these are load balancer health checks, and it thinks they’re failing!” but I checked with the load balancer in the Google Cloud Console and sure enough, I had multiple working backends, and one broken one – my test backend. The reason I hadn’t seen this before was that I’d only checked the test deployment for a few minutes – not long enough for the load balancer to deem the backend unhealthy.

I was stuck at this point for a little while. I considered reconfiguring the load balancer to make the health check over HTTPS, but I don’t think that could work as the app isn’t serving HTTPS itself – I’d need to persuade it to make the request as if it were a user-generated HTTPS request, with appropriate X-Forwarded-Proto etc headers. However, I saw that I could change which URL the load balancer would check. So how about we add a /healthz URL that would be served directly without being redirected? (The “z” at the end is a bit of Googler heritage. Just /health would be fine too, of course.)

I started thinking about adding custom inline middleware to do this, but fortunately didn’t get too far before realizing that ASP .NET Core provides health checking already… so all I needed to do was add the health check middleware before the HTTPS redirection middleware, and all would be well.

So in ConfigureServices, I added a no-op health check service:

services.AddHealthChecks();

And in Configure I added the middleware at an appropriate spot:

app.UseHealthChecks("/healthz");

After reconfiguring the health check on the load balancer, I could see /healthz requests coming in and receiving 200 (OK) responses… and the load balancer was then happy to use the backend again. Hooray!
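Putting the pieces together, the middleware ordering I ended up with looks roughly like this – simplified, with the application’s own middleware omitted:

public void Configure(IApplicationBuilder app)
{
    // Process X-Forwarded-For / X-Forwarded-Proto first, so the rest
    // of the pipeline sees the original scheme and client address.
    app.UseForwardedHeaders();
    // Answer health checks before any redirection, so the load
    // balancer's plain-HTTP probes always succeed.
    app.UseHealthChecks("/healthz");
    app.UseHttpsRedirection();
    app.UseHsts();
    // ... the rest of the application's pipeline (MVC etc.)
}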

After giving the test service long enough to fail, I deployed to production, changed the load balancer health check URL, and all was well. I did the two parts of this quickly enough so that it never failed – a safer approach would have been to add the health check handler but without the HTTPS redirection first, deploy that, change the health check URL, then turn on HTTPS.

But the end result is, all is working! Hooray!

Conclusion

Moving the service in the first place has been a long process, mostly due to a lack of time to spend on it, but the HTTPS redirection has been its own interesting bit of simultaneous pleasure and frustration. I’ve learned a number of lessons along the way:

  • The combination of Google Kubernetes Engine, Google Cloud Load Balancing, Google Cloud Build and Google Container Registry is pretty sweet.
  • Managed SSL certificates are wonderfully easy to use, even if there is a bit of a worrying delay while provisioning
  • It’s really, really important to be able to test deployment changes (such as HTTPS redirection) in an environment which is very similar to production, but which no-one is depending on. (Obviously if you have a site which few people care about anyway, there’s less risk. But as it’s easy to set up a test deployment on GKE, why not do it anyway?)
  • HTTPS redirection caused me three headaches, all predictable:
    • ASP .NET Core needs to know the HTTPS port to redirect to.
    • You need to configure forwarded headers really carefully, and know your deployment model thoroughly.
    • Be aware of health checks! Make sure you leave a test deployment “live” for long enough for the health checks to mark it as unhealthy if you’ve done something wrong, before you deploy to production.
  • When debugging, turn on debug logging. Sounds obvious in retrospect, doesn’t it? (Don’t start trying to copy middleware source code into your own application so you can add logging, rather than using the logging already there…)

I also have some future work to do:

  • There’s a particular URL (http://nodatime.org/tzdb/latest.txt) which is polled by applications in order to spot time zone information changes. That’s the bulk of the traffic to the site. It currently redirects to HTTPS along with everything else, which leads to the total traffic being nearly double what it was before, for no real benefit. I’ve encouraged app authors to use HTTPS instead, but I’ve also filed a feature request against myself to consider serving that particular URL without the redirect. It looks like that’s non-trivial though.
  • I have a bunch of hard-coded information which should really be in appsettings.json. I want to move all of that, but I need to learn more about the best way of doing it first.

All in all, this has been a very positive experience – I hope the details above are useful to anyone else hosting ASP .NET Core apps in Google Kubernetes Engine.

Diagnosing a Linux-only unit test failure

This is an odd one. I’m currently working on Cloud Firestore support for C#, and I’ve introduced a GeoPoint struct to represent a latitude/longitude pair, each part being represented by a double. It implements IEquatable<GeoPoint> and overloads == and != in the obvious way. So far, so good, and I have working tests which have passed on CI.

I’ve added some more types that implement equality, so I thought I’d extract out common code into an EqualityTester class that can handle both the Equals methods and the operators (the latter via reflection).
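The details of the real class don’t matter too much, but the operator side of it works along these lines – a simplified sketch with invented names, assuming xUnit’s Assert, rather than the actual library code:

using System;
using System.Reflection;
using Xunit;

internal static class EqualityTester
{
    internal static void AssertEqual<T>(T control, T equal) where T : IEquatable<T>
    {
        // The Equals methods can be called directly...
        Assert.True(control.Equals(equal));
        Assert.True(control.Equals((object) equal));

        // ... but the operators are static, so they're fetched via reflection.
        var typeInfo = typeof(T).GetTypeInfo();
        var equality = typeInfo.GetMethod("op_Equality", new[] { typeof(T), typeof(T) });
        var inequality = typeInfo.GetMethod("op_Inequality", new[] { typeof(T), typeof(T) });
        Assert.True((bool) equality.Invoke(null, new object[] { control, equal }));
        Assert.False((bool) inequality.Invoke(null, new object[] { control, equal }));
    }
}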

All went well on my local machine, but on CI one environment failed. We use both AppVeyor and Travis in CI, and only Travis was failing. There were two failures – one testing the equality operators, and one as part of a broader serialization/deserialization test. Odd. Time to go through the normal diagnostic process…

Step 1: Check there’s really a problem

The nature of the failing tests didn’t suggest an intermittent problem, but it’s always good to get more of an idea. I forced Travis to rebuild the pull request a couple of times: yes, it failed the same two tests each time.

Step 2: Reproduce locally

Tests that only fail in CI are really annoying. I’ve had this a couple of times, and it’s always taken hours to sort out, because the “try something, get results, think about what they mean” cycle is so slow.

Fortunately, I have a Linux box at home – so I fetched the pull request and ran the tests there – and they failed. Hooray! It’s not quite as nice as being able to reproduce it on my Windows laptop, but it’s much better than being isolated to CI.

Step 3: Remove irrelevant test code

There are two parts to the code that’s failing: the unit tests, and the production code. Ideally, I’d like to get rid of both, to create a truly minimal example. I usually try to take out one project at a time, and I could have gone either way – but this time I decided to remove the testing first.

So in the same repo, I created a new netcoreapp1.0 console application, with a project reference to the production code. I moved the operator part of my equality tester code into Program.cs, simplified it slightly, introduced my own Assert class with True and False methods that printed out the result, and I was ready to run:

using System;
using System.Reflection;
using Google.Cloud.Firestore.Data;

class Program
{
   static void Main(string[] args)
   {
       var gp1 = new GeoPoint(1.5, 2.5);
       var gp2 = new GeoPoint(1.5, 2.5);
       AssertEqualityOperators(gp1, gp2);
   }

   internal static void AssertEqualityOperators<T>(T control, T equal)
   {
       var typeInfo = typeof(T).GetTypeInfo();
       var equalityMethodInfo = typeInfo.GetMethod(
           "op_Equality", new[] { typeof(T), typeof(T) });
       Func<T, T, bool> equality = 
           (lhs, rhs) => (bool) equalityMethodInfo.Invoke(null, new object[] { lhs, rhs });

       Assert.True(equality(control, equal));
   }
}

class Assert
{
   public static void True(bool actual)
       => Console.WriteLine($"Expected True; was {actual}");

   public static void False(bool actual)
       => Console.WriteLine($"Expected False; was {actual}");
}

Sure enough, the output shows the problem:

Expected True; was False

Great. Now let’s remove the other dependency…

Step 4: Remove production code dependency

Rather than copy the whole of GeoPoint into my test project, I just created a small Test struct which had similar equality code.

Dead-end: a single double value

I made a mistake here – I changed too much in one go. I reduced my Test struct to just one double field:

using System;
using System.Reflection;

class Program
{
   static void Main(string[] args)
   {
       Test t1 = new Test(1.5);
       Test t2 = new Test(1.5);

       var equalityMethodInfo = typeof(Test)
           .GetMethod("op_Equality", new[] { typeof(Test), typeof(Test) });
       Func<Test, Test, bool> equality =
           (lhs, rhs) => (bool) equalityMethodInfo.Invoke(null, new object[] { lhs, rhs });
       Console.WriteLine(t1 == t2);
       Console.WriteLine(equality(t1, t2));
       Console.WriteLine(equalityMethodInfo.Invoke(null, new object[] { t1, t2 }));
   }
}

struct Test : IEquatable<Test>
{
    private readonly double value;

    public Test(double value)
    {
        this.value = value;
    }

    public static bool operator==(Test lhs, Test rhs) =>
        lhs.value == rhs.value;

    public static bool operator!=(Test lhs, Test rhs) => !(lhs == rhs);

    public override bool Equals(object obj) =>
        obj is Test t && Equals(t);

    public bool Equals(Test other) => this == other;

    public override int GetHashCode() => value.GetHashCode();
}

That prints “True” three times – which is what we’d like it to do eventually, but it means it’s not reproducing the problem we want it to.

Back to reproducing it: two double values

Taking one step back towards the previous code, I went back to two double values instead of just one, expanding the Test struct like this:

struct Test : IEquatable<Test>
{
    private readonly double x;
    private readonly double y;

    public Test(double x, double y)
    {
        this.x = x;
        this.y = y;
    }

    public static bool operator==(Test lhs, Test rhs) =>
        lhs.x == rhs.x && lhs.y == rhs.y;

    public static bool operator!=(Test lhs, Test rhs) => !(lhs == rhs);

    public override bool Equals(object obj) =>
        obj is Test t && Equals(t);

    public bool Equals(Test other) => this == other;

    public override int GetHashCode() => x.GetHashCode() + y.GetHashCode();
}

Change both constructor calls to Test(1.5, 2.5) and bingo: it prints True, False, False.

Step 5: Explore the scope of the problem

I tried the exact same code on Windows, and (unsurprisingly, given that my earlier unit tests passed) it didn’t reproduce the problem. But sticking with Linux, we can still try multiple frameworks and multiple build configurations.

I wouldn’t like to say what prompted me to try different frameworks – intuition can be hard to explain, unfortunately. But let’s change the project file a bit:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFrameworks>netcoreapp1.0;netcoreapp1.1;netcoreapp2.0</TargetFrameworks>
  </PropertyGroup>
</Project>

And then run it in several different configurations (remember that three “True” lines means “everything is working”, and “True”, “False”, “False” means “this is the problem we’re investigating”):

$ dotnet run -c Debug -f netcoreapp1.0
True
False
False
$ dotnet run -c Release -f netcoreapp1.0
True
False
False
$ dotnet run -c Debug -f netcoreapp1.1
True
True
True
$ dotnet run -c Release -f netcoreapp1.1
True
True
True
$ dotnet run -c Debug -f netcoreapp2.0
True
True
True
$ dotnet run -c Release -f netcoreapp2.0
True
True
True

So, it fails under .NET Core 1.0, but works under .NET Core 1.1 and .NET Core 2.0. That certainly makes it sound like a .NET Core 1.0 bug, but that doesn’t mean I want to give up at this point. It’s entirely possible that I have users who are using .NET Core 1.0, so I want my code to work under it.

Now, I’ve only given major/minor version numbers – and that’s partly because I don’t have a very good handle on the different parts of .NET Core on Linux and how to find the version numbers. The 1.0 packages I’ve got installed are listed as:

  • dotnet-dev-1.0.4/xenial,now 1.0.4-1 amd64 [installed]
  • dotnet-hostfxr-1.0.1/xenial,now 1.0.1-1 amd64 [installed]
  • dotnet-sharedframework-microsoft.netcore.app-1.0.5/xenial,now 1.0.5-1 amd64 [installed,automatic]

No dotnet-runtime package, which I believe is part of 2.0 instead.

Step 6: Look into the failed comparison

This part is really simple – just add more diagnostics to the == method:

public static bool operator==(Test lhs, Test rhs)
{
    Console.WriteLine($"lhs=({lhs.x}, {lhs.y}); rhs=({rhs.x}, {rhs.y})");
    Console.WriteLine($"{lhs.x == rhs.x}, {lhs.y == rhs.y}");
    return lhs.x == rhs.x && lhs.y == rhs.y;
}

Now things get really interesting in the output:

lhs=(1.5, 2.5); rhs=(1.5, 2.5)
True, True
True
lhs=(1.5, 1.85492638478664E-316); rhs=(1.5, 6.95251497332956E-310)
True, False
False
lhs=(1.5, 6.9525149733517E-310); rhs=(1.5, 6.9525149733351E-310)
True, False
False

The first three lines are as expected – and that’s the working comparison. But each of the three-line segments for the second and third comparisons shows the first field (x) being fine, and the second (y) being some tiny double value – an inconsistent one, at that.

Step 7: Try (and fail) to simplify

Maybe boxing is to blame? How about we try a really simple version of boxing – just override ToString(), box a value and see what comes up. The change in Test is simply:

public override string ToString() => $"{x}, {y}";

And then to test it:

object x = new Test(1.5, 2.5);
Console.WriteLine(x);

This prints 1.5, 2.5 – so it appears to be boxed correctly.

Step 8: Try simplifying a bit less

In our failing program, there are two potential complications at play: we’re boxing the value, and we’re calling a method via reflection. If just boxing isn’t a problem, maybe reflection is?

I decided to try two things at once: an instance method and a static method accepting a Test parameter (like the operator). Now we’re not doing any comparisons, we can get rid of a lot of code. Here’s the new complete code:

using System;
using System.Reflection;

class Program
{
   static void Main(string[] args)
   {
       Test t = new Test(1.5, 2.5);
       var foo = typeof(Test).GetMethod("Foo");
       foo.Invoke(t, null);
       var bar = typeof(Test).GetMethod("Bar");
       bar.Invoke(null, new object[] { t });
   }
}

struct Test
{
    private readonly double x;
    private readonly double y;

    public Test(double x, double y)
    {
        this.x = x;
        this.y = y;
    }

    public void Foo() => Console.WriteLine($"{x}, {y}");
    public static void Bar(Test t) => Console.WriteLine($"{t.x}, {t.y}");
}

And now the results:

1.5, 2.5
1.5, 6.95260190884551E-310

Interesting! At this point, there are lots of things that I could check:

  • Does it affect all parameters?
  • Does it happen for other field types? Other primitives? References?
  • Can I provoke it in any circumstances other than via reflection?

Most of these can probably be answered by finding an existing issue that was fixed in .NET Core 1.1… which is reasonably feasible given the open source nature of .NET Core.

That can all wait though (it’s a Sunday evening for me, after all…) – let’s see if we can fix the problem first.

Step 9: Fix it with a stroke of luck

Given that this feels like a memory management issue, let’s try something I’ve basically not used before: explicit struct layout.

A struct containing two double fields should be pretty easy to lay out: just fields at offsets 0 and 8, right? Admittedly it also sounds like it should be easy for the CLR to get right itself… but at least laying it out explicitly is low-risk.

using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
struct Test
{
    [FieldOffset(0)]
    private readonly double x;
    [FieldOffset(8)]
    private readonly double y;

    ...
}

Normally, things don’t get fixed on first attempt, but in this case… it just worked!

I then backtracked from the simplification, to the standalone comparison version, to fixing GeoPoint (which involved no longer using automatically implemented properties, as I wanted to apply the attribute to the field, not the property – boo!) and checking it worked locally, to adding a commit to the PR and seeing it pass on Travis. Yay!
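For reference, the fixed GeoPoint looks something like this – an illustrative sketch rather than the exact library code:

using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
public struct GeoPoint : IEquatable<GeoPoint>
{
    // These used to be automatically implemented properties; [FieldOffset]
    // can only be applied to fields, hence the explicit backing fields.
    [FieldOffset(0)]
    private readonly double latitude;
    [FieldOffset(8)]
    private readonly double longitude;

    public double Latitude => latitude;
    public double Longitude => longitude;

    public GeoPoint(double latitude, double longitude)
    {
        this.latitude = latitude;
        this.longitude = longitude;
    }

    public bool Equals(GeoPoint other) =>
        latitude == other.latitude && longitude == other.longitude;

    public override bool Equals(object obj) => obj is GeoPoint other && Equals(other);

    public override int GetHashCode() => latitude.GetHashCode() + longitude.GetHashCode();

    public static bool operator ==(GeoPoint lhs, GeoPoint rhs) => lhs.Equals(rhs);
    public static bool operator !=(GeoPoint lhs, GeoPoint rhs) => !lhs.Equals(rhs);
}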

Next steps and points to note

My next step is definitely to try to find some history of this issue. It’s a surprisingly big problem to affect the GA .NET Core 1.0 runtime, so I expect there’s some record of it somewhere. I want to understand better what was wrong, whether my fix is appropriate, and whether I need to apply it elsewhere.

Update: this is still an issue in the 1.0.7 runtime, and apparently even in the 1.0.8-servicing daily builds, so I’ve filed an issue and look forward to seeing what happens there.

Some interesting aspects:

  • If you initially see a problem in CI, your top priority is to reproduce it locally.
  • As ever, remove as much other code as you can – your production code, your test code, all dependencies.
  • It’s handy to have multiple execution environments available. If I hadn’t had a local Linux box, this would have been a nightmare.
  • Try different configurations and different runtimes/frameworks.
  • Sometimes the bug really is in the runtime or BCL. It’s pretty rare, but it does happen.

Diagnosing a VS-only build problem

I do most of my work in the google-cloud-dotnet GitHub repo. (That’s the repo for the Google Cloud Client Libraries for .NET, just to get a quick bit of marketing in there.) We try to keep our build and test dependencies up to date, so I recently updated to the latest versions of Microsoft.NET.Test.Sdk and xUnit.

Leaving aside problems identified by the xUnit analyzer which is now bundled in the xUnit package (some of which were useful, and some of which weren’t), this change caused another problem: while building from the command line worked fine, building some of the many solutions from Visual Studio (15.3.3) generated the following error:

Error MSB4018 The “GenerateBindingRedirects” task failed unexpectedly.
System.IO.PathTooLongException: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
at System.IO.PathHelper.GetFullPathName()
at System.IO.Path.LegacyNormalizePath(…)
at System.IO.Path.NormalizePath(…)
at System.IO.Path.NormalizePath(…)

(more stack trace here)

The output window shows a file related to the error:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2099,5)

So, what’s going on, and how do we fix it?

Step 1: Check that this change is responsible

Git makes life really easy here. For peace of mind – particularly for this kind of issue – I tend to close Visual Studio when switching branches, so:

  • Find a solution that fails (e.g. Google.Cloud.Diagnostics.AspNetCore)
  • Close VS
  • git checkout master
  • Rebuild solution in VS – looks fine
  • Close VS
  • git checkout branch with change
  • Rebuild in VS – observe error
  • git checkout master
  • Rebuild solution in VS – looks fine
  • Close VS
  • git checkout branch with change
  • Rebuild in VS – observe error

Yes, I did everything twice, just to make sure it wasn’t an anomaly.

So yes, it’s definitely a problem.

Step 2: Moan on Twitter

This isn’t the most scientific diagnostic tool in the world, but posting on Twitter about the problem did at least reassure me that I wasn’t alone.

Step 3: Narrow down the cause

Even though I observed the problem in Google.Cloud.Diagnostics.AspNetCore, the full error message in VS referred to Google.Cloud.Diagnostics.Common.IntegrationTests. That’s part of the Google.Cloud.Diagnostics.Common solution – the AspNetCore projects depend on the Common projects.

Try step 1 again with Google.Cloud.Diagnostics.Common (just one pass and one failure this time) – yes, it’s still a problem there. That’s useful.

Step 4: Try a workaround

All my source is under c:\Users\Jon\Test\Projects. The “Test” part is a bit of an anomaly, and next time I get a new machine, I’ll probably work without it, but there’s no need to change now. The Projects directory is actually a junction (symlink) to the c:\Users\Jon\Documents\Visual Studio 2015\Projects directory. That’s quite a long path to start with… let’s see if getting rid of the symlink helps.

  • Delete symlink
  • Move the Projects directory from under Documents to directly under Test
  • Try to build: same failure

Okay, it looks like we’ll have to be a bit more subtle.

Step 5: Play spot the difference

Given that we have code which builds on the command line in both the working and failing situations, we can reasonably easily see any differences in generated files.

  • On the command line, go into the Google.Cloud.Diagnostics.Common.IntegrationTests directory
  • Delete bin and obj
  • git checkout master
  • Run dotnet build
  • Copy the resulting bin and obj directories to a “working” directory
  • git checkout branch with change
  • Delete bin and obj
  • Run dotnet build
  • Copy the resulting bin and obj directories to a “broken” directory
  • Run kdiff3 against the working and broken directories

There are lots of differences between the directories, as I’d expect, but given that this is about binding redirects, it’s reasonable to use experience and start scanning for filenames ending with .config.

Sure enough, in the “broken” directory, under obj\net452, there was a file called Google.Cloud.Diagnostics.Common.IntegrationTests.csproj.Google.Cloud.Diagnostics.Common.IntegrationTests.dll.config. That’s not a copy and paste error – it really is a 115-character-long filename, even leaving out any directory parts.

In the file system, the full path is: c:\users\jon\Test\Projects\google-cloud-dotnet\apis\Google.Cloud.Diagnostics.Common\Google.Cloud.Diagnostics.Common.IntegrationTests\obj\Debug\net452\Google.Cloud.Diagnostics.Common.IntegrationTests.csproj.Google.Cloud.Diagnostics.Common.IntegrationTests.dll.config – that’s 266 characters.

Step 6: Try being slightly cheeky: very temporary workaround

Okay, so moving away from the “Documents\Visual Studio 2015” directory didn’t help much, but given that we’re just on the limit, let’s try just renaming “google-cloud-dotnet” to “short” (very temporarily).

Try opening it in Visual Studio – wahoo, it works :) The .config file is generated by Visual Studio correctly.

Step 7: Work out who to blame

So, where’s the bug?

  • It’s a shame that Visual Studio 2017 doesn’t support long filenames, even though the dotnet CLI does
  • Surely we don’t need such a long filename anyway
  • Do I need the project name to be so long?

Looking back to the very first description, let’s have a look at the msbuild file that’s mentioned: C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets

That has:

<GenerateBindingRedirects
  AppConfigFile="@(AppConfigWithTargetPath)"
  TargetName="$(TargetFileName).config"
  OutputAppConfigFile="$(_GenerateBindingRedirectsIntermediateAppConfig)"
  SuggestedRedirects="@(SuggestedBindingRedirects)"
/>

So it looks like the problem is _GenerateBindingRedirectsIntermediateAppConfig, which is defined in a property group elsewhere as:

<_GenerateBindingRedirectsIntermediateAppConfig>$(IntermediateOutputPath)$(MSBuildProjectFile).$(TargetFileName).config</_GenerateBindingRedirectsIntermediateAppConfig>

That certainly looks like it’s responsible for the choice of file.

A quick search for _GenerateBindingRedirectsIntermediateAppConfig shows that I’m not the first person to have run into this – PathTooLongException with _GenerateBindingRedirectsIntermediateAppConfig describes exactly the pain I’ve been going through, and even helpfully references MSBuild should handle paths longer than MAX_PATH.

I’ll add my little bit of feedback to the former issue as soon as I’ve published this post (so I can link to it).

Step 8: Decide what to do

My options appear to be:

  1. Change the project name or the directory structure
  2. Ask everyone on the team to work from a really short root directory name
  3. Downgrade my dependencies again
  4. Live with Visual Studio not building a few solutions properly – we can still work in Visual Studio, so long as we don’t need a full build in it.

I went with option 4, and published this blog post.

Step 9: Embrace community feedback

Those were the only options I considered at the time of original writing, partly because I was somewhat scared of this suggestion, being inexperienced in MSBuild. However, combining that suggestion with a Directory.Build.targets file (which I didn’t know about before being tipped off by Nick Guerrera) made it very easy to implement a repo-wide workaround.

My friend Kirill Osenkov also pointed me at his MSBuild Log Viewer, noting that if the build failed in Visual Studio, it would probably fail from MSBuild on the command line too. He was right, and the log viewer pinpointed what it was trying to write, which would have saved me time.

Points to note

  • Situations where you have a working case and a broken case are great. They can help you validate your problem (does it go away if I revert to the previous commit?) and find the cause (what’s the difference in output between working and broken?)
  • Temporary approaches to double-check the diagnosis (such as me renaming my source directory to “short”) can be useful – don’t forget to undo them though!
  • If you’ve run into a problem, someone else probably has too
  • The .NET community can be really amazing – I was really impressed by the response here
  • Tools are often available – in this case, msbuildlog.com would have saved me quite a bit of time.