New book: Software Mistakes and Tradeoffs

I’m delighted to announce that I’ve been hard at work contributing to a new book.

The book is called “Software Mistakes and Tradeoffs: How to make good programming decisions” and the principal author is Tomasz Lelek. The book was Tomasz’s idea, and he’s written the vast majority of the material, but I’ve contributed a chapter on handling date/time information, and another one around versioning (for libraries, network APIs and storage).

The aim of the book isn’t to provide answers: it’s to help you think carefully in your specific context, and ask the right questions. Tomasz and I have both made plenty of mistakes over the course of our careers – or been adjacent to other engineers making those mistakes. The choices that have been mistakes for us might not be a mistake for you – but it’s better to go into those choices with your eyes open to the trade-offs involved, and where they can lead in different situations.

This isn’t a book about a specific technology, although of course it demonstrates the ideas using examples which are specific. Almost all of the examples are in Java, but if you’re not a Java developer that really shouldn’t put you off: the ideas are easily transferable to other environments. (In particular, if you understand C# it’s very unlikely that Java syntax will faze you.)

We’ve just launched the book into MEAP (Manning Early Access Program), with an estimated publication date of “fall 2021” (which means I really need to get on with polishing up my versioning chapter). The first seven chapters are available in the MEAP right now, which includes my date/time chapter.

What about C# in Depth?

You may be wondering where that leaves C# in Depth. The 4th edition of C# in Depth covers C# up to version 7, with a chapter looking ahead to C# 8 (which wasn’t finalized at the time of publication). That means I’m already two versions behind. So, what am I going to do about that?

The short answer is: nothing just yet. I haven’t started a 5th edition.

The longer answer is: yes, I definitely want to write a new edition at some point. However, I suspect the structure will need to change entirely (from version-based to topic-based) and I expect it to take a long time to write. Additionally, I have an idea around a diagnostics book which has morphed several times, but which I’m still keen on… and if I can get traction for that, it will probably take priority over C# in Depth, at least for a while.

So yes, one day… but probably sufficiently far in the future that it’s not worth asking any more until I announce something.

Playing with an X-Touch Mini controller using C#

Background

As I wrote in my earlier blog post about using OSC to control a Behringer XR16, I’m working on code to make our A/V system at church much easier to work with. From an audio side, I’ve effectively accomplished two goals already:

  • Remove the intimidating hardware mixer with about 150 physical knobs/buttons
  • Allow software to control both audio and visual aspects at the same time (for example, switching from “displaying the preacher with their microphone on” to “displaying hymn words with all microphones muted”)

I have a third goal, however: to accommodate those who are willing to work the A/V system but would really prefer to hardly touch a computer. That requires bringing hardware back into the system. Having removed a mixer, I want to introduce something a bit like a mixer again, but keeping it much simpler (and still with the ability for software to do most of the heavy lifting). While I haven’t blogged about it (yet), I’m expecting most of the “work” during a service to be performed via a Stream Deck XL. The technical aspects of that aren’t terribly interesting, thanks to the highly competent StreamDeckSharp library. But there are plenty of interesting UI design decisions – some of which may be simple to those who know about UI design, but which I find challenging due to a lack of existing UI skills.

The Stream Deck only provides press buttons though – there’s no form of analog control, which is typically what you want to adjust an audio level. Also, I’m not expecting the Stream Deck to be used in every situation. If someone is hosting a meeting with a speaker who needs a microphone, but the meeting isn’t being shared on Zoom, and there are no slides (or just one slide deck for the whole meeting), then there’s little benefit in going for the full experience. You just want a simple way of controlling the audio system.

For those who are happy using a computer, I’m providing a very simple audio mixer app written in WPF for this – a slider and a checkbox per channel, basically. I’ve been looking for the best way to provide a similar experience for those who would prefer to use something physical, but without adding significant complexity or financial cost.

Physical control surfaces

I’ve been looking at all kinds of control surfaces for this purpose, and my previous expectation was that I’d use a Loupedeck Live. I’m currently somewhat blocked by the lack of an SDK for it (which will hopefully go public in the summer) but I’m now not sure I’ll use it in the church building anyway. (I’m sure I’ll find fun uses for it at home though. I don’t regret purchasing one.) My other investigations for control surfaces found the Monogram modular system which looks amazing, but which is extremely expensive. In an ideal world, I would like a control surface which has the following properties, in roughly descending order of priority:

  1. Easy to interact with from software (e.g. an SDK, network protocol, MIDI or similar – with plenty of documentation)
  2. Provides analog, fine-grained control so that levels can be adjusted easily
  3. Provides visual output for the state of the system
  4. Modular (so I can have just the controls we need, to keep it simple and unintimidating) or at least available with “roughly the set of controls we need and not much more”
  5. Has small displays (like the Stream Deck) so channel labels (etc) could be updated dynamically in software

Point 3 is an interesting one, and is where most options fall down. The two physical form factors that are common for adjusting levels are rotary knobs and faders (aka sliders). Faders are frankly a little easier to use than knobs, but both are acceptable. The simple version of both of these assumes that it has complete control over the value being adjusted. A fader’s value is simply its vertical position. Simple knobs often have a line or other indication of the current value, and hard stops at the ends of the range (i.e. if you turn them to the maximum or minimum value, you can’t physically turn them any further). Likewise, simple buttons used for muting are usually pressed or not, on a toggle basis.

All of these simple controls are inappropriate for the system I want to build, because changes from other parts of the system (e.g. my audio mixer app, or the full service presentation app, or the X-Air application that comes with XR mixers) couldn’t be shown on the physical control surface: anyone looking at the control surface would get the wrong impression of the current state.

However, there are more flexible versions of each control:

  • There are knobs which physically allow you to keep turning them forever, but which have software-controlled rings of lights around them to show the current logical position.
  • There are motorized faders, whose position can be adjusted by software
  • There are buttons that always come back up (like keys on a keyboard) but which have lights to indicate whether they’re logically “on” or “off”.

If I had unlimited budget, I’d probably go with motorized faders (although it’s possible that they’d be a bit disconcerting at first, moving around on their own). They tend to be only available on expensive control surfaces though – often on systems which do far more than I actually want them to, with rather more controls than I want. The X-Touch Compact is probably the closest I’ve found, but it’s overkill in terms of the number of controls, and costs more than I want to spend on this part of the system.

Just to be clear: I have nothing against control surfaces which don’t meet my criteria. What I’m building is absolutely not the typical use case. I’m sure all the products I’ve looked at are highly suitable for their target audiences. I suspect that most people using audio mixers either as a hobby or professionally are tech savvy and don’t mind ignoring controls they don’t happen to need right now. If you’re already using a DAW (Digital Audio Workstation), the hardware complexity we’re talking about is no big deal. But the target audience for my system is very, very different.

Enter the X-Touch Mini

Last week, I found the X-Touch Mini – also by Behringer, but I want to stress that this is pretty much coincidental. (I could use the X-Touch Mini to control non-Behringer mixers, or a non-Behringer controller for the XR16/XR18.) It’s not quite perfect for our needs, but it’s very close.

It consists of the following controls:

  • 8 knobs without hard stops, and with light ring displays. These can also be pressed/released.
  • A top row of 8 unlabeled buttons
  • A bottom row of labeled buttons (MC, rewind, fast forward, loop, stop, play and record)
  • One unmotorized fader
  • Two “layer” buttons

My intention is to use these as follows:

  • Knobs will control the level of each individual channel, with the level being indicated by the light ring
  • Unlabeled buttons will control muting, with the buttons for active (unmuted) channels lit
  • The fader will control the main output volume (which should usually be at 0 dB)

That leaves the following aspects unused:

  • The bottom row of buttons
  • Pressing/releasing knobs
  • The layer buttons

The fact that the fader isn’t motorized is a minor inconvenience, but the fact that it won’t represent the state of the system (unless the fader was the last thing to change it) is relatively insignificant compared with the state of the channels. We tend to tweak individual channels much more than the main volume… and if anyone does set the main volume from the X-Touch Mini, it’s likely that they’ll do so consistently for the whole event, so any “jump” in volume would only happen once.

So, that’s the physical nature of the device… how do we control it?

Standard mode and Mackie Control mode

One of the recurring themes I’ve found with audio equipment is that there are some really useful protocols that are woefully under-documented. That’s often because different physical devices will interpret the protocol in slightly different ways to account for their control layout etc. I completely understand why it’s tricky (and that writing documentation isn’t particularly fun – I’m as guilty as anyone else of putting it off) but it’s still frustrating. This also goes back to me not being a typical user, of course. I suspect that the vast majority of users can plug the X-Touch Mini into their computers, fire up their DAW and get straight to work with it, potentially configuring it within the DAW itself.

Still, between the user manual (which is generally okay) and useful pages scattered around the web (particularly this Stack Overflow answer) I’ve worked things out a lot more. The interface is entirely through MIDI messages (over USB). Fortunately, I already have a fair amount of experience in working with MIDI from C#, via my V-Drum Explorer project. The controller acts as both a MIDI output (for button presses etc) and a MIDI input (to receive light control messages and the like).

The X-Touch Mini has two different modes: “standard” mode, and Mackie Control mode. That’s what the MC on the bottom row of buttons means; that button is used to switch modes while it’s starting up, but can be used for other things once it’s running. The Mackie Control protocol (also known as Mackie Control Universal or MCU) is one of the somewhat-undocumented protocols in the audio world. (It may well be documented exhaustively across the web, but it’s not like there’s one obviously-authoritative source with all the details you might like which comes up with a simple search.)

In standard mode, the X-Touch Mini expects to be primarily in charge of the “display” aspect of things. While you can change the button and knob lights through software, next time you do anything with that control it will reset itself. That’s probably great for simple integrations, but makes it harder to use as a “blank canvas” in the way that I want to. Standard mode is also where the layer buttons have meaning: there are two layers (layer A and layer B), effectively doubling the number of knobs/buttons, so you could handle 16 channels, channels 1-8 on layer A and channels 9-16 on layer B.

In Mackie Control mode, the software controls everything. The hardware doesn’t even keep a notional track of the position of a knob – the messages are things like “knob 1 moved clockwise with speed 5” etc. Very slightly annoyingly, although there are 13 lights in the light ring around each knob, only 11 are accessible within Mackie Control Mode – due to limitations of the protocol, as I understand it. But other than that, it’s pretty much exactly what I want: direct control of everything, without the X-Touch Mini getting in the way by thinking it knows what I want it to do.
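
To make that a bit more concrete, here’s a rough sketch of how a knob-turn message can be interpreted in Mackie Control mode. The details (CC numbers 16-23 for the eight knobs, direction in the high bit of the value, speed in the remaining bits) are my understanding from scattered documentation rather than anything official, so treat them as assumptions:

using System;

static class MackieKnobSketch
{
    // Handles a MIDI Control Change message from the X-Touch Mini in Mackie Control mode.
    public static void HandleControlChange(byte ccNumber, byte ccValue)
    {
        if (ccNumber < 16 || ccNumber > 23)
        {
            return; // Not one of the eight knobs.
        }
        int knob = ccNumber - 16 + 1;           // 1-based knob number
        bool clockwise = (ccValue & 0x40) == 0; // High bit of the 7-bit value clear = clockwise
        int speed = ccValue & 0x3F;             // Remaining bits = how fast the knob was turned
        Console.WriteLine($"Knob {knob} moved {(clockwise ? "clockwise" : "anticlockwise")} with speed {speed}");
    }

    static void Main() => HandleControlChange(16, 0x05); // "knob 1 moved clockwise with speed 5"
}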

I’ve created a library which allows you to use the X-Touch Mini in both modes, in a reasonably straightforward way. It doesn’t try to abstract away the differences between the two modes, beyond the fact that both allow you to observe button presses, knob presses, and knob turns as events. There’s potentially a little more I could do to push commonality up the stack a bit, but I suspect it would rarely be useful – I’d expect most apps to work in one mode or the other, but not both.

Interfacing with the XR16/XR18

This part was the easy bit. The audio mixer WPF app has a model of “a channel” which allows you to send an update request, and provides information about the channel name, fader position, mute state, and even current audio level. All I had to do was translate MIDI output from the X-Touch Mini into changes to the channel model, and translate property changes on the channel model into changes to the light rings and button lights. The code for this, admittedly without any tests and very few comments, is under 200 lines in total (including using directives etc).

It’s not always easy to imagine what this looks like in reality, so I’ve recorded a short demo video on YouTube. It shows the X-Touch Mini, along with X-Air and the WPF audio mixer app, all synchronized and working together beautifully. (I don’t actually demonstrate the main volume fader on the video, but I promise it works… admittedly the values on the physical fader don’t all align perfectly with the values on the mixer, but they’re not far off… and the important 0 dB level does line up.)

One thing I show in the demo is how channels 3 and 4 form a stereo pair in the mixer. The X-Touch Mini code doesn’t have any configuration telling it that at all, and yet the lights all work as you’d want them to. This is a pleasant quirk of the way that the lighting code is hooked up purely to the information provided by the mixer. When you press a button to unmute a channel, for example, that code sends a request to the mixer, but does not light the button. The light only comes on because the mixer then notifies everything that the channel has been unmuted. When you do anything with channels 3 or 4, the mixer notifies all listening applications about changes to both 3 and 4, and the X-Touch Mini just reacts accordingly to update the light ring or button. It makes things a lot simpler than having to try to keep an independent model of what the X-Touch Mini “thinks” the mixer state is.
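
To show the shape of that wiring, here’s a self-contained sketch using invented names (the real code in the repo is organized differently). The point is that the button handler only ever sends a request, and the light is only ever updated in response to a notification from the mixer:

using System;

class MuteButtonSketch
{
    // Stand-ins for "send an OSC mute request to the mixer" and "set the button light".
    private readonly Action<bool> sendMuteRequestToMixer;
    private readonly Action<bool> setButtonLight;

    private bool muted;

    public MuteButtonSketch(Action<bool> sendMuteRequestToMixer, Action<bool> setButtonLight) =>
        (this.sendMuteRequestToMixer, this.setButtonLight) = (sendMuteRequestToMixer, setButtonLight);

    // Called when the X-Touch Mini button is pressed: ask the mixer to toggle, but don't touch the light.
    public void OnButtonPressed() => sendMuteRequestToMixer(!muted);

    // Called when the mixer notifies listeners of a change (whoever caused it): now update the light.
    public void OnMixerReportedMuteChange(bool nowMuted)
    {
        muted = nowMuted;
        setButtonLight(!nowMuted); // Lit = active (unmuted)
    }
}

The same approach applies to knob turns and the light rings – the hardware never has a model of the mixer state of its own.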

I was slightly concerned to start with that this aspect of the design would make it unresponsive when turning a knob: several MIDI events are generated, and if the latency between “send request to mixer” and “mixer notifies apps of change” was longer than the gap between the MIDI events, that would cause problems. Fortunately, that doesn’t seem to be the case – the mixer responds very quickly, before the follow-up MIDI requests from the X-Touch for continued knob turning are sent.

Show me the code!

All the code for this is in my GitHub DemoCode repo, in the XTouchMini directory.

Unless you happen to have an X-Touch Mini, it probably won’t be much use to you… but you may want to have a look at it anyway. I don’t in any way promise that it’s rock-solid, or particularly elegant… but it’s a reasonable start, I think.

That’s all for now… but I’m having so much fun with hardware integration projects that I wouldn’t be surprised to find I’m writing more posts like this over the summer.

OSC mixer control in C#

In some senses, this is a follow on from my post on VISCA camera control in C#. It’s about another piece of hardware I’ve bought for my local church, and which I want to control via software. This time, it’s an audio mixer.

Audio mixers: from hardware controls to software controls

The audio mixer we’ve got in the church building at the moment is a Mackie CFX 12. We’ve had it for a while, and it does the job really well. I have no complaints about its capabilities – but it’s really intimidating for non-techie folks, with about 150 buttons/knobs/faders, most of which never need to be touched (and indeed shouldn’t be touched).

I would like to get to a situation where the church stewards can use something incredibly simple that reflects the semantic change they want (“we’re singing a hymn”, “someone is reading a Bible passage”, “the preacher is starting the sermon” etc) and takes care of adjusting what’s being projected onto the screen, what’s happening with the sound, what the camera is pointing at, and what’s being transmitted via Zoom.

I can’t do that with the Mackie CFX 12 – I can’t control it via software.

Enter the Behringer XR16 – a digital audio mixer. (There are plenty of other options available. This had good reviews, and at least signs of documentation.) Physically, this is just a bunch of inputs and outputs. The only controls on it are a headphone volume knob, and the power switch. Everything else is done via software. The X-Air application can control everything from a desktop, iOS or Android device, which is a good start… but that’s still much too complicated. (Indeed, I find it rather intimidating myself.)

Open Sound Control

Fortunately, the XR16 (along with its siblings, the XR12 and XR18, and the product it was derived from, the X32) implements the Open Sound Control protocol, or OSC. They implement this over UDP, and once you’ve found some documentation, it’s reasonably straightforward. Hat tip at this point to Patrick-Gilles Maillot for not only producing a mass of documentation and code for the X32, but also responding to an email asking whether he had any documentation for the X-Air series (XR-12/16/18)… the document he sent me was invaluable. (Behringer themselves responded to a tech support ticket with a brief but useful document too, which was encouraging.)

OSC consists of packets, each of which has an address such as “/ch/01/mix/on” (the address for muting or unmuting the first input channel) and potentially parameters. For example, to find out whether channel 1 is currently muted, you send a packet consisting of just the address mentioned before. The mixer will respond with a packet with the same address, and a parameter value of 0 if the channel is muted, or 1 if it’s not. If you want to change the value, you send a packet with the parameter. (This is a little like the Roland MIDI protocol for V-Drums – the same command is used to report state as to change state.)

You can also send a packet with an address of “/xremote” to request that for the next 10 seconds, the mixer sends any data changes (e.g. made by other applications, or even the one sending it). Subscribing to volume meters is slightly trickier – there are indexed meter addresses (“/meters/0”, “/meters/1” etc) which mean different things on different devices, and each response has a blob of data with multiple values in it. (This is for efficiency: there are many, many meters to monitor, and you wouldn’t want each of them sending a separate packet at 50ms intervals.)
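
To show what’s actually on the wire, here’s a minimal sketch that builds the mute packet by hand and sends it over UDP. The mixer’s IP address is obviously made up, and the port (10024 for the X-Air series, as far as I can tell) should be treated as an assumption too – and as described below, OscCore does all of this encoding for you anyway:

using System;
using System.Linq;
using System.Net.Sockets;
using System.Text;

static class OscSketch
{
    static void Main()
    {
        // "/ch/01/mix/on" with a single int argument: 0 = muted, 1 = unmuted.
        byte[] packet = OscString("/ch/01/mix/on")
            .Concat(OscString(",i"))    // Type tag string: one int32 argument
            .Concat(OscInt(1))
            .ToArray();

        using var client = new UdpClient();
        client.Send(packet, packet.Length, "192.168.1.50", 10024);
    }

    // OSC strings are ASCII, null-terminated, then padded to a multiple of 4 bytes.
    static byte[] OscString(string text)
    {
        byte[] bytes = Encoding.ASCII.GetBytes(text);
        int paddedLength = (bytes.Length + 4) & ~3;
        Array.Resize(ref bytes, paddedLength);
        return bytes;
    }

    // OSC int32 arguments are big-endian.
    static byte[] OscInt(int value)
    {
        byte[] bytes = BitConverter.GetBytes(value);
        if (BitConverter.IsLittleEndian)
        {
            Array.Reverse(bytes);
        }
        return bytes;
    }
}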

OSC in .NET

The OscCore .NET package provided everything I needed in terms of parsing and formatting OSC packets, so it didn’t take too long to write a prototype experimentation app in WPF.

The screenshot below shows effectively two halves of the UI: one for sending OSC packets manually and logging any packets received, and the other for putting together a crude user interface for more reasonable control. This shows just five inputs on the top, then six aux (mono) outputs and the main stereo output on the bottom.

This is the sort of thing a church steward would need, although the “per aux output” volume control is probably unnecessary – along with the VU meters. I still need to work out exactly what the final application will need (bearing in mind that I’m hoping tweaks will be rare – most of the time the “main” control aspect of the app will do everything), but it’s easier to come up with designs when there’s a working prototype.

OSC mixer app

One interesting aspect of this architecturally is that when a slider is changed in the app, the code currently just sends the command to change the value to the mixer. It doesn’t update the in-memory value… it waits for the mixer to send back a “this value has changed” packet, and that updates the in-memory value (which then updates the position of the slider on the screen). That obviously introduces a bit of lag – but the network and mixer latency is small enough that it isn’t actually noticeable. I’m still not entirely sure it’s the right decision, but it does give me more confidence that the change in value has actually made it to the mixer.

Conclusion

There’s definitely more work to do in terms of design – I’d quite like to move all the Mixer and Channel model code into the “core” library, and I’ll probably do that before creating any “production” applications… but for now, it’s at least good enough to put on GitHub. So it’s available in my democode repo. It’s probably no use at all if you don’t have an XR12/XR16/XR18 (although you could probably tweak it pretty easily for an X18).

But arguably the point of this post isn’t to reach the one or two people who might find the code useful – it’s to try to get across the joy of playing with a hobby project. So if you’ve got a fun project that you haven’t made time for recently, why not dust it off and see what you want to do with it next?

VISCA camera control in C#

During lockdown, I’ve been doing quite a lot of tech work for my local church… mostly acting in a sort of “producer” role for our Zoom services, but also working out how we can enable “hybrid” services when some of us are back in our church buildings, with others still at home. (This is partly a long term plan. I never want to go back to letting down the housebound.)

This has involved sourcing a decent pan/tilt/zoom (PTZ) camera… and then having some fun with it. We’ve ended up using a PTZOptics NDI camera with 30x optical zoom. Now it’s one thing to have a PTZ camera, but then you need to work out what to do with it. There are lots of options on the “how do you broadcast” side of things, which I won’t go into here, but I was interested in the PTZ control part.

Before buying the camera, I knew that PTZOptics cameras exposed an HTTP port which provides a reasonable set of controls, so I was reasonably confident I’d be able to do something. I was also aware of the VISCA protocol and that PTZOptics cameras exposed that over the network as well as the more traditional RS-232 port… but I didn’t have much idea about what the network version of the protocol was.

The manual for the camera is quite detailed, including a complete list of VISCA commands in terms of “these are the bytes you send, and these are the bytes you receive” but without any sort of “envelope” description. It turns out that’s because there is no envelope when working with VISCA over the network, apparently… you just send the bytes for the command packet (with TCP no-delay enabled, of course), and read data until you see an FF byte that indicates the end of a response packet.
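
Here’s a minimal sketch of that “no envelope” approach: open a TCP connection, write the raw command bytes, then read until the 0xFF terminator. The IP address and port are assumptions (PTZOptics cameras listen for VISCA over TCP on port 5678 by default, if I remember correctly), and the command bytes are the standard VISCA “zoom in (tele) at standard speed” command:

using System;
using System.Net.Sockets;

static class ViscaSketch
{
    static void Main()
    {
        using var client = new TcpClient("192.168.1.60", 5678) { NoDelay = true };
        using NetworkStream stream = client.GetStream();

        // 81 01 04 07 02 FF: zoom in (tele) at standard speed, for device address 1.
        byte[] command = { 0x81, 0x01, 0x04, 0x07, 0x02, 0xFF };
        stream.Write(command, 0, command.Length);

        // Read the reply a byte at a time until we see the 0xFF terminator.
        int b;
        while ((b = stream.ReadByte()) != -1)
        {
            Console.Write($"{b:x2} ");
            if (b == 0xFF)
            {
                break;
            }
        }
        Console.WriteLine();
    }
}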

It took me longer to understand this “lack of an envelope” than to actually write the code to use it… once I’d worked out how to send a single command, I was able to write a reasonably complete camera control library quite easily. The code lacks documentation, tests, and decent encapsulation. (I have some ideas about the third of those, which will enable the second, but I need to find time to do the job properly.)

Today I’ve made that code available on GitHub. I’m hoping to refactor it towards decent encapsulation, potentially writing blog posts about that as I go, but until then it might prove useful to others even in its current form. Aside from anything else, it’s proof that I write scrappy code when I’m not focusing on dotting the Is and crossing the Ts, which might help to relieve imposter syndrome in others (while exacerbating it in myself.) I haven’t yet published a package on NuGet, and may never do so, but we’ll see. (It’s easy enough to clone and build yourself though.)

The library comes with a WPF demo app – which is even more scrappily written, without any view models etc. The demo app uses the WebEye WPF RTSP library to show “what the camera sees”. This is really easy to integrate, with one downside that it uses the default FFmpeg buffer size, so there’s a ~2s delay when you move the camera around. That means you wouldn’t want to use this for any kind of production purposes, but that’s not what it’s for :)

Here’s a screenshot of the demo app, focusing on the wind sculpture that Holly bought me as a present a few years ago, and which is the subject of many questions in meetings. (The vertical bar on the left of the sculpture is the door frame of my shed.) As you can see, the controls (top right) are pretty basic. It would be entirely possible to use the library for a more analog approach to panning and tilting, e.g. a rectangle where holding down the mouse button near the corners would move the camera quickly, whereas clicking nearer the middle would move it more slowly.

VISCA demo app

One of the natural questions when implementing a protocol is how portable it is. Does this work with other VISCA cameras? Well, I know it works with the SMTAV camera that I bought for home, but I don’t know beyond that. If you have a VISCA-compatible camera and could test it (either via the demo app or your own code) I’d be really interested to hear how you get on with it. I believe the VISCA protocol is fairly well standardized, but I wouldn’t be surprised if there were some corner cases such as maximum pan/tilt/zoom values that need to be queried rather than hard-coded.

A Tour of the .NET Functions Framework

Note: all the code in this blog post is available in my DemoCode GitHub repo, under Functions.

For most of 2020, one of the projects I’ve been working on is the .NET Functions Framework. This is the .NET implementation of the Functions Framework Contract… but more importantly to most readers, it’s “the way to run .NET code on Google Cloud Functions” (aka GCF). The precise boundary between the Functions Framework and GCF is an interesting topic, but I won’t be going into it in this blog post, because I’m basically more excited to show you the code.

The GitHub repository for the .NET Functions Framework already has a documentation area as well as a quickstart in the README, and there will be .NET instructions within the Google Cloud Functions documentation of course… but this post is more of a tour from my personal perspective. It’s “the stuff I’m excited to show you” more than anything else. (It also highlights a few of the design challenges, which you wouldn’t really expect documentation to do.) It’s likely to form the basis of any conference or user group talks I give on the Functions Framework, too. Oh, and in case you hadn’t already realized – this is a pretty long post, so be warned!

Introduction to Functions as a Service (FaaS)

This section is deliberately short, because I expect many readers will already be using FaaS either with .NET on a competing cloud platform, or potentially with GCF and a different language. There are countless articles about FaaS which do a better job than I would. I’ll just make two points though.

Firstly, the lightbulb moment for me around functions as a production value proposition came in a conference talk (I can’t remember whose, I’m afraid) where the speaker emphasized that FaaS isn’t about what you can do with functions. There’s nothing (or maybe I should say “very little” to hedge my bets a bit) you can do with FaaS that you couldn’t do by standing up a service in a Kubernetes cluster or similar. Instead, the primary motivating factor is cost. The further you are away from the business side of things, the less that’s likely to impact on your thinking, but I do think it makes a huge difference. I’ve noticed this personally, which has helped my understanding: I have my own Kubernetes cluster in Google Kubernetes Engine (GKE) which runs jonskeet.uk, csharpindepth.com, nodatime.org and a few other sites. The cluster has three nodes, and I pay a fairly modest amount for it each month… but it’s running out of resources. I could reduce the redundancy a bit and perform some other tweaks, but fundamentally, adding a new test web site for a particular experiment has become tricky. Deploying a function, however, is likely to be free (due to the free tier) and will at worst be incremental.

Secondly, there’s a practical aspect I hadn’t considered, which is that deploying a function with the .NET Functions Framework is now my go-to way of standing up a simple server, even if it has nothing to do with typical functions use cases. Examples include:

  • Running some (fairly short-running) query benchmarks for Datastore to investigate a customer issue
  • Starting a server locally as a simple way of doing the OAuth2 dance when I was working out how to post to WordPress
  • Creating a very simple “current affairs aggregator” to scrape a few sites that I found myself going to repeatedly

Okay, I’m massively biased having written the framework, and therefore knowing it well – but even so, I’m surprised by the range of situations where having a simple way to deploy simple code is really powerful.

Anyway, enough with the background… let’s see how simple it really is to get started.

Getting started: part 1, installing the templates

Firstly, you need the .NET Core SDK version 3.1 or higher. I suspect that won’t rule out many of the readers of this blog :)

The simplest way of getting started is to use the templates NuGet package, so you can then create Functions projects using dotnet new. From a command line, install the templates package like this:

dotnet new -i Google.Cloud.Functions.Templates::1.0.0-beta02

(The ::1.0.0-beta02 part is just because it’s still in prerelease. When we’ve hit 1.0.0, you won’t need to specify the version.)

That installs three templates:

  • gcf-http (an HTTP-triggered function)
  • gcf-event (a strongly-typed CloudEvent-triggered function, using PubSub events in the template)
  • gcf-untyped-event (an “untyped” CloudEvent-triggered function, where you’d have to deserialize the CloudEvent data payload yourself)

All the templates are available for C#, VB and F#, but I’ll only focus on C# in this blog post.

In the current (October 2020) preview of Visual Studio 2019 (which I suspect will go GA in November with .NET 5) there’s an option to use .NET Core templates in the “File -> New Project” experience, and the templates work with that. You need to enable it in “Options -> Environment -> Preview Features -> Show all .NET Core templates in the New project dialog”. The text for the Functions templates needs a bit of an overhaul, but it’s nice to be able to do everything from Visual Studio after installing the templates. I’ll show the command lines for now though.

Getting started: part 2, hello world

I see no point in trying to be innovative here: let’s start with a function that just prints Hello World or similar. As luck would have it, that’s what the gcf-http template provides us, so we won’t actually need to write any code at all.

Again, from a command line, run these commands:

mkdir HelloWorld
cd HelloWorld
dotnet new gcf-http

You should see a confirmation message:

The template “Google Cloud Functions HttpFunction” was created successfully.

This will have created two files. First, HelloWorld.csproj:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Google.Cloud.Functions.Hosting" Version="1.0.0-beta02" />
  </ItemGroup>
</Project>

And Function.cs:

using Google.Cloud.Functions.Framework;
using Microsoft.AspNetCore.Http;
using System.Threading.Tasks;

namespace HelloWorld
{
    public class Function : IHttpFunction
    {
        /// <summary>
        /// Logic for your function goes here.
        /// </summary>
        /// <param name="context">The HTTP context, containing the request and the response.</param>
        /// <returns>A task representing the asynchronous operation.</returns>
        public async Task HandleAsync(HttpContext context)
        {
            await context.Response.WriteAsync("Hello, Functions Framework.");
        }
    }
}

Right – you’re now ready to run the function. Once more, from the command line:

dotnet run

… the server should start, with log messages that are very familiar to anyone with ASP.NET Core experience along with an introductory log message that’s specific to the Functions Framework.

[Google.Cloud.Functions.Hosting.EntryPoint] [info] Serving function HelloWorld.Function

Point a browser at http://localhost:8080 and you should see the message of “Hello, Functions Framework.” Great!

You may be wondering exactly what’s going on at this point, and I promise I’ll come back to that. But first, let’s deploy this as a Google Cloud Function.

Getting started: part 3, Google Cloud Functions (GCF)

There are a few prerequisites. You need:

  • A Google Cloud Platform (GCP) project, with billing enabled (although as I mentioned earlier, experimentation with Functions is likely to all come within the free tier)
  • The Cloud Functions and Cloud Build APIs enabled
  • The Google Cloud SDK (gcloud)

Rather than give the instructions here, I suggest you go to the Java GCF quickstart docs and follow the first five steps of the “Creating a GCP project using Cloud SDK” section. Ignore the final step around preparing your development environment. I’ll update this post when the .NET quickstart is available.

Once all the prerequisites are available, the actual deployment is simple. From the command line:

gcloud functions deploy hello-world --runtime=dotnet3 --entry-point=HelloWorld.Function --trigger-http --allow-unauthenticated

That’s all on one line so that it’s simple to cut and paste even into the Windows command line, but it breaks down like this:

  • gcloud functions deploy – the command we’re running (deploy a function)
  • hello-world – the name of the function we’re creating, which will appear in the Functions console
  • --runtime=dotnet3 – we want to use the .NET runtime within GCF
  • --entry-point=HelloWorld.Function – this specifies the fully qualified name of the target function type.
  • --trigger-http – the function is triggered via HTTP requests (rather than events)
  • --allow-unauthenticated – the function can be triggered without authentication

Note: if you used a directory other than HelloWorld earlier, or changed the namespace in the code, you should adjust the --entry-point command-line argument accordingly. You need to specify the namespace-qualified name of your function type.

That command uploads your source code securely, builds it, then deploys it. (When I said that having the .NET Core SDK is a prerequisite, that’s true for the template and running locally… but you don’t need the SDK installed to deploy to GCF.)

The function will take a couple of minutes to deploy – possibly longer for the very first time, if some resources need to be created in the background – and eventually you’ll see all the details of the function written to the console. This is a bit of a wall of text, but you want to look for the httpsTrigger section and its url value. Visit that URL, and hey presto, you’re running a function.
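
If you lose track of the URL later, you should be able to ask for just that one field using gcloud’s normal formatting options, with something like this:

gcloud functions describe hello-world --format="value(httpsTrigger.url)"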

If you’re following along but didn’t have any of the prerequisites installed, that may have taken quite a while – but if you’re already a GCP user, it’s really pretty quick.

Personal note: I’d love it if we didn’t need to specify the entry point on the command line, for projects with only one function. I’ve made that work when just running dotnet run, as we saw earlier, but currently you do have to specify the entry point. I have some possibly silly ideas for making this simpler – I’ll need to ask the team how feasible they are.

What’s in a name?

We’ve specified two names in the command line:

  • The name of the function as it will be shown within the Functions Console. (This is hello-world in our example.)
  • The name of the class implementing the function, specified using --entry-point. (This is HelloWorld.Function in our example.)

When I started working with Google Cloud Functions, I got a bit confused by this, and it seems I’m not the only one.

The two names really are independent. We could have deployed the same code multiple times to create several different functions listening on several different URLs, but all specifying the same entry point. Indeed, I’ve done this quite a lot in order to explore the exact HTTP request used by Pub/Sub, Storage and Firebase event triggers: I’ve got a single project with a function class called HttpRequestDump.Function, and I’ve deployed that multiple times with functions named pubsub-test, storage-test and so on. Each of those functions is then independent – they have separate logs, I can delete one without it affecting the others, etc. You could think of them as separate named “instances” of the function, if you want.
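
As a sketch of what that looks like (reusing the hello-world code from earlier rather than my actual HttpRequestDump project), deploying the same entry point twice is just a matter of running the deploy command twice with different function names:

gcloud functions deploy hello-world-a --runtime=dotnet3 --entry-point=HelloWorld.Function --trigger-http --allow-unauthenticated
gcloud functions deploy hello-world-b --runtime=dotnet3 --entry-point=HelloWorld.Function --trigger-http --allow-unauthenticated

Each of those deployments gets its own URL and its own logs, even though they’re running exactly the same code.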

What’s going on? Why don’t I need a Main method?

Okay, time for some explanations… at least of the .NET side of things.

Let’s start with the packages involved. The Functions Framework ships four packages:

  • Google.Cloud.Functions.Framework
  • Google.Cloud.Functions.Hosting
  • Google.Cloud.Functions.Testing
  • Google.Cloud.Functions.Templates

We’ve already seen what the Templates package provides, and we’ll look at Testing later on.

The separation between the Hosting package and the Framework package is perhaps a little arbitrary, and I expect it to be irrelevant to most users. The Framework package contains the interfaces that functions need to implement, and adapters between them. If you wanted to host a function yourself within another web application, for example, you could depend just on the Framework package, and your function could have exactly the same code as it does otherwise.

The Hosting package is what configures and starts the server in the more conventional scenario, and this is the package that the “normal” functions deployment scenario will depend on. (If you look at the project file from earlier, you’ll see that it depends on the Hosting package.)

While the Hosting package has become a bit more complex over the course of the alpha and beta releases, it’s fundamentally very small considering what it does – and that’s all because it builds on the foundation of ASP.NET Core. I cannot stress this enough – without the fantastic work of the ASP.NET Core team, we wouldn’t be in this position now. (Maybe we’d have built something from scratch, I don’t know. I’m not saying there wouldn’t be a product, just that I really appreciate having this foundation to build on.)

None of that explains how we’re able to just use dotnet run without having a Program.cs or anything else with a Main method though. Sure, C# 9 has fancy features around top-level programs, but that’s not being used here. (I do want to see if there’s something we can do there, but that’s a different matter.)

This is where Project Dragonfruit comes in – inspirationally, at least. This is a relatively little-known project as part of the System.CommandLine effort; Scott Hanselman’s blog post on it sets the scene pretty well.

The cool thing about Project Dragonfruit is that you write a Main method that has the parameters you want with the types that you want. You can still use dotnet run, and all the parsing happens magically before it gets to your code. The magic is really in the MSBuild targets that come as part of the NuGet package. They generate a bit of C# code that first calls the parser and then calls your Main method, and set that generated code as the entry point.
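
For anyone who hasn’t seen it, a Dragonfruit-style entry point looks roughly like this – the parameter names here are made up, but the package turns them into --iterations, --verbose and --output options and parses them for you:

using System.IO;

internal class Program
{
    // With System.CommandLine.DragonFruit referenced, this is the method that the
    // generated entry point calls after parsing the command line arguments.
    public static void Main(int iterations = 1, bool verbose = false, FileInfo output = null)
    {
        // Real work goes here.
    }
}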

My JonSkeet.DemoUtil NuGet package (which I really ought to document some time) does the same thing, allowing me to create a project with as many Main methods as I want, and then get presented with a menu of them when I run it. Perfect for demos in talks. (Again, this is copying the idea from Project Dragonfruit.)

And that’s basically what the Hosting package in the Functions Framework does. The Hosting package exposes an EntryPoint class with a StartAsync method, and there are MSBuild targets that automatically generate the entry point for you (if the consuming project is an executable, and unless you disable it).

You can find the generated entry point code in the relevant obj directory (e.g. obj/Debug/netcoreapp3.1) after building. The code looks exactly like this, regardless of your function:

// <auto-generated>This file was created automatically</auto-generated>
using System.Runtime.CompilerServices;
using System.Threading.Tasks;
[CompilerGenerated]
internal class AutoGeneratedProgram
{
    public static Task<int> Main(string[] args) =>
        Google.Cloud.Functions.Hosting.EntryPoint.StartAsync(
             typeof(global::AutoGeneratedProgram).Assembly, args);
}

Basically it calls EntryPoint.StartAsync and passes in “the assembly containing the function” (and any command line arguments). Everything else is done by EntryPoint.

We’ll see more of the features of the Hosting package later on, but at least this has answered the question of how dotnet run works with our HelloWorld function.

Testing HelloWorld

Okay, so we’ve got HelloWorld to run locally, and we’ve deployed it successfully… but are we convinced it works? Well yes, I’m pretty sure it does, but even so, it would be nice to test that.

I’m a big fan of “testing” packages – additional NuGet packages that make it easier to test code which uses the corresponding core package. So for example, with NodaTime there’s a NodaTime.Testing package, which we’ll actually use later in this blog post. I don’t know where I got the name “testing” from – it may have been an internal Google convention that I decided to use for NodaTime – but the concept is really handy.

As I mentioned earlier, there’s a Google.Cloud.Functions.Testing package, and now I’ve explained the naming convention you can probably guess that it’s going to get involved.

The Testing package provides:

  • An in-memory ILogger and ILoggerProvider so you can easily unit test functions that use logging, including testing the logs that are written. (IMO this should really be something available in ASP.NET Core out of the box.)
  • A simple way of creating a test server (using Microsoft.AspNetCore.TestHost), which automatically installs the in-memory logger.
  • A base class for tests that automatically creates a test server for a function, and exposes common operations such as “make a GET request and retrieve the text returned”.

Arguably it’s a bit unconventional to have a base class for tests like this. It’s entirely possible to use composition instead of inheritance. But my experience writing the samples for the Functions Framework led me to dislike the boilerplate code that came with composition. I don’t mind the bit of a code smell of using a base class, when it leads to simple tests.

I won’t go through all of the features in detail, but let’s look at the test for HelloWorld. There’s really not much to test, given that there’s no conditional logic – we just want to assert that when we make a request to the server, it writes out “Hello, Functions Framework.” in the response.

Just for variety, I’ve decided to use NUnit in the sample code for this blog post. Most of my tests for work code use xUnit these days, but nothing in the Testing package depends on actual testing packages, so it should work with any test framework you want.

Test lifecycle note: different test frameworks use different lifecycle models. In xUnit, a new test class instance is created for each test case, so we get a “clean” server each time. In NUnit, a single test fixture instance is created and used for all tests, which means there’s a single server, too. The server is expected to be mostly stateless, but if you’re testing against log entries in NUnit, you probably want a setup method. There’s an example later.

So we can set up the project simply:

mkdir HelloWorld.Tests
cd HelloWorld.Tests
dotnet new nunit -f netcoreapp3.1
dotnet add package Google.Cloud.Functions.Testing --version 1.0.0-beta02
dotnet add reference ../HelloWorld/HelloWorld.csproj

(I’d normally do all of this within Visual Studio, but the command line shows you everything you need in terms of project setup. Note that I’ve specified netcoreapp3.1 as the target framework simply because I’ve got the preview of .NET 5 installed, which leads to a default target of net5… and that’s incompatible with the function project.)

With the project in place, we can add the test itself:

using Google.Cloud.Functions.Testing;
using NUnit.Framework;
using System.Threading.Tasks;

namespace HelloWorld.Tests
{
    public class FunctionTest : FunctionTestBase<Function>
    {
        [Test]
        public async Task RequestWritesMessage()
        {
            string text = await ExecuteHttpGetRequestAsync();
            Assert.AreEqual("Hello, Functions Framework.", text);
        }
    }
}

The simplicity of testing is one of the things I’m most pleased with in the Functions Framework. In this particular case I’m happy to use the default URI (“sample-uri”) and a GET request, but there are other methods in FunctionTestBase to make more complex requests, or to execute CloudEvent functions.

So is this a unit test or an integration test? Personally I’m not too bothered by the terminology, but I’d call this an integration test in that it does check the integration through the Functions stack. (It doesn’t test integration with anything else because the function doesn’t integrate with anything else.) But it runs really quickly, and this is my “default” kind of test for functions now.

Beyond hello world: what’s the time?

Let’s move from a trivial function to a cutting-edge, ultra-complex, get-ready-for-mind-melting function… we’re going to report the current time. More than that, we’re going to optionally report the time in a particular time zone. (You knew I’d bring time zones into this somehow, right?)

Rather than walk you through every small step of the process of setting this up, I’ll focus on the interesting bits of the code. If you want to see the complete code, it’s in the ZoneClock and ZoneClock.Tests directories in GitHub.

Regular readers will be unsurprised that I’m going to use NodaTime for this. This short function will end up demonstrating plenty of features:

  • Dependency injection via a “Function Startup class”
  • Logger injection
  • Logger behaviour locally vs in GCF
  • Testing a function that uses dependency injection
  • Testing log output

Let’s start with the code itself. We’ll look at it in three parts.

First, the function class:

[FunctionsStartup(typeof(Startup))]
public class Function : IHttpFunction
{
    private readonly IClock clock;
    private readonly ILogger logger;

    // Receive and remember the dependencies.
    public Function(IClock clock, ILogger<Function> logger) =>
        (this.clock, this.logger) = (clock, logger);

    public async Task HandleAsync(HttpContext context)
    {
        // Implementation code we'll look at later
    }
}

Other than the attribute, this should be very familiar code to ASP.NET Core developers – our two dependencies (a clock and a logger) are provided in the constructor, and remembered as fields. We can then use them in the HandleAsync method.

For any readers not familiar with NodaTime, IClock is an interface with a single method: Instant GetCurrentInstant(). Any time you would call DateTime.UtcNow in DateTime-oriented code, you want to use a clock in NodaTime. That way, your time-sensitive code is testable. There’s a singleton implementation which simply delegates to the system clock, so that’s what we need to configure in terms of the dependency for our function, when running in production as opposed to in tests.

Dependency injection with Functions startup classes

Dependency injection is configured in the .NET Functions Framework using Functions startup classes. These are a little bit like the concept of the same name in Azure Functions, but they’re a little more flexible (in my view, anyway).

Functions startup classes have to derive from Google.Cloud.Functions.Hosting.FunctionsStartup (which is a regular class; the attribute is called FunctionsStartupAttribute, but C# allows you to apply the attribute just using FunctionsStartup and it supplies the suffix).

FunctionsStartup is an abstract class, but it doesn’t contain any abstract members. Instead, it has four virtual methods, each with a no-op implementation:

  • void ConfigureAppConfiguration(WebHostBuilderContext context, IConfigurationBuilder configuration)
  • void ConfigureServices(WebHostBuilderContext context, IServiceCollection services)
  • void ConfigureLogging(WebHostBuilderContext context, ILoggingBuilder logging)
  • void Configure(WebHostBuilderContext context, IApplicationBuilder app)

These will probably be familiar to ASP.NET Core developers – they’re the same configuration methods that exist on IWebHostBuilder.

A Functions startup class overrides one or more of these methods to configure the appropriate aspect of the server. Note that the final method (Configure) is used to add middleware to the request pipeline, but the Functions Framework expects that the function itself will be the last stage of the pipeline.

The most common method to override (in my experience so far, anyway) is ConfigureServices, in order to configure dependency injection. That’s what we need to do in our example, and here’s the class:

public class Startup : FunctionsStartup
{
    public override void ConfigureServices(WebHostBuilderContext context, IServiceCollection services) =>
        services.AddSingleton<IClock>(SystemClock.Instance);
}

This is the type referred to by the attribute on the function class:

[FunctionsStartup(typeof(Startup))]

Unlike “regular” ASP.NET Core startup classes (which are expected to configure everything), Functions startup classes can be composed. Every startup that has been specified on the function type, its base types, or the assembly is used. If you need the startups to be applied in a particular order, you can specify that in the attribute.

Only the function type that is actually being served is queried for attributes. You could have two functions in the same project, each of them with different startup class attributes… along with assembly attributes specifying any startup classes that both functions want.

Note: when running from the command line, you can specify the function to serve as a command line argument or an environment variable. The framework will fail to start (with a clear error) if you try to run a project with multiple functions, but without specifying which one you want to serve.
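
For example, from the project directory, either of these should work (the FUNCTION_TARGET environment variable name comes from the Functions Framework contract, if I’m remembering it correctly):

dotnet run ZoneClock.Function

or, using the environment variable (Windows command line syntax shown):

set FUNCTION_TARGET=ZoneClock.Function
dotnet run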

The composition aspect allows third parties to integrate with the .NET Functions Framework cleanly. For example, Steeltoe could provide a Steeltoe.GoogleCloudFunctions package containing a bunch of startup classes, and you could just specify (in attributes) which ones you wanted to use for any given function.

Our Startup class only configures the IClock dependency. It doesn’t need to configure ILogger, because ASP.NET Core does this automatically.

Finally, we can write the actual function body. This is reasonably simple. (Yes, it’s nearly 30 lines long, but it’s still straightforward.)

public async Task HandleAsync(HttpContext context)
{
    // Get the current instant in time via the clock.
    Instant now = clock.GetCurrentInstant();

    // Always write out UTC.
    await WriteTimeInZone(DateTimeZone.Utc);

    // Write out the current time in as many zones as the user has specified.
    foreach (var zoneId in context.Request.Query["zone"])
    {
        var zone = DateTimeZoneProviders.Tzdb.GetZoneOrNull(zoneId);
        if (zone is null)
        {
            logger.LogWarning("User provided invalid time zone '{id}'", zoneId);
        }
        else
        {
            await WriteTimeInZone(zone);
        }
    }

    Task WriteTimeInZone(DateTimeZone zone)
    {
        string time = LocalDateTimePattern.GeneralIso.Format(now.InZone(zone).LocalDateTime);
        return context.Response.WriteAsync($"Current time in {zone.Id}: {time}\n");
    }
}

I haven’t bothered to alert the user to the invalid time zone they’ve provided, although the code to do so would be simple. I have logged a warning – mostly so I can demonstrate logging.

The use of DateTimeZoneProviders.Tzdb is a slightly lazy choice here, by the way. I could inject an IDateTimeZoneProvider as well, allowing for tests with custom time zones. That’s probably overkill in this case though.

Logging locally and in production

So, let’s see what happens when we run this.

The warning looks like this:

2020-10-21T09:53:45.334Z [ZoneClock.Function] [warn] User provided invalid time zone 'America/Metropolis'

This is all on one line: the console logger used by default by the .NET Functions Framework when running locally is a little more compact than the default console logger.

But what happens when we run in Google Cloud Functions? Let’s try it…

gcloud functions deploy zone-clock --runtime=dotnet3 --entry-point=ZoneClock.Function --allow-unauthenticated --trigger-http

If you’re following along and deploying it yourself, just visit the link shown in the gcloud output, and add ?zone=Europe/London&zone=America/New_York to show the London and New York time zones, for example.

If you go to the Cloud Functions Console and select the zone-clock function, you can view the logs. Here are two requests:

(Click on each image for the full-sized screenshot.)

Warning logs in Functions console

Note how the default “info” logs are differentiated from the “warning” log about the zone ID not being found.

In the Cloud Logging Console you can expand the log entry for more details:

Warning logs in Logging console

You can easily get to the Cloud Logging console from the Cloud Functions log viewer by clicking on the link in top right of the logs. That will take you to a Cloud Logging page with a filter to show just the logs for the function you’re looking at.

The .NET Functions Framework detects when it’s running in a Knative environment, and writes structured JSON to the console instead of plain text. This is then picked up and processed by the logging infrastructure.
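
As a rough illustration, the warning from earlier ends up as a single line of JSON on the console, something like this – I’m writing the field names from memory, so treat this as indicative of the shape rather than exact:

{"message": "User provided invalid time zone 'America/Metropolis'", "category": "ZoneClock.Function", "severity": "WARNING"}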

Testing with dependencies

So, it looks like our function does what we want it to, but it would be good to have tests to prove it. If we just use a FunctionTestBase like before, without anything else, we’d still get the production dependency being injected though, which would make it hard to write robust tests.

Instead, we want to specify different Functions startup classes for our tests. We want to use a different IClock implementation – a FakeClock from the NodaTime.Testing package. That lets us create an IClock with any time we want. Let’s set it to June 3rd 2015, 20:25:30 UTC:

class FakeClockStartup : FunctionsStartup
{
    public override void ConfigureServices(WebHostBuilderContext context, IServiceCollection services) =>
        services.AddSingleton<IClock>(new FakeClock(Instant.FromUtc(2015, 6, 3, 20, 25, 30)));
}

So how do we tell the test to use that startup? We could manually construct a FunctionTestServer and set the startups that way… but it’s much more convenient to use the same FunctionsStartupAttribute as before, but this time applied to the test class:

[FunctionsStartup(typeof(FakeClockStartup))]
public class FunctionTest : FunctionTestBase<Function>
{
    // Tests here
}

(In my sample code, FakeClockStartup is a nested class inside the test class, whereas the production Startup class is a top-level class. There’s no specific reason for this, although it feels reasonably natural to me. You can organize your startup classes however you like.)

If you have any startup classes which should be used by all the tests in your test project, you can apply FunctionsStartupAttribute to the test assembly.
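With the FakeClockStartup above, for example, that's just a single assembly-level attribute:

[assembly: FunctionsStartup(typeof(FakeClockStartup))]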

The tests themselves check two things:

  • The output that’s written to the HTTP response
  • The log entries written by the function (but not by other loggers)

Again, FunctionTestBase makes the latter easy, with a GetFunctionLogEntries() method. (You can get at all the logs if you really want to, of course.)

I’ve actually got three tests, but one will suffice to show the pattern:

[Test]
public async Task InvalidCustomZoneIsIgnoredButLogged()
{
    string actualText = await ExecuteHttpGetRequestAsync("?zone=America/Metropolis&zone=Europe/London");
    // We still print UTC and Europe/London, but America/Metropolis isn't mentioned at all.
    string[] expectedLines =
    {
        "Current time in UTC: 2015-06-03T20:25:30",
        "Current time in Europe/London: 2015-06-03T21:25:30"
    };
    var actualLines = actualText.Split('\n', StringSplitOptions.RemoveEmptyEntries);
    Assert.AreEqual(expectedLines, actualLines);

    var logEntries = GetFunctionLogEntries();
    Assert.AreEqual(1, logEntries.Count);
    var logEntry = logEntries[0];
    Assert.AreEqual(LogLevel.Warning, logEntry.Level);
    StringAssert.Contains("America/Metropolis", logEntry.Message);
}

As a side-note, I generally prefer NUnit over xUnit, but I really wanted to be able to write:

// Would be valid in xUnit...
var logEntry = Assert.Single(GetFunctionLogEntries());

In xUnit the Assert.Single method validates that its input (GetFunctionLogEntries() in this case) contains a single element, and returns that element so you can perform further assertions on it. There’s no equivalent in NUnit that I’m aware of, although it would be easy to write one.
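Such a helper might look something like this – my own sketch (with a hypothetical MoreAssert class name), not anything shipped with NUnit:

using NUnit.Framework;
using System.Collections.Generic;

public static class MoreAssert
{
    // Asserts that the source contains exactly one element, and returns it for further assertions.
    public static T Single<T>(IReadOnlyList<T> source)
    {
        Assert.AreEqual(1, source.Count, "Expected exactly one element");
        return source[0];
    }
}

The test above could then use var logEntry = MoreAssert.Single(GetFunctionLogEntries()); instead of the two separate statements.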

As noted earlier, we also need to make sure that the logs are cleared before the start of each test, which we can do with a setup method:

[SetUp]
public void ClearLogs() => Server.ClearLogs();

(The Server property in FunctionTestBase is the test server that it creates.)

Okay, so that’s HTTP functions… what else can we do?

CloudEvent functions

Functions and events go together very naturally. Google Cloud Functions can be triggered by various events, and in the .NET Functions Framework these are represented as CloudEvent functions.

CloudEvents is a CNCF project to standardize the format in which events are propagated and delivered. It isn’t opinionated about the payload data, or how the events are stored etc, but it provides a common “envelope” model, and specific requirements of how events are represented in transports such as HTTP.

This means that you can write at least some code to handle “any event”, and the overall structure should be familiar even if you move between (say) Microsoft-generated and Google-generated events. For example, if both Google Cloud Storage and Azure Blob Storage can emit events (e.g. when an object/blob is created or deleted) then it should be easy enough to consume that event from Azure or Google Cloud Platform respectively. I wouldn’t expect it to be the same code for both kinds of event, but at least the deserialization part of “I have an HTTP request; give me the event information” would be the same. In C#, that’s handled via the C# CloudEvents SDK.

If you’re happy deserializing the data part yourself, that’s all you need, and you can write an untyped CloudEvent function like this:

public class Function : ICloudEventFunction
{
    public Task HandleAsync(CloudEvent cloudEvent, CancellationToken cancellationToken)
    {
        // Function body
    }
}

Note how there’s no request and response: there’s just the event.
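Just to give an idea of the shape – this is my own illustrative body rather than anything the framework requires – an untyped function might simply log the envelope metadata:

public Task HandleAsync(CloudEvent cloudEvent, CancellationToken cancellationToken)
{
    // The CloudEvent "envelope" gives us standard metadata regardless of where the event came from.
    Console.WriteLine($"Received event {cloudEvent.Id} of type {cloudEvent.Type} from {cloudEvent.Source}");
    // cloudEvent.Data holds the payload, which an untyped function would deserialize itself.
    return Task.CompletedTask;
}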

That’s all very well, but what if you don’t want to deserialize the data yourself? I don’t want users to have to write their own representation of (say) our Cloud Pub/Sub message event data. I want to make it as easy as possible to consume Pub/Sub messages in functions.

That’s where two other repositories come in:

The latter repository provides two packages at the moment: Google.Events and Google.Events.Protobuf. You can add a dependency in your functions project to Google.Events.Protobuf, and then write a typed CloudEvent function like this:

public class Function : ICloudEventFunction<MessagePublishedData>
{
    public Task HandleAsync(CloudEvent cloudEvent, MessagePublishedData data, CancellationToken cancellationToken)
    {
        // Function body
    }
}

Your function is still provided with the original CloudEvent so it can access metadata, but the data itself is deserialized automatically.
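As an illustration, a Pub/Sub-triggered function body might look something like this. It's a sketch rather than canonical sample code, and it assumes the protobuf-generated MessagePublishedData exposes the Pub/Sub message with a ByteString Data property:

public Task HandleAsync(CloudEvent cloudEvent, MessagePublishedData data, CancellationToken cancellationToken)
{
    // Envelope metadata is still on cloudEvent; the payload has already been deserialized into data.
    string text = data.Message?.Data?.ToStringUtf8() ?? "(no data)";
    Console.WriteLine($"Event {cloudEvent.Id}: Pub/Sub message text: {text}");
    return Task.CompletedTask;
}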

Serialization library choices

There’s an interesting design issue here. The schemas for the event data are originally in protobuf format, and we’re also converting them to JSON schema. It would make sense to be able to deserialize with any of:

  • Google.Protobuf
  • System.Text.Json
  • Newtonsoft.Json

If you’re already using one of those dependencies elsewhere in your code, you probably don’t want to add another of them. So the current plan is to provide three different packages, one for each deserialization library. All of them apply common attributes from the Google.Events package, which has no dependencies itself other than the CloudEvents SDK, and is what the Functions Framework depends on.

Currently we’ve only implemented the protobuf-based option, but I do want to get to the others.

(Note that currently the CloudEvents SDK itself depends on Newtonsoft.Json, but I’m hoping we can remove that dependency before we release version 2.0 of the CloudEvents SDK, which I’m working on jointly with Microsoft.)

That all sounds great, but it means we’ve got three different representations of MessagePublishedData – one for each serialization technology. It would be really nice if we could have just one representation, which all of them deserialized to, based on which serialization package you happened to use. That’s an issue I haven’t solved yet.

I’m hoping that in the world of functions that won’t matter too much, but of course CloudEvents can be produced and consumed in just about any code… and at the very least, it’s a little annoying.

Writing CloudEvent functions

I’m not going to present the same sort of “hello world” experience for CloudEvent functions as for HTTP functions, simply because they’re less “hands on”. Even I don’t get too excited by publishing a Pub/Sub message and seeing a log entry that says “I received a Pub/Sub message at this timestamp.”

Instead, I’ll draw your attention to an example with full code in the .NET Functions Framework repository.

It’s an example which is in some ways quite typical of how I see CloudEvent functions being used – effectively as plumbing between other APIs. This particular example listens for Google Cloud Storage events where an object has been created or updated, and integrates with the Google Cloud Vision API to perform image recognition and annotation. The steps involved are:

  • The object is created or updated in a Storage bucket
  • An event is generated, which triggers the CloudEvent function
  • The function checks the content type and filename, to see whether it’s probably an image. (If it isn’t, it stops at this point.)
  • It asks the Vision API to perform some basic image recognition, looking for faces, text, landmarks and so on.
  • The result is summarised in a “text file object” which is created alongside the original image file.

The user experience is that they can drop an image into a Storage bucket, and a few seconds later there’s a second file present with information about the image… all in a relatively small amount of code.
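To give a flavour of that “small amount of code”, here's my own sketch of the shape such a function might take – not the actual repository sample – assuming the StorageObjectData payload type and the usual Vision and Storage client libraries:

using CloudNative.CloudEvents;
using Google.Cloud.Functions.Framework;
using Google.Cloud.Storage.V1;
using Google.Cloud.Vision.V1;
using Google.Events.Protobuf.Cloud.Storage.V1;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public class Function : ICloudEventFunction<StorageObjectData>
{
    public async Task HandleAsync(CloudEvent cloudEvent, StorageObjectData data, CancellationToken cancellationToken)
    {
        // Only process objects that look like images.
        if (data.ContentType?.StartsWith("image/") != true)
        {
            return;
        }

        // Ask the Vision API for labels describing the image.
        var vision = await ImageAnnotatorClient.CreateAsync(cancellationToken);
        var image = Image.FromUri($"gs://{data.Bucket}/{data.Name}");
        var labels = await vision.DetectLabelsAsync(image);

        // Summarize the results in a text object alongside the original image.
        string summary = string.Join("\n", labels.Select(label => $"{label.Description}: {label.Score}"));
        var storage = await StorageClient.CreateAsync();
        using var stream = new MemoryStream(Encoding.UTF8.GetBytes(summary));
        await storage.UploadObjectAsync(data.Bucket, $"{data.Name}.txt", "text/plain", stream,
            cancellationToken: cancellationToken);
    }
}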

The example should be easy to set up, assuming you have both Storage and Vision APIs enabled – it’s then very easy to test. While you’re looking at that example, I encourage you to look at the other examples in the repository, as they show some other features I haven’t covered.

Of course, all the same testing features for HTTP functions are available for CloudEvent functions too, and there are helper methods in FunctionTestBase to execute the function based on an event and so on. Admittedly API-like dependencies tend to be harder to take out than IClock, but the function-specific mechanisms are still the same.

Conclusion

It’s been so much fun to describe what I’ve been working on, and how I’ve tried to predict typical use cases and make them easy to implement with the .NET Functions Framework.

The framework is now in beta, which means there’s still time to make some changes if we want to… but we won’t know the changes are required unless we get feedback. So I strongly encourage you to give it a try, whether you have experience of FaaS on other platforms or not.

Feedback is best left via issues on the GitHub repository – I’d love to be swamped!

I’m sure there’ll be more to talk about in future blog posts, but this one is already pretty gigantic, so I’ll leave it there for now…

Posting to wordpress.com in code

History

I started blogging back in 2005, shortly before attending the only MVP summit I’ve managed to go to. I hosted the blog on msmvps.com, back when that was a thing.

In 2014 I migrated to wordpress.com, in the hope that this would make everything nice and simple: it’s a managed service, dedicated to blogging, so I shouldn’t have to worry about anything but the writing. It’s not been quite that simple.

I don’t know when I started writing blog posts in Markdown instead of using Windows Live Writer to create the HTML for me, but it’s definitely my preferred way of writing. It’s the format I use all over the place, it makes posting code easy… it’s just “the right format” (for me).

Almost all my problems with wordpress.com have fallen into one of two categories:

  • Markdown on WordPress (via JetPack, I believe) not quite working as I expect it to.
  • The editor on wordpress.com being actively hostile to Markdown users

In the first category, there are two problems. First, there’s my general annoyance at line breaks being relevant outside code. I like writing paragraphs including line breaks, so that the text is nicely in roughly 80-100 character lines. Unfortunately both WordPress and GitHub decide to format such paragraphs as multiple short lines, instead of flowing a single paragraph. I don’t know why the decision was made to format things this way, and I can see some situations in which it’s beneficial (e.g. a diff of “adding a single word” showing as just that diff rather than all the lines in the paragraph changing) but I mostly dislike it.

The second annoyance is that angle brackets in code (either in code fences or just in backticks) behave unpredictably in WordPress, in a way that I don’t remember seeing anywhere else. The most common reason for having to update a post is to fix some generics in C# code, mangling the Markdown to escape the angle brackets. One of these days I may try to document this so that I can get it right in future posts, but it’s certainly a frustration.

I don’t expect to be able to do anything about either of these aspects. I could potentially run posts through some sort of preprocessor, but I suspect that unwrapping paragraphs but not code blocks could get fiddly pretty fast. I can live with it.
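(For what it’s worth, if I ever did try it, the core of such a preprocessor might be something like the sketch below – joining consecutive non-blank lines outside fenced code blocks, and ignoring plenty of edge cases such as lists and headings. It assumes using System.Collections.Generic.)

// Joins consecutive non-blank lines into single paragraphs, leaving fenced code blocks untouched.
static IEnumerable<string> UnwrapParagraphs(IEnumerable<string> lines)
{
    bool inCodeBlock = false;
    var paragraph = new List<string>();
    foreach (var line in lines)
    {
        if (line.StartsWith("```"))
        {
            // Flush any pending paragraph, then toggle code-block mode and emit the fence unchanged.
            if (paragraph.Count > 0) { yield return string.Join(" ", paragraph); paragraph.Clear(); }
            inCodeBlock = !inCodeBlock;
            yield return line;
        }
        else if (inCodeBlock || line.Trim().Length == 0)
        {
            // Code lines and blank lines pass through as they are.
            if (paragraph.Count > 0) { yield return string.Join(" ", paragraph); paragraph.Clear(); }
            yield return line;
        }
        else
        {
            paragraph.Add(line.Trim());
        }
    }
    if (paragraph.Count > 0)
    {
        yield return string.Join(" ", paragraph);
    }
}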

The second category of annoyance – editing on wordpress.com – is what this post is mostly about.

I strongly suspect that most bloggers want a reasonably-WYSIWYG experience, and they definitely don’t want to see their post in its raw, unformatted version (usually HTML, but Markdown for me). For as long as I can remember, there have been two modes in the wordpress.com editor: visual and text. In some cases just going into the visual editor would cause the Markdown to be converted into HTML which would then show up in the text editor… it’s been fiddly to keep it as text. My habit is to keep a copy of the post as text (originally just in StackEdit but now in GitHub) and copy the whole thing into WordPress any time I want to edit anything. That way I don’t really care what WordPress does with it.

However, wordpress.com have now made even that workflow harder – they’ve moved to a “blocks” editor in the easy-to-get-to UI, and you can only get to the text editor via the admin UI.

I figured enough was enough. If I’ve got the posts as text locally (then stored on GitHub), there’s no need to go to the wordpress.com UI for anything other than comments. Time to crack open the API.

What, no .NET package?

WordPress is a pretty common blogging platform, let’s face it. I was entirely unsurprised to find out that there’s a REST API for it, allowing you to post to it. (The fact that I’d been using StackEdit to post for ages was further evidence of that.) It also wasn’t surprising that it used OAuth2 for authentication, given OAuth’s prevalence.

What was surprising was my inability to find any .NET packages to let me write a C# console application to call the API with really minimal code. I couldn’t even find any simple “do the OAuth dance for me” libraries that would work in a console application rather than in a web app. RestSharp looked promising, as the home page says “Basic, OAuth 1, OAuth 2, JWT, NTLM are supported” – but the authentication docs could do with some love, and looking at the source code suggested there was nothing that would start a local web server just to receive the OAuth code that could then be exchanged for a full auth token. (I know very little about OAuth2, but just enough to be aware of what’s missing when I browse through some library code.) WordPressPCL also looked promising – but requires JWT authentication, which is available via a plugin. I don’t want to upgrade from a personal wordpress.com account to a business account just for the sake of installing a single plugin. (I’m aware it could have other benefits, but…)

So, I have a few options:

  • Upgrade to a business account, install the JWT plugin, and try to use WordPressPCL
  • Move off wordpress.com entirely, run WordPress myself (or find another site like wordpress.com, I suppose) and make the JWT plugin available, and again use WordPressPCL
  • Implement the OAuth2 dance myself

Self-hosting WordPress

I did toy with the idea of running WordPress myself. I have a Google Kubernetes Engine cluster already, that I use to host nodatime.org and some other sites. I figured that by now, installing WordPress on a Kubernetes cluster would be pretty simple. It turns out there’s a Bitnami Helm chart for it, so I decided to give that a go.

First I had to install Helm – I’ve heard of it, but never used it before. My first attempt to use it, via a shell script, failed… but with Chocolatey, it installed okay.

Installing WordPress was a breeze – until it didn’t actually work, because my Kubernetes cluster doesn’t have enough spare resources. It is a small cluster, certainly – it’s not doing anything commercial, and I’m paying for it out of my own pocket, so I try to keep the budget relatively low. Apparently too low.

I investigated how much it might cost to increase the capacity of my cluster so I could run WordPress myself, and when it ended up being more expensive than the business account on wordpress.com (even before the time cost of maintaining the site), I figured I’d stop going down that particular rabbit hole.

Implementing OAuth2

In the end, I really shouldn’t have been so scared of implementing the OAuth2 dance myself. It’s not too bad, particularly when I’m happy to do a few manual steps each time I need a new token, rather than automating everything.

First I had to create an “application” on wordpress.com. That’s really just a registration for a client_secret and client_id, along with approved redirect URIs for the OAuth dance. I knew I’d be running a server locally for the browser to redirect to, so I allowed http://127.0.0.1:8080/auth as a redirect URI, and created the app appropriately.

The basic flow is:

  • Start a local web server to receive a redirect response from the WordPress server
  • Visit a carefully-constructed URL on WordPress in the browser
  • Authorize the request in the browser
  • The WordPress response indicates a redirect to the local server, that includes a code
  • The local server then exchanges that code for a token by making another HTTP request to the WordPress server
  • The local server displays the access token so I can copy and paste it for use elsewhere

In a normal application the user never needs to see the access token of course – all of this happens behind the scenes. However, doing that within my eventual “console application which calls the WordPress API to create or update posts” would be rather more hassle than copy/paste and hard-coding the access token. Is this code secure, if it ever gets stolen? Absolutely not. Am I okay with the level of risk here? Yup.

So, what’s the simplest way of starting an HTTP server in a standalone app? (I don’t need this to integrate with anything else.) You could obviously create a new empty ASP.NET Core application and find the right place to handle the request… but personally I reached for the .NET Functions Framework. I’m clearly biased as the author of the framework, but I was thrilled to see how easy it was to use for a real task. The solution is literally a single C# file and a project file, created with dotnet new gcf-http. The C# file contains a single class (Function) with a single method (HandleAsync). The C# file is 50 lines of code in total.
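To give an idea of its shape, here’s a trimmed-down sketch of that function. The wordpress.com token endpoint and form fields here are from memory rather than copied from my real code, and CLIENT_ID/CLIENT_SECRET are obviously placeholders – but the flow is exactly the one described above: receive the code on the redirect, exchange it for a token, and display the result.

using Google.Cloud.Functions.Framework;
using Microsoft.AspNetCore.Http;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public class Function : IHttpFunction
{
    private static readonly HttpClient client = new HttpClient();

    public async Task HandleAsync(HttpContext context)
    {
        // wordpress.com redirects the browser here, with the authorization code in the query string.
        string code = context.Request.Query["code"];

        // Exchange the code for an access token. CLIENT_ID and CLIENT_SECRET are placeholders,
        // and the endpoint/field names are assumptions from memory.
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["client_id"] = "CLIENT_ID",
            ["client_secret"] = "CLIENT_SECRET",
            ["redirect_uri"] = "http://127.0.0.1:8080/auth",
            ["code"] = code,
            ["grant_type"] = "authorization_code"
        });
        var response = await client.PostAsync("https://public-api.wordpress.com/oauth2/token", form);
        string json = await response.Content.ReadAsStringAsync();

        // Display the raw response so the access token can be copied and pasted elsewhere.
        await context.Response.WriteAsync(json);
    }
}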

Mind you, it still took over an hour to get a working token that was able to create a WordPress post. Was this due to intricacies of URL encoding in forms? No, despite my investigations taking me in that direction. Was it due to needing to base64 encode the token when making a request? No, despite many attempts along those lines too.

I made two mistakes:

  • In my exchange-code-for-token server, I populated the redirect_uri field in the exchange request with "http://127.0.0.1/auth" instead of "http://127.0.0.1:8080/auth"
  • In the test-the-token application, I specified a scheme of "Basic" instead of "Bearer" in AuthenticationHeaderValue

So just typos, basically. Incredibly frustrating, but I got there.

As an intriguing thought, now I’ve got a function that can do the OAuth dance, there’s nothing to stop me deploying that as a real Google Cloud Function so I could get an OAuth access token at any time just by visiting a URL without running anything locally. I’d just need a bit of configuration – which ASP.NET Core makes easy, of course. No need to do that just yet.

Posting to WordPress

At this point, I have a test application that can create a WordPress post (as Markdown, importantly). It can update the post as well.
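The “create” call itself is little more than an authenticated form post. As a sketch – the v1.1 posts/new endpoint and field names here are my recollection of the wordpress.com REST API rather than copied from the real code:

// Creates a new post and returns the JSON response (which includes the new post's ID).
// (Assumes using System.Net.Http, System.Net.Http.Headers and System.Collections.Generic.)
static async Task<string> CreatePostAsync(HttpClient client, string accessToken, string site, string title, string markdown)
{
    var request = new HttpRequestMessage(HttpMethod.Post,
        $"https://public-api.wordpress.com/rest/v1.1/sites/{site}/posts/new")
    {
        Content = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["title"] = title,
            ["content"] = markdown
        })
    };
    // The scheme really does need to be "Bearer" rather than "Basic"...
    request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
    var response = await client.SendAsync(request);
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsStringAsync();
}

Updating is much the same kind of call, just aimed at the existing post’s ID rather than posts/new.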

The next step is to work out what I want my blogging flow to be in the future. Given that I’m storing the blog content in GitHub, I could potentially trigger the code from a GitHub action – but I’m not sure that’s a particularly useful flow. For now, I’m going to go with “explicitly running an app when I want to create/update a post”.

Now updating a post requires knowing the post ID – which I can get within the WordPress UI, but which I also get when creating the post in the first place. But I’d need somewhere to store it. I could create a separate file with metadata for posts, but this is all starting to sound pretty complex.

Instead, my current solution is to have a little metadata “header” before the main post. The application can read that, and process it appropriately. It can also update it with the post ID when it first creates the post on wordpress.com. That also avoids me having to specify things like a title on the command line. At the time of writing, this post has a header like this:

title: Posting to wordpress.com in code
categories: C#, General
---

After running my application for the first time, I expect it to be something like this:

postId: 12345
title: Posting to wordpress.com in code
categories: C#, General
---

The presence of the postId field will trigger the app to use “update” instead of “create” next time I ask it to process this file.
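Parsing that header is trivial – something along these lines (my sketch, with hypothetical names, assuming the header is always well-formed and terminated by the --- line):

// Reads "key: value" lines until the "---" separator; everything afterwards is the post body.
// (Assumes using System.Collections.Generic and System.Linq.)
static (Dictionary<string, string> metadata, string body) ParsePost(string text)
{
    var metadata = new Dictionary<string, string>();
    var lines = text.Split('\n').ToList();
    int index = 0;
    for (; index < lines.Count && lines[index].Trim() != "---"; index++)
    {
        if (string.IsNullOrWhiteSpace(lines[index]))
        {
            continue;
        }
        var parts = lines[index].Split(':', 2);
        metadata[parts[0].Trim()] = parts[1].Trim();
    }
    string body = string.Join("\n", lines.Skip(index + 1));
    return (metadata, body);
}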

Will it work? I’ll find out in just a few minutes. This code hasn’t been run at all yet. Yes, I could write some tests for it. No, I’m not actually going to write the tests. I think it’ll be just as quick to iterate on it by trial and error. (It’s not terribly complicated code.)

Conclusion

If you can see this post, I have a new process for posting to my blog. I will absolutely not create this post manually – if the code never works, you’ll never see this text.

Is this a process that other people would want to use? Maybe, maybe not. I’m not expecting to open source it. But it’s a useful example of how it really doesn’t take that much effort to automate away some annoyance… and I was able to enjoy using my own Functions Framework for realsies, which is a bonus :)

Time to post!

Travis logs and .NET Core console output

This is a blog post rather than a bug report, partly because I really don’t know what’s at fault. Others with more knowledge of how the console works in .NET Core, or exactly what the Travis log does, might be able to dig deeper.

TL;DR: If you’re running jobs using .NET Core 3.1 on Travis and you care about the console output, you might want to set the TERM environment variable to avoid information being swallowed.

Much of my time is spent in the Google Cloud Libraries for .NET repository. That single repository hosts a lot of libraries, and many of the pull requests are from autogenerated code where the impact on the public API surface may not be immediately obvious. (It would be easy to miss one breaking change within dozens of comment changes, for example.) Our Travis build includes a job to work out the public API changes, which is fantastically useful. (Example)

When we updated our .NET Core SDK to 3.1 – or at least around that time; it may have been coincidence – we noticed that some of the log lines in our Travis jobs seemed to be missing. They were actually missing from all the jobs, but it was particularly noticeable for that “change detection” job because the output can often be small, but should always contain a “Diff level” line. It’s really obvious when that line is missing.

I spent rather longer trying to diagnose what was going wrong than I should have done. A colleague noted that clicking on “Raw log” showed that we were getting all the output – it’s just that Travis was swallowing some of it, due to control characters being emitted. This blog post is a distillation of what I learned when trying to work out what was going on.

A simple set of Travis jobs

In my DemoCode repository I’ve created a Travis setup for the sake of this post.

Here are the various files involved:

.travis.yml:

dist: xenial  

language: csharp  
mono: none  
dotnet: 3.1.301  

jobs:  
  include:  
    - name: "Default terminal, no-op program"  
      script: TravisConsole/run-dotnet.sh 0  

    - name: "Default terminal, write two lines"  
      script: TravisConsole/run-dotnet.sh 2  

    - name: "Mystery terminal, no-op program"  
      env: TERM=mystery  
      script: TravisConsole/run-dotnet.sh 0  

    - name: "Mystery terminal, write two lines"  
      env: TERM=mystery  
      script: TravisConsole/run-dotnet.sh 2  

    - name: "Mystery terminal, write two lines, no logo"  
      env: TERM=mystery DOTNET_NOLOGO=true  
      script: TravisConsole/run-dotnet.sh 2

TravisConsole/run-dotnet.sh:

#!/bin/bash  

set -e  

cd $(readlink -f $(dirname ${BASH_SOURCE}))  

echo "Before dotnet run (first)"  
dotnet run -- $1  
echo "After dotnet run (first)"  

echo "Before dotnet run (second)"  
dotnet run -- $1  
echo "After dotnet run (second)"

TravisConsole/Program.cs:

using System;  

class Program  
{  
    static void Main(string[] args)  
    {  
        int count = int.Parse(args[0]);  
        for (int i = 1; i <= count; i++)  
        {  
             Console.WriteLine($"Line {i}");  
        }  
    }  
}

So each job runs the same .NET Core console application twice with the same command line argument – either 0 (in which case nothing is printed out) or 2 (in which case it prints out “Line 1” then “Line 2”). The shell script also logs before and after executing the console application. The only other differences are in terms of the environment variables:

  • Some jobs use TERM=mystery instead of the default
  • The final job uses DOTNET_NOLOGO=true

I’ll come back to the final job right at the end – we’ll concentrate on the impact of the TERM environment variable first, as that’s the main point of the post. Next we’ll look at the output of the jobs – in each case showing it in the “pretty” log first, then in the “raw” log. The pretty log has colour, and I haven’t tried to reproduce that. I’ve also only shown the relevant bit – the call to run-dotnet.sh.

You can see all of the output shown here in the Travis UI, of course.

Job 1: Default terminal, no-op program

Pretty log

$ TravisConsole/run-dotnet.sh 0
Before dotnet run (first)
Welcome to .NET Core 3.1!
---------------------
SDK Version: 3.1.301
----------------
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
--------------------------------------------------------------------------------------
Before dotnet run (second)
The command "TravisConsole/run-dotnet.sh 0" exited with 0.

Note the lack of After dotnet run in each case.

Raw log

[0K$ TravisConsole/run-dotnet.sh 0
Before dotnet run (first)

Welcome to .NET Core 3.1!

---------------------

SDK Version: 3.1.301

----------------

Explore documentation: https://aka.ms/dotnet-docs

Report issues and find source on GitHub: https://github.com/dotnet/core

Find out what's new: https://aka.ms/dotnet-whats-new

Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https

Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs

Write your first app: https://aka.ms/first-net-core-app

--------------------------------------------------------------------------------------
[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=After dotnet run (first)
Before dotnet run (second)
[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=After dotnet run (second)
travis_time:end:18aa556c:start=1595144448336834755,finish=1595144452475616837,duration=4138782082,event=script
[0K[32;1mThe command "TravisConsole/run-dotnet.sh 0" exited with 0.[0m

In the raw log, we can see that After dotnet run is present each time, but with [?1h=[?1h=[?1h=[?1h=[?1h=[?1h=[?1h= before it. Let’s see what happens when our console application actually writes to the console.

Job 2: Default terminal, write two lines

Pretty log

$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)
Welcome to .NET Core 3.1!
---------------------
SDK Version: 3.1.301
----------------
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
--------------------------------------------------------------------------------------
Line 2
Before dotnet run (second)
Line 2
The command "TravisConsole/run-dotnet.sh 2" exited with 0.

This time we don’t have After dotnet run – and we don’t have Line 1 either. As expected, they are present in the raw log, but with control characters before them:

[0K$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)

Welcome to .NET Core 3.1!

---------------------

SDK Version: 3.1.301

----------------

Explore documentation: https://aka.ms/dotnet-docs

Report issues and find source on GitHub: https://github.com/dotnet/core

Find out what's new: https://aka.ms/dotnet-whats-new

Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https

Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs

Write your first app: https://aka.ms/first-net-core-app

--------------------------------------------------------------------------------------
[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=Line 1
Line 2
[?1h=After dotnet run (first)
Before dotnet run (second)
[?1h=[?1h=[?1h=[?1h=[?1h=[?1h=Line 1
Line 2
[?1h=After dotnet run (second)
travis_time:end:00729828:start=1595144445905196926,finish=1595144450121508733,duration=4216311807,event=script
[0K[32;1mThe command "TravisConsole/run-dotnet.sh 2" exited with 0.[0m

Now let’s try with the TERM environment variable set.

Job 3: Mystery terminal, no-op program

$ TravisConsole/run-dotnet.sh 0
Before dotnet run (first)
Welcome to .NET Core 3.1!
---------------------
SDK Version: 3.1.301
----------------
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
--------------------------------------------------------------------------------------
After dotnet run (first)
Before dotnet run (second)
After dotnet run (second)
The command "TravisConsole/run-dotnet.sh 0" exited with 0.

That’s more like it! This time the raw log doesn’t contain any control characters within the script execution itself. (There are still blank lines in the “logo” part, admittedly. Not sure why, but we’ll get rid of that later anyway.)

[0K$ TravisConsole/run-dotnet.sh 0
Before dotnet run (first)

Welcome to .NET Core 3.1!

---------------------

SDK Version: 3.1.301

----------------

Explore documentation: https://aka.ms/dotnet-docs

Report issues and find source on GitHub: https://github.com/dotnet/core

Find out what's new: https://aka.ms/dotnet-whats-new

Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https

Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs

Write your first app: https://aka.ms/first-net-core-app

--------------------------------------------------------------------------------------
After dotnet run (first)
Before dotnet run (second)
After dotnet run (second)
travis_time:end:11222e41:start=1595144449188901003,finish=1595144453242229433,duration=4053328430,event=script
[0K[32;1mThe command "TravisConsole/run-dotnet.sh 0" exited with 0.[0m

Let’s just check that it still works with actual output:

Job 4: Mystery terminal, write two lines

Pretty log

4.45s$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)
Welcome to .NET Core 3.1!
---------------------
SDK Version: 3.1.301
----------------
Explore documentation: https://aka.ms/dotnet-docs
Report issues and find source on GitHub: https://github.com/dotnet/core
Find out what's new: https://aka.ms/dotnet-whats-new
Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https
Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs
Write your first app: https://aka.ms/first-net-core-app
--------------------------------------------------------------------------------------
Line 1
Line 2
After dotnet run (first)
Before dotnet run (second)
Line 1
Line 2
After dotnet run (second)
The command "TravisConsole/run-dotnet.sh 2" exited with 0.

Exactly what we’d expect from inspection. The raw log doesn’t hold any surprises either.

Raw log

[0K$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)

Welcome to .NET Core 3.1!

---------------------

SDK Version: 3.1.301

----------------

Explore documentation: https://aka.ms/dotnet-docs

Report issues and find source on GitHub: https://github.com/dotnet/core

Find out what's new: https://aka.ms/dotnet-whats-new

Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https

Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs

Write your first app: https://aka.ms/first-net-core-app

--------------------------------------------------------------------------------------
Line 1
Line 2
After dotnet run (first)
Before dotnet run (second)
Line 1
Line 2
After dotnet run (second)
travis_time:end:0203f787:start=1595144444502091825,finish=1595144448950945977,duration=4448854152,event=script
[0K[32;1mThe command "TravisConsole/run-dotnet.sh 2" exited with 0.[0m

Job 5: Mystery terminal, write two lines, no logo

While job 4 is almost exactly what we want, it’s still got the annoying “Welcome to .NET Core 3.1!” section. That’s a friendly welcome for users in an interactive context, but pointless for continuous integration. Fortunately it’s now easy to turn off by setting DOTNET_NOLOGO=true. We now have exactly the log we’d want:

Pretty log

$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)
Line 1
Line 2
After dotnet run (first)
Before dotnet run (second)
Line 1
Line 2
After dotnet run (second)
The command "TravisConsole/run-dotnet.sh 2" exited with 0.

Raw log

[0K$ TravisConsole/run-dotnet.sh 2
Before dotnet run (first)
Line 1
Line 2
After dotnet run (first)
Before dotnet run (second)
Line 1
Line 2
After dotnet run (second)
travis_time:end:0bb5a6d4:start=1595144448986411002,finish=1595144453476210113,duration=4489799111,event=script
[0K[32;1mThe command "TravisConsole/run-dotnet.sh 2" exited with 0.[0m

Conclusion

The use of mystery as the value of the TERM environment variable isn’t special, other than “not being a terminal that either Travis or .NET Core will have any fixed expectations about”. I expect that .NET Core is trying to be clever with its output based on the TERM environment variable, and that Travis isn’t handling the control characters in quite the way that .NET Core expects it to. Which one is right, and which one is wrong? It doesn’t really matter to me, so long as I can fix it.

This does potentially have a cost, of course. Anything which would actually produce prettier output based on the TERM environment variable is being hampered by this change. But so far we haven’t seen any problems. (It certainly isn’t stopping our Travis logs from using colour, for example.)

I discovered the DOTNET_NOLOGO environment variable – introduced in .NET Core 3.1.301, I think – incidentally while researching this problem. It’s not strictly related to the core problem, but it is related to the matter of “making CI logs readable” so I thought I’d include it here.

I was rather surprised not to see complaints about this all over the place. As you can see from the code above, it’s not like I’m doing anything particularly “special” – just writing lines out to the console. Are other developers not having the same problem, or just not noticing the problem? Either way, I hope this post helps either the .NET Core team to dive deeper, find out what’s going on and fix it (talking to the Travis team if appropriate), or at least raise awareness of the issue so that others can apply the same workaround.

V-Drum Explorer: Blazor and the Web MIDI API

Friday, 9pm

Yesterday, while I was speaking to the NE:Tech user group about V-Drum Explorer, someone mentioned the Web MIDI API – a way of accessing local MIDI devices from a browser.

Now my grasp of JavaScript is tenuous at best… but that’s okay, because I can write C# using Blazor. So in theory, I could build an equivalent to V-Drum Explorer, but running entirely in the browser using WebAssembly. That means I’d never have to worry about the installer again…

Now, I don’t want to get ahead of myself here. I suspect that WPF and later MAUI are still the way forward, but this should at least prove a fun investigation. I’ve never used the Web MIDI API, and I haven’t used Blazor for a few years. This weekend I’m sure I can find a few spare hours, so let’s see how far I can get.

Just for kicks, I’m going to write up my progress in this blog post as I go, adding a timestamp periodically so we can see how long it takes to do things (admittedly whilst writing it up at the same time). I promise not to edit this post other than for clarity, typos etc – if my ideas turn out to be complete failures, such is life.

I have a goal in mind for the end of the weekend: a Blazor web app, running locally to start with (deploying it to k8s shouldn’t be too hard, but isn’t interesting at this point), which can detect my drum module and list the names of the kits on the module.

Here’s the list of steps I expect to take. We’ll see how it goes.

  1. Use JSFiddle to try to access the Web MIDI API: see if I can list the ports, open an input and output port, listen for MIDI messages (dumped to the console), and send a SysEx message hard-coded to request the name for kit 1.
  2. Start a new Blazor project, and check I can get it to work.
  3. Try to access the MIDI ports in Blazor – just listing the ports to start with.
  4. Expand the MIDI access test to do everything from step 1.
  5. Loop over all the kits instead of just the first one – this will involve doing checksum computation in the app, copying code from the V-Drum Explorer project. If I get this far, I’ll be very happy.
  6. As a bonus step, if I get this far, it would be really interesting to try to depend on V-Drum Explorer projects (VDrumExplorer.Model and VDrumExplorer.Midi) after modifying the MIDI project to use Web MIDI. At that point, the code for the Blazor app could be really quite simple… and displaying a read-only tree view probably wouldn’t be too hard. Maybe.

Sounds like I have a fun weekend ahead of me.

Saturday morning

Step 1: JSFiddle + MIDI

Time: 07:08

Turn on the TD-27, bring up the MIDI API docs and JSFiddle, and let’s give it a whirl…

It strikes me that it might be useful to be able to save some efforts here. A JSFiddle account may not be necessary for that, but it may make things easier… let’s create an account.

First problem: I can’t see how to make the console (which is where I expect all the results to end up) into more than a single line in the bottom right hand corner. I could open up Chrome’s console, of course, but as JSFiddle has one, it would be nice to use that. Let’s see what happens if I just write to it anyway… ah, it expands as it has data. Okay, that’ll do.

Test 1: initialize MIDI at all

The MIDI API docs have a really handy set of examples which I can just copy/paste. (I’m finding it hard to resist the temptation to change the whitespace to something I’m more comfortable with, but hey…)

So, copy the example in 9.1:

“Failed to get MIDI access – SecurityError: Failed to execute ‘requestMIDIAccess’ on ‘Navigator’: Midi has been disabled in this document by Feature Policy.”

Darn. Look up Feature-Policy on MDN, then a search for “JSFiddle Feature-Policy” finds https://github.com/jsfiddle/jsfiddle-issues/issues/1106 – which is specifically about MIDI access! And it has a workaround… apparently things work slightly differently with a saved Fiddle. Let’s try saving and reloading…

"MIDI ready!"

Hurray!

Test 2: list the MIDI ports

Copy/paste example 9.3 into the Fiddle (with a couple of extra lines to differentiate between input and output), and call listInputsAndOutputs from onMIDISuccess.

"MIDI ready!"
"Input ports"
"Input port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"
"Input port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"
"Input port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"
"Input port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"
"Input port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"
"Input port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"
"Output ports"
"Output port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"
"Output port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"
"Output port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"
"Output port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"
"Output port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"
"Output port [type:'undefined'] id:'undefined' manufacturer:'undefined' name:'undefined' version:'undefined'"

Hmm. That’s not ideal. It’s clearly found some ports (six inputs and outputs? I’d only expect one or two), but it can’t use any properties in them.

If I add console.log(output) in the loop, it shows “entries”, “keys”, “values”, “forEach”, “has” and “get”, suggesting that the example is iterating over the properties of a collection rather than the entries.

Using for (var input in midiAccess.inputs.values()) still doesn’t give me anything obviously useful. (Keep in mind I know very little JavaScript – I’m sure the answer is obvious to many of you.)

Let’s try using forEach instead like this:

function listInputsAndOutputs( midiAccess ) {
  console.log("Input ports");
  midiAccess.inputs.forEach(input => {
    console.log( "Input port [type:'" + input.type + "'] id:'" + input.id +
      "' manufacturer:'" + input.manufacturer + "' name:'" + input.name +
      "' version:'" + input.version + "'" );
  });

  console.log("Output ports");
  midiAccess.outputs.forEach(output => {
    console.log( "Output port [type:'" + output.type + "'] id:'" + output.id +
      "' manufacturer:'" + output.manufacturer + "' name:'" + output.name +
      "' version:'" + output.version + "'" );
  });
}

Now the output is much more promising:

"MIDI ready!"
"Input ports"
"Input port [type:'input'] id:'input-0' manufacturer:'Microsoft Corporation' name:'5- TD-27' version:'10.0'"
"Output ports"
"Output port [type:'output'] id:'output-1' manufacturer:'Microsoft Corporation' name:'5- TD-27' version:'10.0'"

Test 3: dump MIDI messages to the console

I can just hard-code the input and output port IDs for now – when I get into C#, I can do something more reasonable.

Adapting example 9.4 from the Web MIDI docs very slightly, we get:

function logMidiMessage(message) {
  var line = "MIDI message: ";
  for (var i = 0; i < message.data.length; i++) {
    line += "0x" + message.data[i].toString(16) + " ";
  }
  console.log(line);
}

function onMIDISuccess(midiAccess) {
  var input = midiAccess.inputs.get('input-0');
  input.onmidimessage = logMidiMessage;
}

Now when I hit a drum, I see MIDI messages – and likewise when I make a change on the module (e.g. switching kit) that gets reported as well – so I know that SysEx messages are working.

Test 4: request the name of kit 1

Timestamp: 07:44

At this point, I need to go back to the V-Drum Explorer code and the TD-27 docs. The kit name is in the first 12 bytes of the KitCommon container, which is at the start of each Kit container. The Kit container for kit 1 starts at 0x04_00_00_00, so I just need to create a Data Request message for the 12 bytes starting at that address. I can do that just by hijacking a command in my console app, and getting it to print out the MIDI message. I need to send these bytes:

F0 41 10 00 00 00 63 11 04 00 00 00 00 00 00 0C 70 F7
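(The 0x70 near the end is the Roland checksum over the address and size bytes – assuming the usual Roland scheme, it’s the value that makes their sum a multiple of 128. A quick C# sketch of the computation I’ll eventually need to port for step 5:)

// Roland checksum: the 7-bit value which, added to the sum of the address and size/data bytes,
// gives a multiple of 128. (Assumes using System.Linq.)
static byte RolandChecksum(params byte[] addressAndData)
{
    int sum = addressAndData.Sum(b => (int) b);
    return (byte) ((128 - (sum % 128)) % 128);
}

// For address 04 00 00 00 and size 00 00 00 0C, the sum is 0x10, so the checksum is 0x70.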

That should be easy enough, adapting example 9.5 of the Web MIDI docs…

(Note of annoyance at this point: forking in JSFiddle doesn’t seem to be working properly for me. I get a new ID, but I can’t change the title in a way that shows up in “Your fiddles” properly. Ah – it looks like I need to do “fork, change title, set as base”. Not ideal, but it works.)

So I’d expect this code to work:

var output = midiAccess.outputs.get('output-1');
var requestMessage = [0xf0, 0x41, 0x10, 0x00, 0x00, 0x00, 0x63, 0x11, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0c, 0x70, 0xf7];
output.send(requestMessage);

But I don’t see any sign that the kit has sent back a response – and worse, if I add console.log("After send"); to the script, that doesn’t get logged either. Maybe it’s throwing an exception?

Aha – yes, there’s an exception:

Failed to execute ‘send’ on ‘MIDIOutput’: System exclusive message is not allowed at index 0 (240).

Ah, my requestMIDIAccess call wasn’t specifically requesting SysEx access. It’s interesting that it was able to receive SysEx messages even though it couldn’t send them.

After changing the call to pass { sysex: true }, I get back a MIDI message which looks like it probably contains the kit name. Hooray! Step 1 done :)

Timestamp: 08:08 (So all of this took an hour. That’s not too bad.)

Step 2: Vanilla Blazor project

Okay, within the existing VDrumExplorer solution, add a new project.

Find the Blazor project template, choose WebAssembly… and get interested by the “ASP.NET Core Hosted” option. I may want that eventually, but let’s not bother for now. (Side-thought: for the not-hosted version, I may be able to try it just by hosting the files in Google Cloud Storage. Hmmm.)

Let’s try to build and run… oh, it failed:

The "ResolveBlazorRuntimeDependencies" task failed unexpectedly.
error MSB4018: System.IO.FileNotFoundException: Could not load file or assembly 'VDrumExplorer.Blazor.dll' or one of its dependencies. The system cannot find the file specified.

That’s surprising. It’s also surprising that it looks like it’s got ASP.NET Core, given that I didn’t tick the box.

There’s a Visual Studio update available… maybe that will help? Upgrading from 16.6.1 to 16.6.3…

For good measure, let’s blow away the new project in case the project template has changed in 16.6.3.

Time to make a coffee…

Try again with the new version… nope, still failing in the same way.

I wonder whether I’ve pinned the .NET Core SDK to an older version and that’s causing a problem?

Ah, yes – there’s a global.json file in Drums, and that specifies 3.1.100.

Aha! Just updating that to use 3.1.301 works. A bit of time wasted, but not too bad.

Running the app now works, including hitting a breakpoint. Time to move onto MIDI stuff.

Timestamp: 08:33

Step 3: Listing MIDI ports in Blazor

Substep 1: create a new page

Let’s create a new Razor page. I’d have thought that would be “Add -> New Item -> Razor Page” but that comes up with a .cshtml file instead of the .razor file that everything else is.

Maybe despite being in a “Pages” directory with a .razor extension, these aren’t Razor Pages but Razor Components? Looks like it.

I’m feeling I could get out of my depth really rapidly here. If I were doing this “properly” I’d now read a bunch of docs on Razor. (I’ve been to various talks on it, and used it before, but I haven’t done either for quite a while.)

The “read up on the fundamentals first” and “hack, copy, paste, experiment” approaches to learning a new technology both have their place… I just generally feel a little less comfortable with the latter. It definitely gets to some results quicker, but doesn’t provide a good foundation for doing real work.

Still, I’m firmly in experimentation territory here, so hack on.

The new page has an “initialize MIDI” button, and two labels for input ports and output ports.

Add this to the nav menu, run it, and all seems well. (Eventually I may want to make this the default landing page, but that can come later.)

Time to dive into JS interop…

Substep 2: initialize MIDI

Let’s not rush to listing the ports – just initializing MIDI at all would be good. So add a status field and label, and start looking up JS interop.

I’ve heard of Blazor University before, so that’s probably a good starting point. And yes, there’s a section about JavaScript interop. It’s worryingly far down the TOC (i.e. I’m skipping an awful lot of other information to get that far) but we’ll plough on.

Calling the requestMIDIAccess function from InitializeMidi is relatively straightforward, with one caveat: I don’t know how to express the result type. I know it’s a JavaScript promise, but how do I refer to that within the C# code? Let’s just use object to start with:

private async Task InitializeMidi()
{
    var promise = await JSRuntime.InvokeAsync<object>("navigator.requestMIDIAccess", TimeSpan.FromSeconds(3));
}

Looking more carefully at some documentation, it doesn’t look like I can effectively keep a reference to a JavaScript object within the C# code – everything is basically JSON serialized/deserialized across the boundary.

That’s fairly reasonable – but it means we’ll need to write more JavaScript code, I suspect.

Plan:

  • Write a bunch of JavaScript code in the Razor page. (Yes, I’d want to move it if I were doing this properly…)
  • Keep a global midi variable to keep “the initialized MIDI access”
  • Declare JavaScript functions for everything I need to do with MIDI, that basically proxy through the midi variable

I’d really hoped to avoid writing any JavaScript while running Blazor, but never mind.

Plan fails on first step: we’re not meant to write scripts within Razor pages. Okay, let’s create a midi.js script and include that in index.html.

Unfortunately, the asynchrony turns out to be tricky. We really want to be able to pass a callback to the JavaScript code, but that involves creating a DotNetObjectReference and managing lifetimes. That’s slightly annoying and fiddly.

I’ll come back to that eventually, but for now I can just keep all the state in JavaScript, and ask for the status after waiting for a few seconds:

private async Task InitializeMidi()
{
    await JSRuntime.InvokeAsync<object>("initializeMidi", TimeSpan.FromSeconds(3));
    await Task.Delay(3000);
    status = await JSRuntime.InvokeAsync<string>("getMidiStatus");
}

Result: yes, I can see that MIDI has been initialized. The C# code can fetch the status from the JavaScript.

That’s all the time I have for now – I have a meeting at 9:30. When I come back, I’ll look at making the JavaScript a bit cleaner, and writing a callback.

Timestamp: 09:25

Substep 3: use callbacks and a better library pattern

Timestamp: 10:55

Back again.

Currently my midi.js file just introduces functions into the global namespace. Let’s follow the W3C JavaScript best practices page guidance instead:

var midi = function() {
    var access = null;
    var status = "Uninitialized";

    function initialize() {
        success = function (midiAccess) {
            access = midiAccess;
            status = "Initialized";
        };
        failure = (message) => status = "Failed: " + message;
        navigator.requestMIDIAccess({ sysex: true })
            .then(success, failure);
    }

    function getStatus() {
        return status;
    }

    return {
        initialize: initialize,
        getStatus: getStatus
    };
}();

Is that actually any good? I really don’t know – but it’s at least good enough for now.

Next, let’s work out how to do a callback. Ideally, we’d be able to return something from the JavaScript initialize() method and await that. There’s an interesting blog post about doing just that, but it’s really long. (That’s not a criticism – it’s a great post that explains everything really well. It’s just it’s very involved.)

I suspect that a bit of hackery will allow a “simpler but less elegant” solution, which is fine by me. Let’s create a PromiseHandler class with a proxy object for JavaScript:

using Microsoft.JSInterop;
using System;
using System.Threading.Tasks;

namespace VDrumExplorer.Blazor
{
    public class PromiseHandler : IDisposable
    {
        public DotNetObjectReference<PromiseHandler> Proxy { get; }
        private readonly TaskCompletionSource<int> tcs;

        public PromiseHandler()
        {
            Proxy = DotNetObjectReference.Create(this);
            tcs = new TaskCompletionSource<int>();
        }

        [JSInvokable]
        public void Success() =>
            tcs.TrySetResult(0);

        [JSInvokable]
        public void Failure(string message) =>
            tcs.TrySetException(new Exception(message));

        public Task Task => tcs.Task;

        public void Dispose() => Proxy.Dispose();
    }
}

We can then create an instance of that in InitializeMidi, and pass the proxy to the JavaScript:

private async Task InitializeMidi()
{
    var handler = new PromiseHandler();
    await JSRuntime.InvokeAsync<object>("midi.initialize", TimeSpan.FromSeconds(3), handler.Proxy);
    try
    {
        await handler.Task;
        status = "Initialized";
    }
    catch (Exception e)
    {
        status = $"Initialization failed: {e.Message}";
    }
}

The JavaScript then uses the proxy object for its promise handling:

function initialize(handler) {
    success = function (midiAccess) {
        access = midiAccess;
        handler.invokeMethodAsync("Success");
    };
    failure = message => handler.invokeMethodAsync("Failure", message);
    navigator.requestMIDIAccess({ sysex: true })
        .then(success, failure);
}

It’s all quite explicit, but it seems to do the job, at least for now, and didn’t take too long to get working.

Timestamp: 11:26

Substep 4: listing MIDI ports

Listing ports doesn’t involve promises, but it does involve an iterator, and I’m dubious that I’ll be able to return that directly. Let’s create an array in JavaScript and copy ports into it:

function getInputPorts() {
    var ret = [];
    access.inputs.forEach(input => ret.push({ id: input.id, name: input.name }));
    return ret;
}

(I initially tried just pushing input into the array, but that way I didn’t end up with any data – it’s not clear to me what JSON was returned across the JS/.NET boundary, but it didn’t match what I expected.)

In .NET I then just need to declare a class to receive the data:

public class MidiPort
{
    [JsonPropertyName("id")]
    public string Id { get; set; }

    [JsonPropertyName("name")]
    public string Name { get; set; }
}

And I can get the input ports, and display them via a field that’s hooked up in the Razor page:

var inputs = await JSRuntime.InvokeAsync<List<MidiPort>>("midi.getInputPorts", Timeout);
inputDevices = string.Join(", ", inputs.Select(input => $"{input.Id} ({input.Name})"));

Success!

Listing ports in Blazor

Timestamp: 11:46 (That was surprisingly quick.)

Step 4: Retrieve the “kit 1” name in Blazor

We need two extra bits of MIDI functionality: sending and receiving data. I’m hoping that exchanging byte arrays via Blazor will be straightforward, so this should just be a matter of creating a callback and adding functions to the JavaScript to send messages and add a callback when a message is received.

Timestamp: 12:16

Okay, well it turned out that exchanging byte arrays wasn’t quite as simple as I’d hoped: I needed to base64-encode on the JS side, otherwise it was transmitted as a JSON object. Discovering that went via creating a MidiMessage class, which I might as well keep around now that I’ve got it. I can now receive messages.
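For reference, the MidiMessage class itself is tiny – something like this sketch, matching the { data, timestamp } object created on the JavaScript side, and relying on System.Text.Json treating the base64 string as a byte array:

public class MidiMessage
{
    // Base64-encoded on the JavaScript side; System.Text.Json deserializes it into a byte array.
    [JsonPropertyName("data")]
    public byte[] Data { get; set; }

    [JsonPropertyName("timestamp")]
    public double Timestamp { get; set; }
}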

Timestamp: 12:21

Blazor’s state change detection doesn’t include calls to List.Add, which is reasonable. It’s a shame it doesn’t spot ObservableCollection.Add either, though. Still, we can fix this just by calling StateHasChanged.

I now have a UI that can display messages. The three bits involved (as well as the simple MidiMessage class) are a callback class that delegates to an action:

public class MidiMessageHandler : IDisposable
{
    public DotNetObjectReference<MidiMessageHandler> Proxy { get; }
    private readonly Action<MidiMessage> handler;

    public MidiMessageHandler(Action<MidiMessage> handler)
    {
        Proxy = DotNetObjectReference.Create(this);
        this.handler = handler;
    }

    [JSInvokable]
    public void OnMessageReceived(MidiMessage message) => handler(message);

    public void Dispose() => Proxy.Dispose();
}

The JavaScript to use that:

function addMessageHandler(portId, handler) {
    access.inputs.get(portId).onmidimessage = function (message) {
        // We need to base64-encode the data explicitly, so let's create a new object.
        var jsonMessage = { data: window.btoa(message.data), timestamp: message.timestamp };
        handler.invokeMethodAsync("OnMessageReceived", jsonMessage);
    };
}

And then the C# code to receive the callback, and subscribe to it:

// In InitializeMidi()
var messageHandler = new MidiMessageHandler(MessageReceived);
await JSRuntime.InvokeVoidAsync("midi.addMessageHandler", Timeout, inputs[0].Id, messageHandler.Proxy);

// Separate method for the callback - we could have used a local
// method or lambda though.
private void MessageReceived(MidiMessage message)
{
    messages.Add(BitConverter.ToString(message.Data));
    // Blazor doesn't "know" that the collection has changed - even if we make it an ObservableCollection
    StateHasChanged();
}

Timestamp: 12:26

Now let’s try sending the SysEx message to request kit 1’s name… this should be the easy bit!

… except it doesn’t work. The log shows the following error:

Unhandled exception rendering component: Failed to execute ‘send’ on ‘MIDIOutput’: No function was found that matched the signature provided.

Maybe this is another base64-encoding issue. Let’s try explicitly base64-decoding the data in JavaScript…

Nope, same error. Let’s try hard-coding the data we want to send, using JavaScript that has worked before…

That does work, which suggests my window.atob() call isn’t behaving as expected.

Now I could use some logging here, but let’s try putting a breakpoint in JavaScript. I haven’t done that before. Hopefully it’ll open in the Chrome console.

Whoa! The breakpoint worked, but in Visual Studio instead. That’s amazing! I can see that atob(data) has returned a string, not an array.

This Stack Overflow question offers a potential option. This is really horrible, but if it works, it works…

And it works. Well, sort of. The MIDI message I get back is much longer than I’d expected, and it’s longer than I get in JSFiddle. Maybe my callback wasn’t working properly before.

Timestamp: 12:42

Okay, so btoa() isn’t what I want either. This Stack Overflow question goes into details, but the accepted answer uses a ton of code.

Hmmm… right-clicking on “wwwroot” gives me an option of “Add… Client-Side Library”. Let’s give that a go and see if it makes both sides of the base64 problem simpler.

Timestamp: 12:59

Well it didn’t “just work”. The library was added to my wwwroot directory, and trying to use it from midi.js added an import statement at the start of midi.js… which then caused an error of:

Cannot use import statement outside a module

I guess I really need to know what a JavaScript module is, and whether midi.js should be one. Hmm. Time for lunch.

Saturday afternoon

Timestamp: 14:41

Back from lunch and a chat with my parents. Let’s have another look at this base64 library…

(Side note: Visual Studio, while I’m not doing anything at all and I don’t have any documents open, is taking up 80% of my CPU. That doesn’t seem right. Oh well.)

If I just try to import the byte-base64 script directly with a script tag then I end up with an error of:

Uncaught ReferenceError: exports is not defined

Bizarrely enough, the error message often refers to lib.ts, even though I’ve made sure there’s no TypeScript library in wwwroot.

Okay, I’ve now got it to work, by the horrible hack of copying the file to base64.js in wwwroot and removing everything about exports. I may investigate other libraries at some point, but fundamentally this inability to base64 encode/decode correctly has been the single most time-consuming and frustrating part so far. Sigh.

(Also, the result is something I’m not happy to put on GitHub, as it involves just a copy of the library file rather than using it as intended.)

Timestamp: 15:01

Step 5: Retrieve all kit names in Blazor

Okay, so I’ve got the not-at-all decoded kit name successfully.

Let’s try looping to get all of them, decoding as we go.

This will involve copying some of the “real” V-Drum Explorer code so I can create Data Request messages programmatically, and decode Data Set messages. While I’d love to just add a reference to VDrumExplorer.Midi, I’m definitely not there yet. (I’d need to remove the commons-midi references and replace everything I use. That’s going to be step 6, maybe…)
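
As a reference point, the general shape of a Roland “Data Request 1” (RQ1) SysEx message is documented in the MIDI implementation guides. This is just an illustrative sketch (not the actual V-Drum Explorer code), and the device ID and model ID bytes are placeholders that come from the module’s documentation:

static byte[] CreateDataRequest(byte deviceId, byte[] modelId, byte[] address, byte[] size)
{
    var data = new List<byte> { 0xF0, 0x41, deviceId };
    data.AddRange(modelId);
    data.Add(0x11); // RQ1: "request data"
    data.AddRange(address);
    data.AddRange(size);
    // Roland checksum: the 7-bit sum of the address bytes, size bytes and the
    // checksum itself must be 0.
    int sum = address.Sum(b => (int) b) + size.Sum(b => (int) b);
    data.Add((byte) ((128 - (sum % 128)) % 128));
    data.Add(0xF7);
    return data.ToArray();
}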

Timestamp: 15:41

Success! After copying quite a bit of code, everything just worked… nothing was particularly unexpected at this stage, which is deeply encouraging.

Listing TD-27 kits in Blazor

I’m going to leave it there for the day, but tomorrow I can try to change the abstraction used by V-Drum Explorer so that it can all integrate nicely…

Saturday evening

Timestamp: 17:55

Interlude: refactoring MIDI access

Okay, so it turns out I really don’t want to wait until tomorrow. However, the next step is going to be code I genuinely want to keep, so let’s commit everything I’ve done so far to a new branch, but then go back to the branch I was on.

The aim of this step is to make the MIDI access replaceable. It doesn’t need to be “hot-replaceable” – at least not yet – so I don’t mind using a static property for “the current MIDI implementation”. I may make it more DI-friendly later on.

The two projects I’m going to change are VDrumExplorer.Model, and VDrumExplorer.Midi. Model refers to Midi at the moment, and Midi refers to the managed-midi library. The plan is to move most of the code from Midi to Model, but without any reference to managed-midi types. I’ll define a few interfaces (e.g. IMidiInput, IMidiOutput, IMidiManager) and write all the rest of the MIDI-related code to refer to those interfaces. I can then ditch VDrumExplorer.Midi, but add VDrumExplorer.Midi.ManagedMidi which will implement my Model interfaces in terms of the managed-midi library – with the hope that tomorrow I can have a Blazor implementation of the same interfaces.
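
Roughly the kind of thing I mean – this is only a sketch to give the idea rather than the final shape of the interfaces, and I’m reusing the MidiPort class from earlier purely for illustration:

public interface IMidiManager
{
    Task<IReadOnlyList<MidiPort>> ListInputPortsAsync();
    Task<IReadOnlyList<MidiPort>> ListOutputPortsAsync();
    Task<IMidiInput> OpenInputAsync(MidiPort port);
    Task<IMidiOutput> OpenOutputAsync(MidiPort port);
}

public interface IMidiInput : IDisposable
{
    event EventHandler<MidiMessage> MessageReceived;
}

public interface IMidiOutput : IDisposable
{
    // Synchronous, effectively fire-and-forget.
    void Send(MidiMessage message);
}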

I have confidence that this will work reasonably well, as I’ve done the same thing for audio recording/playback devices (with an NAudio implementation project).

Let’s go for it.

Timestamp: 18:03

Okay, that went pretty much as planned. I was actually able to simplify the code a bit, which is nice. There’s potentially more refactoring to do, now that ModuleAddress, DataSegment and RolandMidiClient are in the same project – I can make RolandMidiClient.RequestDataAsync accept a ModuleAddress and return a DataSegment. That can come later though.

(Admittedly testing this found that kit 1 has an invalid value for one instrument. I’ll need to look into that later, but I don’t think it’s a new issue.)

Timestamp: 18:55

The Blazor MIDI interface implementation can wait until tomorrow – but I don’t anticipate it being tricky at all.

Sunday morning

Timestamp: 06:54

Okay, let’s do this :) My plan is:

  • Remove all the code that I copied from the rest of V-Drum Explorer into the Blazor project; we shouldn’t need that now.
  • Add a reference from the Blazor project to VDrumExplorer.Model
  • Implement the MIDI interfaces
  • Rework the code just enough to get the previous functionality working again
  • Rewrite the code to not have any hard-coded module addresses, instead detecting the right schema and listing the kits for any attached (and supported) module, not just the TD-27
  • Maybe publish it

Removing the code and adding the project reference are both trivial, of course. At that point, the code doesn’t compile, but I have a choice: I could get the code compiling again using the MIDI interfaces, but without implementing the interfaces, or I could implement the interface first.

Rewriting existing application code

Despite the order listed above, I’m going to rewrite the application part first, because that will clear the error list, making it easier to spot any mistakes while I am implementing the interface. The downside is that there’ll be bits of code I need to stash somewhere, because they’ll be part of the MIDI implementation eventually, but without wanting to get them right just yet.

I create a WebMidi folder for the implementation, and a scratchpad.txt file to copy any “not required right now” code into.

At this point I’m getting really annoyed with the syntax highlighting of the .razor file. I know it’s petty, but the grey background just for code is really ugly to me:

Ugly colours in Blazor

As I’m going to have to go through all the code anyway, let’s actually use “Add New Razor Page” this time, and move the code into there as I fix it up.

Two minutes later, it looks like what VS provides (at least with that option) isn’t quite what I want. What I really want is a partial class, not a code-behind for the model. It’s entirely possible that they’d be equivalent in this case, but the partial class is closer to what I have right now. This blog post tells me exactly what I need.
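
For the record, the partial class approach is roughly this (a sketch, with illustrative file and member names): the markup stays in the .razor file, and the members that used to live in the @code block move into a class file alongside it.

// Midi.razor.cs - a partial class alongside Midi.razor.
public partial class Midi
{
    private readonly List<string> logEntries = new List<string>();

    private void Log(string message) => logEntries.Add(message);
}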

Timestamp: 07:10

Starting to actually perform the migration, I realise I need an ILogger. For the minute, I’ll use a NullLogger, but later I’ll want to implement a logger that adds to the page. (I already have a Log method, so this should be simple.)

Timestamp: 07:19

That was quicker than I’d expected. Of course, I don’t know whether or not it works.

Implementing the MIDI interfaces

Creating the WebMidiManager, WebMidiInput and WebMidiOutput classes shows me just how little I really need to do – and it’s all code I’ve written before, of course.

For the moment, I’m not going to worry about closing the MIDI connection on IMidiInput.Dispose() etc – we’ll just leave everything open once it’s opened. What I will do is use a single .NET-side event handler for each input port, and do event subscribe/remove handling on the .NET side. If I don’t manage that, the underlying V-Drum Explorer interface will end up getting callbacks on client instances after disposal, and other oddities. The outputs can just be reused though – they’re stateless, effectively.
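
Very roughly the shape of it – illustrative names, and definitely not the exact code: a single per-port handler object stays registered with the JavaScript side, and each IMidiInput instance subscribes to its event, unsubscribing on disposal.

// One of these per port, registered once with the JS side via a
// DotNetObjectReference (as with MidiMessageHandler earlier) and never removed.
public class WebMidiPortHandler
{
    public event EventHandler<MidiMessage> MessageReceived;

    [JSInvokable]
    public void OnMessageReceived(MidiMessage message) =>
        MessageReceived?.Invoke(this, message);
}

// The IMidiInput implementation forwards from the per-port handler, and stops
// forwarding when it's disposed - so a disposed input doesn't receive
// callbacks even though the JS-side handler stays registered.
public class WebMidiInput : IMidiInput
{
    private readonly WebMidiPortHandler portHandler;

    public event EventHandler<MidiMessage> MessageReceived;

    public WebMidiInput(WebMidiPortHandler portHandler)
    {
        this.portHandler = portHandler;
        portHandler.MessageReceived += ForwardMessage;
    }

    private void ForwardMessage(object sender, MidiMessage message) =>
        MessageReceived?.Invoke(this, message);

    public void Dispose() => portHandler.MessageReceived -= ForwardMessage;
}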

Timestamp: 07:56

Okay, so that wasn’t too bad. No significant surprises, although there’s one bit of slight ugliness: my IMidiOutput.Send(MidiMessage) method is synchronous, but we’re calling into JavaScript interop which is always asynchronous. As it happens, that’s mostly okay: the Send call is meant to be effectively fire-and-forget anyway, but it does mean that if the call fails, we won’t spot it.

Let’s see if it actually works

Nope, not yet – initialization fails:

Cannot read property ‘inputs’ of null

Oddly, a second click of the button does initialize MIDI (although it doesn’t list the kits yet). So maybe there’s a timing thing going on here. Ah yes – I’d forgotten that for initialization, I’ve got to await the initial “start the promise” call, then await the promise handler. That’s easy enough.

Okay, so that’s fixed, but we’re still not listing the kits. While I can step through in the debugger (into the Model code), it would really help if I’d got a log implementation at this point. Let’s do that quickly.

Timestamp: 08:07

Great – I now get a nice log of how device detection is going:

  • Input device: ‘5- TD-27’
  • Output device: ‘5- TD-27’
  • Detecting devices for MIDI ports with name ‘5- TD-27’
  • No devices detected for MIDI port ‘5- TD-27’. Skipping.
  • No known modules detected. Aborting

So it looks like we’re not receiving a response to our “what devices are on this port” request.

Nothing’s obviously wrong with the code via a quick inspection – let’s add some console logging in the JavaScript side to get a clearer picture.

Hmm: “Sending message to port [object Object]” doesn’t sound promising. That should be a port ID. Ah yes, simple mistake in WebMidiOutput. This line:

runtime.InvokeVoidAsync("midi.sendMessage", runtime, message.Data);

should be

runtime.InvokeVoidAsync("midi.sendMessage", port, message.Data);

It’s amazing how often my code goes wrong as soon as I can’t lean on static typing…

Fix that, and boom, it works!

Generalizing the application code

Timestamp: 08:16

So now I can list the TD-27 kits, but it won’t list anything if I’ve got my TD-17 connected instead… and I’ve got fairly nasty code computing the module addresses to fetch. Let’s see how much easier I can make this now that I’ve got the full power of the Model project to play with…

Timestamp: 08:21

It turns out it’s really easy – but very inefficient. I don’t have any public information in the schema about which field container stores the kit name. I can load all the data for one kit at a time, and retrieve the formatted kit name for that loaded data, but that involves loading way more information than I really need.

So that’s not ideal – but it worked first time. First I listed the kits on my TD-27, and that worked as before. Turn that off and turn on the TD-17, rerun, and boom:

Listing TD-17 kits in Blazor

It even worked with my Aerophone, which I only received last week. (They’re mostly “InitTone” as the Aerophone splits user kits from preset kits, and the user kits aren’t populated to start with. The name is repeated as there’s no “kit subname” on the Aerophone, and I haven’t yet changed the code to handle that. But hey, it works…)

Listing Aerophone tones in Blazor

That’s enough for this morning, certainly. I hadn’t honestly expected the integration to go this quickly.

This afternoon I’ll investigate hosting options, and try to put the code up for others to test…

Timestamp: 08:54

After just tidying up this blog post a bit, I’ve decided I definitely want to include the code on GitHub, and publish the result online. That will mean working out what to do with the base64 library (which is at least MIT-licensed, so that shouldn’t be too bad) but this will be a fabulous thing to show in the talks I give about V-Drum Explorer. And everyone can laugh at my JavaScript, of course.

Sunday afternoon

Publishing as a web site

Timestamp: 13:12

Running dotnet publish -c Release in the Blazor directory creates output that looks like I should be able to serve it statically, which is what I’d hoped, having unchecked the “ASP.NET Core Hosting” box when creating the project.

One simple way of serving static content is to use Google Cloud Storage, uploading all the files to a bucket and then configuring the bucket appropriately. Let’s give it a go.

The plan is to basically follow the tutorial, but once I’ve got a simple index.html file working, upload the Blazor application. I already have HTTPS load balancing with Google Cloud, and the jonskeet.uk domain is hosted in Google Domains, so it should all be straightforward.

I won’t take you through all the steps I went through, because the tutorial does a good job of that – but the sample page is up and working, served over HTTPS with a new Google-managed SSL certificate.

Timestamp: 13:37

Time to upload the Blazor app. It’s not in a brilliant state at the moment – once this step is done I’ll want to get rid of the “counter” sample etc, but that can come later. I’m somewhat-expecting to have to edit MIME types as well, but we’ll see.

In the Google Cloud Storage browser, let’s just upload all the files – yup, it works. Admittedly it’s slightly irritating that I had to upload each of the directories separately – just uploading wwwroot would create a new wwwroot directory. I expect that using gsutil from the command line will make this easier in the future.

But then… it just worked!

Timestamp: 13:51 (the previous step only took a few minutes at the computer, but I was also chasing our cats away from the frogs they were hunting in the garden)

Tidying up the Blazor app

The point of the site is really just a single page. We don’t need the navbar etc.

Timestamp: 14:12

Okay, that looks a lot better :)

Speeding up kit name access

If folks are going to be using this though, I really want to speed up the kit loading. Let’s see how hard it is to do that – it should all be in Model code.

Timestamp: 14:20

Done! 8 minutes to implement the new functionality. (A bit less, actually, due to typing up what I was going to do.)

The point of noting that isn’t to brag – it’s to emphasize that having performed the integration with the main Model code (which I’m much more comfortable in) I can develop really quickly. Doing the same thing in either JavaScript or in the Blazor code would have been much less pleasant.

Republish

Let’s try that gsutil command I was mentioning earlier:

  • Delete everything in the Storage bucket
  • Delete the previous release build
  • Publish again with dotnet publish -c Release
  • cd bin/Release/netstandard2.1/publish
  • gsutil -m cp -r . gs://vdrumexplorer-web

The last command, explained a bit more:

  • gsutil: invoke the gsutil tool
  • -m: perform operations in parallel
  • cp: copy
  • -r: recursively
  • .: source directory
  • gs://vdrumexplorer-web: target bucket

Hooray – that’s much simpler than doing it through the web interface (useful as that is, in general).

Load balancer updating

My load balancer keeps on losing the configuration for the backend bucket and certificate. I strongly suspect that’s because it was created in Kubernetes Engine. What I should actually do is update the k8s configuration and then let that flow.

Ah, it turns out that the k8s ingress doesn’t currently support a Storage Bucket backend, so I had to create a new load balancer. (While I could have served over HTTP without a load balancer, in 2020 anything without HTTPS support feels pretty ropy.)

Of course, load balancers cost money – I may not keep this up forever, just for the sake of a single demo app. But I’m sure I can afford it for a while, and it could be useful for other static sites too.

The other option is to serve the application from my k8s cluster – easy enough to do, just a matter of adding a service.

Conclusion

Okay, I’m done. This has been an amazing weekend – I’m thrilled with where I ended up. If you’ve got a suitable Roland instrument, you can try it for yourself at https://vdrumexplorer.jonskeet.uk.

The code isn’t on GitHub just yet, but I expect it to be within a week (in the normal place).

(Edited) I was initially slightly disappointed that it didn’t seem to work on my phone. I’m not sure what happened when I first tried (and I don’t know why it’s still claiming the connection is insecure), but I’ve now managed to get the site working on my phone, connecting over Bluetooth to my TD-27. Running .NET code talking to JavaScript talking MIDI over Bluetooth to list the contents of my drum module… it really just feels like it shouldn’t work. But it does.

The most annoying aspect of all of this was definitely the base64 issue… firstly that JavaScript doesn’t come with a reliable base64 implementation (for the situation I’m in, anyway) and secondly that adding a client library was rather more fraught than I’d have expected. I’m sure it’s all doable, but beyond my level of expertise.

Overall, I’ve been very impressed with Blazor, and I’ll definitely resurrect the Noda Time Blazor app for time zone conversions that I was working on a while ago.

V-Drum Explorer: Planning notes for MVVM

My current clunky “put all the code in the view” approach to V-Drum Explorer is creaking at the seams. I’m really not a WPF developer, and my understanding of MVVM is more theoretical than practical. I’ve read a reasonable amount, but quite a lot of aspects of V-Drum Explorer don’t really fit with the patterns described.

Eventually I’d like to present blog posts with a nice clean solution, and details of how it avoids all the current problems – but I’m not there yet, and pretending that good designs just drop fully formed onto the keyboard doesn’t feel healthy. So this blog post is a pretty raw copy/paste of the notes I’ve made around what I think I’ll do.

There’s no conclusion here, because life isn’t that neatly defined. I do hope I find time to tackle this reasonably soon – it’s going to be fun, but I’m not sure it’s going to be feasible to refactor in small steps.

Problems

The current design has the following issues:

  • A lot of the UI controls are created within the code-behind for the view, so styling is hard
  • The logic handling “how the UI reacts to changes” is really tricky, due to chain reactions:
    • Changing a tempo sync value changes which field is displayed next to it
    • Selecting a MultiFX option changes all the fields in the overlay
    • Selecting an instrument group changes which instruments are selected, and the vedit parameters available
    • Selecting an instrument can change the vedit parameters (e.g. cymbal size)
  • The UI currently “knows” about data segments, addresses etc – it shouldn’t need to care
  • The way that the schema is constructed makes the address logic bleed out; we can populate all the “fixed” containers and only generate fields for overlay containers when necessary
  • Using inheritance for the commonality between ModuleExplorer and KitExplorer is a bit weird

Ideas and questions that might end up being part of the answer

  • My schema has a “physical layout” and a “logical layout” – for example (in the TD-27), each kit has 24x KitPadCommon, 24x KitPadMain, 24x KitPadSub, 24x KitPadMainVEdit, 24x KitPadSubVEdit. It makes more sense to show “24x instrument info”. Is that information part of the model or the view-model?
    • I suspect it’s part of the model, effectively.
    • Perhaps the ViewModel shouldn’t even see the physical layout at all?
    • The logical layout is worryingly “tree view + details pane”-centric though
  • We have “snapshotting” in two ways when editing:
    • Overall commit/cancel
    • Remembering values for each multi-fx / vedit overlay while editing, so you can tweak, change to a different overlay accidentally, then change back to the original one and still have the tweaked values.
    • Should this logic be in the ViewModel or the Model?
  • Should the model implement INotifyPropertyChanged (et al)?
    • Looks like there’s genuine disagreement between practitioners on this
    • Feels like a pretty stark choice: either ViewModel controls everything, including how overlays work (i.e. when changing instrument, change the overlay too) or the model needs to tell the ViewModel when things are changing.
  • Where do we keep the actual data? Still in a ModuleData?
    • We could generate a model (e.g. with Kit, KitPadInst etc being full types)
    • We’d then either need to generate all the corresponding ViewModels, or access everything via reflection, which defeats half the point
    • Would definitely be easier to debug using a model
    • Generators are a lot of work…
  • Possibly reify “schema + data” in the model, to avoid having to pass the data in all over the place (hard to do binding on properties without this…)
    • Reify overlays in just-in-time fashion, to avoid creating vast numbers of fields.

Goals

  • Broadly the same UI we have now. At least the same features.
  • No code in the View – eventually. Not clear what creates a new window when we load a file etc, but we can try to remove as much as possible to start with and chip away at the rest.
  • ViewModel shouldn’t need to touch ModuleAddress. Access is via ModuleSchema, containers, ModuleData.
  • 7-bit addressing should be isolated even within the model, ideally to just ModuleAddress.
  • ViewModel interaction with fields should be simple.
  • The command line app should be able to work with the model easily – and may require lower-level access than the UI.

V-Drum Explorer: Memory and 7-bit addressing

So, just to recap, I’m writing an explorer for my Roland V-Drums set (currently a TD-17, but with a TD-27 upgrade on the way, excitingly). This involves copying configuration data from the module (the main bit of electronics involved) into the application, displaying it, editing it, then copying it back again so that I can use the changes I’ve made. I use MIDI System Exclusive (SysEx) messages to request data from the module.

(Since I last wrote about all of this, MIDI 2.0 has been ratified, which has more sensible 2-way communication. However, I don’t think the V-Drums line supports MIDI 2.0 yet, and even if it does I expect we’ll need to wait a while for drivers, then managed APIs exposing them.)

In fact, most of the time that I’m working on the V-Drum Explorer I don’t have it connected to the drum kit: it’s much easier (and quicker) to load the data from a file. Once it’s in memory, it really doesn’t matter where the data came from. I’ll go into the file formats I’m using in another post.

This post is about how that configuration data is organized, and particularly about the 7-bit addressing it uses.

Download the docs!

If you’d like to know about all of this in more detail, it’s well worth downloading some of the reference documentation Roland very helpfully provides. The TD-17 is the simplest of the modules we’re talking about, so I’d suggest downloading the TD-17 MIDI implementation so you can go at least one level deeper than this blog post, if you’re interested. If you think you’re likely to want to do that, I’d suggest doing so before reading any further. The important bit starts on page 5 – the “Parameter Address Map” which is the bulk of the document.

Configuration as memory

The configuration data in the module isn’t stored in any kind of file system, or with separate SysEx messages for different kinds of data. Instead, it’s modeled as if the module contains a big chunk of memory, and different areas of that memory have different effects on the module. I don’t know what the implementation is like within the module itself, of course; this is just the interface presented over MIDI.

As the simplest possible example, address 0 in the memory (on all three of the modules I have documentation for) represents “the currently selected kit”. It accepts values between 0 and 99 inclusive, to represent kits 1 to 100 inclusive. (As a reminder, a kit is a configuration of all the pads, allowing you to switch the whole module between (say) a rock feel or something more electro-funky.) So as an example, if my module is currently on kit 11 (“Studio / Live room”), and I asked the module to give me the content of address 0, it would return 10. If instead I set the value of address 0 to 30, the module would display kit 31 (“More cowbell / pop”) and all the sounds would change accordingly.

The documentation describes a number of containers – that’s my own term for it, but it seems reasonable. Each container is named, and has its own description in terms of its content at different offsets. For example, address 0 belongs in the [Current] container, which is documented very simply:

[Current]
+----------------+------------------------------------+
| Offset Address | Description                        |
|----------------+------------------------------------+
|          00 00 | 0aaa aaaa | Drum Kit Number (0-99) |
|                |                             1-100  |
+----------------+------------------------------------+
| 00 00 00 01    | Total Size                         |
+----------------+------------------------------------+

The 0aaa aaaa shows that 7 bits of the value are used. Due to MIDI’s inherent 7-bit nature, each address can only store 7 bits. Whenever a larger number is required, it’s stored across multiple addresses, typically using only the bottom four bits of each 7 bit value.
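
As a hedged illustration of that convention (an illustrative helper, not the real code): a 16-bit value split across four consecutive addresses, with only the bottom four bits of each byte used, most significant nybble first.

// Combine the low four bits of each byte, most significant nybble first.
// For example, { 0x01, 0x02, 0x03, 0x04 } => 0x1234.
static int CombineNybbles(byte[] bytes) =>
    bytes.Aggregate(0, (acc, b) => (acc << 4) | (b & 0x0F));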

The content of each container is broadly broken into three types of data – again, all the terminology is mine:

  • Primitive fields: strings, numbers etc
  • Fields which are other containers (often repeated, e.g. Kit 1 to Kit 100)
  • Overlaid containers (fields directly in this container, interpreted according to a different container)

I’ll talk about overlaid containers at length another time, as they’re tricky.

So basically you end up with a natural tree structure. So far, so good… except for 7-bit addressing.

7-bit addressing

I entirely understand why the values in memory are 7-bit values. That’s inherent in MIDI. But Roland also chose to use a 7-bit address space, which makes the code much more complex than it needs to be.

All addresses and offsets are documented using hex as if it were entirely normal – but the top bit of every byte of an address is always clear. So address 00 7F is followed directly by address 01 00 – even if they’re within the same container. Now this does mean that the values in the MIDI request messages are exactly the documented addresses: the top bit of each byte in the request message has to be clear, and that drops out naturally from this. But it makes everything else hard to reason about. I’ve been bitten multiple times by code which looks like it should be okay, but it’s either skipping some data when it shouldn’t, or it’s not skipping addresses when it should. By contrast, it would have been really simple (IMO) to document everything with a contiguous address space, and just specify that when requesting data, the address is specified in seven-bit chunks (so bits 27-21 in the first request byte, then 20-14, then 13-7, then 6-0).

I’ve tried to isolate this into a ModuleAddress struct, but the details have still leaked out into a few more places. Over the past few days I’ve tried to be more rigorous about this with a DisplayValue (7-bit) and a separate LogicalValue (contiguous), but it’s still leaking more than I want it to. I don’t think I can fix it without a more serious rewrite – which I probably want to attempt reasonably soon anyway.
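
To make the distinction concrete, here’s a minimal sketch of the kind of conversion involved – illustrative only, not the real ModuleAddress code – treating each documented byte as a 7-bit digit:

public readonly struct ModuleAddress
{
    // Contiguous value: the 7-bit chunks packed together with no gaps.
    public int LogicalValue { get; }

    public ModuleAddress(int logicalValue) => LogicalValue = logicalValue;

    // The form used in the documentation (and in SysEx messages): each 7-bit
    // chunk occupies a whole byte, so the top bit of every byte is clear.
    public int DisplayValue =>
        ((LogicalValue >> 21) & 0x7F) << 24 |
        ((LogicalValue >> 14) & 0x7F) << 16 |
        ((LogicalValue >> 7) & 0x7F) << 8 |
        (LogicalValue & 0x7F);

    public static ModuleAddress FromDisplayValue(int displayValue) =>
        new ModuleAddress(
            ((displayValue >> 24) & 0x7F) << 21 |
            ((displayValue >> 16) & 0x7F) << 14 |
            ((displayValue >> 8) & 0x7F) << 7 |
            (displayValue & 0x7F));
}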

You might wonder why I don’t just model everything using the logical contiguous address space, removing all the gaps entirely. The problem is that all the schema information is basically copied from the documentation – and that refers to the 7-bit addressing scheme. I really want the schema to match the documentation, so I can’t move away from that entirely. Another thing that makes it tricky is that a lot of the time I deal in offsets rather than addresses. For example, the “Kit Unit Common 1” part of a “Kit” container is always at offset 00 20 00 relative to the start of the container. That’s not too bad on its own, but I also need to express the “gap between offsets” which is a sort of offset in its own right (maybe). For example, “Kit Unit Common 2” is at offset 00 21 00 within a kit, so in the schema when I describe the “Kit Unit Common” repeated field, I describe it as having an initial offset of 00 20 00, with a gap of 00 01 00. That sounds fine – until you’ve got a repeated field which is large enough to have a gap in the middle, so you need to model that by making the offset have a gap as well. (I’m reminded of calendar arithmetic, which has similar weirdnesses.)
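
Using the sketch above, the simple “offset plus gap” case looks like this (again, illustrative only – the real code has to cope with the messier cases described above):

// "Kit Unit Common 1" at display offset 00 20 00, with a documented gap of
// 00 01 00 between repeated elements.
var first = ModuleAddress.FromDisplayValue(0x2000);               // logical 0x1000
var gap = ModuleAddress.FromDisplayValue(0x0100).LogicalValue;    // logical 0x80
var second = new ModuleAddress(first.LogicalValue + gap);
// second.DisplayValue == 0x2100, i.e. display offset 00 21 00 ("Kit Unit Common 2")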

The lesson I’ve learned from this is that when there’s hairiness like this, it’s worth abstracting it away really thoroughly. I wish I’d stopped at the first abstraction leak and thought, “maybe I need to redesign rather than live with this”.

Conclusion

Even without 7-bit addressing, there would have been plenty of challenging choices in the design of the V-Drum Explorer, particularly in field and container representation. More details of those choices will come in future posts – but at least they feel inherently tricky… the kind of thing software engineers are expected to have to think about.

7-bit addressing feels like it was a choice made to make one small use case (MIDI request/response messages) simple, but made everything else trickier. I’d be fascinated to know whether the module code is a lot more complicated because of this as well, or whether so much is effectively hard-coded (because it needs to actually do stuff rather than just display data on a screen) that it doesn’t make much difference.

Next time: using data vs code to represent differences between modules.