Category Archives: DigiMixer

DigiMixer – the app

This wasn’t the post I’d expected to write, but after reading two comments in close succession on an old post from when I first started playing with the X-Touch Mini, I decided to spend some time effectively shuffling code around (and adding a primitive configuration dialog) so that I could publish a standalone app for DigiMixer.

I want to be really clear: the app is not “supported software”. I’ll try to fix bugs if they’re reported in the GitHub repo but it’s only “best-effort in my spare time”. If you don’t need any of the functionality that’s specific to the DigiMixer app (which as far as I’m aware is basically “control via X-Touch Mini and Icon Platform surfaces”) then I’d strongly recommend using Mixing Station instead. (Mixing Station supports the full X-Touch and X-Touch Extender surfaces, but doesn’t mention the X-Touch Mini. It may just work in Mackie mode; I haven’t tried it.)

Downloading the installer

The app can be downloaded from the “releases” page on GitHub – note that that’s also where the V-Drum Explorer is published, so be careful to pick the right file. You probably want the latest DigiMixer release; download the .msix file from it. Run the file and follow the prompts – you may get asked if you trust the author, “Jonathan Skeet”. That’s up to you, of course!

Configuration

On first run, a default configuration with a single input and a single output, talking to a fake mixer abstraction, will be created. Use the “Configure / Reconfigure” menu item to configure DigiMixer to talk to your actual mixer. You’ll be presented with a dialog like this:

DigiMixer app configuration

There are basically three stages to configuration:

  • Choose the mixer hardware type and specify the IP address. There’s no autodetection facility (of either address or hardware type), I’m afraid. Use the “Test configuration” button to check that DigiMixer is able to connect.
  • Choose which channels you want DigiMixer to control. The easiest way to start this is via the “Test configuration” button – if it successfully connects to your mixer, it will find all the channels with a non-empty name, and suggest a mapping based on those. But you don’t have to accept those mappings – you can edit, reorder, add and delete channels for both inputs and outputs. This means knowing the channel number that DigiMixer would use, but for input channels and aux channels that’s generally just the same channel number shown in the supplier-provided mixer user interface. Stereo channels are automatically detected, so only add the “left” channel. The main output “left” channel is always 100.
  • If you want to enable peripherals (and if you don’t, why are you using DigiMixer?) tick the “enable peripherals” box and pick the MIDI ports that correspond to the peripherals. (If they’re not connected at the time but you know what the names will be, you can just type them in.)

That’s my first stab at the configuration user interface. I know it’s not pleasant, but it’s the best I could come up with in a very limited amount of time. (The configuration file lives in %LOCALAPPDATA%\DigiMixer and is just JSON, so if you’re feeling bold you can edit it by hand.)

The app window

The user interface itself is somewhat simpler than the configuration page:

DigiMixer app

By default, DigiMixer presents each input with a set of faders (one per output). This isn’t the normal way that most mixers show inputs, but it happens to be closer to what I personally use for church. If you want to group by output instead, just toggle the radio button in the top left. When grouping by input, there’s a separate panel for “overall output fader levels” at the bottom; when grouping by output, the panel at the bottom shows the meter levels for the inputs instead (without any faders).

You can show or hide channels within each group by checking or unchecking the relevant checkboxes. The tools on the right hand side should be fairly self-explanatory, although I should point out that snapshots probably won’t survive reconfiguration (as the identity of channels can be lost; it’s too complicated to explain in this post).

If you’re using an X-Touch Mini, the first eight input channels are controlled by the knobs and the top row of buttons. The knobs change the fader level for the main output of each channel, and the buttons mute and unmute. (When the button is lit, the channel is “on”; when the button is unlit, the channel is muted.) The bottom row of buttons control channels 9-16. Note that these “first eight” and “next eight” channels are in terms of how DigiMixer is configured; they’re not necessarily channels 1-8 and 9-16 on regular mixer inputs. The main fader on the X-Touch Mini controls the main overall output volume.

Similarly, the Icon Platform M+ controls channels 1-8, and X+ controls channels 9-16.

Conclusion

It’s possible that I’ll write more documentation for the app at some point, but this was never part of the plan for DigiMixer. I’m not looking to add more features other than additional mixers (and the support for different mixers varies significantly – the X-Air and X32 support is by far the most complete), although I’ll consider feature requests, of course.

The core aim of DigiMixer is still to explore the notion of abstraction, and I still hope to get to that properly in later posts! As it happens, refactoring my code to produce the app has made me consider a different kind of abstraction… the main user interface is used in DigiMixer, At Your Service, and an At-Your-Service-adjacent app which is designed to just run in the background, using configuration from At Your Service. So while the configuration dialog shown above is brand new, most of the user interface has been working in our church setting for a long time. More on that when I get into code, no doubt.

For the moment, I hope this meets the needs of folks hoping for a quick X-Touch Mini integration.

DigiMixer: Protocols

Despite this blog series going very slowly, the DigiMixer project itself has certainly not been stalled. Over the last year, I’ve added support for various additional mixers, as well as improving the support for some of the earlier ones, and performing quite a lot of refactoring.

DigiMixer now supports the following mixers, to a greater or lesser extent:

  • Behringer X series (tested with XR16, XR18, X-32R) and Midas M series (only tested with M32R, but I expect it to be identical to the X series)
  • Harman Soundcraft Ui series (tested with Ui24R)
  • Allen & Heath Qu series (tested with Qu-SB, including the AR84 stage box)
  • Allen & Heath CQ series (tested with CQ-20B)
  • RCF M-18
  • Mackie DL series (tested with DL16S and DL32R, which proved significantly different)
  • Yamaha DM series (tested with DM-3)
  • PreSonus StudioLive Series III (tested with 16R)

In order to support each mixer, we have to be able to communicate with it. The only sort of “standardised” protocol used by the above mixers is OSC (Open Sound Control) – and that’s still only a matter of standardising what an OSC message looks like, not what the various addresses and values mean. Some mixers support MIDI to a certain extent, sometimes even with documentation around how that support works. (Again, there’s no one standard for how MIDI integration in a mixer “should” be implemented – it’s not like MIDI on actual instruments where you can reasonably expect a given MIDI message to mean “play middle C”.) That’s useful in terms of integration within a DAW, but none of the mixers I’ve seen so far provide sufficient control via MIDI to meet DigiMixer’s needs.
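For illustration, here’s the wire format of a complete OSC message, using the X32’s channel fader address. Each string is null-terminated and padded to a 4-byte boundary, and the type tag “,f” indicates a single big-endian float argument – here 0.75:

2f 63 68 2f 30 31 2f 6d  69 78 2f 66 61 64 65 72   /ch/01/m ix/fader
00 00 00 00 2c 66 00 00  3f 40 00 00               ....,f.. ?@..

The encoding is standardised; the meaning of the address “/ch/01/mix/fader” – and the scale of that float – is entirely mixer-specific.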

This post will go into a little detail about the protocols I’ve encountered so far, what we actually need for DigiMixer, and some practical aspects of how I’ve been reverse engineering the protocols.

I’m hoping to start writing more detailed documentation about each protocol within the GitHub repo, in the Protocols directory. There’s a bit of information about the Mackie DL series at the moment, with more to come when I find time. It’s worth being aware that any terminology I use within that directory is likely to be entirely unofficial – when I talk about a message “chunk size” or “subtype” etc, that’s just what I’ve used in the code for lack of a better term.

Very high level categorizations

Let’s start with the very highest levels of categorization for the protocol: everything DigiMixer supports uses the network to communicate, and all over IP. There may well be some digital mixers where the client/mixer connection is over USB, and as I mentioned before it’s also possible to control some mixers to some extent using MIDI (which could be via a USB-MIDI connection, dedicated MIDI hardware, or even MIDI over IP) – but I haven’t investigated any mixer protocols that aren’t network-oriented.

It’s worth being really clear about the difference between the “client/mixer” protocol and any “client/control surface” protocols. In the same repository as DigiMixer, I have some libraries for integration with the Icon Platform and X-Touch Mini control surfaces – both of which are integrated with DigiMixer via an application (which currently isn’t on public GitHub, unfortunately, as it shares configuration with At Your Service). One of the purposes of the abstraction of DigiMixer is to allow mixers to be treated as broadly interchangeable – so the same DigiMixer-based code that controls (say) a CQ-20B using an X-Touch Mini should be able to control an X32 with no changes. This post ignores the control surface aspects entirely, other than in terms of what we want to be able to do with DigiMixer, focusing on the client/mixer protocols.

The most obvious initial categorization of the protocols is in terms of transport (OSI layer 4) protocol: in our case, always UDP or TCP, or a mixture.

One fairly common pattern (used by the CQ, DM, Qu, StudioLive mixers) is to have a TCP connection for control aspects, but report meter levels over UDP. Meters show the point-in-time sound level for a particular input or output; typically it doesn’t matter if a meter packet is dropped every so often – so it makes sense to use UDP for that. It’s obviously rather more important if a “mute this channel” message is dropped, so the reliability of TCP is useful there.
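As a rough sketch of what that split looks like from the client side (the address and ports here are entirely made up – each protocol defines its own, usually as part of the handshake):

using System.Net.Sockets;

// Reliable TCP connection for control messages (mutes, faders, state).
using var control = new TcpClient();
await control.ConnectAsync("192.168.1.80", 51325); // hypothetical mixer address/port

// Separate UDP socket for meter reports; losing the odd packet is fine.
using var meters = new UdpClient(50001); // hypothetical local port
while (true)
{
    UdpReceiveResult packet = await meters.ReceiveAsync();
    Console.WriteLine($"Meter packet: {packet.Buffer.Length} bytes");
}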

The RCF M-18 and the X/M series of Behringer/Midas mixers use OSC over UDP. (The DM-3 also supports OSC over UDP, but doesn’t expose enough functionality to meet DigiMixer’s requirements.) The unreliability of UDP is worrying here; presumably the expectation is that you only operate them on sufficiently reliable networks that it’s not a problem, or that clients request “current state” periodically from the mixer and check it for consistency with their own expected state. My experience is that on a wired network with just a single switch between the mixer and the client (which would be the common deployment scenario), it’s never actually caused a problem.

The DL and Ui series only use TCP as far as I’ve seen (or at least as far as DigiMixer is concerned). The Ui series is particularly interesting here; its manufacturer-provided user interface is just a web UI. The mixer’s built-in web server serves the user interface itself, which connects back to the mixer still on port 80 to create a web socket connection. I don’t know enough about web socket standards to know how “normal” the implementation is, but it’s very simple to code against: issue a request of “GET /raw HTTP1.1”, read the response headers, and then it’s just a line-oriented protocol. Each message within the protocol (in both directions) is an ASCII line of text. I’ll come back to message formats later on.
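Here’s a minimal sketch of that connection sequence (the mixer address is hypothetical, and real code would want timeouts and error handling):

using System.Net.Sockets;

using var client = new TcpClient();
await client.ConnectAsync("192.168.1.101", 80); // hypothetical Ui mixer address
using var stream = client.GetStream();
using var writer = new StreamWriter(stream) { AutoFlush = true, NewLine = "\n" };
using var reader = new StreamReader(stream);

// The "handshake": a bare HTTP-ish request...
writer.WriteLine("GET /raw HTTP1.1");
writer.WriteLine();

// ... skip the response headers (terminated by a blank line) ...
while (await reader.ReadLineAsync() is string header && header.Length > 0) {}

// ... after which every line (in both directions) is one protocol message.
while (await reader.ReadLineAsync() is string message)
{
    Console.WriteLine($"Received: {message}");
}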

Sources of information

Working on DigiMixer has been a fascinating exercise in piecing together information from multiple sources. Typically the implementation of each protocol has been relatively straightforward when I’ve had enough information of the protocol itself, but that information is hard to come by.

In some cases, the manufacturer has provided the information itself, either officially or unofficially. For the Ui series for example, Harman support responded to my enquiry really quickly, sending me documentation which was, while not fully comprehensive, easily enough to get started with. (They did stress that this documentation was in no way a guarantee of future compatibility or support.)

In other cases, there’s an active community with really strong efforts, including a mixture of official and unofficial documentation. The Behringer X series and Midas M series (which are basically the same in terms of software, as far as I can tell) have lots of active projects to access them via OSC, and the most comprehensive documentation comes from Patrick-Gilles Maillot’s site.

For the StudioLive mixers, there’s a GitHub project and documentation which are strictly unofficial and still at least somewhat incomplete – but invaluable. The situation is similar for the RCF M-18, where a single inactive GitHub repo is basically all I could find.

For other mixers… there’s Wireshark. All the digital mixers I’ve looked at have manufacturer-supplied clients. When those run on Windows, it’s easy to just start Wireshark, open the client and (say) move a fader, then close the client and look at the traffic between the mixer and the client. Things are slightly more fiddly if the only client provided is an Android or iOS app, but I’ve found the TP-Link TL-SG105E to be really handy – it’s a small, silent, managed switch which supports port mirroring. So all I need to do is plug both my laptop and the mixer into the switch, mirror traffic from the mixer port to the laptop port, and again run Wireshark.

Mixing Station supports all of these mixers too, and sometimes it’s useful to look at the traffic between that and the mixer and compare it with the traffic between the manufacturer-supplied client and the mixer.

Of course, capturing the traffic between the mixer and the client doesn’t generally explain that traffic at all. We don’t need to understand all the traffic though – only enough for DigiMixer to be effective. So what does that consist of?

DigiMixer requirements for protocol comprehension

As I’ve said before, DigiMixer doesn’t try to be a full-fidelity mixer client. It only aims to provide control in terms of muting and unmuting, and moving faders (for either an “overall output” or an “input/output combination” – so “aux 1 level” or “input 2 level to aux 3”, for example). Additionally, it attempts to provide information about channel names, general mixer information, any channels that are linked together to form stereo pairs, and meter information.

In protocol terms, that normally means we need to understand:

  • Initial connection requirements, including any “client handshake”. (For mixed TCP + UDP protocols, this handshake over TCP sometimes involves each side telling the other which UDP port they’re listening on.)
  • How to fetch mixer information (model, version, user-specified mixer name)
  • How to fetch the initial state of the mixer (channel names, any stereo links, and current fader/mute status)
  • How to send “mute/unmute this channel” and “move this fader” commands
  • What the mixer sends to the client if state is changed by another client
  • What the mixer sends to the client to report meter levels (potentially including how the client requests these in the first place)

Some protocols make those requirements very easy to fulfil – others are significantly more challenging.

Protocol layers and steps in reverse-engineering a protocol

I’ve never fully understood the OSI model, in terms of being able to clearly place any specific bit of a protocol into one of the seven layers. However, the idea of layering in general has been very useful within DigiMixer. Most of the mixer implementations consist of two projects, one with a “core” suffix and one without, e.g. DigiMixer.Mackie.Core and DigiMixer.Mackie. The “core” project in each case is focused around what I expect would be the presentation layer (and sometimes the session layer) in OSI; I think of it in terms of message framing then message decomposition. (I believe that I’m using message framing in a perfectly standard way here. There’s probably a better name for message decomposition.)

All of the protocols used by DigiMixer have the idea of a message:

  • TCP connections form a bidirectional stream of messages
  • Each UDP connection forms a unidirectional stream of messages

(In some protocols the mixer uses UDP connections bidirectionally too – basically sending packets to whichever UDP port was used to send packets to it. In other protocols the two UDP streams are entirely separate.)

Message framing

With the UDP protocols I’ve implemented while working on DigiMixer, each UDP packet corresponds exactly to one message. There are never UDP packets which contain multiple messages, and a message never needs to be split across multiple packets.

With TCP, however, it’s a different story. Wireshark allows you to follow a TCP stream, showing the flow of data in each direction, but it takes a bit of work to figure out how to split each of those streams into messages.

Here’s part of the traffic I see in Wireshark when opening the DM-3 MixPad app in Windows, for example.

00000000  4d 50 52 4f 00 00 00 1d  11 00 00 00 18 01 01 01   MPRO.... ........
00000010  02 31 00 00 00 09 50 72  6f 70 65 72 74 79 00 11   .1....Pr operty..
00000020  00 00 00 01 80                                     .....
00000025  4d 50 52 4f 00 00 00 47  11 00 00 00 42 01 10 01   MPRO...G ....B...
00000035  04 11 00 00 00 01 00 31  00 00 00 09 50 72 6f 70   .......1 ....Prop
00000045  65 72 74 79 00 11 00 00  00 10 3a 7c 8d 4c 85 f8   erty.... ..:|.L..
00000055  9f 1e aa 83 4f 96 63 0c  ec 3d 11 00 00 00 10 8b   ....O.c. .=......
00000065  76 f3 98 78 64 6e 83 15  f5 81 7c 06 cc b6 91 4d   v..xdn.. ..|....M
00000075  50 52 4f 00 00 00 09 11  00 00 00 04 01 04 01 00   PRO..... ........
    00000000  4d 50 52 4f 00 00 00 47  11 00 00 00 42 01 10 01   MPRO...G ....B...
    00000010  04 11 00 00 00 01 00 31  00 00 00 09 50 72 6f 70   .......1 ....Prop
    00000020  65 72 74 79 00 11 00 00  00 10 3a 7c 8d 4c 85 f8   erty.... ..:|.L..
    00000030  9f 1e aa 83 4f 96 63 0c  ec 3d 11 00 00 00 10 87   ....O.c. .=......
    00000040  49 a1 3e 61 58 ea ce dc  00 0a cb 7d a1 dd cb      I.>aX... ...}...
    0000004F  4d 50 52 4f 00 00 00 09  11 00 00 00 04 01 04 01   MPRO.... ........
    0000005F  00 4d 50 52 4f 00 00 08  c3 11 00 00 08 be 01 14   .MPRO... ........

I suspect that the line break after the third line (between bytes 00000024 and 00000025 outbound) is due to a packet boundary, but it’s also possible that Wireshark is doing a little more than that, e.g. only showing a line break between packets if the gap between them (in terms of time) is above some threshold. I’ve generally ignored that; “conversations” of short messages tend to make message boundaries fairly clear anyway.

In this case, the repeated “MPRO” text at least appears at first glance to indicate the start of a message. The four bytes after that “MPRO” then seem to show (in big-endian order) the length of the remainder of the message.

In other words, after looking at a reasonable amount of data like the dump above, I was able to guess that the DM3 protocol had a message framing of:

  • 4 bytes: Message type (e.g. “MPRO”, “EEVT”, “MMIX”)
  • 4 bytes: Message body length, big-endian
  • Message body

A message framing hypothesis like that is reasonably easy to test, particularly after writing a bit of code to parse the text format of a Wireshark hex dump like the above. (My experience is that the text format is generally easier to deal with than the full pcapng files that Wireshark uses by default. The amount of manual work required to follow the TCP stream and then save it as text is pretty small.)
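As an example of what testing that hypothesis can look like, here’s a sketch of a parser for the hypothesized DM3 framing – the Dm3Message name and shape are mine, not anything official:

using System.Buffers.Binary;
using System.Text;

public sealed record Dm3Message(string Type, byte[] Body)
{
    // Returns null if the buffer doesn't yet contain a complete message.
    public static Dm3Message? TryParse(ReadOnlySpan<byte> data)
    {
        if (data.Length < 8)
        {
            return null;
        }
        string type = Encoding.ASCII.GetString(data[..4]); // e.g. "MPRO", "EEVT", "MMIX"
        int bodyLength = BinaryPrimitives.ReadInt32BigEndian(data[4..8]);
        if (data.Length < 8 + bodyLength)
        {
            return null;
        }
        return new Dm3Message(type, data.Slice(8, bodyLength).ToArray());
    }
}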

Most of the protocols I’ve worked with have had some sort of “message header, message body” format, where the header includes information about the length of the body. There are some differences though:

  • Sometimes there’s some additional state (e.g. a “message counter byte”)
  • Sometimes the message header conveys no information other than framing – unlike the example above, where we really still need to keep the “MPRO” part as the “message type” (not that we know what “type” really means yet)
  • Sometimes there’s a trailer (e.g. a checksum)
  • Sometimes the length information in the header is the message length rather than the body length (i.e. the length can include or exclude the header itself, depending on the protocol)

In the case of the Ui series, the framing is just based on line breaks instead. These two schemes – “message delimiters” (line breaks) or “headers with length information” – are the main approaches to message framing that I’ve seen, not just in DigiMixer but over the course of my career. (It’s not clear whether you’d count “single message per UDP packet” or “close the TCP connection after each message” as message framing schemes, or just as approaches that mean you don’t need message framing at all.) Some protocols have a mixture of the two: HTTP/1.1 uses delimiters for headers, then specifies a content length within one of the headers to allow further requests or responses to be sent on the same connection.

Once I’ve validated a message framing hypothesis in quick-and-dirty code (typically in another project with a Tools suffix, e.g. DigiMixer.Mackie.Tools) I’ll then add that framing into the “core” project, in the form of a message type implementing IMixerMessage:

public interface IMixerMessage<TSelf> where TSelf : class, IMixerMessage<TSelf>
{
    // Attempts to parse a single message from the start of the data;
    // returns null if the data doesn't yet contain a complete message.
    static abstract TSelf? TryParse(ReadOnlySpan<byte> data);

    // The total length of this message in bytes.
    int Length { get; }

    // Writes the message into the given buffer (which must be at least Length bytes).
    void CopyTo(Span<byte> buffer);
}

The addition of this interface into DigiMixer was a relatively new feature, as I was waiting for .NET 8 to land. It was predated by the concept of a “message processor” which effectively converts a stream of bytes into a stream of messages in a suitable form for consumption, but prior to the interface with its fun static abstract TryParse method, I had to specify various aspects of the message separately. Between the message interface, the message processor, and a couple of base classes, I now hardly have any code dealing with TcpClient and UdpClient directly. Lovely. (There are now multiple derived classes with hardly any behaviour, and I might refactor those at some point, but at least the logic isn’t repeated.)
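To give an idea of how the pieces fit together, here’s a heavily simplified sketch of the “message processor” idea, written against IMixerMessage – this is my illustration rather than DigiMixer’s actual implementation:

public sealed class MessageProcessor<TMessage>
    where TMessage : class, IMixerMessage<TMessage>
{
    private readonly MemoryStream buffer = new();

    public event Action<TMessage>? MessageReceived;

    // Accepts a chunk of bytes from the network (which may contain zero, one
    // or many messages) and raises MessageReceived for each complete message.
    public void Process(ReadOnlySpan<byte> data)
    {
        buffer.Write(data);
        ReadOnlySpan<byte> pending = buffer.GetBuffer().AsSpan(0, (int) buffer.Length);
        int consumed = 0;
        while (TMessage.TryParse(pending[consumed..]) is TMessage message)
        {
            consumed += message.Length;
            MessageReceived?.Invoke(message);
        }
        // Keep any incomplete trailing message for the next call.
        byte[] remainder = pending[consumed..].ToArray();
        buffer.SetLength(0);
        buffer.Write(remainder);
    }
}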

Message decomposition

Confirming that I’ve understood the message framing for a protocol is immensely satisfying, and a necessary first step – but it also tends to be the simplest step. It’s often feasible to understand the message framing without understanding the whole of the message header, just as in the above example we don’t know what the different message types mean, or even how many message types there are. More importantly, even if we completely understand the framing, it doesn’t tell us anything about the meaning of those messages. They’re just blobs of data.

Message decomposition goes slightly further, taking the message body apart in terms of its constituent parts – potentially still without understanding the actual meaning of any values.

To take the DM-3 example shown above a bit further, it turns out that every message body that I’ve seen consists of:

  • Byte 0x11
  • 4 bytes, again a big-endian integer representing a length
  • That many bytes, forming the “real” body

That’s just an extra (and, as far as I can tell, redundant) layer of wrapping, but within the “real” body we have a 4-byte set of flags (which do have some pattern to them, but which I haven’t fully figured out) then a sequence of useful data in segments. Each segment consists of:

  • A type byte (always 0x11, 0x12, 0x14, 0x24 or 0x31 as far as I’ve seen); more on this below
  • The number of “units” being represented
  • The units themselves

The type byte consists of two nybbles – the first is the “kind” of units (1 for unsigned integers, including bytes; 2 for signed integers; 3 for characters) and the second is “the number of bytes per unit”. So 0x11 is just a sequence of bytes, 0x12 is “a sequence of UInt16 values”, 0x14 is “a sequence of UInt32 values”, 0x24 is “a sequence of Int32 values”, and 0x31 is basically “a string” (which is null-terminated despite also having a length, but hey).

The segments occur one after another, until the end of the “real” body.
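A sketch of walking those segments – roughly the sort of logic behind the quick-and-dirty decomposition output below, with the caveat that the 4-byte big-endian unit count is my inference from dumps like the one above:

using System.Collections.Generic;

static IEnumerable<(byte Type, byte[] Data)> ReadSegments(byte[] realBody)
{
    int offset = 4; // skip the 4-byte flags at the start of the "real" body
    while (offset < realBody.Length)
    {
        byte type = realBody[offset];
        int unitCount = (realBody[offset + 1] << 24) | (realBody[offset + 2] << 16)
                      | (realBody[offset + 3] << 8) | realBody[offset + 4];
        int bytesPerUnit = type & 0x0F; // low nybble: bytes per unit
        int dataLength = unitCount * bytesPerUnit;
        yield return (type, realBody[(offset + 5)..(offset + 5 + dataLength)]);
        offset += 5 + dataLength;
    }
}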

So for the piece of hex shown earlier from the DM-3 (including a large message that was truncated), we can decompose those messages into:

=> MPRO: Flags=01010102; Segments=2
  Text: 'Property'
  Binary[1]: 80

=> MPRO: Flags=01100104; Segments=4
  Binary[1]: 00
  Text: 'Property'
  Binary[16]: 3A 7C 8D 4C 85 F8 9F 1E AA 83 4F 96 63 0C EC 3D
  Binary[16]: 8B 76 F3 98 78 64 6E 83 15 F5 81 7C 06 CC B6 91

=> MPRO: Flags=01040100; Segments=0

<= MPRO: Flags=01100104; Segments=4
  Binary[1]: 00
  Text: 'Property'
  Binary[16]: 3A 7C 8D 4C 85 F8 9F 1E AA 83 4F 96 63 0C EC 3D
  Binary[16]: 87 49 A1 3E 61 58 EA CE DC 00 0A CB 7D A1 DD CB

<= MPRO: Flags=01040100; Segments=0

<= MPRO: Flags=01140109; Segments=9
  Binary[1]: 00
  Text: 'Property'
  Text: 'Property'
  UInt16[*1]: 0000
  UInt32[*0]:
  UInt32[*0]:
  UInt32[*1]: 000000f0
  Binary[2164]: 4D 4D 53 58 4C 49 54 00 50 72 6F 70 65 72 74 79 [...]
  Binary[0]:

(My formatting is somewhat inconsistent here – I should probably get rid of the “*” in the lengths for UInt16/UInt32/Int32, but it doesn’t actually hurt the readability much… and this is just the output of a fairly quick-and-dirty tool.)

There’s still no application-level information here, but we can see the structure of the traffic – which makes it much, much easier to then discern bits of the application level protocol.

Application level protocol

Nothing I’ve described so far is mixer-specific. At some point, some combination of message type, flags and values has to actually mean something. Maybe it’s “please send me the version information about the mixer” or “this is the meter levels for the inputs” or “please mute the connection from input channel 1 to output channel 5”.

The process of reverse engineering the application level protocol involves both inspiration and perspiration – usually in that order, and only after working out at least a large proportion of the message framing and message decomposition. You don’t need to know everything, but you do need to know “if I move a fader on the mixer with a different client, I get a message back looking something like X.” That takes experimentation and some leaps of faith. But then you need to carefully document “well, what’s the difference between moving the fader for input 1, output 1 and moving the fader for input 2, output 5, or just an output fader?” – and “what’s the difference between moving the fader from the bottom of its range to a bit higher, and moving it a bit higher still?” That’s somewhat tedious, but still surprisingly rewarding work – so long as you pay enough attention to transcribe your results to a log carefully.

I’m not going to attempt to describe (here) what the various protocols look like at an application level, because they vary so much (even if the lower abstraction levels are reasonably similar) – and because there’s so much I still don’t understand about them. Once I’ve written up some details, they’ll be on GitHub. But understanding how those abstraction levels work at all has been really interesting to me – and I suspect it will prove useful in entirely different scenarios.

What’s next?

I think after diving into some of the slightly lower level bits of DigiMixer, the next post should probably be at a very high level, and back towards the goal of the whole blog series: abstraction. Assuming I don’t get distracted by something else to write about, I’ll try to make the next post as simple as “what do mixers have in common, and where do they differ, within the scope of DigiMixer?” After that, maybe the following post will be about what that abstraction looks like in code, and some of the trade-offs I’ve made along the way.

SSC Protocol

I’m aware that I haven’t been writing as many blog posts as I’d hoped to about DigiMixer. I expect the next big post to be a comparison of the various protocols that DigiMixer supports. (I’ve started a protocols directory in the GitHub repo, but there isn’t much there yet.) In the meantime, I wanted to mention a protocol that I just recently integrated… SSC.

SSC stands for “Sennheiser Sound Control” – it’s based on OSC (Open Sound Control), the binary protocol that I already use for controlling Behringer mixers and the RCF M-18. SSC is very similar to OSC in terms of its structure of path-like addresses to refer to values (e.g. “/device/identity/product”) but uses JSON as the representation. The addresses are represented via nested objects: to request a value you specify the null literal in the request, whereas to set a value you specify the new value. So for example, a request for a device’s product name, serial number and current time might look like this:

{
  "device": {
    "time": null,
    "identity" {
      "serial": null,
      "product": null
    }
  }
}
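Setting a value uses exactly the same shape, with the new value in place of null. For example, assuming the device exposes the writable “device”/“name” property from the SSC specification, renaming it might look like this:

{
  "device": {
    "name": "Stage rack"
  }
}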

Not only is this nice and easy to work with, but it’s all documented. For example, the specification for the EW-DX EM 2 radio microphone receiver (which is the device I have) can be downloaded here. I can’t tell you how delightful it is to read a well-written specification before starting to write any integration code. There are a few aspects that it doesn’t cover in as much detail as I’d like (e.g. errors) but overall, it’s a joy.

Obviously a radio microphone receiver isn’t actually a mixer – while I could sort of squint and pretend it is (it’s got mutes and sound levels, after all) I haven’t done that… this is really for integration into At Your Service, so that I can alert the operator if microphone battery levels are running low. Given the relationship between OSC and SSC, however, it made sense to include it in the DigiMixer code base – even with tests for the abstraction I’ve created over the top. (No, having tests isn’t normally noteworthy – but my integration projects normally don’t include many tests as the big “unknown” is more what the device does rather than how the code behaves.)

Due to a combination of existing code in DigiMixer for handling “establish a client/server-like relationship over UDP”, the clear documentation, and my previous experience with OSC, I was able to get my new radio mic receiver integrated into At Your Service within a few hours. I’m sure I’ll want to tweak it over time – but overall, I’m really pleased at how easy it was to add this. I don’t expect to actually display the details to most users, but here they are for diagnostic purposes:

SSC details screenshot

And the status bar with just battery levels:

Battery levels screenshot

Onward and upward!

DigiMixer: Introduction to digital mixers

While I’m expecting this blog post series to cover a number of topics, the primary purpose is as a vehicle for discussing abstraction and what it can look like in real-world projects instead of the “toy” examples that are often shown in books and articles. While the DigiMixer project itself is still in some senses a toy project, I do intend to eventually include it within At Your Service (my church A/V system) and my aim is to examine the real problems that come with introducing abstraction.

In this post, I’ll cover the very basics of what we’re trying to achieve with DigiMixer: the most fundamental requirements of the project, along with the highest-level description of what a digital audio mixer can do (and some terminology around control surfaces). Each of the aspects described here will probably end up with a separate post going into far more detail, particularly highlighting the differences between different physical mixers.

Brief interlude: Mixing Station

When I wrote the introductory DigiMixer blog post I was unaware of any other projects attempting to provide a unified software user interface to control multiple digital mixers. I then learned of Mixing Station – which does exactly that, in a cross-platform way.

I’ve been in touch with the author, who has been very helpful in terms of some of the protocol details, but is restricted in terms of what he can reveal due to NDAs. I haven’t yet explored the app in much depth, but it certainly seems comprehensive.

DigiMixer is in no way an attempt to compete with Mixing Station. The goal of DigiMixer is primarily education, with integration into At Your Service as a bonus. Mixing Station doesn’t really fit into either of those goals – and DigiMixer is unlikely to ever be polished enough to be a viable alternative for potential Mixing Station customers. If this blog post series whets your appetite for digital audio mixers, please look into Mixing Station as a control option.

What is a digital audio mixer?

I need to emphasize at this stage that I’m very much not an audio engineer. While I’ll try to use the right terminology as best I can, I may well make mistakes. Corrections in comments are welcome, and I’ll fix things where I can.

A digital audio mixer (or digital mixer for short from here onwards – if I ever need to refer to any kind of mixer other than an audio mixer, I’ll do so explicitly) is a hardware device which accepts a number of audio inputs, provides some processing capabilities, and then produces a number of audio outputs.

The “digital” aspect is about the audio processing side of things. There are digital mixers where every aspect of human/mixer interaction is still analogue via a physical control surface (described in more detail below). Many other digital mixers support a mixture of physical interaction and remote digital control (typically connected via USB or a network, with applications on a computer, tablet or phone). Some have almost no physical controls at all, relying on remote control for pretty much everything. This latter category is the one I’m most familiar with: my mixers are all installed in a rack, as shown below.

Rack containing digital mixers

My shed mixer rack, December 2022 – the gap in the middle is awaiting an Allen and Heath Qu-SB, on back-order.

The only mixer in the rack that provides significant physical control is the Behringer X-32 Rack, just below the network switch in the bottom rack. It has a central screen with buttons and knobs round the side – but even in this case, you wouldn’t want to use those controls much in a live situation. They’re more for set-up activities, in my view.

Most of the other mixers just have knobs for adjusting headphone output and potentially main output. Everything else is controlled via the network or USB.

Control surfaces

Even though DigiMixer doesn’t have any physical controls (yet), the vocabulary I’ll use when describing it is intended to be consistent with that of physical control surfaces. Aside from the normal benefits of consistency and familiarity, this will help if and when I allow DigiMixer to integrate with dedicated control surfaces such as the X-Touch Mini, Monogram or Icon Platform M+.

Before getting into mixers, I wasn’t even aware of the term control surface but it appears to be ubiquitous – and useful to know when researching and shopping. I believe it’s also used for aircraft controls (presumably including flight simulators) and submarines.

While mixers often have control surfaces as part of the hardware, dedicated control surfaces (such as the ones listed above) are also available, primarily for integration with Digital Audio Workstations (DAWs) used for music recording and production. Personally I’ve always found DAWs to be utterly baffling, but I’m certainly not the target audience. (If I’d understood them well in 2020, they could potentially have saved me a lot of time when editing multiple tracks for the Tilehurst Methodist Church virtual choir items, but Audacity did the job.)

Faders

Faders are the physical equivalent to slider controls in software: linear controls which move along a fixed track. These are typically used to control volume/gain.

When you get past budget products, many control surfaces have motorised faders. These are effectively two-way controls: you can move them with your fingers to change the logical value, or if the logical value is changed in some other way, e.g. via a DAW, the fader will physically move to reflect that.

Faders generally do exactly what they say on the tin – and are surprisingly satisfying to use.

Buttons

For what sounds like an utterly trivial aspect of control, there are a few things to consider when it comes to physical buttons.

The first is whether they’re designed for state or for transition. The controls around the screen of the X-32 Rack mixer demonstrate this well:

There’s a set of four buttons (up/down/left/right) used to navigate within the user interface:

Plain navigation buttons

There are buttons to the side of the screen which control and indicate which “page” of the user interface is active:

Lit navigation buttons

There are on/off buttons such as for toggling muting, solo, and talkback. (I’ll talk more about those features later on… hopefully muting is at least reasonably straightforward.)

Lit toggle buttons

Secondly, a state-oriented button may act in a latching or momentary manner. A latching button toggles each time you press it: press it once to turn it on (whatever that means for the particular button), press it again to turn it off. A momentary button is only “on” while you’re pressing it. (This is also known as “push-to-talk” in some scenarios.) In some cases the same button can be configured to be “sometimes latching, sometimes momentary” – which can cause confusion if you’re not careful.

The most common use case for buttons on a mixer is for muting. On purely-physical mixers, mute buttons are usually toggle buttons where the state is indicated by whether the button is physically depressed or not (“in” or “out”). On the digital mixers I’ve used, most buttons (definitely including mutes) are semi-transparent rubberised buttons which are backlit – using light to represent state is much clearer at-a-glance than physical position. Where multiple buttons are placed close together, some control surfaces use different light colours to differentiate between them. I’ve seen just a few cases where a single physical button uses different light colours to give even more information.

Rotary encoders, aka knobs

While I’ve been trying to modify my informal use of terminology to be consistent with industry standards, I do find it hard to use “rotary encoder” for what everyone else I know would just call a knob. I suspect the reasons for the more convoluted term are a) to avoid sexual connotations; b) to sound more fancy.

Like faders, knobs are effectively continuous controls (as opposed to the usually-binary nature of buttons) – it’s just that the movement is rotational instead of linear.

On older mixers, knobs are often limited in terms of the minimum and maximum rotation, with a line on the knob to indicate the position. This style is still used for some knobs on modern control surfaces, but others can be turned infinitely in either direction, reporting changes to the relevant software incrementally rather than in terms of absolute position. Lighting either inside the knob itself or around it is often used to provide information about the logical “position” of the knob in this case.

Lit volume knob

Some knobs also act as buttons, although I personally find pushing-and-twisting to be quite awkward, physically.

Jog wheel / shuttle dial

I haven’t actually seen jog wheels on physical mixers, but they’re frequently present on separate control surfaces, typically for use with DAWs. They’re large rotational wheels (significantly larger than knobs); some spring back to a central position after being released, whereas others are more passive. In DAWs they’re often used for time control, scrolling backward and forward through pieces of audio.

I mention jog wheels only as a matter of completeness; they’re not part of the abstraction I need to represent in DigiMixer.

Meters

Meters aren’t really controls as such, but they’re a crucial part of the human/machine interface on mixers. They’re used to represent amounts of signal at some stage of processing (e.g. the input for a microphone channel, or the output going to a speaker). In older mixers a meter might consist of several small lights in a vertical line, where a higher level of signal leads to a larger number of lights being lit (starting at the bottom). Sometimes meters are a single colour (and if so, it’s usually green); other meters go from mostly green to yellow near the top to red at the very top to warn the user when the signal is clipping.

Meters sometimes have a peak indicator, showing the maximum signal level over some short-ish period of time (a second or so).

How are digital mixers used?

This is where I’m on particularly shaky ground. My primary use case for a mixer is in church, and that sort of “live” setup can probably be lumped in with bands doing live gigs (using their own mixers), along with pubs and bars with occasional live sound requirements (where the pub/bar owns and operates the equipment, with guest talent or maybe just someone announcing quiz questions etc). Here, the audio output is heard live, so the mixing needs to be “right” in the moment.

Separately, mixers are used in studio setups for recording music, whether that’s a professional recording studio for bands etc or home use. This use case is much more likely to use a DAW afterwards for polishing – so a lot of the task is simply to get each audio track recorded separately with as little interference as possible. A mixer can be used as a way of then doing the post-processing (equalizing, compression, filters, effects etc); I don’t know enough about the field to know whether that’s common or whether it’s usually just done in software on a regular computer.

Focusing on the first scenario, there are two distinct phases:

  • Configuring the mixer as far as possible beforehand
  • Making adjustments on-the-fly in response to what’s happening in the room

The on-the-fly adjustments (at least for a rank amateur such as myself) are:

  • Muting and unmuting individual input channels
  • Adjusting the volume of individual input/output combinations (e.g. turning up one microphone’s output for the portion of our church congregation on Zoom, while leaving it alone for the in-building congregation)
  • Adjusting the overall output volumes separately

What is DigiMixer going to support?

Selfishly, DigiMixer is going to support my use case, and very little else. Even within “stuff I do”, I’m not aiming to support the first phase where the mixer is configured. This doesn’t need any integration into At Your Service – if multiple churches each have a different mixer model, that’s fine… the relevant tech person at each church can set the mixer up with the app that comes with it. If they want to add some reverb, or add a “stereo to mono” effect (which we have at Tilehurst Methodist Church) or whatever, that doesn’t need to be part of what’s controlled in the “live” second phase.

This vastly reduces the level of detail in the abstraction. I’ve gone into a bit more detail in the section below to give more of an idea of the amount of work I’m avoiding, but what we do need in DigiMixer is:

  • Whether the mixer is currently connected
  • Input and output channel configuration (how many, names, mono vs stereo)
  • Muting for inputs and outputs
  • Meters for inputs and outputs
  • Faders for input/output combinations
  • Faders for overall outputs

What is DigiMixer not going to support?

I have a little experience in trying to do “full fidelity” (or close-to full fidelity) companion apps – my V-Drum Explorer app attempts to enable every aspect of the drum kit to be configured, which requires knowledge of every aspect of the data model. In the case of Roland V-Drums, there’s often quite a lot of documentation which really helps… I haven’t seen any digital mixers with that level of official documentation. (The X32 has some great unofficial documentation thanks to Patrick-Gilles Maillot, but it’s still not quite the same.)

Digital mixers have a lot of settings to consider beyond what DigiMixer represents. It’s worth running through them briefly just to get more of an idea of the functionality that digital mixers provide.

Channel input settings

Each input channel has multiple settings, which can depend on the input source (analog, USB, network etc). Common settings for analog channels are:

  • Gain: the amount of pre-amp gain to apply to the input before any other signal processing. This is entirely separate from the input channel’s fader. (As a side-note, the number of places you effectively control the volume of a signal as it makes its way through the system can get a little silly.)
  • Phantom power: whether the mixer should provide 48V phantom power to the physical input. This is usually used to power condenser microphones.
  • Polarity: whether to invert the phase of the signal
  • Delay: a customizable delay to the input, used to synchronize sound from sources with different natural delays

“Standard” signal processing

Most mixers allow very common signal processing to apply to each input channel individually:

  • A gate reduces noise by effectively muting a channel completely when the signal is below a certain threshold – but with significantly more subtlety. A gate typically has threshold, attack, release and hold parameters.
  • A compressor reduces the dynamic range of sound, boosting quiet sounds and taming loud ones. (I find it interesting that this is in direct contrast to high dynamic range features in video processing, where you want to maximize the range.)
  • An equalizer adjusts the volume of different frequency bands.

Effects (FX) processing

Digital mixers generally provide a fixed set of FX “slots”, allowing the user to choose effects such as reverb, chorus, flanger, de-esser, additional equalization and others. A single mixer can offer many, many effects (multiple reverbs, multiple choruses etc).

Not only does each effect option have its own parameters, but there are multiple ways of applying the effect, via side-chaining or as an insert. Frankly, it gets complicated really quickly – multiple input channels can send varying amounts of signal to an FX channel, which processes the combination and then contributes to regular outputs (again, by potentially varying amounts).

I’m sure it all makes sense really, but as a novice audio user it makes my head hurt. Fortunately I haven’t had to do much with effects so far.

Routing

Routing refers to how different signals are routed through the mixer. In a very simple mixer without any routing options, you might have (say) 4 input sockets and 2 output sockets. Adjusting “input 1” (e.g. with the first fader) would always adjust how the sound coming through the first input socket is processed. In digital mixers, things tend to get much more complicated, really quickly.

Let’s take my X32 Rack for example. It has:

  • 16 XLR input sockets for the 16 regular “local” inputs
  • 6 aux inputs (1/4″ jack and RCA)
  • A talkback input socket
  • A USB socket used for media files (both to play and record)
  • 8 XLR main output sockets
  • 6 aux outputs (1/4″ jack and RCA)
  • A headphone socket
  • Two AES50 ethernet sockets for audio-over-ethernet, each of which can have up to 48 inputs and 48 outputs. (The X32 can’t handle quite that many inputs and outputs, but it can work with AES50 devices which do, and address channels 1-48 on them.)
  • An ultranet monitoring ethernet socket (proprietary audio-over-ethernet to Behringer monitors)
  • A “card” which supports different options – I have the USB audio interface card, but other options are available.

(These are just the sockets for audio; there are additional ethernet and MIDI sockets for control.)

How should this vast set of inputs be mapped to the 32 (+8 FX) usable input channels? How should 16 output channels be mapped to the vast set of outputs? It’s worth noting that there’s an asymmetry here: it doesn’t make sense to have multiple configured sources for a single input channel, but it does make sense to send the same output (e.g. “output channel 1”) to multiple physical devices.

As an example, in my setup:

  • Input channels 1-16 map to the 16 local XLR input sockets on the rack
  • Input channels 17-24 map to input channels 1-8 on the first AES50 port, which is connected to a Behringer SD8 stage box (8 inputs, 8 outputs)
  • Input channels 25-32 map to channels 1-8 via the USB port
  • Output channels 1-8 map to the local output XLR sockets and to the first AES50 port’s outputs 1-8 and to channels 9-16 via the USB port
  • Output channels 9-16 map to channels 1-8 via the USB port (yes, that sounds a little backwards, but it happens to simplify using the microphones)
  • The input channels 1-8 from the first AES50 port are also mapped to output channels 17-24 on the USB port
  • The output channels 1-8 on the USB port are also mapped to input channels 25-32 on the USB port.

Oh, and there are other options like having an oscillator temporarily take over an output port. This is usually used for testing hardware connections, although I’ve used this for reverse engineering protocols – a steady, adjustable output is really useful. Then there are options for where talkback should go, how the aux inputs and outputs are used, and a whole section for “user in” and “user out” which I don’t understand at all.

All of this is tremendously powerful and flexible – but somewhat overwhelming to start with, and the details are different for every mixer.

General settings

Each digital mixer has its own range of settings, such as:

  • The name of the mixer (so you can tell which is which if you have multiple mixers)
  • Network settings
  • Sample rates
  • MIDI settings
  • Link preferences (for stereo linked channels)
  • User interface preferences

That’s just a small sample of what’s available in the X32 – there are hundreds of settings, many cryptically described (at least to a newcomer), and radically different across mixers.

Conclusion

When I started writing this blog post, I intended it to mostly focus on the abstraction I’ll be implementing in DigiMixer… but it sort of took on a life of its own as I started describing different aspects of digital mixers.

In some ways, that’s a good example of why abstractions are required. If I tried to describe everything about even one of the mixers I’ve got, that would be a very long post indeed. An abstraction aims to move away from the detail, to focus on the fundamental aspects that all the mixers have in common.

This series of blog posts won’t be entirely about abstractions, even though that’s the primary aim. I’ll go into some comparisons of the network protocols supported by the various mixers, and particular coding patterns too.

There’s already quite a bit of DigiMixer code in my democode repository – although it’s in varying states of production readiness, let’s say. I expect to tidy it up significantly over time.

I’m not sure what I’ll write about next in terms of DigiMixer, but I hope the project will be as interesting to read about as it’s proving to explore and write about.

Introduction to DigiMixer

This is the first of what I expect to become a series of maybe a dozen blog posts about a hobby project I’ve started, called DigiMixer.

Back in January 2021 I posted about controlling an XR-16 using Open Sound Control, and then later using an X-Touch Mini to control the XR-16 using the same underlying code.

Since then, this has become part of my church A/V project, At Your Service, which I’ve also mentioned in blog posts about VISCA cameras and MAUI. At Your Service (AYS) has been used “in production” (i.e. for real Sunday services) for about a year and a half now, and the code to control the XR-18 (which is an XR-16 plus USB audio interface, effectively) is absolutely crucial to this. Fortunately, it’s proved pretty stable.

I don’t currently expect AYS to be used in any church other than my local one (Tilehurst Methodist Church), but I’d like to at least work on making it a little more feasible for that to happen – particularly if I can have fun with more coding experience at the same time. To that end, I’ve started looking at other digital mixers that are similar to the XR-16. These are audio mixers which all look pretty similar: they have XLR sockets for inputs and outputs, possibly some headphone sockets and volume control for those, usually a USB connection so it can act as an audio interface, and a network connection to control it. Some have additional network connections for network-based audio expansions and the like, but I’m not (currently) interested in that aspect.

The part that makes these mixers different to “regular” audio mixers is what they lack: faders, EQ adjusters, mute buttons etc. That’s all done via network control. There are some mixers that can be controlled over the network as well as physically, but I haven’t investigated those.

Each of these mixers from different manufacturers is controlled in a different way, and they all have different features and limitations. However, they have some core functionality in common, and that’s probably enough commonality for use in a church service. The aim of the DigiMixer project is to create a lowest-common-denominator abstraction allowing an application such as AYS (and potentially multiple sample standalone DigiMixer applications) to control any of these mixers without having to “know” about anything other than the abstraction.

There’s nothing particularly new about the abstraction concept here, but this use case happens to tie into something I really want to do anyway, and I believe it will provide plenty of material for blog posts on applying abstraction in C#, in the real world. Most articles on abstraction are theoretical, for perfectly valid reasons – but that means they gloss over the kind of issue you face when trying to apply the ideas for real. I suspect most developers have encountered this sort of thing, but I don’t have any deadlines for DigiMixer, and I can share everything without worrying about confidential material etc.

This first post is nothing but background material, partly as I’m waiting for one mixer to arrive, and some more information about others. The rest of this post is just a list of the mixers I either have access to, have on order, or which I’d like to get to work if possible. If you know of any others (particularly budget-friendly ones with good documentation!), please leave a comment.

Behringer X-Air series (XR-12, XR-16, XR-18, X-32)

XR-18 photo

This is where I started, and the mixer series I know best. We use an XR-18 at church and I have one in my shed as my “main mixer”. It’s controlled via Open Sound Control – with a few customizations. There’s a reasonable amount of documentation, albeit scattered across the web and mostly aimed at the (higher end) X-32. The Unofficial X32/M32 OSC Protocol document by Patrick-Gilles Maillot is probably the most helpful.

SoundCraft Ui series

Ui24R

The SoundCraft Ui series (Ui12, Ui16, Ui24R) is a set of mixers I initially considered back in 2020/2021 when doing research. Big hat tip to Tom Der from SoundCraft, who sent me documentation for the protocol that Ui mixers use for control. (With an explicit “this isn’t supported” note, which is entirely reasonable.) I recently found a more recent version of that documentation on a Crestron Programmers Group (which I joined purely to get access to the doc). In other words, the documentation does exist and is somewhat public, but it’s not as easily accessible as the OSC documentation.

I now have a Ui24R which I’m enjoying playing with. I’ll be blogging more about the protocol later.

Allen and Heath Qu series

Qu-Sb

The Qu series is a range of digital mixers, most of which have physical control surfaces. I have a Qu-Sb on order, but I’m not expecting it to arrive for a while. (They’re back-ordered everywhere, basically.)

These mixers can be controlled by RTP-MIDI – effectively, a MIDI connection over the network. Allen and Heath provide what looks to be pretty comprehensive documentation – although as I haven’t started implementing it yet, it’s hard to say that it’s truly accurate and comprehensive just now. (I’m pretty hopeful though.) I’ve already used MIDI quite a bit for other projects, and I’m hoping I’ll be able to use that abstraction (either with an existing RTP-MIDI driver or cobbling together just the bits I need myself).

RCF M-18

M-18

The M-18 is unique in this set, as all the sockets are on the back rather than the front. That makes it less attractive for rack-mounting, unless you can rack mount it backwards (which would then be fine in terms of audio cables, but annoying for power). One thing it very much has in its favour is price though – it’s the cheapest of any of the mixers in this post.

It isn’t well-documented in terms of control protocol, but there’s a project on GitHub which reliably informs me that it implements OSC (like the X-Air series does). That could be very interesting in terms of seeing how much I’d need to change my OSC code; implementing a protocol with only one peer to test against is always a risky business.

PreSonus StudioLive Series III

StudioLive 16R

There are three options in the StudioLive Series III “R” range: 16R, 24R and 32R. (It looks like Series III mixers without the “R” suffix, i.e. the non-rack-mounted ones, have been discontinued, but that the R range is still going.) The 16R is a mere 1U for racking, which is very appealing – with the downside that inputs are at the front and outputs are at the back. It also trades height for depth – at 305mm deep, it’s deeper than the studio rack cabinet I have, and I suspect I’m not alone in that. As it’s so short though, I’m sure I could find another space for it…

It uses the “ucnet” protocol, which is proprietary to PreSonus and not documented… but there’s a project on GitHub where the author has performed quite a lot of reverse engineering already and documented his findings. This would certainly be an interesting mixer to include, although it’s pricy.

Mackie DL Series

Mackie DL16S

The Mackie DL16S and its 32-input cousins the DL32S and DL32R are all rack-mountable mixers, and the DL32R also features Dante audio networking which I’d love to dabble with some time.

Unfortunately, as far as I can tell there’s no documentation for the Master Fader Control app which is used to control the mixer… which means I’d be stuck reverse-engineering from scratch. While that can be fun, it’s something I really don’t have the time for. I’m not saying I’d object if I found one going for a song on ebay, but I really can’t justify buying one with only a small likelihood of getting anywhere with it. So for the moment at least, the Mackie DL series is unlikely to make it into my shed. (That’s probably a good thing really; arguably one really can have too many mixers.)