Backward compatibility and overloading

I started writing a blog post about versioning in July 2017. I’ve mostly abandoned it, because I think the topic is too vast for a single post. It potentially needs a whole site/wiki/repository devoted to it. I hope to come back to it at some point, because I believe this is a hugely important topic that doesn’t get as much attention as it deserves.

In particular, the .NET ecosystem is mostly embracing semantic versioning – which sounds great, but does rely on us having a common understanding of what’s meant by a “breaking change”. That’s something I’ve been thinking about quite a lot. One aspect which has struck me forcefully recently is how hard it is to avoid breaking changes when using method overloading. That’s what this post is about, mostly because it’s fun.

First, a quick definition…

Source and binary compatibility

If I can recompile my client code with a new version of the library and it all works fine, that’s source compatible. If I can redeploy my existing client binary with a new version of the library without recompiling, that’s binary compatible. Neither of these is a superset of the other:

  • Some changes are both source and binary incompatible, such as removing a whole public type that you depended on.
  • Some changes are source compatible but binary incompatible, such as changing a public static read-only field into a property.
  • Some changes are binary compatible but source incompatible, such as adding an overload which could cause compile-time ambiguity.
  • Some changes are source and binary compatible, such as reimplementing the body of a method.
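
To make the second bullet concrete, here's a sketch – the Library class and its MaxSize member are invented purely for illustration:

```csharp
using System;

// Version 1.0 exposed a public static read-only field:
//     public static readonly int MaxSize = 10;
// Version 1.1 replaces it with a property of the same name:
public class Library
{
    public static int MaxSize { get; } = 10;
}

public class Client
{
    public static void Main()
    {
        // This line recompiles unchanged against 1.1 (source compatible),
        // but a client binary compiled against 1.0 contains a field access
        // that no longer resolves, so it fails at execution time with a
        // MissingFieldException (binary incompatible).
        Console.WriteLine(Library.MaxSize);
    }
}
```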

So what are we talking about?

I’m going to assume that we have a public library at version 1.0, and we wish to add some overloads in version 1.1. We’re following semantic versioning, so we need to be backward compatible. What does that mean we can and can’t do, and is it a simple binary choice?

In various cases, I’ll present library code at version 1.0 and version 1.1, then “client” code (i.e. code that is using the library) which could be broken by the change. I’m not presenting method bodies or class declarations, as they’re largely irrelevant – focus on the signatures. It should be easy to reproduce any of this if you’re interested though. We’ll imagine that all the methods I present are in a class called Library.

Simplest conceivable change, foiled by method group conversions

The simplest example I can imagine would be adding a parameterized method when there’s a parameterless one already:

// Library version 1.0
public void Foo()

// Library version 1.1
public void Foo()
public void Foo(int x)

Even that’s not completely compatible. Consider this client code:

// Client
static void Method()
{
    var library = new Library();
    HandleAction(library.Foo);
}

static void HandleAction(Action action) {}
static void HandleAction(Action<int> action) {}

In library version 1.0, that’s fine. The call to HandleAction performs a method group conversion of library.Foo to create an Action. In library version 1.1, it’s ambiguous: the method group can be converted to either Action or Action<int>. So it’s not source compatible, if we’re going to be strict about it.
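
If a client does run into this, an explicit cast of the method group restores the old behaviour. Here's a sketch – I've made the HandleAction overloads return a string purely so the chosen overload is visible:

```csharp
using System;

public class Library
{
    public void Foo() {}
    public void Foo(int x) {}
}

public class Client
{
    public static string HandleAction(Action action) => "Action";
    public static string HandleAction(Action<int> action) => "Action<int>";

    public static void Main()
    {
        var library = new Library();
        // HandleAction(library.Foo);    // CS0121: ambiguous against library v1.1
        Console.WriteLine(HandleAction((Action)library.Foo)); // cast disambiguates
    }
}
```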

At this point you might be tempted to give up and go home, resolving never to add any overloads, ever again. Or maybe we can say that this is enough of a corner case to not consider it breaking. Let’s call method group conversions out of scope for now.

Unrelated reference types

We get into a different kind of territory when we have overloads with the same number of parameters. You might expect this library change to be non-breaking:

// Library version 1.0
public void Foo(string x)

// Library version 1.1
public void Foo(string x)
public void Foo(FileStream x)

That feels like it should be reasonable. The original method still exists, so we won’t be breaking binary compatibility. The simplest way of breaking source compatibility is to have a call that either works in v1.0 but doesn’t in v1.1, or works in both but does something different in v1.1 than it did in v1.0.

How can a call break between v1.0 and v1.1? We’d have to have an argument that’s compatible with both string and FileStream. But they’re unrelated reference types…

The first failure is if we have a user-defined implicit conversion to both string and FileStream:

// Client
class OddlyConvertible
{
    public static implicit operator string(OddlyConvertible c) => null;
    public static implicit operator FileStream(OddlyConvertible c) => null;
}

static void Method()
{
    var library = new Library();
    var convertible = new OddlyConvertible();
    library.Foo(convertible);
}

Hopefully the problem is obvious: what used to be unambiguous via a conversion to string is now ambiguous as the OddlyConvertible type can be implicitly converted to both string and FileStream. (Both overloads are applicable, neither is better than the other.)

It may be reasonable to exclude user-defined conversions… but there’s a far simpler way of making this fail:

// Client
static void Method()
{
    var library = new Library();
    library.Foo(null);
}

The null literal is implicitly convertible to any reference type or any nullable value type… so again, the call becomes ambiguous in library v1.1. Let’s try again…
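
A client broken this way can specify the overload with a cast. In this sketch I've given the methods bodies that report which overload ran – that's purely for demonstration:

```csharp
using System;
using System.IO;

public class Library
{
    public string Foo(string x) => "Foo(string)";
    public string Foo(FileStream x) => "Foo(FileStream)";
}

public class Client
{
    public static void Main()
    {
        var library = new Library();
        // library.Foo(null);    // CS0121: ambiguous in library v1.1
        Console.WriteLine(library.Foo((string)null)); // cast picks the v1.0 overload
    }
}
```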

Reference type and non-nullable value type parameters

If we don’t mind user-defined conversions, but don’t like null literals causing a problem, how about introducing an overload with a non-nullable value type?

// Library version 1.0
public void Foo(string x)

// Library version 1.1
public void Foo(string x)
public void Foo(int x)

This looks good – library.Foo(null) will be fine in v1.1. So is it safe? Not in C# 7.1…

// Client
static void Method()
{
    var library = new Library();
    library.Foo(default);
}

The default literal is like the null literal, but for any type. It’s really useful – and a complete pain when it comes to overloading and compatibility :(
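
As with null, a typed default resolves the ambiguity on the client side. Another sketch, with return values added so the chosen overload is visible:

```csharp
using System;

public class Library
{
    public string Foo(string x) => "Foo(string)";
    public string Foo(int x) => "Foo(int)";
}

public class Client
{
    public static void Main()
    {
        var library = new Library();
        // library.Foo(default);    // CS0121: ambiguous in library v1.1
        Console.WriteLine(library.Foo(default(string))); // resolves to Foo(string)
        Console.WriteLine(library.Foo(default(int)));    // resolves to Foo(int)
    }
}
```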

Optional parameters

Optional parameters bring their own kind of pain. Suppose we have one optional parameter, but wish to add a second. We have three options, shown as 1.1a, 1.1b and 1.1c below.

// Library version 1.0
public void Foo(string x = "")

// Library version 1.1a
// Keep the existing method, but add another one with two optional parameters.
public void Foo(string x = "")
public void Foo(string x = "", string y = "")

// Library version 1.1b
// Just add the parameter to the existing method.
public void Foo(string x = "", string y = "")

// Library version 1.1c
// Keep the old method but make the parameter required, and add a new method
// with both parameters optional.
public void Foo(string x)
public void Foo(string x = "", string y = "")

Let’s think about a client that makes two calls:

// Client
static void Method()
{
    var library = new Library();
    library.Foo();
    library.Foo("xyz");
}

Library 1.1a keeps binary compatibility, but breaks source compatibility: the library.Foo() call is now ambiguous. The C# overloading rules prefer a method that doesn’t need the compiler to “fill in” any optional parameters, but they express no preference about how many optional parameters are filled in.

Library 1.1b keeps source compatibility, but breaks binary compatibility. Existing compiled code will expect to call a method with a single parameter – and that method no longer exists.

Library 1.1c keeps binary compatibility, but is potentially odd around source compatibility. The library.Foo() call now resolves to the two-parameter method, whereas library.Foo("xyz") resolves to the one-parameter method (which the compiler prefers over the two-parameter method because it doesn’t need to fill in any optional parameters). That may very well be okay, if the one-parameter version simply delegates to the two-parameter version using the same default value. It feels odd for the meaning of the first call to change though, when the method it used to resolve to still exists.
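
To see 1.1c's resolution in action, here's a sketch with bodies added (purely for demonstration) so each call reports the overload it bound to:

```csharp
using System;

// Library version 1.1c, with bodies added so the resolution is observable.
public class Library
{
    public string Foo(string x) => "Foo(string)";
    public string Foo(string x = "", string y = "") => "Foo(string, string)";
}

public class Client
{
    public static void Main()
    {
        var library = new Library();
        Console.WriteLine(library.Foo());      // only the two-parameter method is applicable
        Console.WriteLine(library.Foo("xyz")); // one-parameter method wins the tie-break
    }
}
```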

Optional parameters get even hairier when you want to add a new one in the middle rather than at the end – e.g. if you’re trying to follow a convention of keeping an optional CancellationToken parameter at the end. I’m not going to dive into this…

Generics

Type inference is a tricky beast at the best of times. With overload resolution it goes into full-on nightmare mode.

Let’s have a single non-generic method in v1.0, and then add a generic method in v1.1.

// Library version 1.0
public void Foo(object x)

// Library version 1.1
public void Foo(object x)
public void Foo<T>(T x)

That doesn’t seem too awful… but let’s look closely at what happens to client code:

// Client
static void Method()
{
    var library = new Library();
    library.Foo(new object());
    library.Foo("xyz");
}

In library v1.0, both calls resolve to Foo(object) – the only method that exists.

Library v1.1 is binary-compatible: if we use a client executable compiled against v1.0 but running against v1.1, both calls will still use Foo(object). But if we recompile, the second call (and only the second one) will change to using the generic method. Both methods are applicable for both calls.

In the first call, T would be inferred to be object, so the argument-to-parameter-type conversion is just object to object in both cases. Great. The compiler applies a tie-break rule that prefers non-generic methods over generic methods.

In the second call, T would be inferred to be string, so the argument-to-parameter-type conversion is string to object for the original method and string to string for the generic method. The latter is a “better” conversion, so the second method is picked.

If the two methods behave the same way, that’s fine. If they don’t, you’ve broken compatibility in a very subtle way.
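
Here's that resolution made visible – the bodies are added purely for demonstration, and the final call shows how a client can force the original overload:

```csharp
using System;

public class Library
{
    public string Foo(object x) => "Foo(object)";
    public string Foo<T>(T x) => "Foo<T>";
}

public class Client
{
    public static void Main()
    {
        var library = new Library();
        Console.WriteLine(library.Foo(new object()));  // non-generic wins the tie-break
        Console.WriteLine(library.Foo("xyz"));         // string->string beats string->object
        Console.WriteLine(library.Foo((object)"xyz")); // cast forces the v1.0 overload
    }
}
```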

Inheritance and dynamic typing

I’m sorry: I just don’t have the energy. Both inheritance and dynamic typing would interact with overload resolution in “fun” and obscure ways.

If you add a method in one level of the inheritance hierarchy which overloads a method in a base class, the new method will be examined first, and picked over the base class method even when the base class method is more specific in terms of argument-to-parameter-type conversions. There’s lots of scope for messing things up.
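
A minimal sketch of that behaviour – Base and Derived are invented names, and the bodies exist only to show which method is picked:

```csharp
using System;

public class Base
{
    public string Foo(string x) => "Base.Foo(string)";
}

public class Derived : Base
{
    // Added in a later version: applicable to every call that Base.Foo was.
    public string Foo(object x) => "Derived.Foo(object)";
}

public class Client
{
    public static void Main()
    {
        var d = new Derived();
        // Foo(string) would be a better match, but once Derived declares an
        // applicable Foo, the base class method is no longer a candidate.
        Console.WriteLine(d.Foo("xyz"));
    }
}
```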

Likewise with dynamic typing (within the client code), to some extent all bets are off. You’re already sacrificing a lot of compile-time safety… it shouldn’t come as a surprise when things break.
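
One sketch to show why: with a dynamic argument, overload resolution moves to execution time, so even a binary-compatible library update can change which overload runs, with no recompilation involved. Bodies are added purely for demonstration:

```csharp
using System;

public class Library
{
    // v1.0 had only Foo(object); v1.1 adds the string overload.
    public string Foo(object x) => "Foo(object)";
    public string Foo(string x) => "Foo(string)";
}

public class Client
{
    public static void Main()
    {
        var library = new Library();
        dynamic value = "xyz";
        // Resolution uses the runtime type of value, so dropping the v1.1
        // assembly into the bin folder changes the result even for an
        // already-compiled client.
        Console.WriteLine(library.Foo(value));
    }
}
```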

Conclusion

I’ve tried to keep the examples reasonably simple here. It can get really complicated really quickly as soon as you have multiple optional parameters etc.

Versioning is hard and makes my head hurt.

34 thoughts on “Backward compatibility and overloading”

  1. “the method group can be converted to either Action or Action”
    I think you meant “Action or Action<int>” ;)


  2. I think you lost your angle brackets and generic argument on this line:
    “In library version 1.1, it’s ambiguous: the method group can be converted to either Action or Action. So it’s not source compatible, if we’re going to be strict about it.”


  3. In “Simplest conceivable change, foiled by method group conversions”, should the last ‘Action’ be ‘Action<int>’?


  4. Great article, so many subtleties.
    “Library 1.1c keeps binary compatibility, but is potentially odd around source compatibility”. This caught me off guard, I didn’t think it would have been binary compatible; Didn’t know optional params were hardcoded at the call site by the compiler.
    Thanks!


    1. “Didn’t know optional params were hardcoded at the call site by the compiler.”

      This causes other problems as well. If you change the default value, it is also a breaking change, ex.:

      // Version 1.0
      public void SetVisibility(Visibility visibility = Visibility.Hidden)

      // Version 1.1
      public void SetVisibility(Visibility visibility = Visibility.Collapsed)

      A call to SetVisibility() (in 1.0) is compiled as if you had written SetVisibility(Visibility.Hidden). This stays the same even after updating the library’s assembly to 1.1. But if you recompile, the call is changed by the compiler to SetVisibility(Visibility.Collapsed).


  5. Great article!

    From the open-closed principle point of view, can we say that versioning with overloads breaks it? I ask this because the contract of the library was not modified but extended in this case. But the client was broken, or the behaviour can go the wrong way. In other words, the library is happy with OCP but its clients are not (or at least clients which use some tricky scenarios).

    Thank you


  6. In general, I suppose most would prefer to maintain both binary and source compatibility. Are there approaches to ensuring these that you’ve found work best?

    Are we left with requiring clients to not “do weird things” without knowing the scope of weird things they might do?


    1. “Thinking carefully” is about all I’ve got so far. I believe there are some tools to help, and more could certainly be written, but as I showed here, method group conversions make even the most innocuous of overload additions breaking.


  7. All this to say nothing of extension methods: if someone writes a Foo() extension method in their codebase for a type provided by your library, and then you add a Foo() instance method to that type, recompiling will cause calls which previously went to the extension method to go to the instance method.

    So you don’t even need to use overloading to break source compat! Perhaps we should all just give up programming for ever.


    1. Oh you certainly don’t need to use overloading to break compatibility. Or extension methods – there are any number of things that can break compatibility. I was just focusing on overloading as one source of problems here :)


  8. Versioning is hard and makes my head hurt. – so do “Foos” in confusing academic examples :-| You seem to know a lot about programming but such code makes your articles incomprehensible – I stopped reading at the first “foo”. Could you use more real examples with cars, animals, geometric shapes, anything but Foo and Method? That’d be great!


    1. I guess it’s a matter of personal preference – I would use animals for inheritance examples, but there seems little point in introducing a real world concept that has its own implications and baggage when the purpose is just “a class with a method”. I’m sorry to hear you don’t like these names, but unless I get the same feedback from others, I’ll keep doing what I’m doing.


  9. Under “Generics”, did you mean “Library v1.1 is binary-compatible” instead of “Library v1.1 is backward-compatible”?


  10. I find your information regarding optional parameters quite odd.
    I know that I mostly code in VB, but I have never been able to add an overload like you mention.
    public void Foo(string x = "")
    public void Foo(string x = "", string y = "")
    In VB those are this:
    Public Sub Foo(Optional x As String = "")
    Public Sub Foo(Optional x As String = "", Optional y As String = "")

    When I have tried to do this, the compiler tells me that I cannot overload a method that differs only by optional parameters.
    This error makes complete sense to me because, if that were allowed, then the compiler could decide to call either of the methods when 1 or no parameters are specified. It would have no way to decide which is the legitimate version to call.
    Also, if the actual method for the first is the following, then the results would be different depending on which method the compiler decided to call.
    Public Sub Foo(Optional x As String = "")
    Foo(x, "Some value I decided to send")
    End Sub
    As you can see, if the client called Foo() or Foo("test"), since the y parameter of the second overload is optional, there is nothing that .NET could use to determine which overload of Foo should be called – based on the method signature, both would be valid, but the end results could be drastically different.


    1. That’s just a difference between VB and C#. It’s entirely valid in C# to have both overloads, but calling just Foo() will lead to the compiler complaining that the call is ambiguous.


  11. I forgot to also say that based on this, I am not sure how one could add a method overload “which could cause compile-time ambiguity” as I, from everything I have experienced, am unable to do this.


    1. Well your previous example showed it – maybe it’s prevented in VB, but it’s not in C#. You don’t even have to add an overload with an extended set of parameters. Consider this code:

      public class Test
      {
          static void Main()
          {
              Foo();
          }
      
          static void Foo(string x = "")
          {
          }
      
          static void Foo(int x = 10)
          {
          }
      }
      

      With only one of those Foo methods, it compiles. With both, the call is ambiguous.

