Options for .NET’s versioning issues

This post revisits the problem described in Versioning Limitations in .NET, based on reactions to that post and a Twitter discussion which occurred later.

Before getting onto the main topic of the post, I wanted to comment a little on that Twitter discussion. I personally found it frustrating at times, and let that frustration leak out into some of my responses. As Twitter debates go, this was relatively mild, but it was still not as constructive as it might have been, and I take my share of responsibility for that. Sorry, folks. I’m sure that everyone involved – both in that Twitter discussion and more generally in the .NET community – genuinely wants the best outcome here. I’ve attempted to frame this post with that thought at the top of mind, assuming that all opinions on the topic are held and expressed in good faith. As you’ll see, that doesn’t mean I have to agree with everyone, but it hopefully helps me respect arguments I disagree with. I’m happy to make corrections (probably with some sort of history) if I misrepresent things or miss out some crucial pros/cons. The goal of this post is to help the community weigh up options as pragmatically as possible.

Scope, terminology and running example

There are many aspects to versioning, of course. In the future I plan to blog about some interesting impacts of multi-targeting libraries, and the choices involved in writing one library to effectively augment another. But those are topics for another day.

The primary situation I want to explore in this post is the problem of breaking changes, particularly with respect to the diamond dependency problem. I’ve found it helpful to make things very, very concrete when it comes to versioning. So we’ll consider the following situation.

  • A team is building an application called Time Zone Magic. They’re using .NET Core 3.0, and everything they need to use targets .NET Standard 2.0 – so they have no problems there.
  • The team is completely in control of the application, and doesn’t need to worry about any versioning for the application itself. (Just to simplify things…)
  • The application depends on Noda Time, naturally, for all the marvellous ways in which Noda Time can help you with time zones.
  • The application also depends on DarkSkyCore1.

Now DarkSkyCore depends on NodaTime 2.4.7. But the Time Zone Magic application needs to depend on NodaTime 3.0.0 to take advantage of some of the newest functionality. (To clarify, NodaTime 3.0.0 hasn’t actually been released at the time of writing this blog post. This part is concrete but fictional, just like the application itself.) So, we have a diamond dependency problem. It’s entirely possible that DarkSkyCore depends on functionality that’s in NodaTime 2.4.7 but has been removed from 3.0.0. If that’s the case, with the current way .NET works (whether desktop or Core), an exception will occur at some point – exactly how that’s surfaced will probably vary based on a number of factors that I don’t explore in this post.
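
To make this concrete, here’s a sketch of how such a failure might surface. The ToLegacyString method is entirely invented for illustration; it stands in for any member that exists in NodaTime 2.4.7 but has been removed in 3.0.0.

```csharp
// Hypothetical code inside DarkSkyCore, compiled against NodaTime 2.4.7.
// "ToLegacyString" is invented for this example: imagine any member that
// exists in 2.4.7 but was removed in 3.0.0.
using NodaTime;

public class ForecastFormatter
{
    public string Format(Instant forecastTime)
    {
        // If the application resolves NodaTime 3.0.0 at runtime, this call
        // site throws a MissingMethodException when it's first executed.
        return forecastTime.ToLegacyString();
    }
}
```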

Currently, as far as I can tell, DarkSkyCore doesn’t refer to any NodaTime types in its public API. We’ll consider what difference this makes in the various options under consideration. I’ll mention a term that I learned during the Twitter conversation: type exchange. I haven’t seen a formal definition of this, but I’m assuming it means one library referring to a type from another library within its public API, e.g. as a parameter or return type, or even as a base class or implemented interface.
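
For illustration, here’s a hypothetical example of type exchange in that sense; DarkSkyCore doesn’t actually do this, as noted above.

```csharp
// Hypothetical: if DarkSkyCore exposed NodaTime types in its public API
// like this, it would be engaging in type exchange with its consumers.
using NodaTime;

public interface IForecast
{
    // Callers receive a NodaTime Instant, so the caller's NodaTime
    // reference and DarkSkyCore's must resolve to the same Instant type.
    Instant Time { get; }

    // NodaTime types as parameters count as type exchange too.
    ZonedDateTime InZone(DateTimeZone zone);
}
```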

The rest of this post consists of some options for what could happen, instead of the current situation. These are just the options I’ve considered; I certainly don’t want to give the impression it’s exhaustive or that we (as a community) should stop trying to think of other options too.

1 I’ve never used this package, and have no opinion on it. It’s just a plausible package to use that depends on NodaTime.

Option 1: Decide to do nothing

It’s always worth including the status quo as a possible option. We can acknowledge that the current situation has problems (the errors thrown at hard-to-predict places), but we may consider that every alternative is worse, either in terms of end result or cost to implement.

It’s worth bearing in mind that .NET has been around for nearly 20 years, and while this is certainly a known annoyance, I seem to care about it more than most developers I encounter – suggesting that this problem doesn’t make all development on .NET completely infeasible.

I do believe it will hinder the community’s growth in the future though, particularly if (as I hope) the Open Source ecosystem flourishes more and more. I believe one of the reasons this hasn’t bitten the platform very hard so far is that the framework provides so much, and ASP.NET (including Core) dominates on the web framework side of things. In the future, if there are more small, “do one thing well” packages that are popular, the chances of incompatibilities will increase.

Option 2: Never make breaking changes

If we never make breaking changes, there can’t be any incompatibilities. We keep the major version at 1, and it doesn’t matter which minor version anyone depends on.

This has been the approach of the BCL team, very largely (aside from “keeping the major version at 1”) – and is probably appropriate for absolutely “system level” packages. Quite what counts as “system level” is an open question: Noda Time is relatively low level, and attempts to act as a replacement for system types, so does that mean I should never make any breaking changes either?

I could potentially commit to not making any future breaking changes – but deciding to do that right from day 1 would seriously stifle innovation. Releasing version 1.0 is scary enough as it is, without the added pressure of “you own every API mistake in here, forever.” There’s a huge cost involved in the kind of painstaking review of every API element that the BCL team goes through. That’s a cost most open source authors probably can’t bear, and it’s not going to be a good investment of time for 99.9% of libraries… but for the 0.1% that make it and become Json.NET-like in terms of ubiquity, it would be great.

Maybe open source projects should really aim for 2.infinity: version 1.x is to build momentum, and 2.x is forever. Even that leaves me pretty uncomfortable, to be honest.

There’s another wrinkle in this in terms of versioning that may be relevant: platform targeting. One of the reasons I’ve taken a major version bump for NodaTime 3.0 is that I’m dropping support for older versions of .NET. As of NodaTime 3.0, I’m just targeting .NET Standard 2.0. Now that’s a breaking change in that it stops anyone using a platform that doesn’t support .NET Standard 2.0 from taking a dependency on NodaTime 3.0, but it doesn’t have the same compatibility issues as other breaking changes. If the only thing I did for NodaTime 3.0 was to change the target framework, the diamond dependency problem would be a non-issue, I believe: any code that could run 3.0 would be compatible with code expecting 2.x.

Now in Noda Time 3.0 I also removed binary serialization, and I’d be very reluctant not to do that. Should the legacy of binary serialization haunt a library forever? Is there actually some acceptable deprecation period for things like this? I’m not sure.

Without breaking changes, type exchange should always be fine, barring code that relies on bugs in older versions.

Option 3: Put the major version in the package name

The current versioning guidance from Microsoft suggests following SemVer 2.0, but in the breaking changes guidance it states:

CONSIDER publishing a major rewrite of a library as a new NuGet package.

Now, it’s not clear to me what’s considered a “major rewrite”. I implemented a major rewrite of a lot of Noda Time functionality between 1.2 and 1.3, without breaking the API. For 2.0 there was a more significant rewrite, with some breaking changes when we moved to nanosecond precision. It’s worth at least considering the implications of interpreting that as “consider publishing a breaking change as a new NuGet package”. This is effectively putting the version in the package name, e.g. NodaTime1, NodaTime2 etc.

At this point, on a per-package basis, we have no breaking changes, and we’d keep the major version at 1 forever, aside from potentially dropping support for older target platforms, as described in option 2. The differences are:

  • The package names become pretty ugly, in my opinion – something that I’d argue is inherently part of the version number has leaked elsewhere. It’s effectively an admission that .NET and SemVer don’t play nicely together.
  • We don’t see breaking changes in the app example above, because DarkSkyCore would depend on NodaTime2 and the Time Zone Magic application would depend directly on NodaTime3.
  • Global state becomes potentially more problematic: any singleton in both NodaTime2 and NodaTime3 (such as DateTimeZoneProviders.Tzdb for NodaTime) would be a “singleton per package” but not a “global singleton”. With the example of DateTimeZoneProviders.Tzdb, that means different parts of Time Zone Magic could give different results for the same time zone ID, based on whether the data was retrieved via NodaTime2 or NodaTime3. Ouch.
  • Type exchange doesn’t work out of the box: if DarkSkyCore exposed a NodaTime2 type in its API, the Time Zone Magic code wouldn’t be able to take that result and pass it into NodaTime3 code. On the other hand, it would be feasible to create another package, NodaTime2To3, which depended on both NodaTime2 and NodaTime3 and provided conversions where feasible. (There’s a sketch of this just after the list.)
  • Having largely-the-same code twice in memory could have performance implications – twice as much JITting etc. This probably isn’t a huge deal in most scenarios, but could be painful in some cases.
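
To illustrate the bridging idea from the type exchange bullet, here’s a sketch of what a hypothetical NodaTime2To3 package might contain. I’m assuming both packages would keep the NodaTime namespace, so the two assembly references would need extern aliases to disambiguate; if the namespaces were renamed as well, ordinary qualified names would do.

```csharp
// Sketch of a hypothetical NodaTime2To3 bridge package. Assumes the
// NodaTime2 and NodaTime3 references have been given these extern aliases.
extern alias NodaTime2;
extern alias NodaTime3;

public static class InstantConversions
{
    // Converts a v2 Instant to a v3 Instant via a representation that is
    // stable across both versions (ticks since the Unix epoch).
    public static NodaTime3::NodaTime.Instant ToVersion3(
        this NodaTime2::NodaTime.Instant instant) =>
        NodaTime3::NodaTime.Instant.FromUnixTimeTicks(instant.ToUnixTimeTicks());
}
```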

No CLR changes are required for this – it’s an option that anyone can adopt right now.

One point that’s interesting to note (well, I think so, anyway!) is that in the Google Cloud Client Libraries we already have a version number in the package name: it’s the version number of the network API that the client library targets. For example, Google.Cloud.Speech.V1 targets the “Speech V1” API. This means there can be a “Speech V2” API with a different NuGet package, and the two packages can be versioned entirely independently. (And you can use both together.) That feels appropriate to me, because it’s part of “the purpose of the package” – whereas the version number of the package itself doesn’t feel right being in the package name.

Option 4: Major version isolation in the CLR

This option is most simply described as “implicit option 3, handled by tooling and the CLR”. (If you haven’t read option 3 yet, please do so now.) Imagine we kept the package name as just NodaTime, but all the tooling involved (MSBuild, NuGet etc) treated “NodaTime v2.x” and “NodaTime v3.x” as independent packages. All the benefits and drawbacks of option 3 would still apply, except the drawback of the version number leaking into the package name.

It’s possible that no CLR changes would be required for this – I don’t know. One of the interesting aspects on the Twitter thread was that AssemblyLoadContext could be used in .NET Core 3 for some of what I’d been describing, but that there were performance implications. Microsoft engineers also reported that what I’d been proposing before would be a huge amount of work and complexity. I have no reason to doubt their estimation here.
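
For reference, here’s a minimal sketch of the kind of side-by-side loading AssemblyLoadContext enables in .NET Core 3. The paths are invented for illustration, and this ignores dependency resolution entirely.

```csharp
// Sketch: loading two versions of NodaTime side by side with
// AssemblyLoadContext (.NET Core 3.0+). Paths are illustrative only.
using System;
using System.Reflection;
using System.Runtime.Loader;

public static class SideBySideDemo
{
    public static void Run()
    {
        var v2Context = new AssemblyLoadContext("NodaTime2");
        var v3Context = new AssemblyLoadContext("NodaTime3");

        Assembly nodaTime2 = v2Context.LoadFromAssemblyPath(
            "/packages/nodatime/2.4.7/lib/netstandard2.0/NodaTime.dll");
        Assembly nodaTime3 = v3Context.LoadFromAssemblyPath(
            "/packages/nodatime/3.0.0/lib/netstandard2.0/NodaTime.dll");

        // Both copies are loaded, but the CLR treats their types as
        // unrelated: no type exchange between the two contexts.
        Type v2Instant = nodaTime2.GetType("NodaTime.Instant");
        Type v3Instant = nodaTime3.GetType("NodaTime.Instant");
        Console.WriteLine(v2Instant == v3Instant); // False
    }
}
```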

My hunch is that if 90% of this could be done in tooling, we should be able to achieve a lot without execution-time penalties. Maybe we’d need to do something like using the major version number as a suffix on the assembly filename, so that NodaTime2.dll and NodaTime3.dll could live side-by-side in the same directory. I could live with that – although I readily acknowledge that it’s a hugely disruptive change. Whatever the implementation, the lack of type exchange would be very disruptive, to the extent that maybe this should be an opt-in (on the part of the package owner) mechanism. “I want more freedom for major version coexistence, at the expense of type exchange.”

Another aspect of feedback in the Twitter thread was that the CLR has supported side-by-side assembly loading for a very long time (forever?) but that customers didn’t use it in practice. Again, I have no reason to dispute the data – but I would say that it’s not evidence that it’s a bad feature. Even great features need to be exposed well before they’ll be used… look at generic variance in the CLR, which was already present in .NET 2.0, but was effectively unused until languages (e.g. C# 4) and the framework (e.g. interfaces such as IEnumerable<T>) supported it too.

It took a long time to get from “download a zip file, copy the DLLs to a lib directory, and add a reference to that DLL” to “add a reference to a versioned NuGet package which might require its own NuGet dependencies”. I believe many aspects of the versioning story aren’t really exposed in that early xcopy-dependency approach, and so maybe we didn’t take advantage of the CLR facilities nearly as early as we should have done.

If you hadn’t already guessed, this option is the one I’d like to pursue with the most energy. I want to acknowledge that it’s easy for me to write that in a blog post, with none of the cost of fully designing, implementing and supporting such a scheme. Even the exploratory work to determine the full pros and cons, estimate implementation cost etc would be very significant. I’d love the community to help out with this work, while realizing that Microsoft has the most experience and data in this arena.

Option 5: Better error detection

When laying out the example, I noted that for the purposes of DarkSkyCore, NodaTime 2.4.7 and NodaTime 3.0 may be entirely compatible. DarkSkyCore may not need any of the members that have been removed in 3.0. More subtly, even if there are areas of incompatibility, the parts of DarkSkyCore that are accessed by the Time Zone Magic application may not trigger those incompatibilities.

One relatively simple (I believe) first step would be to have a way of determining the first kind of “compatibility despite a major version bump”. I expect that with Mono.Cecil or similar packages, it should be feasible to do the following (sketched in code after this list):

  • List every public member (class, struct, interface, method, property etc) present in NodaTime 3.0, by analyzing NodaTime.dll
  • List every public member from NodaTime 2.4.7 used within DarkSkyCore, by analyzing DarkSkyCore.dll
  • Check whether there’s anything in the second list that’s not in the first. If there isn’t, DarkSkyCore is probably compatible with NodaTime 3.0.0, and Time Zone Magic will be okay.
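
Here’s a rough sketch of those three steps using Mono.Cecil. It’s deliberately naive: it only compares public methods by full name, and the file paths are illustrative. Real tooling would need to handle properties, fields, nested types, generics and more.

```csharp
// Sketch of the compatibility check using Mono.Cecil (NuGet: Mono.Cecil).
using System;
using System.Collections.Generic;
using System.Linq;
using Mono.Cecil;

public static class CompatibilityChecker
{
    public static void Main()
    {
        // Step 1: every public method NodaTime 3.0 provides.
        var provided = new HashSet<string>(
            ModuleDefinition.ReadModule("NodaTime.dll").Types
                .Where(t => t.IsPublic)
                .SelectMany(t => t.Methods.Where(m => m.IsPublic))
                .Select(m => m.FullName));

        // Step 2: every NodaTime member DarkSkyCore was compiled against.
        var used = ModuleDefinition.ReadModule("DarkSkyCore.dll")
            .GetMemberReferences()
            .Where(m => m.DeclaringType.Scope.Name.StartsWith("NodaTime"))
            .Select(m => m.FullName);

        // Step 3: anything used but not provided is a potential break.
        foreach (var missing in used.Where(m => !provided.Contains(m)))
        {
            Console.WriteLine($"Potentially missing: {missing}");
        }
    }
}
```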

This ignores reflection of course, along with breaking behavioral changes, but it would at least give a good first indicator. Note that if we’re primarily interested in binary compatibility rather than source compatibility, there are lots of things we can ignore, such as parameter names.

It’s very possible that this tooling already exists, and needs more publicity. Please let me know in comments if so, and I’ll edit a link in here. If it doesn’t already exist, I’ll prototype it some time soon.

If we had such a tool, and it could be made to work reliably (if conservatively), do we want to put that into our normal build procedure? What would configuration look like?

I’m a firm believer that we need a lot more tooling around versioning in general. I recently added a version compatibility detector written by a colleague into our CI scripts, and it’s been wonderful. That’s a relatively “home-grown” project (it lives in the Google Cloud client libraries repository) but something similar could certainly become a first class citizen in the .NET ecosystem.

In my previous blog post, I mentioned the idea of “private dependencies”, and I’d still like to see tooling around this, too. It doesn’t need any CLR or even NuGet support to be useful. If the DarkSkyCore authors could say “I want to depend on NodaTime, but I want to be warned if I ever expose any NodaTime types in my public API” I think that would be tremendously useful as a starting point. Again, it shouldn’t be hard to at least prototype.
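
As a starting point, here’s what a sketch of that warning could look like, again using Mono.Cecil. This only inspects return types and parameter types of public methods; base types, implemented interfaces, fields, and generic type arguments are all ignored.

```csharp
// Sketch: warn if DarkSkyCore exposes any NodaTime type in its public API.
using System;
using System.Linq;
using Mono.Cecil;

public static class PrivateDependencyCheck
{
    public static void Main()
    {
        bool IsNodaTime(TypeReference type) =>
            type.Scope.Name.StartsWith("NodaTime");

        var module = ModuleDefinition.ReadModule("DarkSkyCore.dll");
        var leaks = module.Types
            .Where(t => t.IsPublic)
            .SelectMany(t => t.Methods.Where(m => m.IsPublic))
            .Where(m => IsNodaTime(m.ReturnType)
                || m.Parameters.Any(p => IsNodaTime(p.ParameterType)));

        foreach (var method in leaks)
        {
            Console.WriteLine($"Public API exposes NodaTime: {method.FullName}");
        }
    }
}
```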

Conclusion

As I mentioned at the start, corrections and alternative viewpoints are very welcome in comments, and I’ll assume (unless you say otherwise) that you’re happy for me to edit them into the main post in some form or other (depending on the feedback).

I want to encourage a vigorous but positive discussion about versioning in .NET. Currently I feel slightly impotent in terms of not knowing how to proceed beyond blogging and engaging on Twitter, although I’m hopeful that the .NET Foundation can have a significant role in helping with this. Suggestions for next steps are very welcome as well as technical suggestions.

27 thoughts on “Options for .NET’s versioning issues”

    1. Sort of – then we’d get a build failure due to not being able to satisfy both the app and DarkSkyCore. Whether that’s a good thing or not depends on whether it’s broken when using 3.0…

  1. .NET has binding redirects. Is this a mechanism that could be used to avoid such pitfalls?
    Something like ‘private’ references would also be an interesting solution.

  2. The new tree trimming / IL trimming feature sounds like it could be hijacked to make #5 a lot easier, since it already walks the execution paths. Would it not be possible to detect those happy cases, where there are no changes in the used IL?

  3. “It’s very possible that this tooling already exists, and needs more publicity” — surely it does. I think it must!

    For what it’s worth, back in 2010 I worked on the Windows API Code Pack v2, and as part of publishing the revision information, I wrote exactly that tool. It used reflection to do a diff on the changes from v1 to v2 (there were numerous changes, both to provide new functionality and to clean up various design and implementation flaws that had existed in v1). It determined what was added, removed, or changed, and emitted XML, along with using XSLT to generate formatted HTML we could use in the release documents (I don’t recall if that HTML was ever actually used or if someone repackaged the information).

    Unfortunately, that particular code is probably unpublished. Its license would be owned by Microsoft, and AFAIK while the Code Pack itself is open source, tooling written as part of publishing it wouldn’t likely be.

    However, my recollection is that it wasn’t really all that hard to write, so I’d be surprised if someone else hasn’t already. Surely I was not the first and only person to ever write that sort of thing. It’s too useful and too simple for it to not have been “reinvented” multiple times, I think.

  4. The “Better Error Checking” solution kinda/sorta exists today, in some constrained scenarios: Within Xamarin.Android, and presumably anything else that uses the mono linker and links code, an MSB4018 error will be raised (“unhandled exception”, oof; we should fix that) if a referenced method cannot be resolved.

    For example:

    error MSB4018: Mono.Linker.MarkException: Error processing method: ‘System.Void OldAssembly.Class1::.ctor()’ in assembly: ‘OldAssembly.dll’ —> Mono.Cecil.ResolutionException: Failed to resolve System.Int32 Some.Missing.Type::Method()

    This is by no means ideal, but it does do what you want: ensure that all statically referenced methods actually exist in the resulting app.

  5. I’ve worked on assembly loading in the CLR for over 15 years, so I’ll take credit for a lot of this. More seriously, a lot of the design decisions for assembly loading with .NET Framework were made in the late 90s and very early 2000s. The folks working on the assembly loader were very much focused on assembly loading usability topics. Their number one goal was a near neighbor to this, which was solving “DLL hell”. Most developers today probably don’t even fully understand what that actual problem was because it doesn’t exist in practice anymore.

    The most interesting topic isn’t what those original architects of the CLR decided to design, but what the next set of architects decided to both abandon and keep when we built the .NET Core assembly loader. That group of people had 15 years of .NET software development to consider, unlike the original architects, who only had Java and C++ to look at (neither of which fully matches .NET).

    The biggest points for .NET Core are:

    • No central assembly store.
    • No precise matching on assembly versions (skips the need for binding redirects in most scenarios).
    • A new light-weight isolation system (assembly loader context) for assembly loading that is purpose-built for only that. AppDomains were heavy-weight and had like 20 different use cases.
    • Kept the idea that the product only loads 1 copy of an assembly by default.

    The first point isn’t super related to this post, but the other ones are. The lack of precise matching means that the CLR loader is largely policy-free and enables higher-level systems like NuGet to select the right assembly to load. Assembly loader contexts enable lots of different application and add-in models to be built. Lastly, the last point fully aligns with the entirety of this post, since it’s the policy that these versioning challenges are running up against. That policy is in place because it encourages software to be built that is easy to understand, both conceptually and to observe when it is running. Loading more than one copy of a single library will always produce a more complex system.

    Let’s talk a bit about what a system would look like to enable option #4 to exist. The first thing we’d need to do is to change NuGet to have a different policy in place, either for major versions generally or as opt-in. That would mean that it could/would select >1 version of a package for a given app. That’s the easy part. The next thing we’d need is to load each of those in an assembly loader context, or maybe just the new version. Now we’re starting to climb up the complexity curve. We now need a new subsystem in place that automatically knows that this new version needs to be isolated. I’m going to squint hard and say we can handle that one, too (we cannot really in practice, but I’m ignoring that). Next, and this is the real complexity kicker, is that we need a way to interact with the assembly in the loader context. We don’t have remoting anymore, so we’ve got either interface dispatch (new up a class and then cast to an interface that is shared between the isolated loader context and the default loader context) or reflection. Both are bad. Interface dispatch is bad because we now need an interface from somewhere. The developer of NodaTime (in this example) needs to provide that interface. Even if that exists, interfaces only cover a subset of coding patterns (don’t work for statics, for example) and they are only useful if the user code was written in terms of interfaces, which it probably wasn’t. And then reflection is just a dog’s breakfast. It’s reminding me of System.Dynamic and friends.

    So, the short version is that we absolutely can build a solution for #4 and it might be a good solution, but it will be opt-in, where NuGet, NuGet library developers and application developers all opt-in. We’d need to restrict the types of APIs that library developers build, for both type exchange and global state reasons. We’d then need to construct applications in a new way so that they are prepared for libraries to be isolated. You can search on “Java class loaders” and you’ll get a sense of what such a model might look like. I suspect you and others will be initially interested but might get turned off as you read more. There is a lot of complexity that comes with flexibility.

    In closing, I’m definitely interested in making more official use of assembly loader context (as opposed to it just being a random building block in the product that doesn’t get a lot of general use). If folks want to collaborate on this, I’m interested in participating. My goal in writing such a long answer is increasing visibility on where the problems lie. .NET and C# were built around static, early-bound code patterns. That approach is awesome because it results in high performance and more straightforward diagnosis for issues/crashes. It also keeps source code simple for the same reasons. That doesn’t mean we need to use that pattern 100% of the time, but we should realize we’re giving up a lot when we look at injecting more flexibility into the system. That isn’t a commentary on the issues raised in this post. They are very real problems and I sympathize with them.

    1. Thank you (massive, massive thank you) for such a detailed answer. It’ll take me a while to digest properly – but this sort of engagement is exactly the kind of thing I’d hoped for from this post.

      I completely agree with the desire to keep/make things as simple as possible – while also avoiding as many problems as possible. I like to think of async/await as solving the incidental complexity of asynchrony, which makes the essential complexity clearer and easier to focus on – I wonder if the same can be possible here.

  6. Option 4 is effectively jar shading but in a .NET world.

    Jar shading takes care of another issue that crops up if your diamond dependency is exposed in the public API of the things that depend on it: naming the types on those APIs.

    At the IL level, every reference to a symbol comes with an assembly reference, directly or indirectly, so there’s no ambiguity when linking. But when coding, how do you specify in your using that you want the Noda Time 3.0 version of e.g. DateInterval vs the 2.4 version of DateInterval? You can rely on type inference to solve for locals, but it won’t be long before you need to write some glue code, downstream of the diamond, which needs to talk to both versions of the library above the diamond.

    Jar shading solves the problem by prefixing the namespace of the duplicated dependency with the thing that includes it.

    I don’t think there’s a better 100% automated way to solve the problem. The alternative is to create a mechanism that converts types at the API boundary to insulate the differences in the upstream versions; in other words, something that converts from 2.4 to 3.0 in getters / return values and converts from 3.0 to 2.4 in setters and in-arguments. It doesn’t take long to see this only works for immutable types used in a value-oriented way; if the API of the duplicated dependency is significantly different, then it’s not feasible.

  7. It seems that Option 4 would be easier to implement if the .NET ecosystem focused on making private dependencies the default, or at least an obvious choice, when creating NuGet packages.

    That would also force tools to declare their own dependencies rather than picking them up by accident from other packages, i.e. if NodaTime2 depends on Json.NET internally, then Json.NET shouldn’t automatically be available for DarkSkyCore to use for its own purposes without an explicit reference. That is already supported by NuGet, but you have to manually specify that the assembly references are private, rather than VS or the CLI leading you to do it.

  8. I seriously think that option 2 is not really an option.
    I consider every change a breaking change if it can be detected by a consumer.
    If it can’t be detected then there is most likely no point in making that change in the first place.

    That being said, I also think that option 1 is equally useless, since not doing anything won’t ever solve any issue.

    Options 4 and 5 sound really great but to me it feels like fixing symptoms instead of problems.
    Of course this can most likely be solved at CLR level.
    The many commenters before me have already pointed out that this is most likely a very complex thing to do and may be unfeasible.

    I would only ever use option 3 if a newer version actively prevents some people from updating to the newest release.
    This might be due to dropping entire features or dropping support for older versions of .NET, as in your example with NodaTime 3.
    In my opinion this just calls for a new package for NodaTime 3 that can live in parallel with NodaTime versions < 3.

    Here’s my take on the subject:

    Diamond dependencies occur when library authors do not update dependencies to the latest versions in time.
    For me it has always been a serious commitment when going down the dependency route.
    I also feel strongly about the need to always update to the latest version of a dependency as soon as possible.

    If I decide to have my library be dependent on another package because I don’t want to create the specific functionality myself, then this comes at a cost.
    That cost is the maintenance overhead of updating any dependency quickly once a new version is released.
    If we all did that, the risk of diamond dependencies would be reduced dramatically, and with it the need to find a solution.

    I constantly have to poke coworkers to update dependencies in their packages so we don’t refer to 6 different versions of Json.NET in the product.

    Of course there is always the problem of using packages that are not maintained anymore.
    I have no idea how to solve that problem in a good way.

    Please note that this entire comment is purely opinion.
    I also have zero experience in the topics touched by options 4 and 5.
    However, they sound very complex, and I’d take the experts’ word for it that they are.

  9. I find these versioning issues that you present very interesting. I, like most people, get away with ignoring the problem. However, I do think the mere existence of a compatibility issue detection tool could lead to more people dealing with/realizing the problem.

    The initial goal for such a tool could be, as you proposed, to list all public members present in an assembly that are missing in a higher version of that assembly, and intersect that with members used by a third assembly. You stated you intend to build a prototype, but I know you have only so little time, so let me start that for you.

    Let me try to rephrase the goal in two subproblems:
    1) List all differences between two versions of the same assembly.
    2) Filter that list of differences based on relevance to the third assembly.

    I started tackling 1) at https://github.com/JeroenBos/Versioning. I have in mind a library which mainly provides a signature like the following

    public IEnumerable<ICompatibilityIssue> GetCompatibilityIssuesBetween(Assembly assembly, Assembly assemblyHigherVersion)

    where ICompatibilityIssue has many implementation types, a bit like a discriminated union, but then open. For now I’ll stick to MissingMemberIssues, but later more dedicated implementing types may be yielded, for example to provide better error messages in a tool that e.g. lists textually all potential compatibility issues.

    I realize now that the term compatibility issue is already overloaded: a statically observed difference between two assembly versions is only a potential compatibility issue. It depends on the usage whether it actually is one: when at runtime a MissingMethodException or the like is thrown, now that’s the real compatibility issue.

    As a side note: I use the word ‘differences’ instead of ‘missing members’, having in mind that detecting merely missing members might not be sufficient later. Other implementations of ICompatibilityIssue may represent other potential binary compatibility issues, or even source compatibility issues.

    In the interest of pursuing a solution to problem 1), what are all the kinds of potential binary compatibility issues?
    I doubt there’s a list readily available somewhere. Nevertheless, the exhaustive list is probably too long; let’s start with a balance between low-hanging fruit and usefulness:
    – Missing type/members
    – Differences in dependency versions
    – Members made abstract/sealed.
    – Removal of implemented interfaces/base types.
    – ? any suggestions?

    As to problem 2), I’m not sure how feasible it is for me to solve, given my limited amount of spare time :)
    But I would appreciate it if anyone tackling that did so in cooperation with me and this library I’m trying to build.

    1. Wellll, this is embarrassing, you already have done what I set out to do, and even better might I add.
      If only I had read your reply regarding the ReleaseDiffGenerator better.
      The use of a unique identifier per metadata item simplifies the problem considerably. I’ll have to reevaluate whether what I was doing makes sense anymore.

  10. Ok, I prototyped such a tool, in the hope of saving you the effort. Like I said, it consists of two problems: listing all differences between two versions of the same assembly, and secondly, filtering on whether they’re relevant for a dependent assembly.
    The initial/current version only detects missing members and members that observably reduced their accessibility.

    I’m looking forward to feedback on which other compatibility issues you would like to see detected, and I would be happy to implement them.

    I’m also pleased to note that the tool didn’t detect any issues with NodaTime :) It reported
    “Detected 0 potential compatibility issues between assembly NodaTime versions 2.0.0.0 and 2.4.7.0”. The results between versions 1.4.7 and 2.0.0 you can view here, which is probably different from what your tool ReleaseDiffGenerator reports: https://github.com/JeroenBos/Versioning/blob/develop/CLI/Results/NodaTime.1.4.7%20vs%202.0.0.txt

    Part 2 of the problem I haven’t tested very well, due to lack of test examples. I’m interested in suggestions. The DarkSkyCore example appears to work with all versions of NodaTime. Lastly, I would like to mention that I can only work on this again from Nov 13.

    1. I have demonstrated this IClock.Now example here: https://github.com/JeroenBos/Versioning/blob/master/UsageDetector.Tests/Demonstrations.cs

      The current state of the project is that it still detects only missing members and reduced accessibility issues, but seems to do so well. I deem the prototype phase ‘done’.
      Continued development of the project would require some interest, either renewed from me internally, or from anyone else, which I encourage.

      The next step could be
      – detect other types of issues like in the google cloud tools,
      – filter out more false positives in the UsageDetector,
      – do something with the reported locations of the calls/references that would throw when reached at runtime.

      Lastly, an interesting vision that I have that actually has some practical use is the following:
      We could make an analyzer with the purpose of finding all runtime assembly loading issues. It finds all references in a solution (nested references too, etc.), and selects those assemblies that occur multiple times but with different versions.
      We run the issue detector on all those, and report detected issues as diagnostics.

      I suspect this is a rather heavy analyzer and would maybe be better suited for a non-live static analysis tool. Besides, it obviously has many difficult edge cases, like false positives, and it still only covers a narrow scenario, but, you know, it’s about the idea.

  11. Now that NuGet manages package versions, are there any drawbacks to leaving AssemblyVersion at 1.0.0.0? There’s no GAC for .NET Core, so it seems to me that leaving the assembly version alone would let the user select the package versions they want to use while avoiding issues with assembly bindings. Is there any real point to maintaining the assembly version?

      1. I’m not 100% sure I follow. I’m currently treading water neck-deep in all the debates around versioning and strong naming for open source projects as we’re about to release a bunch of internal projects for the masses and we don’t want to mess up versioning (too badly). I’m a bit inexperienced in this area aside from the tons of research I’ve done recently so I recognize that I’m probably not thinking through all the aspects of this.

        The libraries I built against are indicated in the NuGet package dependencies in my project and its resulting NuGet package. When someone installs my package, the NuGet dependencies are resolved based on NuGet package numbers, not assembly numbers.

        I’m guessing you’re referring to some other tooling though that relies on assembly versions, which I admit I haven’t fully considered. I’ve mostly been ignorant of all these complex versioning issues up to this point as it didn’t affect what I was working on. It’s making my head spin a bit, to be honest.

        1. Imagine a tool which looks at which assembly versions your assembly depends on, and reports on any aspects that aren’t satisfied by the assembly present on disk. If everything just has assembly version 1.0.0, it becomes a lot harder to do anything. What’s the benefit of losing information like this?

          1. Fair enough. I guess the new assembly loader in .NET Core is a bit more flexible so this might be a moot point, but I figured it might make it easier to interchange assembly versions without the hassle of dealing with assembly redirects and different assembly strong name identities causing issues, which I had read a few posts about.

          2. I guess a bunch of open source projects keep their assembly version pinned at the major version number and use assembly file version to indicate the “real” version to avoid strong name identity hell.
