
Diagnosing slow tests (again)

I’ve previously blogged about a case where tests on .NET Core on Linux were much, much slower than the same tests on Windows. Today I’ve got a very similar problem – but I suspect the cause isn’t going to be the same.

This is my reasonably raw log – skip to the end for a summary of what I found, what went well, and what went badly. Apologies for skipping around in terms of past/present/future tense – being consistent in that respect would be distracting enough to make the rest worse, I think.

Step 0: notice the problem

I spotted this problem because I happened to be looking at the Travis and AppVeyor builds, moving to .NET Core 2.0. Looking at the test runs, I saw that the tests for .NET Core 1.0 were completing in about 15 seconds on AppVeyor (Windows), and 133 seconds on Travis (Linux). My experience is that AppVeyor seems to use slightly less powerful hardware than Travis, making this a very odd result.

I looked at several builds, and it seemed consistent across them – very odd.

Step 1: file an issue

My main focus at that point wasn’t to investigate performance – it was to get the build working with .NET Core 2. However, I didn’t want to forget about the problem, so I filed an issue in the Noda Time repo. At this point, I have no idea where the problem is, so the Noda Time repo is the most obvious place to put it. There’s no evidence that it’s a problem with NUnit or .NET Core itself, for example.

Step 2: reproduce locally

I have two machines at home that I use for running the automated benchmarks. One (bagpuss) runs Windows 10, the other (gabriel) runs Ubuntu 16.04. Both are Intel NUCs with i5-5250U processors – I bought one much later than the other, but looked hard to find as close to identical hardware as I could, to enable cross-OS benchmark comparisons without dual booting or virtualization.

I ran the tests on both machines, and copied the TestResult.xml output files to my main dev box for analysis.

First interesting point: the difference isn’t as marked. The tests take 77 seconds on gabriel and 29 seconds on bagpuss. That’s still a lot, but very different to what we’re seeing in CI. It’s possible that the Travis machine is twice as slow as gabriel/bagpuss and that the AppVeyor machine is twice as fast, but that seems unlikely.

Deem that temporarily out of scope as it’s harder to investigate than the fixed-hardware, easy-access problem I have now. File another issue and I can come back to it.

Step 3: Look for individual test case discrepancies

If we’re really lucky, there’ll be a single test case that has a much longer duration on gabriel than on bagpuss, and I can then delve into it.

Investigation:

  • Create a small console app that loads the two XML files, finds all the test-case elements, and puts them in a dictionary from fullname to duration (the latter parsed as a number of seconds)
    • Oops: some full names aren’t unique (due to using parameters that don’t override ToString, I suspect).
    • Load into a lookup instead, then convert that into a Dictionary by taking the first test from each group. Unlikely to affect the results.
  • For each test that’s in both result sets, find the ratio of gabriel/bagpuss
  • Dump the top 20 ratios, and the slowest 10 tests from each set for good measure.

Okay, lots of text handling tests with ratios of more than 2:1, but they’re all very small durations. I doubt that they’ll be responsible for everything, and could easily just be “measuring small amounts of time is flaky”. Nothing to blame yet.

Point to note: this console app was always less than 50 lines of code. LINQ to XML makes this sort of thing really easy.
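
For the curious, here’s roughly the shape of it. This is a minimal sketch rather than the exact code, assuming NUnit 3’s TestResult.xml format (test-case elements with fullname and duration attributes, the duration measured in seconds); the file names are just placeholders:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Xml.Linq;

class CompareTestDurations
{
    static void Main()
    {
        // Placeholder file names - use whatever you copied from each machine.
        var gabriel = LoadDurations("gabriel-TestResult.xml");
        var bagpuss = LoadDurations("bagpuss-TestResult.xml");

        // For each test present in both result sets, compute the gabriel/bagpuss
        // ratio, and dump the top 20.
        var topRatios = gabriel.Keys
            .Where(bagpuss.ContainsKey)
            .Select(name => new { Name = name, Ratio = gabriel[name] / bagpuss[name] })
            .OrderByDescending(x => x.Ratio)
            .Take(20);
        foreach (var entry in topRatios)
        {
            Console.WriteLine($"{entry.Ratio:0.00} {entry.Name}");
        }
    }

    // Loads a map from test fullname to duration in seconds. Full names aren't
    // guaranteed to be unique (parameterized tests), so group into a lookup first,
    // then take the first test from each group.
    static Dictionary<string, double> LoadDurations(string file) =>
        XDocument.Load(file)
            .Descendants("test-case")
            .ToLookup(tc => (string) tc.Attribute("fullname"),
                      tc => double.Parse((string) tc.Attribute("duration"), CultureInfo.InvariantCulture))
            .ToDictionary(group => group.Key, group => group.First());
}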

Step 4: Look for test fixture discrepancies

If individual tests don’t show a massive change, what about test fixtures? Let’s change the program to load test-suite elements where type="TestFixture", which will get us the durations at a per-test-class level. We’ll start by dumping the slowest 10 fixtures for each environment… and wow, we have a result!

NodaTime.Test.TimeZones.TzdbDateTimeZoneSourceTest takes 65 seconds on gabriel and 17 seconds on bagpuss! That’s out of a total of 77 and 29 seconds respectively – so removing those would take us to 12 seconds on each machine.
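
For reference, the change from the earlier sketch is just a different query – again assuming the NUnit 3 format, where test-suite elements carry a type attribute:

// Fixture-level variant of LoadDurations from the earlier sketch: one entry per
// test class rather than per test case. Reuses the same using directives.
static Dictionary<string, double> LoadFixtureDurations(string file) =>
    XDocument.Load(file)
        .Descendants("test-suite")
        .Where(suite => (string) suite.Attribute("type") == "TestFixture")
        .ToDictionary(suite => (string) suite.Attribute("fullname"),
                      suite => double.Parse((string) suite.Attribute("duration"), CultureInfo.InvariantCulture));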

Concern: that’s assuming no multithreading is going on. I don’t think NUnit parallelizes by default, but it would definitely make things trickier to reason about.

Step 5: check the logs in more detail

Given that we haven’t seen any very slow individual tests, my guess at this point is that the slowness is going to be in test fixture setup. I haven’t looked at the code yet. Let’s have a look at the XML logs for that one test in more detail.

Yikes: all the difference is in GuessZoneIdByTransitionsUncached. That’s a single method, but it’s parameterized for all system time zones.

At this point, a memory stirs: I’ve looked at this before. I have a distinct memory of being in an airport lounge. At the time I wasn’t trying to work out the OS discrepancy so much as why this one test was taking up over half the total unit test time, even on Windows.

Looking at the code for GuessZoneIdByTransitionsUncached, I see I was even diligent enough to create an issue and keep a bit of a log of what I found. Hooray! So, now we know why this is slow in general. But why is it so much slower on gabriel than on bagpuss?

The answer is that gabriel appears to have rather more system time zones than bagpuss: it’s running 424 tests instead of 135. If they ran each test equally fast, it would take bagpuss about 53 seconds to gabriel’s 65 – that’s not so bad.
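
As a quick sanity check of those counts, a one-liner like this on each machine shows how many system time zones the BCL reports. (I’m assuming here that the test parameters come from TimeZoneInfo.GetSystemTimeZones(); that’s an assumption from memory rather than something I’ve re-verified for this log.)

using System;

class CountTimeZones
{
    static void Main() =>
        // Prints the number of system time zones this machine knows about.
        Console.WriteLine(TimeZoneInfo.GetSystemTimeZones().Count);
}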

Step 6: test on CI again

In one fell swoop, we can confirm that this is the problem on Travis and potentially work out why AppVeyor is so fast. If we only run this one test fixture, we can look at how many tests are running on each machine (and in which environments, as I now run the tests on net45 (AppVeyor only), netcoreapp1.0 and netcoreapp2.0).

That’s really easy to do – create a PR which just changes the test filter, let CI run it, then delete the PR.
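
As a purely illustrative example (the Noda Time build scripts may well invoke the tests differently), restricting a dotnet test run to that one fixture can be done with a filter along these lines:

dotnet test --filter "FullyQualifiedName~TzdbDateTimeZoneSourceTest"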

Results:

  • Travis, both environments run 424 tests – 95 seconds for netcoreapp1.0, 92 seconds for netcoreapp2.0
  • AppVeyor, all environments roughly equal, 135 tests – about 3 seconds

That certainly shows that this is taking the majority of the time on Travis, but we still have two questions:

  • How is AppVeyor so much faster than my Core i7 development machine, where the same tests take about 15 seconds?
  • Travis is still only doing about 3 times as much work as AppVeyor, but it’s taking 30 times as long. Why?

Step 7: spot flag differences

I look at the Travis script. I look at the AppVeyor script. I look at what I’ve been running locally. There’s a difference… on AppVeyor, I’m running with -c Release, whereas locally I’ve used the default (debug). Can a debug build make that much difference?

  • Test locally: running on my dev box with -c Release brings the time down from 15s to 3.9s. Yikes!
  • Update the Travis script to run with -c Release as well
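
Locally, that just means passing the configuration explicitly – something like the following, assuming the tests are run via dotnet test (the exact command depends on how the test project is set up):

dotnet test -c Release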

Now the tests take about 30 seconds on Travis. That’s still three times as long as it would be on Windows for the same number of time zones, but it’s much, much closer.

Run the tests with the same flags on gabriel and bagpuss: gabriel takes 19s, bagpuss takes 4.3s. Taking the number of time zones into account (19s / 424 tests is roughly 45ms per test, against 4.3s / 135 tests at roughly 32ms per test), that suggests that netcoreapp1.0 on Linux is about 40% slower than netcoreapp1.0 on Windows for each test. I can investigate that separately, but it does sound like Travis is just using slower hardware than AppVeyor, contrary to my previous suspicions.

Step 8: update Travis script and merge

Even though this one test takes a large proportion of the test time on Linux (where there are three times as many time zones to run through), it’s not a gating factor. Correctness beats speed in this case – I can live with waiting a few minutes for a complete CI test run.

We still have some investigation to do, but we can already make things definitely better by running the release build on Travis. On a new branch – not the one I’ve been investigating on – create a commit that just changes Travis to use -c Release. Observe the total time go down from 133 seconds to 57 seconds. Yay.

Points to note

  • When you’re running performance tests, don’t compare release with debug builds. Doh. This was a rookie mistake.
  • Reproducing a CI problem locally is key: it’s much, much easier to work with local machines than CI. It’s a faster turnaround, and you get easier access to detailed results.
  • Having multiple machines with the same hardware is really useful for performance comparisons.
  • Don’t assume that what sounds like the same set of tests will run the same number of tests on different platforms – if a test is parameterized, check how many tests are actually running in each case.
  • Ad-hoc tools are great – the console app I wrote for steps 3 and 4 was quick to write, and helped zoom in on the problem very simply.

Further work

Almost every diagnostic investigation seems to “end” with something else to look into. In this case, I want to find out why Linux is slower than Windows for the GuessZoneIdByTransitionsUncached test on a per-test basis. Is this a Noda Time difference, or a BCL difference? That can wait.

Diagnostics everywhere!

For a long time, I’ve believed that diagnostic skills are incredibly important for software engineers, and often poorly understood.

The main evidence I see of poor diagnostic skills is on Stack Overflow:

  • “I have a program that does 10 things, and the output isn’t right. Please fix.”
  • “I can’t post a short but complete program, because my code is long.”
  • “I can’t post a short but complete program, because it’s company code.”

I want to tackle this as best I can. If I can move the needle on the average diagnostic skill level, that will probably have more impact on the world than any code I could write. So, this is my new mission, basically.

Expect to see:

  • Blog posts
  • Podcast interviews
  • Screencasts
  • Conference and user group talks

In particular, I’ve now resolved that when I’m facing a diagnostic situation that at least looks like it’ll be interesting, I’m going to start writing a blog post about it immediately. The blog post will be an honest list of the steps I’ve been through, including those that turned out not to be helpful, to act as case studies. In the process, I hope to improve my own skills (by reflecting on whether the blind alleys were avoidable) and provide examples for other developers to follow as well.

There’s a new “Diagnostics” category on my blog – everything will be there.

Using .NET Core 2.0 SDK on Travis

This is just a brief post that I’m hoping may help some people migrate to use .NET Core 2.0 SDK on Travis. TL;DR: see the end of the post for a sample configuration.

Yesterday (August 15th), .NET Core 2.0 was fully released. Wonderfully, Travis already supports it. You just need dotnet: 2.0.0 in your YAML file.

I decided to experiment with upgrading the Noda Time build to require the .NET Core 2.0 SDK. To be clear, I’m not doing anything in the code that requires 2.0, but it simplifies my build scripts.

Additionally, supporting netcoreapp2.0 means I’ll be able to run my benchmarks against that as well, which is going to be very interesting. However, my tests still target netcoreapp1.0, and that’s where I ran into problems.

Having done the bare minimum to try using 2.0 (edit global.json and .travis.yml) I ran into this error:

The specified framework 'Microsoft.NETCore.App', version '1.0.5' was not found.
  - Check application dependencies and target a framework version installed at:
      /
  - Alternatively, install the framework version '1.0.5'.

That makes sense. Although netcoreapp2.0 is compatible with netstandard1.0 (i.e. you can use libraries targeted to netstandard1.0 in a 2.0 environment) an application targeting netcoreapp1.0 really needs a 1.0 runtime.

So, we need to install just the runtime as well. I’d expected this to be potentially painful, but it’s really not. You just need an addons section in the YAML file:

addons:
  apt:
    sources:
    - sourceline: 'deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-trusty-prod trusty main'
      key_url: 'https://packages.microsoft.com/keys/microsoft.asc'
    packages:
    - dotnet-hostfxr-1.0.1
    - dotnet-sharedframework-microsoft.netcore.app-1.0.5

Note that in my case, I want netcoreapp1.0 – if you need netcoreapp1.1, you’d probably install dotnet-sharedframework-microsoft.netcore.app-1.1.2.

So, aside from comments etc, my new Travis configuration will look like this:

language: csharp
mono: none
dotnet: 2.0.0
dist: trusty

addons:
  apt:
    sources:
    - sourceline: 'deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-trusty-prod trusty main'
      key_url: 'https://packages.microsoft.com/keys/microsoft.asc'
    packages:
    - dotnet-hostfxr-1.0.1
    - dotnet-sharedframework-microsoft.netcore.app-1.0.5

script:
  - build/travis.sh

I can now build with the 2.0 SDK, and run tests under both netcoreapp1.0 and netcoreapp2.0.

I’m hoping it’s just as simple on AppVeyor when that supports 2.0 as well…

Upcoming speaking engagements

I’ve got a few speaking engagements coming up that I thought it might be worth publicising a bit further. They’re all within just over a week of each other, which is going to be somewhat tiring, but…

Here they are, in chronological order:

Progressive .NET 2017

Progressive .NET 2017 will be held on September 13th-15th in London.

The agenda is already available, and looks pretty awesome. I’d go into details of who I’m most looking forward to hearing, but it’s too tricky to rank talks with that line-up… Of course, I’m mostly looking forward to catching up with people afterwards. (In particular Carl and Richard, whose talk I won’t be able to attend without leaving my own audience somewhat disappointed.)

This is my most Google-centric talk of the three – it’ll show how you can get going on ASP.NET Core in Google Cloud Platform, with both AppEngine Flexible Environment and Google Container Engine. Basically, I get to show off what my team’s been working on for the last couple of years, which is always fun… although this talk is more about the tools than the libraries I specifically work on.

My colleague Mete Atamel will be approaching a related topic from a different angle – he’ll be going into more detail on the ASP.NET Core + Docker + Kubernetes stack.

.NET Conf

.NET Conf is on September 19th-21st. It’s free and virtual, so there’s basically no barrier to entry here. The precise agenda has yet to be announced, but the speakers shown so far are stellar.

My talk for .NET Conf will be on diagnostics and problem solving: the path from “Something isn’t working” to “Here’s a great, well-researched Stack Overflow question” or “Ah, that’s what was wrong, and I fixed it!” Finding that path is a skill I’m passionate about – expect to hear more from me in all forms over the next few years. (Now that Noda Time is “mostly done” it’s probably going to be my next big goal in personal time.)

Developing a Digital Future

Developing a Digital Future is a one day conference in Moldova, hosted by Amdaris on September 21st.

I’m delighted to be giving the keynote, whose subject will be the name of the conference. What will tech be like in 5, 10, 20 years? How will it change society? How will our jobs as software engineers be different as a result?

In the afternoon, I’ll be doing a round-table event for a subset of attendees – live coding, answering attendee questions etc. Lots of fun!

Imposter Syndrome (part 2)

9 days ago, I posted Imposter Syndrome (part 1) and then immediately listened to Heather Downing’s excellent NDC talk on the topic.

This is the “reflections afterwards” post I’d expected to write (although slightly more delayed than I’d hoped for). I’m not going to try to recap Heather’s talk, because that wouldn’t do justice to it. Please watch the video instead. This is my set of reflections after Heather’s talk – some of it directly responding to what she said, and other parts simply my own meandering thought process.

Responding to praise

This one’s simple. I’ve often responded to praise in ways that effectively negate the opinion of the other person – saying that I’m really not as smart as they think I am, etc. I suspect there’s some cultural influence there; it’s a fairly British thing to do. But I hadn’t considered the flip side: that this doesn’t so much convey humility on my part as suggest wrongness on their part.

So I’m going to start to just say “thank you” as far as I can. I think there will still be times where it would be important to try to correct a potentially harmful impression – if someone explains that they’re trying to win technical arguments by quoting me, for example – but most of the time I’ll try to bite my tongue, and just say thanks… and maybe try to shift the conversation onto what they’re doing. (If someone says they’re inspired by me, that’s great – so what have they been inspired to do? Does this give me an opportunity to encourage them further?)

Success and luck

There are two very slightly different nuances to being “lucky” – at least in the way I think about it. The first is a sort of “undeserved positive effects” aspect. “I’m lucky to be married to such a wonderful person” or “I’m lucky to have a natural aptitude for computing” for example. Things you can’t really control much. The second is a sort of “the same sequence of events could have unfolded very differently” aspect. “I’m lucky to have ended up in a job I love without making a career plan” for example.

I fear I’m not transferring the ideas from brain to screen very clearly, but there are two important points:

Firstly, I don’t want anyone to try to emulate me in areas where I’ve been genuinely lucky. I have no doubt that in other situations (with a different set of colleagues, for example) some of my actions could have led to very different results. I’ve always spent quite a lot of time learning by experimentation and community writing – whether that’s on newsgroups, Stack Overflow or blog posts. Some of this has been done on company time, and every company I’ve worked for has (quietly) acknowledged that it’s been a broadly positive thing – so long as it’s not been too excessive, of course. Other software engineers – particularly those in jobs where every hour has to be accounted for – could see a very different result to the same actions.

On the other hand, I should probably accept the point Heather made that attributing repeated success to luck is foolish. I don’t think I’m lucky to receive upvotes on Stack Overflow: I make a conscious effort to communicate clearly, and that’s something I’ve put a lot of effort into over several years. Some of the further results could be called lucky: if Stack Overflow hadn’t come on the scene, I’m sure I’d still be writing on newsgroups with a vastly smaller potential audience for answers. The more immediate effect of “If I put effort into writing clearly and researching my subject matter, that effort is appreciated by those who read it” isn’t a matter of luck though.

Writing off success as just luck risks undervaluing processes and practices that are genuinely helpful – as well as potentially giving the impression that we won’t appreciate the hard work and diligence of others. (On the other hand, check your privilege before ascribing all your success to your own graft and/or brilliance.)

Dunning-Kruger harms everyone

Finally, the Dunning-Kruger effect is probably worth fighting against in both aspects.

Those who are overestimating their skills are doing themselves a disservice by appearing arrogant or compounding their ignorance by “meta-ignorance” of the scope of the subject matter. Unless they’re trying to represent a larger entity (a consultancy for example) the impact seems fairly localized.

I’m coming round to the idea that those who are underestimating their skills – and doing so publicly – might be discouraging everyone else. If someone I look up to as an expert in a topic were to only rate themselves as “8 out of 10” in knowledge in that topic, that could make me feel worse about my own understanding of the topic. While I suspect it’s hard for anyone in a culture that values humility to rate their knowledge as “9.5 out of 10” for something, I think it’s important that the real experts do so. Yes, they can still be aware of the areas they struggle in – but there must be some way of expressing that while acknowledging their overall expertise.

Beyond simple discouragement, there’s another aspect of underestimating your own prowess that can prove unhelpful, and that’s in terms of explanations. I’ve always found most (not quite all) security experts hard to understand. They’re so deeply immersed in their own domain that they may not appreciate how many assumptions of shared terminology and understanding they need to remove before they can communicate effectively with “lay” people.

I only give the example of security as one where I personally struggle to learn from people who undoubtedly have knowledge I could benefit from. My fear is that I do the same unwittingly when it comes to areas I’m confident in. I tend to make more conscious effort when discussing date/time issues as I’m aware of the common misunderstandings. What about C# though? When I use language specification terminology in blog posts and Stack Overflow answers, what proportion of readers just get lost quickly? I’m not quite sure what to do about this, beyond becoming more conscious of it as a possibility.

Conclusion

This is by no means an end to my thoughts on Imposter Syndrome or related self-evaluation traits, although it may well be my last blog post on it. No impressive final thoughts, no clever tying up of all the strands… this is only a conclusion in the sense that it’s concluding the post. The end.

Imposter syndrome (part 1)

Note: this is a purely personal post. It has no code in. It’s related to the coding side of my world more than the rest of who I am, so it’s in my coding blog, but if you’re looking for code, just move on.

As part of a Twitter exchange, I discovered that Heather Downing (blog, twitter) would be talking about Imposter Syndrome. This is a topic that interests me for reasons I’ll go into below. I figured it would be interesting to jot down some thoughts on it before Heather’s talk, and then again afterwards, comparing my ideas with hers. As such, I expect to publish this post pretty much as I’m sitting down for the talk, for maximum independence. (Ed: it’s now somewhat rushed. Back when I started it on Tuesday, it seemed like I had loads of time. It’s now Friday morning and I’m desperately trying to get it into some kind of coherent state in time to post…)

There are two ways I could write this post: one very abstract, about “people in general”, and one very concrete, about myself. The first approach would probably end in platitudes and ignorance – the second could well feel like a mixture of egocentricity, arrogance and humble-bragging. I’m going for the second approach anyway, so if you suspect you’ll get annoyed by reading about my thoughts about myself, I suggest moving along. (Read Heather’s blog, for example.)

Aspects of Imposter Syndrome

I think about Imposter Syndrome in three different ways. For some people they may be very similar, but in my case there are pretty radical differences. (For some reason I tend to be a corner case in all kinds of ways. Basically, I’m awkward.)

  • What do people say (and think) about your skills?
  • What skills are expected or required for what you do? (e.g. the job you’re in, success in the community, speaking etc)
  • What do you say about your skills?

I think of Imposter Syndrome as believing that your true set of skills or abilities is lower than the evaluations listed above. It’s possible that the third bullet really doesn’t belong there, but it’s sufficiently closely related that I want to talk about it anyway.

What do people say (and think) about my coding ability?

The Jon Skeet facts page is the first thing that comes to mind, followed by the Toggl “Rescue the Princess” comic. While both of those are clearly meant to be comedy rather than taken seriously, I suspect some of the hyperbole has rubbed off.

I get attention at conferences and on Twitter as if I really showed exceptional coding ability. There’s an assumption that I really can answer anything. People talk about being inspired by me. People still show up to my talks. People ask how I “get so much done” – when I see plenty of people achieving much more than I do. (I slump in front of the TV at night with Holly far more than the question would suggest…)

What skills are expected of me?

Back in 2012, I talked with Scott Hanselman about Imposter Syndrome and “being a phony”. Back then, I still felt like an imposter at Google – and knew that plenty of my colleagues felt the same way.

In my job, I’m expected to be a proficient coder and leader in the area that I’m working on. I was briefly a manager too, but I’m not any more – so my role is fairly purely technical… but that still includes so-called “soft skills” in terms of communication and persuasion. (I hate the term “soft skills” as it implies those skills are less important or difficult. They’re critical, and sadly underdeveloped!)

In the community, I’m expected to be prolific and accurate online, and interesting/engaging in person, particularly while presenting.

What do I say and think about myself?

I try to make the “say” and “think” match. For some definitions of Imposter Syndrome, I don’t think I actually suffer from it at all. In particular:

  • The hyperbole is clearly incorrect. It’s not just fake humility that suggests I’m not really the world’s top programmer… the idea that I could possibly believe that is laughable.
  • These days I’m pretty comfortable with what I do at work. I work hard, I’m working in an area where I feel I have expertise (C# API design) and I get things done. The work I do doesn’t involve the same degree of computer science brilliance as designing Spanner or implementing a self-driving car, but it’s far from trivial.
  • There are things I’ve done that I’m genuinely proud of beyond my day job – in particular, Noda Time and C# in Depth. I take pride in my Stack Overflow answers too, but they’re slightly different in a way that’s hard to explain. I’m certainly pleased that they’re helpful.
  • I’m confident in my boundaries: I know that I know C# very well and Java pretty well. I know that I have more awareness of date/time issues than the vast majority of developers. I know that I can express ideas clearly, and that that’s important. I’m also well aware of my limitations: if you see any code I write outside Java and C# (e.g. Bash, Python, JavaScript) then it’s horrible, and I make no claims otherwise.

Talking about being an “imposter” or “phony” suggests making a claim to competence which is untrue. I don’t think that’s the case here – and that applies to the vast majority of other “famous” developers I know. They’re generally well aware of their limitations too, and their presentations are always about the technology rather than about themselves. There are exceptions to this, and I know my “Abusing C#” talk has sometimes been seen as a self-promotion vehicle instead of the gleeful exploration of C# corner cases it’s intended to be… but in general, I haven’t interacted with many big egos in the tech space. (This may be a matter of the conferences I’ve chosen to go to. I’m aware there are plenty of big-ego jerks around, but I haven’t spoken with many of them…)

Conclusion

I still believe there is a disconnect between even people’s genuine expectations (as opposed to the hyperbole) and the reality of my competence, even though I don’t cultivate those expectations. As a mark of this, I believe my talks are more popular in anticipation than in experience – it’s often a full house, but in the green/yellow/red appraisal afterwards there’s usually a bunch of yellows and even some reds.

Obviously the disconnect gives an ego boost which I try to dampen, but it has genuinely positive aspects too: one of the things people say to or about me is that I inspire them. That’s fantastic. It really doesn’t matter whether they’re buying into a myth: if something they see in me inspires them to “do better” (whatever that may mean for them) then that’s a net benefit to the world, right?

I’m going to keep making it perfectly clear to people that a lot of what is said about me is massively overblown, while keeping confidence in myself as a really pretty decent developer. Am I over-recognized/over-hyped? Yes. Am I an imposter? I don’t think so.

Postscript

Since finishing the above conclusion, I’ve just watched Felienne’s talk on “Programming is writing is programming” which was the best talk I’ve seen at any conference. Now I feel like an imposter…

Surprise! Creating an instance of an open generic type

This is a brief post documenting a very weird thing I partly came up with on Stack Overflow today.

The context is this question. But to skip to the shock, we end up with code like this:

object x = GetWeirdValue();
// This line prints True. Be afraid - be very afraid!
Console.WriteLine(x.GetType().GetTypeInfo().IsGenericTypeDefinition);

That just shouldn’t happen. You shouldn’t be able to create an instance of an open type – a type that still contains generic type parameters. What does a List<T> (rather than a List<string> or List<int>) mean? It’s like creating an instance of an abstract class.

Before today, I’d have expected it to be impossible – the CLR should just not allow such an object to exist. I now know one – and only one – way to do it. While you can’t get normal field values for an open generic type, you can get constants… after all, they’re constant values, right? That’s fine for most constants, because those can’t be generic types – int, string etc. The only type of constant with a user-defined type is an enum. Enums themselves aren’t generic, of course… but what if it’s nested inside another generic type, like this:

class Generic<T>
{
    enum GenericEnum
    {
        Foo = 0
    }
}

Now Generic<>.GenericEnum is an open type, because it’s nested in an open type. Using Enum.GetValues(typeof(Generic<>.GenericEnum)) fails in the expected way: the CLR complains that it can’t create instances of the open type. But if you use reflection to get at the constant field representing Foo, the CLR magically converts the underlying integer (which is what’s in the IL of course) into an instance of the open type.

Here’s the complete code:

using System;
using System.Reflection;

class Program
{
    static void Main(string[] args)
    {
        object x = GetWeirdValue();
        // This line prints True
        Console.WriteLine(x.GetType().GetTypeInfo().IsGenericTypeDefinition);
    }

    static object GetWeirdValue() =>
        typeof(Generic<>.GenericEnum).GetTypeInfo()
            .GetDeclaredField("Foo")
            .GetValue(null);

    class Generic<T>
    {
        public enum GenericEnum
        {
            Foo = 0
        }
    }
}

… and the corresponding project file, to prove it works for both the desktop and .NET Core…

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFrameworks>netcoreapp1.0;net45</TargetFrameworks>
  </PropertyGroup>

</Project>

Use this at your peril. I expect that many bits of code dealing with reflection would be surprised if they were provided with a value like this…

It turns out I’m not the first one to spot this. (That would be pretty unlikely, admittedly.) Kirill Osenkov blogged two other ways of doing this, discovered by Vladimir Reshetnikov, back in 2014.