All posts by jonskeet

Mad props to @arcaderage for the "Princess Rescue" image - see https://toggl.com/programming-princess for the full original

Imposter Syndrome (part 2)

9 days ago, I posted Imposter Syndrome (part 1) and then immediately listened to Heather Downing's excellent NDC talk on the topic.

This is the "reflections afterwards" post I'd expected to write (although slightly more delayed than I'd hoped for). I'm not going to try to recap Heather's talk, because that wouldn't do justice to it. Hopefully the Vimeo NDC channel will have a video of it at some point – I imagine the NDC folks are very busy in terms of editing, format conversion etc. These are my reflections after Heather's talk – some directly responding to what she said, others simply my own meandering thought process.

Responding to praise

This one's simple. I've often responded to praise in ways that effectively negate the opinion of the other person – saying that I'm really not as smart as they think I am, etc. I suspect there's some cultural influence there; it's a fairly British thing to do. But I hadn't considered the flip side: that this doesn't so much convey humility on my part as suggest wrongness on their part.

So I’m going to start to just say “thank you” as far as I can. I think there will still be times where it would be important to try to correct a potentially harmful impression – if someone explains that they’re trying to win technical arguments by quoting me, for example – but most of the time I’ll try to bite my tongue, and just say thanks… and maybe try to shift the conversation onto what they’re doing. (If someone says they’re inspired by me, that’s great – so what have they been inspired to do? Does this give me an opportunity to encourage them further?)

Success and luck

There are two very slightly different nuances to being “lucky” – at least in the way I think about it. The first is a sort of “undeserved positive effects” aspect. “I’m lucky to be married to such a wonderful person” or “I’m lucky to have a natural aptitude for computing” for example. Things you can’t really control much. The second is a sort of “the same sequence of events could have unfolded very differently” aspect. “I’m lucky to have ended up in a job I love without making a career plan” for example.

I fear I’m not transferring the ideas from brain to screen very clearly, but there are two important points:

Firstly, I don't want anyone to try to emulate me in areas where I've been genuinely lucky. I have no doubt that in other situations (with a different set of colleagues, for example) some of my actions could have led to very different results. I've always spent quite a lot of time learning by experimentation and community writing – whether that's on newsgroups, Stack Overflow or blog posts. Some of this has been done on company time, and every company I've worked for has (quietly) acknowledged that it's been a broadly positive thing – so long as it hasn't been excessive, of course. Other software engineers – particularly those in jobs where every hour has to be accounted for – could see a very different result from the same actions.

On the other hand, I should probably accept the point Heather made that attributing repeated success to luck is foolish. I don’t think I’m lucky to receive upvotes on Stack Overflow: I make a conscious effort to communicate clearly, and that’s something I’ve put a lot of effort into over several years. Some of the further results could be called lucky: if Stack Overflow hadn’t come on the scene, I’m sure I’d still be writing on newsgroups with a vastly smaller potential audience for answers. The more immediate effect of “If I put effort into writing clearly and researching my subject matter, that effort is appreciated by those who read it” isn’t a matter of luck though.

Writing off success as just luck risks undervaluing processes and practices that are genuinely helpful – as well as potentially giving the impression that we won’t appreciate the hard work and diligence of others. (On the other hand, check your privilege before ascribing all your success to your own graft and/or brilliance.)

Dunning-Kruger harms everyone

Finally, the Dunning-Kruger effect is probably worth fighting against in both aspects.

Those who are overestimating their skills are doing themselves a disservice by appearing arrogant, or compounding their ignorance with "meta-ignorance" of the scope of the subject matter. Unless they're trying to represent a larger entity (a consultancy, for example), the impact seems fairly localized.

I’m coming round to the idea that those who are underestimating their skills – and doing so publicly – might be discouraging everyone else. If someone I look up to as an expert in a topic were to only rate themselves as “8 out of 10” in knowledge in that topic, that could make me feel worse about my own understanding of the topic. While I suspect it’s hard for anyone in a culture that values humility to rate their knowledge as “9.5 out of 10” for something, I think it’s important that the real experts do so. Yes, they can still be aware of the areas they struggle in – but there must be some way of expressing that while acknowledging their overall expertise.

Beyond simple discouragement, there’s another aspect of underestimating your own prowess that can prove unhelpful, and that’s in terms of explanations. I’ve always found most (not quite all) security experts hard to understand. They’re so deeply immersed in their own domain that they may not appreciate how many assumptions of shared terminology and understanding they need to remove before they can communicate effectively with “lay” people.

I only give the example of security as one where I personally struggle to learn from people who undoubtedly have knowledge I could benefit from. My fear is that I do the same unwittingly when it comes to areas I’m confident in. I tend to make more conscious effort when discussing date/time issues as I’m aware of the common misunderstandings. What about C# though? When I use language specification terminology in blog posts and Stack Overflow answers, what proportion of readers just get lost quickly? I’m not quite sure what to do about this, beyond becoming more conscious of it as a possibility.

Conclusion

This is by no means an end to my thoughts on Imposter Syndrome or related self-evaluation traits, although it may well be my last blog post on it. No impressive final thoughts, no clever tying up of all the strands… this is only a conclusion in the sense that it’s concluding the post. The end.

Imposter syndrome (part 1)

Note: this is a purely personal post. It has no code in. It’s related to the coding side of my world more than the rest of who I am, so it’s in my coding blog, but if you’re looking for code, just move on.

As part of a Twitter exchange, I discovered that Heather Downing (blog, twitter) would be talking about Imposter Syndrome. This is a topic that interests me for reasons I'll go into below. I figured it would be interesting to jot down some thoughts on it before Heather's talk, and then again afterwards, comparing my ideas with hers. As such, I expect to publish this post pretty much as I'm sitting down for the talk, for maximum independence. (Ed: it's ended up somewhat rushed. Back when I started it on Tuesday, it seemed like I had loads of time. It's now Friday morning and I'm desperately trying to get it into some kind of coherent state in time to post…)

There are two ways I could write this post: one very abstract, about “people in general”, and one very concrete, about myself. The first approach would probably end in platitudes and ignorance – the second could well feel like a mixture of egocentricity, arrogance and humble-bragging. I’m going for the second approach anyway, so if you suspect you’ll get annoyed by reading about my thoughts about myself, I suggest moving along. (Read Heather’s blog, for example.)

Aspects of Imposter Syndrome

I think about Imposter Syndrome in three different ways. For some people they may be very similar, but in my case there are pretty radical differences. (For some reason I tend to be a corner case in all kinds of ways. Basically, I’m awkward.)

  • What do people say (and think) about your skills?
  • What skills are expected or required for what you do? (e.g. the job you’re in, success in the community, speaking etc)
  • What do you say about your skills?

I think of Imposter Syndrome as believing that your true set of skills or abilities is lower than the evaluations listed above. It’s possible that the third bullet really doesn’t belong there, but it’s sufficiently closely related that I want to talk about it anyway.

What do people say (and think) about my coding ability?

The Jon Skeet facts page is the first thing that comes to mind, followed by the Toggl “Rescue the Princess” comic. While both of those are clearly meant to be comedy rather than taken seriously, I suspect some of the hyperbole has rubbed off.

I get attention at conferences and on Twitter as if I really showed exceptional coding ability. There’s an assumption that I really can answer anything. People talk about being inspired by me. People still show up to my talks. People ask how I “get so much done” – when I see plenty of people achieving much more than I do. (I slump in front of the TV at night with Holly far more than the question would suggest…)

What skills are expected of me?

Back in 2012, I talked with Scott Hanselman about Imposter Syndrome and “being a phony”. Back then, I still felt like an imposter at Google – and knew that plenty of my colleagues felt the same way.

In my job, I’m expected to be a proficient coder and leader in the area that I’m working on. I was briefly a manager too, but I’m not any more – so my role is fairly purely technical… but that still includes so-called “soft skills” in terms of communication and persuasion. (I hate the term “soft skills” as it implies those skills are less important or difficult. They’re critical, and sadly underdeveloped!)

In the community, I’m expected to be prolific and accurate online, and interesting/engaging in person, particularly while presenting.

What do I say and think about myself?

I try to make the “say” and “think” match. For some definitions of Imposter Syndrome, I don’t think I actually suffer from it at all. In particular:

  • The hyperbole is clearly incorrect. It’s not just fake humility that suggests I’m not really the world’s top programmer… the idea that I could possibly believe that is laughable.
  • These days I’m pretty comfortable with what I do at work. I work hard, I’m working in an area where I feel I have expertise (C# API design) and I get things done. The work I do doesn’t involve the same degree of computer science brilliance as designing Spanner or implementing a self-driving car, but it’s far from trivial.
  • There are things I've done that I'm genuinely proud of beyond my day job – in particular, Noda Time and C# in Depth. I take pride in my Stack Overflow answers too, but they're slightly different in a way that's hard to explain. I'm certainly pleased that they're helpful.
  • I’m confident in my boundaries: I know that I know C# very well and Java pretty well. I know that I have more awareness of date/time issues than the vast majority of developers. I know that I can express ideas clearly, and that that’s important. I’m also well aware of my limitations: if you see any code I write outside Java and C# (e.g. Bash, Python, Javascript) then it’s horrible, and I make no claims otherwise.

Talking about being an “imposter” or “phony” suggests making a claim to competence which is untrue. I don’t think that’s the case here – and that applies to the vast majority of other “famous” developers I know. They’re generally well aware of their limitations too, and their presentations are always about the technology rather than about themselves. There are exceptions to this, and I know my “Abusing C#” talk has sometimes been seen as a self-promotion vehicle instead of the gleeful exploration of C# corner cases it’s intended to be… but in general, I haven’t interacted with many big egos in the tech space. (This may be a matter of the conferences I’ve chosen to go to. I’m aware there are plenty of big-ego jerks around, but I haven’t spoken with many of them…)

Conclusion

I still believe there is a disconnect between even people’s genuine expectations (as opposed to the hyperbole) and the reality of my competence, even though I don’t cultivate those expectations. As a mark of this, I believe my talks are more popular in anticipation than in experience – it’s often a full house, but in the green/yellow/red appraisal afterwards there’s usually a bunch of yellows and even some reds.

Obviously the disconnect gives an ego boost which I try to dampen, but it has genuinely positive aspects too: one of the things people say to or about me is that I inspire them. That’s fantastic. It really doesn’t matter whether they’re buying into a myth: if something they see in me inspires them to “do better” (whatever that may mean for them) then that’s a net benefit to the world, right?

I’m going to keep making it perfectly clear to people that a lot of what is said about me is massively overblown, while keeping confidence in myself as a really pretty decent developer. Am I over-recognized/over-hyped? Yes. Am I an imposter? I don’t think so.

Postscript

Since finishing the above conclusion, I've just watched Felienne's talk on "Programming is writing is programming" which was the best talk I've seen at any conference. Now I feel like an imposter…

Surprise! Creating an instance of an open generic type

This is a brief post documenting a very weird thing I partly came up with on Stack Overflow today.

The context is this question. But to skip to the shock, we end up with code like this:

object x = GetWeirdValue();
// This line prints True. Be afraid - be very afraid!
Console.WriteLine(x.GetType().GetTypeInfo().IsGenericTypeDefinition);

That just shouldn’t happen. You shouldn’t be able to create an instance of an open type – a type that still contains generic type parameters. What does a List<T> (rather than a List<string> or List<int>) mean? It’s like creating an instance of an abstract class.

Before today, I’d have expected it to be impossible – the CLR should just not allow such an object to exist. I now know one – and only one – way to do it. While you can’t get normal field values for an open generic type, you can get constants… after all, they’re constant values, right? That’s fine for most constants, because those can’t be generic types – int, string etc. The only type of constant with a user-defined type is an enum. Enums themselves aren’t generic, of course… but what if it’s nested inside another generic type, like this:

class Generic<T>
{
    enum GenericEnum
    {
        Foo = 0
    }
}

Now Generic<>.GenericEnum is an open type, because it's nested in an open type. Using Enum.GetValues(typeof(Generic<>.GenericEnum)) fails in the expected way: the CLR complains that it can't create instances of the open type. But if you use reflection to get at the constant field representing Foo, the CLR magically converts the underlying integer (which is what's in the IL of course) into an instance of the open type.
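
As a quick check of that failing path, something like the sketch below shows the "expected" behaviour. (This is my own illustration rather than code from the question; the exact exception type and message may vary by runtime, so it just prints whatever is thrown.)

using System;

class OpenEnumFailureDemo
{
    class Generic<T>
    {
        public enum GenericEnum
        {
            Foo = 0
        }
    }

    static void Main()
    {
        try
        {
            // The CLR refuses to materialize values of the open enum type this way.
            Array values = Enum.GetValues(typeof(Generic<>.GenericEnum));
            Console.WriteLine(values.Length);
        }
        catch (Exception e)
        {
            Console.WriteLine($"{e.GetType().Name}: {e.Message}");
        }
    }
}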

Here’s the complete code:

using System;
using System.Reflection;

class Program
{
    static void Main(string[] args)
    {
        object x = GetWeirdValue();
        // This line prints True
        Console.WriteLine(x.GetType().GetTypeInfo().IsGenericTypeDefinition);
    }

    static object GetWeirdValue() =>
        typeof(Generic<>.GenericEnum).GetTypeInfo()
            .GetDeclaredField("Foo")
            .GetValue(null);

    class Generic<T>
    {
        public enum GenericEnum
        {
            Foo = 0
        }
    }
}

… and the corresponding project file, to prove it works for both the desktop and .NET Core…

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFrameworks>netcoreapp1.0;net45</TargetFrameworks>
  </PropertyGroup>

</Project>

Use this at your peril. I expect that many bits of code dealing with reflection would be surprised if they were provided with a value like this…

It turns out I’m not the first one to spot this. (That would be pretty unlikely, admittedly.) Kirill Osenkov blogged two other ways of doing this, discovered by Vladimir Reshetnikov, back in 2014.

All about java.util.Date

This post is an attempt to reduce the number of times I need to explain things in Stack Overflow comments. You may well be reading it via a link from Stack Overflow – I intend to refer to this post frequently in comments. Note that this post is mostly not about text handling – see my post on common mistakes in date/time formatting and parsing for more details on that.

There are few classes which cause so many similar questions on Stack Overflow as java.util.Date. There are four causes for this:

  • Date and time work is fundamentally quite complicated and full of corner cases. It’s manageable, but you do need to put some time into understanding it.
  • The java.util.Date class is awful in many ways (details given below).
  • It’s poorly understood by developers in general.
  • It’s been badly abused by library authors, adding further to the confusion.

TL;DR: java.util.Date in a nutshell

The most important things to know about java.util.Date are:

  • You should avoid it if you possibly can. Use java.time.* if possible, or the ThreeTen-Backport (java.time for older versions, basically) or Joda Time if you're not on Java 8 yet. (There's a short conversion sketch just after this list.)
    • If you’re forced to use it, avoid the deprecated members. Most of them have been deprecated for nearly 20 years, and for good reason.
    • If you really, really feel you have to use the deprecated members, make sure you really understand them.
  • A Date instance represents an instant in time, not a date. Importantly, that means:
    • It doesn’t have a time zone.
    • It doesn’t have a format.
    • It doesn’t have a calendar system.
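
As a small aside on the first bullet above: if you're on Java 8 and a legacy API hands you a Date, it's usually easiest to convert at the boundary and keep the rest of your code in java.time. A minimal sketch (the class name is just for illustration):

import java.time.Instant;
import java.util.Date;

public class DateBoundaryDemo {
    public static void main(String[] args) {
        Date legacy = new Date();

        // Date -> Instant: no time zone involved; it's the same instant in time.
        Instant instant = legacy.toInstant();
        System.out.println(instant);

        // Instant -> Date, for when you're forced to call an older API.
        Date backAgain = Date.from(instant);
        System.out.println(backAgain.getTime() == legacy.getTime()); // true
    }
}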

Now, onto the details…

What’s wrong with java.util.Date?

java.util.Date (just Date from now on) is a terrible type, which explains why so much of it was deprecated in Java 1.1 (but is still being used, unfortunately).

Design flaws include:

  • Its name is misleading: it doesn’t represent a Date, it represents an instant in time. So it should be called Instant – as its java.time equivalent is.
  • It's non-final: that encourages poor uses of inheritance such as java.sql.Date (which is meant to represent a date, and is also confusing due to having the same short name).
  • It’s mutable: date/time types are natural values which are usefully modeled by immutable types. The fact that Date is mutable (e.g. via the setTime method) means diligent developers end up creating defensive copies all over the place.
  • It implicitly uses the system-local time zone in many places – including toString() – which confuses many developers. More on this in the "What's an 'instant in time'?" section below.
  • Its month numbering is 0-based, copied from C. This has led to many, many off-by-one errors.
  • Its year numbering is 1900-based, also copied from C. Surely by the time Java came out we had an idea that this was bad for readability?
  • Its methods are unclearly named: getDate() returns the day-of-month, and getDay() returns the day-of-week. How hard would it have been to give those more descriptive names?
  • It's ambiguous about whether or not it supports leap seconds: "A second is represented by an integer from 0 to 61; the values 60 and 61 occur only for leap seconds and even then only in Java implementations that actually track leap seconds correctly." I strongly suspect that most developers (including myself) have made plenty of assumptions that the value returned by getSeconds() is in the range 0-59 inclusive.
  • It’s lenient for no obvious reason: “In all cases, arguments given to methods for these purposes need not fall within the indicated ranges; for example, a date may be specified as January 32 and is interpreted as meaning February 1.” How often is that useful?
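
To make the month numbering, the year numbering and the leniency concrete, here's a tiny sketch using one of the deprecated constructors. (Please don't write code like this for real; it's purely illustrative.)

import java.util.Date;

public class DeprecatedDateDemo {
    @SuppressWarnings("deprecation")
    public static void main(String[] args) {
        // Intended as "January 32nd 2017": the year is 2017 - 1900, and month 0 is January.
        Date date = new Date(117, 0, 32);

        // Leniency silently rolls this over to February 1st 2017,
        // displayed in the system default time zone.
        System.out.println(date);
    }
}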

I could find more problems, but I'd be getting increasingly picky. That's a plentiful list to be going on with. On the plus side:

  • It unambiguously represents a single value: an instant in time, with no associated calendar system, time zone or text format, to a precision of milliseconds.

Unfortunately even this one “good aspect” is poorly understood by developers. Let’s unpack it…

What’s an “instant in time”?

Note: I’m ignoring relativity and leap seconds for the whole of the rest of this post. They’re very important to some people, but for most readers they would just introduce more confusion.

When I talk about an “instant” I’m talking about the sort of concept that could be used to identify when something happened. (It could be in the future, but it’s easiest to think about in terms of a past occurrence.) It’s independent of time zone and calendar system, so multiple people using their “local” time representations could talk about it in different ways.

Let’s use a very concrete example of something that happened somewhere that doesn’t use any time zones we’re familiar with: Neil Armstrong walking on the moon. The moon walk started at a particular instant in time – if multiple people from around the world were watching at the same time, they’d all (pretty much) say “I can see it happening now” simultaneously.

If you were watching from mission control in Houston, you might have thought of that instant as "July 20th 1969, 9:56:20 pm CDT". If you were watching from London, you might have thought of that instant as "July 21st 1969, 3:56:20 am BST". If you were watching from Riyadh, you might have thought of that instant as "Jumādá 7th 1389, 5:56:20 am (+03)" (using the Umm al-Qura calendar). Even though different observers would see different times on their clocks – and even different years – they would still be considering the same instant. They'd just be applying different time zones and calendar systems to convert from the instant into a more human-centric concept.

So how do computers represent instants? They typically store an amount of time before or after a particular instant which is effectively an origin. Many systems use the Unix epoch, which is the instant represented in the Gregorian calendar in UTC as midnight at the start of January 1st 1970. That doesn’t mean the epoch is inherently “in” UTC – the Unix epoch could equally well be defined as “the instant at which it was 7pm on December 31st 1969 in New York”.

The Date class uses “milliseconds since the Unix epoch” – that’s the value returned by getTime(), and set by either the Date(long) constructor or the setTime() method. As the moon walk occurred before the Unix epoch, the value is negative: it’s actually -14159020000.

To demonstrate how Date interacts with the system time zone, let’s show the three time zones mentioned before – Houston (America/Chicago), London (Europe/London) and Riyadh (Asia/Riyadh). It doesn’t matter what the system time zone is when we construct the date from its epoch-millis value – that doesn’t depend on the local time zone at all. But if we use Date.toString(), that converts to the current default time zone to display the result. Changing the default time zone does not change the Date value at all. The internal state of the object is exactly the same. It still represents the same instant, but methods like toString(), getMonth() and getDate() will be affected. Here’s sample code to show that:

import java.util.Date;
import java.util.TimeZone;

public class Test {

    public static void main(String[] args) {
        // The default time zone makes no difference when constructing
        // a Date from a milliseconds-since-Unix-epoch value
        Date date = new Date(-14159020000L);

        // Display the instant in three different time zones
        TimeZone.setDefault(TimeZone.getTimeZone("America/Chicago"));
        System.out.println(date);

        TimeZone.setDefault(TimeZone.getTimeZone("Europe/London"));
        System.out.println(date);

        TimeZone.setDefault(TimeZone.getTimeZone("Asia/Riyadh"));
        System.out.println(date);

        // Prove that the instant hasn't changed...
        System.out.println(date.getTime());
    }
}

The output is as follows:

Sun Jul 20 21:56:20 CDT 1969
Mon Jul 21 03:56:20 GMT 1969
Mon Jul 21 05:56:20 AST 1969
-14159020000

The “GMT” and “AST” abbreviations in the output here are highly unfortunate – java.util.TimeZone doesn’t have the right names for pre-1970 values in all cases. The times are right though.

Common questions

How do I convert a Date to a different time zone?

You don’t – because a Date doesn’t have a time zone. It’s an instant in time. Don’t be fooled by the output of toString(). That’s showing you the instant in the default time zone. It’s not part of the value.

If your code takes a Date as an input, any conversion from a “local time and time zone” to an instant has already occurred. (Hopefully it was done correctly…)

If you start writing a method with a signature like this, you’re not helping yourself:

// A method like this is always wrong
Date convertTimeZone(Date input, TimeZone fromZone, TimeZone toZone)

How do I convert a Date to a different format?

You don’t – because a Date doesn’t have a format. Don’t be fooled by the output of toString(). That always uses the same format, as described by the documentation.

To format a Date in a particular way, use a suitable DateFormat (potentially a SimpleDateFormat) – remembering to set the time zone to the appropriate zone for your use.
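
For example, something like this formats a single Date for a specific zone (the pattern, locale and zone here are just illustrative):

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class FormatDemo {
    public static void main(String[] args) {
        Date date = new Date(-14159020000L);

        SimpleDateFormat format =
            new SimpleDateFormat("yyyy-MM-dd HH:mm:ss zzz", Locale.US);
        // The format and the time zone live in the formatter, not in the Date.
        format.setTimeZone(TimeZone.getTimeZone("Europe/London"));

        System.out.println(format.format(date));
    }
}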

Diversity and speaking engagements

Background

I'm in the privileged position of receiving more invitations to speak (at conferences, user groups and podcasts) than I can realistically agree to. I've decided to start applying some new criteria to how I pick which ones I go to.1

However, over the last couple of years as feminism has become an increasingly important part of my life I’ve found myself saddened by the lack of diversity at conferences, both in terms of speakers and attendees. It’s not uncommon for me to spend the first couple of minutes of a conference talk commenting on this, and asking the audience (broadly white men) to think about what they can do to improve this, understanding that it’s our problem to fix. I don’t know whether that’s had any impact, but I’m likely to keep doing it anyway. (Drip, drip, drip.)

I should point out that some conferences do pretty well. When I was invited to speak at NorDevCon for the second time, a large part of why I accepted was because of the diversity of both speakers and attendees. (It varies by year, of course.) When I recently spoke at Web Summit the attendee gender diversity was the best I’ve ever seen – along with a Women in Tech lounge that was certainly busy.

Anyway, to do my part in encouraging diversity, from now on when I’m invited to speak, I’m going to refer the organizers to this post.

My requirements for speaking engagements

  • Conferences must have a published Code of Conduct, including incident resolution steps. Where possible, this should be highlighted in opening remarks (typically before the keynote). It’s important that all speakers and attendees feel both safe and welcome – and members of under-represented groups are the most likely not to feel safe and welcome.
  • Organizers must take active steps to encourage speaker diversity. One common challenge to diversity initiatives is that they mean compromising on quality, but I disagree with the assumption behind the challenge. There are many high-quality presenters who are women, but it may mean making more effort to find them. (It’s all too easy to rely on the “regulars” in the tech speaking circles.) If an organizer publishes how they’re trying to encourage diversity, that’s definitely a bonus. I’d at least expect organizers to keep track of how they’re doing over time, and be willing to privately share how they’re trying to improve. It’s hard to give concrete limits here as I may need to make a decision before the rest of the speaker list is decided, but any time I find myself at a conference where 25% or less of the speakers are non-white-men, I’ll be vocally disappointed. Over time, I expect this number to get higher.
  • Ideally, publishing data on attendee diversity over time, with a public plan for improvements. This may not always be possible historically, as the data may not have been captured – but I doubt that it’s very hard to add it to future registration processes. (I’d encourage organizers to think beyond binary gender identification when adding this, too.)
  • I won’t personally speak in any white-male-only panels of three people or more. Ideally, I’d like to see efforts for there not to be any such panels.

If conferences and user groups don’t want to make any efforts to improve diversity, that’s their choice – but I hope that they’ll find it increasingly difficult to attract good speakers, and I’m going to be a tiny part of that scarcity.

How I’m happy to help organizers

On a positive side, I’m happy to:

  • Try to help organizers find diverse speakers. I don’t currently have much in the way of a contact list on this front yet, but that’s something for me to try to improve.
  • Help potential speakers tune their abstracts or presentations in private. I know that presenting for the first time can be daunting, particularly if you feel under-represented within the industry to start with. I don't have any experience of this sort of coaching, but if I can be helpful at all, I'll do my best.
  • Co-present with someone who might otherwise worry that they wouldn’t get much attendance, etc. In particular, I’d be very happy to be an on-stage guinea-pig, learning from another presenter in a field I’m not familiar with, and asking questions along the way in an active tutorial style. (I’d expect any partnership like this to be primarily about highlighting the other speaker’s knowledge – it mustn’t be tokenism just to get them on stage while I waffle about C# yet again. That would propagate negative stereotypes.)
  • Be very vocal about positive experiences in diversity.

Diversity matters. It’s good business and it’s important ethically. Improving the diversity of events is only a small part of improving the industry, and I’d encourage all readers to think about what they can do elsewhere in their own place of work or study.

Further reading:

For conference organizers:

For new speakers:


1 Previously, my criteria have been very loosely based on:

  • Preferring events where I won’t need to stay overnight
  • Preferring events where there are other talks I’ll be interested in
  • Preferring community over commercial organizers
  • Preferring events where the focus actually seems to intersect with my area of dubious expertise. (I’m unlikely to speak at any Agile, Testing or DevOps conferences – while I can appreciate them, that’s not my area.)
  • How many other things I have going on at the time

I’m expecting this post to change over time. I don’t generally like revisionism, but I want this post to stay “live” and relevant for as long as possible. As a compromise, here’s a revision history.

  • 2016-12-10: Initial post
  • 2016-12-16: Updated structure for clarity, fixed MVDP expansion (oops), rewording around not lowering quality

Tracking down a performance hit

I’ve been following the progress of .NET Core with a lot of interest, and trying to make the Noda Time master branch keep up with it. The aim is that when Noda Time 2.0 eventually ships (apologies for the delays…) it will be compatible with .NET Core from the start. (I’d expected to be able to support netstandard1.0, but that appears to have too much missing from it. It looks like netstandard1.3 will be the actual target.)

I’ve been particularly looking forward to being able to run the Noda Time benchmarks (now using BenchmarkDotNet) to compare .NET Core on Linux with the same code on Windows. In order to make that a fair comparison, I now have two Intel NUCs, both sporting an i5-5250U and 8GB of memory.

As it happens, I haven’t got as far as running the benchmarks under .NET Core – but I am now able to run all the unit tests on both Linux and Windows, using both the net451 TFM and netcoreapp1.0.

When I did that recently, I was pretty shocked to see that (depending on which tests I ran) the tests were 6-10 times slower on Linux than on Windows, using netcoreapp1.0 in both cases. This post is a brief log of what I did to track down the problem.

Step 1: Check that there’s really a problem

Thought: Is this actually just a matter of not running the tests in a release configuration, or something similar?

Verification: I ran the tests several times, specifying -c Release on the command line to use the release build of both NodaTime.Test.dll and NodaTime.dll. Running under a debugger definitely wasn’t an issue, as this was all just done from the shell.

Additionally, I ran the tests in two ways – firstly, running the whole test suite, and secondly running with --where=cat!=Slow to avoid the few tests I’ve got which are known to be really pretty slow. They’re typically tests which compare the answers the BCL gives with the answers Noda Time gives, across the whole of history for a particular calendar system or time zone. I’m pleased to report that the bottleneck in these tests is almost always the BCL, but that doesn’t help to speed them up. If only the “slow” tests had been much slower on Linux, that might have pointed to the problems being in BCL calendar or time zone code.

The ratios vary, but there was enough of a problem under both circumstances for it to be worth looking further.

Step 2: Find a problematic test

I didn’t have very strong expectations one way or another about whether this would come down to some general problem in the JIT on Linux, or whether there might be one piece of code causing problems in some tests but not others. Knowing that there are significant differences in handling of some culture and time zone code between the Linux and Windows implementations, I wanted to find a test which used the BCL as little as possible – but which was also slow enough for the differences in timing to be pronounced and not easily explicable by the problems of measuring small amounts of time.

Fortunately, NUnit produces a TestResult.xml file which is easy to parse with LINQ to XML, so I could easily transform the results from Windows and Linux into a list of tests, ordered by duration (descending), and spot the right kind of test.
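
For reference, the query involved is only a few lines. Here's a sketch, assuming the NUnit 3 result format in which each test-case element carries fullname and duration attributes (adjust the names if your results file differs):

using System;
using System.Linq;
using System.Xml.Linq;

class TestDurations
{
    static void Main(string[] args)
    {
        // args[0] is the path to a TestResult.xml file.
        // Run it once per results file (Windows and Linux) and compare.
        var doc = XDocument.Load(args[0]);
        var tests = doc.Descendants("test-case")
            .Select(tc => new
            {
                Name = (string) tc.Attribute("fullname"),
                Duration = (double) tc.Attribute("duration")
            })
            .OrderByDescending(t => t.Duration);

        // Print the slowest 20 tests, longest first.
        foreach (var test in tests.Take(20))
        {
            Console.WriteLine($"{test.Duration:F3}s  {test.Name}");
        }
    }
}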

I found my answer in UmAlQuraYearMonthDayCalculatorTest.GetYearMonthDay_DaysSinceEpoch, which effectively tests the Um Al Qura calendar for self consistency, by iterating over every day in the supported time period and checking that we can convert from “days since Unix epoch” to an expected “year, month day”. In particular, this test doesn’t rely on the Windows implementation of the calendar, nor does it use any time zones, cultures or anything similar. It’s nicely self-contained.

This test took 2051ms on Linux and 295ms on Windows. It’s possible that those figures were from a debug build, but I repeated the tests using a release build and confirmed that the difference was still similar.

Step 3: Find the bottleneck

At this point, my aim was to try to remove bits of the test at a time, until the difference went away. I expected to find something quite obscure causing the difference – something like different CPU cache behaviour. I knew that the next step would be to isolate the problem to a small piece of code, but I expected that it would involve a reasonable chunk of Noda Time – at least a few types.

I was really lucky here – the first and most obvious call to remove made a big difference: the equality assertion. Assertions are usually the first thing to remove in tests, because everything else typically builds something that you use in the assertions… if you’re making a call without either using the result later or asserting something about the result, presumably you’re only interested in side effects.

As soon as I removed the call to Assert.AreEqual(expected, actual), the execution time dropped massively on Linux, but hardly moved on Windows: they were effectively on a par.

I wondered whether the problem was with the fact that I was asserting equality between custom structs, and so tried replacing the real assertions with assertions of equality of strings, then of integers. No significant difference – they all showed the same discrepancy between Windows and Linux.

Step 4: Remove Noda Time

Once I’d identified the assertions as the cause of the problem, it was trivial to start a new test project with no dependency on Noda Time, consisting of a test like this:

[Test]
public void Foo()
{
    for (int i = 0; i < 1000000; i++)
    {
        var x = 10;
        var y = 10;
        Assert.AreEqual(x, y);
    }
}

This still demonstrated the problem consistently, and allowed simpler experimentation with different assertions.

Step 5: Dig into NUnit

For once in my life, I was glad that a lot of implementation details of a framework were exposed publicly. I was able to try lots of different “bits” of asserting equality, in order to pin down the problem. Things I tried:

  • Assert.AreEqual(x, y): slow
  • Assert.That(x, Is.EqualTo(y)): slow
  • Constructing an NUnitEqualityComparer: fast
  • Calling NUnitEqualityComparer.AreEqual: fast. (Here the construction occurred before the loop, and the comparisons were in the loop.)
  • Calling Is.EqualTo(y): slow

The last two bullets were surprising. I'd been tipped off that NUnitEqualityComparer uses reflection, which could easily differ in performance between Windows and Linux… but checking for equality seemed to be fast, and just constructing the constraint was slow. In poking around the NUnit source code (thank goodness for Open Source!) it's obvious why Assert.AreEqual(x, y) and Assert.That(y, Is.EqualTo(x)) behave the same way – the former just calls the latter.

So, why is Is.EqualTo(y) slow (on Linux)? The method itself is simple – it just creates an instance of EqualConstraint. The EqualConstraint constructor body doesn’t do much… so I proved that it’s not EqualConstraint causing the problem by deriving my own constraint with a no-op implementation of ApplyTo… sure enough, just constructing that is slow.
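
The no-op constraint experiment was along these lines (a rough reconstruction rather than the exact code; the type name is mine, and it deliberately still ends in "Constraint" so the base constructor does the same work, assuming ApplyTo is the only member that needs overriding):

using NUnit.Framework;
using NUnit.Framework.Constraints;

// A constraint that does nothing interesting: ApplyTo always reports success.
public class NoOpConstraint : Constraint
{
    public override ConstraintResult ApplyTo<TActual>(TActual actual) =>
        new ConstraintResult(this, actual, true);
}

public class ConstraintConstructionTest
{
    [Test]
    public void ConstructNoOpConstraints()
    {
        // Only construction is being exercised here; nothing is asserted.
        for (int i = 0; i < 1000000; i++)
        {
            new NoOpConstraint();
        }
    }
}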

That leaves the constructor of the Constraint abstract base class:

protected Constraint(params object[] args)
{
    Arguments = args;

    DisplayName = this.GetType().Name;
    if (DisplayName.EndsWith("`1") || DisplayName.EndsWith("`2"))
        DisplayName = DisplayName.Substring(0, DisplayName.Length - 2);
    if (DisplayName.EndsWith("Constraint"))
            DisplayName = DisplayName.Substring(0, DisplayName.Length - 10);
}

That looks innocuous enough… but maybe calling GetType().Name is expensive on Linux. So test that… nope, it’s fast.

At this point I’m beginning to wonder whether we’ll ever get to the bottom of it, but let’s just try…

[Test]
public void EndsWith()
{
    string text = "abcdefg";
    for (int i = 0; i < Iterations; i++)
    {
        text.EndsWith("123");
    }
}

… and sure enough, it’s fast on Windows and slow on Linux. Wow. Looks like we have a culprit.

Step 6: Remove NUnit

At this point, it’s relatively plain sailing. We can reproduce the issue in a simple console app. I won’t list the code here, but it’s in the GitHub issue. It just times calling EndsWith once (to get it JIT compiled) and then a million times. Is it the most rigorous benchmark in the world? Absolutely not… but when the difference is between 5.3s on Linux and 0.16s on Windows, on the same hardware, I’m not worried about inaccuracy of a few milliseconds here or there.
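
For completeness, a console app of roughly this shape is enough to show the problem (a sketch; the exact code is in the GitHub issue):

using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        string text = "abcdefg";

        // Call it once so that JIT compilation isn't part of the measurement.
        text.EndsWith("123");

        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < 1000000; i++)
        {
            text.EndsWith("123");
        }
        stopwatch.Stop();
        Console.WriteLine(stopwatch.Elapsed);
    }
}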

Step 7: File a CoreCLR issue

So, as I’ve shown, I filed a bug on GitHub. I’d like to think it was a pretty good bug report:

  • Details of the environment
  • Short but complete console app ready to copy/paste/compile/run
  • Results

Exactly the kind of thing I’d have put into a Stack Overflow question – when I ask for a minimal, complete example on Stack Overflow, this is what I mean.

Anyway, about 20 minutes later (!!!), Stephen Toub had basically worked out the nub of it: it's a culture issue. Initially he couldn't reproduce it – he saw the same results on Windows and Linux. But after changing his culture to en-GB, he saw what I was seeing. I then confirmed the opposite – when I ran the code having set LANG=en-US, the problem went away for me. Stephen pulled Matt Ellis in, who gave more details as to what was going wrong behind the scenes.

Step 8: File an NUnit issue

Matt Ellis suggested filing an issue against NUnit, as there’s no reason this code should be culture-sensitive. By specifying the string comparison as Ordinal, we can go through an even faster path than using the US culture. So

if (DisplayName.EndsWith("Constraint"))

becomes

if (DisplayName.EndsWith("Constraint", StringComparison.Ordinal))

… and the equivalent for the other two calls.

I pointed out in the issue that it was also a little bit odd that this was being worked out in every Constraint constructor call, when of course it’s going to give the same result for every instance of the same type. When “every Constraint constructor call” becomes “every assertion in an entire test run”, it’s a pretty performance-critical piece of code. While unit tests aren’t important in terms of performance in the same way that production code is, anything which adds friction is bad news.

Hopefully the NUnit team will apply the simple improvement for the next release, and then the CoreCLR team can attack the tougher underlying problem over time.

Step 9: Blog about it

Open up Stack Edit, start typing: “I’ve been following the progress”… :)

Conclusion

None of the steps I’ve listed here is particularly tricky. Diagnosing problems is often more a matter of determination and being unwilling to admit defeat than cleverness. (I’m not denying that there’s a certain art to being able to find the right seam to split the problem in two, admittedly.)

I hope this has been useful as a “start to finish” example of what a diagnostic session can look and feel like. It wasn’t one physical session, of course – I found bits of time to investigate it over the course of a day or so – but it would have been the same steps either way.

Smug, satisfied smile…