Category Archives: General

Imposter Syndrome (part 2)

9 days ago, I posted Imposter Syndrome (part 1) and then immediately listened to Heather Downing’s excellent NDC talk on the topic.

This is the “reflections afterwards” post I’d expected to write (although slightly more delayed than I’d hoped for). I’m not going to try to recap Heather’s talk, because that wouldn’t do justice to it. Hopefully the Vimeo NDC channel will have a video of it at some point – I imagine the NDC folks are very busy in terms of editing, format conversion etc. These are my reflections after Heather’s talk – some of them directly responding to what she said, and other parts simply my own meandering thought process.

Responding to praise

This one’s simple. I’ve often responded to praise in ways that effectively negate the opinion of the other person – saying that I’m really not as smart as they think I am, etc. I suspect there’s some cultural influence there; it’s a fairly British thing to do. But I hadn’t considered the flip side: that responding this way doesn’t convey humility on my part so much as imply wrongness on theirs.

So I’m going to start to just say “thank you” as far as I can. I think there will still be times where it would be important to try to correct a potentially harmful impression – if someone explains that they’re trying to win technical arguments by quoting me, for example – but most of the time I’ll try to bite my tongue, and just say thanks… and maybe try to shift the conversation onto what they’re doing. (If someone says they’re inspired by me, that’s great – so what have they been inspired to do? Does this give me an opportunity to encourage them further?)

Success and luck

There are two very slightly different nuances to being “lucky” – at least in the way I think about it. The first is a sort of “undeserved positive effects” aspect. “I’m lucky to be married to such a wonderful person” or “I’m lucky to have a natural aptitude for computing” for example. Things you can’t really control much. The second is a sort of “the same sequence of events could have unfolded very differently” aspect. “I’m lucky to have ended up in a job I love without making a career plan” for example.

I fear I’m not transferring the ideas from brain to screen very clearly, but there are two important points:

Firstly, I don’t want anyone to try to emulate me in areas where I’ve been genuinely lucky. I have no doubt that in other situations (with a different set of colleagues, for example) some of my actions could have led to very different results. I’ve always spent quite a lot of time learning by experimentation and community writing – whether that’s on newsgroups, Stack Overflow or blog posts. Some of this has been done on company time, and every company I’ve worked for has (quietly) acknowledged that it’s been a broadly positive thing – so long as it’s not been too excessive, of course. Other software engineers – particularly those in jobs where every hour has to be accounted for – could see a very different result to the same actions.

On the other hand, I should probably accept the point Heather made that attributing repeated success to luck is foolish. I don’t think I’m lucky to receive upvotes on Stack Overflow: I make a conscious effort to communicate clearly, and that’s something I’ve put a lot of effort into over several years. Some of the further results could be called lucky: if Stack Overflow hadn’t come on the scene, I’m sure I’d still be writing on newsgroups with a vastly smaller potential audience for answers. The more immediate effect of “If I put effort into writing clearly and researching my subject matter, that effort is appreciated by those who read it” isn’t a matter of luck though.

Writing off success as just luck risks undervaluing processes and practices that are genuinely helpful – as well as potentially giving the impression that we won’t appreciate the hard work and diligence of others. (On the other hand, check your privilege before ascribing all your success to your own graft and/or brilliance.)

Dunning-Kruger harms everyone

Finally, the Dunning-Kruger effect is probably worth fighting against in both aspects.

Those who are overestimating their skills are doing themselves a disservice by appearing arrogant or compounding their ignorance by “meta-ignorance” of the scope of the subject matter. Unless they’re trying to represent a larger entity (a consultancy for example) the impact seems fairly localized.

I’m coming round to the idea that those who are underestimating their skills – and doing so publicly – might be discouraging everyone else. If someone I look up to as an expert in a topic were to only rate themselves as “8 out of 10” in knowledge in that topic, that could make me feel worse about my own understanding of the topic. While I suspect it’s hard for anyone in a culture that values humility to rate their knowledge as “9.5 out of 10” for something, I think it’s important that the real experts do so. Yes, they can still be aware of the areas they struggle in – but there must be some way of expressing that while acknowledging their overall expertise.

Beyond simple discouragement, there’s another aspect of underestimating your own prowess that can prove unhelpful, and that’s in terms of explanations. I’ve always found most (not quite all) security experts hard to understand. They’re so deeply immersed in their own domain that they may not appreciate how many assumptions of shared terminology and understanding they need to remove before they can communicate effectively with “lay” people.

I only give the example of security as one where I personally struggle to learn from people who undoubtedly have knowledge I could benefit from. My fear is that I do the same unwittingly when it comes to areas I’m confident in. I tend to make more conscious effort when discussing date/time issues as I’m aware of the common misunderstandings. What about C# though? When I use language specification terminology in blog posts and Stack Overflow answers, what proportion of readers just get lost quickly? I’m not quite sure what to do about this, beyond becoming more conscious of it as a possibility.

Conclusion

This is by no means an end to my thoughts on Imposter Syndrome or related self-evaluation traits, although it may well be my last blog post on it. No impressive final thoughts, no clever tying up of all the strands… this is only a conclusion in the sense that it’s concluding the post. The end.

Imposter syndrome (part 1)

Note: this is a purely personal post. It has no code in. It’s related to the coding side of my world more than the rest of who I am, so it’s in my coding blog, but if you’re looking for code, just move on.

As part of a Twitter exchange, I discovered that Heather Downing (blog, twitter) would be talking about Imposter Syndrome. This is a topic that interests me for reasons I’ll go into below. I figured it would be interesting to jot down some thoughts on it before Heather’s talk, and then again afterwards, comparing my ideas with hers. As such, I expect to publish this post pretty much as I’m sitting down for the talk, for maximum independence. (Ed: it’s now somewhat rushed. Back when I started it on Tuesday, it seemed like I had loads of time. It’s now Friday morning and I’m desperately trying to get it into some kind of coherent state in time to post…)

There are two ways I could write this post: one very abstract, about “people in general”, and one very concrete, about myself. The first approach would probably end in platitudes and ignorance – the second could well feel like a mixture of egocentricity, arrogance and humble-bragging. I’m going for the second approach anyway, so if you suspect you’ll get annoyed by reading about my thoughts about myself, I suggest moving along. (Read Heather’s blog, for example.)

Aspects of Imposter Syndrome

I think about Imposter Syndrome in three different ways. For some people they may be very similar, but in my case there are pretty radical differences. (For some reason I tend to be a corner case in all kinds of ways. Basically, I’m awkward.)

  • What do people say (and think) about your skills?
  • What skills are expected or required for what you do? (e.g. the job you’re in, success in the community, speaking etc)
  • What do you say about your skills?

I think of Imposter Syndrome as believing that your true set of skills or abilities is lower than the evaluations listed above. It’s possible that the third bullet really doesn’t belong there, but it’s sufficiently closely related that I want to talk about it anyway.

What do people say (and think) about my coding ability?

The Jon Skeet facts page is the first thing that comes to mind, followed by the Toggl “Rescue the Princess” comic. While both of those are clearly meant to be comedy rather than taken seriously, I suspect some of the hyperbole has rubbed off.

I get attention at conferences and on Twitter as if I really had exceptional coding ability. There’s an assumption that I really can answer anything. People talk about being inspired by me. People still show up to my talks. People ask how I “get so much done” – when I see plenty of people achieving much more than I do. (I slump in front of the TV at night with Holly far more than the question would suggest…)

What skills are expected of me?

Back in 2012, I talked with Scott Hanselman about Imposter Syndrome and “being a phony”. Back then, I still felt like an imposter at Google – and knew that plenty of my colleagues felt the same way.

In my job, I’m expected to be a proficient coder and leader in the area that I’m working on. I was briefly a manager too, but I’m not any more – so my role is fairly purely technical… but that still includes so-called “soft skills” in terms of communication and persuasion. (I hate the term “soft skills” as it implies those skills are less important or difficult. They’re critical, and sadly underdeveloped!)

In the community, I’m expected to be prolific and accurate online, and interesting/engaging in person, particularly while presenting.

What do I say and think about myself?

I try to make the “say” and “think” match. For some definitions of Imposter Syndrome, I don’t think I actually suffer from it at all. In particular:

  • The hyperbole is clearly incorrect. It’s not just fake humility that suggests I’m not really the world’s top programmer… the idea that I could possibly believe that is laughable.
  • These days I’m pretty comfortable with what I do at work. I work hard, I’m working in an area where I feel I have expertise (C# API design) and I get things done. The work I do doesn’t involve the same degree of computer science brilliance as designing Spanner or implementing a self-driving car, but it’s far from trivial.
  • There are things I’ve done that I’m genuinely proud of beyond my day job – in particular, Noda Time and C# in Depth. I take pride in my Stack Overflow answers too, but they’re slightly different in a way that’s hard to explain. I’m certainly pleased that they’re helpful.
  • I’m confident in my boundaries: I know that I know C# very well and Java pretty well. I know that I have more awareness of date/time issues than the vast majority of developers. I know that I can express ideas clearly, and that that’s important. I’m also well aware of my limitations: if you see any code I write outside Java and C# (e.g. Bash, Python, Javascript) then it’s horrible, and I make no claims otherwise.

Talking about being an “imposter” or “phony” suggests making a claim to competence which is untrue. I don’t think that’s the case here – and that applies to the vast majority of other “famous” developers I know. They’re generally well aware of their limitations too, and their presentations are always about the technology rather than about themselves. There are exceptions to this, and I know my “Abusing C#” talk has sometimes been seen as a self-promotion vehicle instead of the gleeful exploration of C# corner cases it’s intended to be… but in general, I haven’t interacted with many big egos in the tech space. (This may be a matter of the conferences I’ve chosen to go to. I’m aware there are plenty of big-ego jerks around, but I haven’t spoken with many of them…)

Conclusion

I still believe there is a disconnect between even people’s genuine expectations (as opposed to the hyperbole) and the reality of my competence, even though I don’t cultivate those expectations. As a mark of this, I believe my talks are more popular in anticipation than in experience – it’s often a full house, but in the green/yellow/red appraisal afterwards there’s usually a bunch of yellows and even some reds.

Obviously the disconnect gives an ego boost which I try to dampen, but it has genuinely positive aspects too: one of the things people say to or about me is that I inspire them. That’s fantastic. It really doesn’t matter whether they’re buying into a myth: if something they see in me inspires them to “do better” (whatever that may mean for them) then that’s a net benefit to the world, right?

I’m going to keep making it perfectly clear to people that a lot of what is said about me is massively overblown, while keeping confidence in myself as a really pretty decent developer. Am I over-recognized/over-hyped? Yes. Am I an imposter? I don’t think so.

Postscript

Since finishing the above conclusion, I’ve just watched Felienne’s talk on “Programming is writing is programming” which was the best talk I’ve seen at any conference. Now I feel like an imposter…

Diversity and speaking engagements

Background

I’m in the privileged position of receiving more invitations to speak (at conferences, user groups and podcasts) than I can realistically agree to. I’ve decided to start applying some new criteria to how I pick which ones I go to[1].

However, over the last couple of years as feminism has become an increasingly important part of my life I’ve found myself saddened by the lack of diversity at conferences, both in terms of speakers and attendees. It’s not uncommon for me to spend the first couple of minutes of a conference talk commenting on this, and asking the audience (broadly white men) to think about what they can do to improve this, understanding that it’s our problem to fix. I don’t know whether that’s had any impact, but I’m likely to keep doing it anyway. (Drip, drip, drip.)

I should point out that some conferences do pretty well. When I was invited to speak at NorDevCon for the second time, a large part of why I accepted was because of the diversity of both speakers and attendees. (It varies by year, of course.) When I recently spoke at Web Summit the attendee gender diversity was the best I’ve ever seen – along with a Women in Tech lounge that was certainly busy.

Anyway, to do my part in encouraging diversity, from now on when I’m invited to speak, I’m going to refer the organizers to this post.

My requirements for speaking engagements

  • Conferences must have a published Code of Conduct, including incident resolution steps. Where possible, this should be highlighted in opening remarks (typically before the keynote). It’s important that all speakers and attendees feel both safe and welcome – and members of under-represented groups are the most likely not to feel safe and welcome.
  • Organizers must take active steps to encourage speaker diversity. One common challenge to diversity initiatives is that they mean compromising on quality, but I disagree with the assumption behind the challenge. There are many high-quality presenters who are women, but it may mean making more effort to find them. (It’s all too easy to rely on the “regulars” in the tech speaking circles.) If an organizer publishes how they’re trying to encourage diversity, that’s definitely a bonus. I’d at least expect organizers to keep track of how they’re doing over time, and be willing to privately share how they’re trying to improve. It’s hard to give concrete limits here as I may need to make a decision before the rest of the speaker list is decided, but any time I find myself at a conference where 25% or less of the speakers are non-white-men, I’ll be vocally disappointed. Over time, I expect this number to get higher.
  • Ideally, publishing data on attendee diversity over time, with a public plan for improvements. This may not always be possible historically, as the data may not have been captured – but I doubt that it’s very hard to add it to future registration processes. (I’d encourage organizers to think beyond binary gender identification when adding this, too.)
  • I won’t personally speak in any white-male-only panels of three people or more. Ideally, I’d like to see efforts for there not to be any such panels.

If conferences and user groups don’t want to make any efforts to improve diversity, that’s their choice – but I hope that they’ll find it increasingly difficult to attract good speakers, and I’m going to be a tiny part of that scarcity.

How I’m happy to help organizers

On a positive side, I’m happy to:

  • Try to help organizers find diverse speakers. I don’t have much in the way of a contact list on this front yet, but that’s something for me to try to improve.
  • Help potential speakers tune their abstracts or presentations in private. I know that presenting for the first time can be daunting, particularly if you feel under-represented within the industry to start with. I don’t have any experience of this sort of coaching, but if I can be helpful at all, I’ll do my best.
  • Co-present with someone who might otherwise worry that they wouldn’t get much attendance, etc. In particular, I’d be very happy to be an on-stage guinea-pig, learning from another presenter in a field I’m not familiar with, and asking questions along the way in an active tutorial style. (I’d expect any partnership like this to be primarily about highlighting the other speaker’s knowledge – it mustn’t be tokenism just to get them on stage while I waffle about C# yet again. That would propagate negative stereotypes.)
  • Be very vocal about positive experiences in diversity.

Diversity matters. It’s good business and it’s important ethically. Improving the diversity of events is only a small part of improving the industry, and I’d encourage all readers to think about what they can do elsewhere in their own place of work or study.


[1] Previously, my criteria have been very loosely based on:

  • Preferring events where I won’t need to stay overnight
  • Preferring events where there are other talks I’ll be interested in
  • Preferring community over commercial organizers
  • Preferring events where the focus actually seems to intersect with my area of dubious expertise. (I’m unlikely to speak at any Agile, Testing or DevOps conferences – while I can appreciate them, that’s not my area.)
  • How many other things I have going on at the time

I’m expecting this post to change over time. I don’t generally like revisionism, but I want this post to stay “live” and relevant for as long as possible. As a compromise, here’s a revision history.

  • 2016-12-10: Initial post
  • 2016-12-16: Updated structure for clarity, fixed MVDP expansion (oops), rewording around not lowering quality

Tracking down a performance hit

I’ve been following the progress of .NET Core with a lot of interest, and trying to make the Noda Time master branch keep up with it. The aim is that when Noda Time 2.0 eventually ships (apologies for the delays…) it will be compatible with .NET Core from the start. (I’d expected to be able to support netstandard1.0, but that appears to have too much missing from it. It looks like netstandard1.3 will be the actual target.)

I’ve been particularly looking forward to being able to run the Noda Time benchmarks (now using BenchmarkDotNet) to compare .NET Core on Linux with the same code on Windows. In order to make that a fair comparison, I now have two Intel NUCs, both sporting an i5-5250U and 8GB of memory.

As it happens, I haven’t got as far as running the benchmarks under .NET Core – but I am now able to run all the unit tests on both Linux and Windows, using both the net451 TFM and netcoreapp1.0.

When I did that recently, I was pretty shocked to see that (depending on which tests I ran) the tests were 6-10 times slower on Linux than on Windows, using netcoreapp1.0 in both cases. This post is a brief log of what I did to track down the problem.

Step 1: Check that there’s really a problem

Thought: Is this actually just a matter of not running the tests in a release configuration, or something similar?

Verification: I ran the tests several times, specifying -c Release on the command line to use the release build of both NodaTime.Test.dll and NodaTime.dll. Running under a debugger definitely wasn’t an issue, as this was all just done from the shell.

Additionally, I ran the tests in two ways – firstly, running the whole test suite, and secondly running with --where=cat!=Slow to avoid the few tests I’ve got which are known to be really pretty slow. They’re typically tests which compare the answers the BCL gives with the answers Noda Time gives, across the whole of history for a particular calendar system or time zone. I’m pleased to report that the bottleneck in these tests is almost always the BCL, but that doesn’t help to speed them up. If only the “slow” tests had been much slower on Linux, that might have pointed to the problems being in BCL calendar or time zone code.

The ratios vary, but there was enough of a problem under both circumstances for it to be worth looking further.

Step 2: Find a problematic test

I didn’t have very strong expectations one way or another about whether this would come down to some general problem in the JIT on Linux, or whether there might be one piece of code causing problems in some tests but not others. Knowing that there are significant differences in handling of some culture and time zone code between the Linux and Windows implementations, I wanted to find a test which used the BCL as little as possible – but which was also slow enough for the differences in timing to be pronounced and not easily explicable by the problems of measuring small amounts of time.

Fortunately, NUnit produces a TestResult.xml file which is easy to parse with LINQ to XML, so I could easily transform the results from Windows and Linux into a list of tests, ordered by duration (descending), and spot the right kind of test.
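
For what it’s worth, the transformation can be as simple as the sketch below. This assumes the NUnit 3 result format, where each test-case element carries “fullname” and “duration” attributes – check your own TestResult.xml if the names differ.

using System;
using System.Linq;
using System.Xml.Linq;

class SlowestTests
{
    static void Main()
    {
        // Load the NUnit result file and list the slowest test cases.
        var doc = XDocument.Load("TestResult.xml");
        var slowest = doc.Descendants("test-case")
            .Select(tc => new
            {
                Name = (string) tc.Attribute("fullname"),
                Duration = (double) tc.Attribute("duration")
            })
            .OrderByDescending(tc => tc.Duration)
            .Take(20);
        foreach (var test in slowest)
        {
            Console.WriteLine($"{test.Duration:0.000}s {test.Name}");
        }
    }
}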

I found my answer in UmAlQuraYearMonthDayCalculatorTest.GetYearMonthDay_DaysSinceEpoch, which effectively tests the Um Al Qura calendar for self consistency, by iterating over every day in the supported time period and checking that we can convert from “days since Unix epoch” to an expected “year, month day”. In particular, this test doesn’t rely on the Windows implementation of the calendar, nor does it use any time zones, cultures or anything similar. It’s nicely self-contained.

This test took 2051ms on Linux and 295ms on Windows. It’s possible that those figures were from a debug build, but I repeated the tests using a release build and confirmed that the difference was still similar.

Step 3: Find the bottleneck

At this point, my aim was to try to remove bits of the test at a time, until the difference went away. I expected to find something quite obscure causing the difference – something like different CPU cache behaviour. I knew that the next step would be to isolate the problem to a small piece of code, but I expected that it would involve a reasonable chunk of Noda Time – at least a few types.

I was really lucky here – the first and most obvious call to remove made a big difference: the equality assertion. Assertions are usually the first thing to remove in tests, because everything else typically builds something that you use in the assertions… if you’re making a call without either using the result later or asserting something about the result, presumably you’re only interested in side effects.

As soon as I removed the call to Assert.AreEqual(expected, actual), the execution time dropped massively on Linux, but hardly moved on Windows: they were effectively on a par.

I wondered whether the problem was with the fact that I was asserting equality between custom structs, and so tried replacing the real assertions with assertions of equality of strings, then of integers. No significant difference – they all showed the same discrepancy between Windows and Linux.

Step 4: Remove Noda Time

Once I’d identified the assertions as the cause of the problem, it was trivial to start a new test project with no dependency on Noda Time, consisting of a test like this:

[Test]
public void Foo()
{
    for (int i = 0; i < 1000000; i++)
    {
        var x = 10;
        var y = 10;
        Assert.AreEqual(x, y);
    }
}

This still demonstrated the problem consistently, and allowed simpler experimentation with different assertions.

Step 5: Dig into NUnit

For once in my life, I was glad that a lot of implementation details of a framework were exposed publicly. I was able to try lots of different “bits” of asserting equality, in order to pin down the problem. Things I tried:

  • Assert.AreEqual(x, y): slow
  • Assert.That(x, Is.EqualTo(y)): slow
  • Constructing an NUnitEqualityComparer: fast
  • Calling NUnitEqualityComparer.AreEqual: fast. (Here the construction occurred before the loop, and the comparisons were in the loop.)
  • Calling Is.EqualTo(y): slow

The last two bullets were surprising. I’d been tipped off that NUnitEqualityComparer uses reflection, which could easily differ in performance between Windows and Linux… but checking for equality seemed to be fast, and just constructing the constraint was slow. In poking around the NUnit source code (thank goodness for Open Source!) it’s obvious why Assert.AreEqual(x, y) and Assert.That(y, Is.EqualTo(x)) behave the same way – the former just calls the latter.

So, why is Is.EqualTo(y) slow (on Linux)? The method itself is simple – it just creates an instance of EqualConstraint. The EqualConstraint constructor body doesn’t do much… so I proved that it’s not EqualConstraint causing the problem by deriving my own constraint with a no-op implementation of ApplyTo… sure enough, just constructing that is slow.

That leaves the constructor of the Constraint abstract base class:

protected Constraint(params object[] args)
{
    Arguments = args;

    DisplayName = this.GetType().Name;
    if (DisplayName.EndsWith("`1") || DisplayName.EndsWith("`2"))
        DisplayName = DisplayName.Substring(0, DisplayName.Length - 2);
    if (DisplayName.EndsWith("Constraint"))
        DisplayName = DisplayName.Substring(0, DisplayName.Length - 10);
}

That looks innocuous enough… but maybe calling GetType().Name is expensive on Linux. So test that… nope, it’s fast.

At this point I’m beginning to wonder whether we’ll ever get to the bottom of it, but let’s just try…

private const int Iterations = 1000000;

[Test]
public void EndsWith()
{
    string text = "abcdefg";
    for (int i = 0; i < Iterations; i++)
    {
        // Result deliberately ignored: we only care how long the call takes.
        text.EndsWith("123");
    }
}

… and sure enough, it’s fast on Windows and slow on Linux. Wow. Looks like we have a culprit.

Step 6: Remove NUnit

At this point, it’s relatively plain sailing. We can reproduce the issue in a simple console app. I won’t list the code here, but it’s in the GitHub issue. It just times calling EndsWith once (to get it JIT compiled) and then a million times. Is it the most rigorous benchmark in the world? Absolutely not… but when the difference is between 5.3s on Linux and 0.16s on Windows, on the same hardware, I’m not worried about inaccuracy of a few milliseconds here or there.
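
For the curious, a minimal version of that console app looks something like this – not the exact code from the issue, but the same shape: one warm-up call, then a timed loop.

using System;
using System.Diagnostics;

class EndsWithBenchmark
{
    static void Main()
    {
        string text = "abcdefg";
        // Warm-up call, so that JIT compilation isn't included in the timing.
        text.EndsWith("123");

        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < 1000000; i++)
        {
            text.EndsWith("123");
        }
        stopwatch.Stop();
        Console.WriteLine(stopwatch.Elapsed);
    }
}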

Step 7: File a CoreCLR issue

So, as I’ve shown, I filed a bug on GitHub. I’d like to think it was a pretty good bug report:

  • Details of the environment
  • Short but complete console app ready to copy/paste/compile/run
  • Results

Exactly the kind of thing I’d have put into a Stack Overflow question – when I ask for a minimal, complete example on Stack Overflow, this is what I mean.

Anyway, about 20 minutes later (!!!), Stephen Toub had basically worked out the nub of it: it’s a culture issue. Initially, he couldn’t reproduce it – he saw the same results on Windows and Linux. But changing his culture to en-GB, he saw what I was seeing. I then confirmed the opposite – when I ran the code having set LANG=en-US, the problem went away for me. Stephen pulled Matt Ellis in, who gave more details as to what was going wrong behind the scenes.

Step 8: File an NUnit issue

Matt Ellis suggested filing an issue against NUnit, as there’s no reason this code should be culture-sensitive. By specifying the string comparison as Ordinal, we can go through an even faster path than using the US culture. So

if (DisplayName.EndsWith("Constraint"))

becomes

if (DisplayName.EndsWith("Constraint", StringComparison.Ordinal))

… and the equivalent for the other two calls.

I pointed out in the issue that it was also a little bit odd that this was being worked out in every Constraint constructor call, when of course it’s going to give the same result for every instance of the same type. When “every Constraint constructor call” becomes “every assertion in an entire test run”, it’s a pretty performance-critical piece of code. While unit tests aren’t important in terms of performance in the same way that production code is, anything which adds friction is bad news.
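
As a sketch of what I mean – my own illustration, not NUnit’s actual code or a proposed patch – the display name only depends on the type, so it could be computed once per constraint type and cached:

using System;
using System.Collections.Concurrent;

// Hypothetical helper: compute the display name once per type,
// rather than in every single constructor call.
static class ConstraintDisplayNames
{
    private static readonly ConcurrentDictionary<Type, string> cache =
        new ConcurrentDictionary<Type, string>();

    public static string For(Type type) =>
        cache.GetOrAdd(type, t =>
        {
            string name = t.Name;
            if (name.EndsWith("`1", StringComparison.Ordinal) ||
                name.EndsWith("`2", StringComparison.Ordinal))
            {
                name = name.Substring(0, name.Length - 2);
            }
            if (name.EndsWith("Constraint", StringComparison.Ordinal))
            {
                name = name.Substring(0, name.Length - 10);
            }
            return name;
        });
}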

Hopefully the NUnit team will apply the simple improvement for the next release, and then the CoreCLR team can attack the tougher underlying problem over time.

Step 9: Blog about it

Open up Stack Edit, start typing: “I’ve been following the progress”… :)

Conclusion

None of the steps I’ve listed here is particularly tricky. Diagnosing problems is often more a matter of determination and being unwilling to admit defeat than cleverness. (I’m not denying that there’s a certain art to being able to find the right seam to split the problem in two, admittedly.)

I hope this has been useful as a “start to finish” example of what a diagnostic session can look and feel like. It wasn’t one physical session, of course – I found bits of time to investigate it over the course of a day or so – but it would have been the same steps either way.

Smug, satisfied smile…

Common mistakes in date/time formatting and parsing

There are many, many questions on Stack Overflow about both parsing and formatting date/time values. (I use the term “date/time” to mean pretty much “any type of chronological information” – dates, times of day, instants in time etc.) Given how often the same kinds of mistakes are made, I thought it would be handy to have a blog post to refer to.

This post assumes you already know the basic operations of formatting and parsing, in terms of the appropriate types to use.

Pattern woes

There are three broad classes of issue here – one of which is “just” a matter of carelessness, usually, while the other two still surprise me in terms of sheer wrongness.

Pattern capitalization issues

This is an insidious problem, because in some cases you may get the right values, but not all of the time. I suspect it usually comes about due to copy and paste, but often from specifications rather than other code – in a specification, it’s pretty clear what "YYYY-MM-DD HH:MM:SS" means as a date/time format, but that doesn’t mean it’s the right pattern to put in code.

The main thing to do is read the documentation carefully. Of course, some platforms have clearer documentation than others, but most are at least “good enough”. For the Java APIs, the pattern specifiers are generally documented with the formatting classes themselves; for .NET’s built-in classes you want the custom date and time format strings and standard date and time format strings MSDN pages, and for Noda Time follow the various options from the text handling part of the user guide. (For other platforms, use your common sense. :)

The most common mistakes here are:

  • Using mm for months or MM for minutes, rather than vice versa. I’ve seen this mistake both ways round.
  • Using hh for “hour of day” when HH is intended. H is in the range 0-23; h is in the range 1-12. h is usually used singly (rather than requiring exactly two digits), and almost always in conjunction with an AM/PM specifier – as otherwise it’s ambiguous. H is usually used as HH, so that 5am is represented as “05” for example.
  • Using YYYY for year – in Java and Noda Time, Y is used for week-year rather than normal calendar year; it’s usually used in conjunction with “week of year” and “day of week”, but it’s much less common than yyyy.
  • Using DD for “day of month” when in Java it actually means “day of year”.
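
To see how insidious the mm/MM confusion in the first bullet is, consider this snippet: both calls compile and run without complaint, but only one says what was meant.

using System;
using System.Globalization;

class CapitalizationDemo
{
    static void Main()
    {
        var value = new DateTime(2015, 5, 10, 16, 43, 0);

        // "mm" means minutes, so this prints 2015-43-10: plausible-looking, but wrong.
        Console.WriteLine(value.ToString("yyyy-mm-dd", CultureInfo.InvariantCulture));

        // "MM" means month, so this prints the intended 2015-05-10.
        Console.WriteLine(value.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture));
    }
}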

Broad pattern incompatibilities

I’m surprised by how often I see code like this:

var text = "Tue, 5 May 2015 3:15pm";
var dateTime = DateTime.ParseExact(
    text,
    "yyyy-MM-dd'T'HH:mm:ss",
    CultureInfo.InvariantCulture);

Here the pattern and the actual data are entirely different, and I get the impression that the author has copied the pattern from another piece of code without any thought about what the magic string "yyyy-MM-dd'T'HH:mm:ss" is there for.

I suspect it goes without saying for most readers, but you should never copy code from elsewhere into your own code without understanding how it works, or which parts you may potentially need to modify.

The result of this sort of error is usually a complete failure to parse, which is at least simpler to find than the “plausible but not quite correct” pattern issue.

Pattern incompatibility issues

Some developers assume that a pattern which works in Java will work in Python, or the equivalent for any other pair of platforms. Don’t make this assumption. Always read the documentation – and if you’re porting code from one platform to another, you’ll need to “decode” the pattern with one set of documentation, then “encode” it with the other.
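
As a small illustration, even a simple 12-hour-clock pattern is spelled differently on each platform – the Java and Python equivalents in the comments below are taken from their respective documentation:

using System;
using System.Globalization;

class PatternPortability
{
    static void Main()
    {
        var value = new DateTime(2015, 5, 10, 16, 43, 0);

        // The same concept, three platforms:
        //   .NET:   "yyyy-MM-dd h:mm tt"
        //   Java:   "yyyy-MM-dd h:mm a"   (SimpleDateFormat / DateTimeFormatter)
        //   Python: "%Y-%m-%d %I:%M %p"   (strftime)
        Console.WriteLine(value.ToString("yyyy-MM-dd h:mm tt", CultureInfo.InvariantCulture));
        // Prints: 2015-05-10 4:43 PM
    }
}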

Time zone issues

Understanding time zones

There are two common issues when understanding what a time zone is to start with.

The first is to assume that a UTC offset (e.g. “+8 hours”) is the same as a time zone. This is an understandable mistake, given that a lot of documentation (from organizations which really should know better) misuses the terminology. The UTC offset is the difference between UTC and local time at a particular instant – so for example, while I’m writing this, I’m in the UK time zone which is currently at UTC+1. However, in the winter (in the same time zone) it will be at UTC+0. So if you have a value of (say) “2015-05-10T16:43:00+0100” that only tells you the UTC offset – it doesn’t tell you the time zone. There may well be multiple time zones with the same offset at that particular time, but which will have different offsets at different times.

The second mistake is to think that an abbreviation such as “EST” or “GMT” identifies a time zone. It doesn’t, in two ways:

  • A single time zone often uses multiple abbreviations over time. For example, “Pacific Time” varies between PST (Pacific Standard Time) and PDT (Pacific Daylight Time). It’s unfortunate that some people use the abbreviation for standard time even when they mean the general time zone – so even though currently (at the time of writing) Pacific Time is in PDT (UTC-7), some people would write the local time with “PST” at the end. Grr. Avoid abbreviations if you possibly can.
  • The same abbreviation may be used in multiple time zones, or even at different points in time to mean different things within the same time zone. For example, “BST” can mean British Summer Time in Europe/London (standard time of UTC+0, plus 1 hour of daylight saving time), British Standard Time in Europe/London (standard time of UTC+1, with no daylight saving time, around 1970 only) and Bougainville Standard Time in Pacific/Bougainville (UTC+11). Avoid abbreviations if you possibly can.

Using time zones in text formatting/parsing

First, you need to understand exactly what the library you’re using does with time zones, and what the types you’re using represent. One of the most common misconceptions here is with java.util.Date – this is just an instant in time, with no concept of a time zone or calendar system. The fact that the string returned from Date.toString always uses the system default time zone is unfortunately misleading in this respect, and causes developers to ask how to “convert” a Date from one time zone to another.

Next, you need to understand exactly what your data represents. In my experience, most textual data either specifies a date and/or time without a given time zone or it specifies a date and time with a UTC offset. When no time zone information is present, you may know the time zone it’s meant to refer to, or you may not. If you’re using a library which has multiple different types to represent different kinds of information (e.g. Joda Time, java.time or Noda Time) I personally find it clearest to parse to a type that closest represents the information actually stated in the text, and then convert it to something else where appropriate.

You definitely need to be aware when the parsing operation is going to impose any sort of time zone understanding on your data. This is the case with SimpleDateFormat in Java and with DateTime.ParseExact and friends in .NET. For SimpleDateFormat, unless you explicitly set a time zone (or the pattern includes a UTC offset), the system default time zone is used – this is usually not what you want. Parsing in .NET allows you to specify how you want the text to be understood, but you need to be careful. (The fact that DateTime sometimes represents a value in the system default time zone, sometimes a value in UTC, and sometimes a value with no associated time zone makes this all tricky.)
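
To make the “parse what’s actually stated, then convert” advice concrete, here’s roughly how I’d handle zone-less text in Noda Time. (The pattern and zone here are arbitrary choices for the example.)

using NodaTime;
using NodaTime.Text;

class ParseThenConvert
{
    static ZonedDateTime Parse(string text)
    {
        // The text states a local date and time, nothing more - so parse
        // to LocalDateTime, the type which represents exactly that.
        var pattern = LocalDateTimePattern.CreateWithInvariantCulture("yyyy-MM-dd'T'HH:mm:ss");
        LocalDateTime local = pattern.Parse(text).Value;

        // Only now do we impose the time zone we know the data refers to.
        DateTimeZone zone = DateTimeZoneProviders.Tzdb["Europe/London"];
        return local.InZoneLeniently(zone);
    }
}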

Locale / culture issues

Most libraries allow you to specify which culture to use when parsing (or formatting) data. This is a two-edged sword:

  • If you’re formatting a value to be displayed directly to an end user, that’s great: they can see the month name in their own language, etc. In this situation, you’ll typically use a “standard” format (e.g. “the short date/time format”)
  • If you’re formatting or parsing a value which is designed to be machine-readable (e.g. passed to a web service) then you almost certainly want the invariant culture instead of a user-specific culture. In this situation, you’ll typically use a “custom” format (e.g. "yyyy-MM-dd'T'HH:mm:ss") or a specific culture-invariant format.

Culture can affect several aspects of handling conversions:

  • The calendar system used (e.g. the Gregorian calendar vs an Islamic calendar)
  • The “standard” formats used (e.g. month/day/year vs day/month/year)
  • The separators used (e.g. - vs / for date separators)
  • The month and day names used
  • The number system used
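
For example, here’s the same date formatted with the same standard format specifier under three different cultures:

using System;
using System.Globalization;

class CultureDemo
{
    static void Main()
    {
        var value = new DateTime(2015, 5, 10);

        // Same value, same "short date" standard format, different cultures:
        Console.WriteLine(value.ToString("d", new CultureInfo("en-US")));     // 5/10/2015
        Console.WriteLine(value.ToString("d", new CultureInfo("fr-FR")));     // 10/05/2015
        Console.WriteLine(value.ToString("d", CultureInfo.InvariantCulture)); // 05/10/2015
    }
}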

Converting unnecessarily

As a final common problem, you may be performing more conversions than you should be. For example, if you’ve got a DateTime field in the database but you’re passing a value as a string in your SQL parameter (you are using parameterized SQL, right?) then you probably shouldn’t be. Most platforms allow parameters to be specified as the value in a “native” representation. Likewise when you fetch a value, don’t just call toString on it and then parse the result – if the value is a date/time value, it should already be in a native representation; a simple cast (or call to the type-specific method) should be enough.
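
As a sketch of the parameterized approach (with table and column names invented for the example):

using System;
using System.Data;
using System.Data.SqlClient;

class NativeParameterDemo
{
    static void StoreTimestamp(SqlConnection connection, DateTime timestamp)
    {
        using (var command = new SqlCommand(
            "INSERT INTO Events (OccurredAt) VALUES (@occurredAt)", connection))
        {
            // The value goes across in its native representation - no string
            // conversion, no pattern, no culture to get wrong.
            command.Parameters.Add("@occurredAt", SqlDbType.DateTime2).Value = timestamp;
            command.ExecuteNonQuery();
        }
    }
}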

Conclusion

Date/time text handling is fraught with problems, as a simple look at Stack Overflow shows. Be careful, make sure you know exactly what you’re converting from and to, and check exactly what you’re specifying vs what you’re leaving implicit.

Backward compatibility pain

I’ve been getting a bit cross about backward compatibility recently. This post contains two examples of backward incompatibilities in .NET 4.6, and one example of broken code which isn’t being fixed, for backward compatibility reasons.

Let me start off by saying this post is not meant to be seen as an attack on Microsoft. I suspect that some people will read that first sentence as “This post is an attack on Microsoft, but I don’t want to say so.” It really isn’t. With my naturally positive disposition, I’m going to assume that the people behind the code and decisions in the examples below are smart people who have more information than I do. Their decisions may prove annoying to me personally, but that doesn’t mean those decisions are bad ones for the world at large.

The purpose of this post is partly just because I think readers will find it interesting, and partly to show how there are different things to consider when it comes to backward compatibility in APIs.

Example 1: Enumerable.OrderByDescending is broken

OrderByDescending is broken by the unfortunate combination of three facts:

  • IComparer<T>.Compare is allowed to return any integer value, including int.MinValue. The return value is effectively meant to represent one of three results:
    • the first argument is “earlier than” the second (return a negative integer)
    • the two arguments are equal in terms of this comparison (return 0)
    • the first argument is “later than” the second (return a positive integer)
  • -int.MinValue (the unary negation of int.MinValue) is still int.MinValue, because the “natural” result would be outside the range of int. (Think about sbyte as being in the range -128 to 127 inclusive… what would -(-128) in sbyte arithmetic return?)
  • OrderByDescending uses unary negation to attempt to reverse the order returned by an “ascending” comparer.
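
To make the second of those facts concrete, this snippet prints True:

using System;

class NegationDemo
{
    static void Main()
    {
        // Negating int.MinValue wraps straight back round to int.MinValue.
        Console.WriteLine(unchecked(-int.MinValue) == int.MinValue);
    }
}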

I’ve blogged about this before, but for the sake of completeness, here’s an example showing that it’s broken. We use a custom comparer which delegates to a normal string comparer – but only ever returns int.MinValue, 0 or int.MaxValue. Just to reiterate, this is an entirely legitimate comparer.

using System;
using System.Collections.Generic;
using System.Linq;

class OrderByDescendingBug
{
    static void Main()
    {
        var comparer = new MaximalComparer<string>(Comparer<string>.Default);
        string[] input = { "apples", "carrots", "doughnuts", "bananas" };

        var sorted = input.OrderByDescending(x => x, comparer);
        Console.WriteLine(string.Join(", ", sorted));
    }
}

class MaximalComparer<T> : IComparer<T>
{
    private readonly IComparer<T> original;

    public MaximalComparer(IComparer<T> original)
    {
        this.original = original;
    }

    public int Compare(T first, T second)
    {
        int originalResult = original.Compare(first, second);
        return originalResult == 0 ? 0
            : originalResult < 0 ? int.MinValue
            : int.MaxValue;
    }
}

We would like the result of this program to be “doughnuts, carrots, bananas, apples” but on my machine (using .NET 4.6 from VS2015 CTP6) it’s “carrots, doughnuts, apples, bananas”.

Naturally, when I first discovered this bug, I filed it in Connect. Unfortunately, the bug has been marked as closed. This comment was logged in 2011:

Swapping arguments in the call to comparer.Compare as you point out would be the most straightforward and general way to support this. However, while well-behaved implementations of comparer.Compare should handle this fine, there may be some implementations out there with subtle bugs that are not expecting us to reverse the order in which we supply these arguments for a given list. Given our focus on runtime compatibility for the next release, we won’t be able to fix this bug in the next version of Visual Studio, though we’ll definitely keep this in mind for a future release!

Fast, backward compatible, correct – pick any two

The clean solution here – reversing the order of the arguments – isn’t the only way of correcting it. We could use:

return -Math.Sign(original.Compare(x, y));

This still uses unary negation, but now it’s okay, because Math.Sign will only return -1, 0 or 1. It’s very slightly more expensive than the clean solution, of course – there’s the call to Math.Sign and the unary negation. Still, at least it works.

What I object to here is the pandering to incorrect code (implementations of IComparer<T> which don’t obey its contract, by making assumptions about the order in which values will be passed) at the expense of correct code (such as the example above; the use of int.MinValue is forced here, but it can crop up naturally too – in a far harder-to-reproduce way, of course). While I can (unfortunately) believe that there are implementations which really are that broken, I don’t think the rest of us should have to suffer for it. I don’t think we should have to suffer at all, but I’d rather suffer a slight inefficiency (the additional Math.Sign call, which may well be JIT-compiled into a single machine instruction – I haven’t checked) than suffer the current correctness issue.

Example 2: TimeZoneInfo becomes smarter in .NET 4.6

A long time ago, Windows time zones had no historical information – they were just a single pair of rules about when the zone started and stopped observing daylight saving time (assuming it did at all).

That improved over time, so that a time zone had a set of adjustment rules, each of which would be in force for a certain portion of history. This made it possible to represent the results of the Energy Policy Act of 2005 for example. These are represented in .NET using TimeZoneInfo.AdjustmentRule, which is slightly under-documented and has suffered from some implementation issues in the past. (There’s also the matter of the data used, but I treat that as a different issue.)

Bugs aside, the properties of TimeZoneInfo and its adjustment rules allowed an interested developer (one wanting to expose the same information in a different form for a better date/time API, as one entirely arbitrary example) to predict the results of the calculations within TimeZoneInfo itself – so the value returned by a call to TimeZoneInfo.GetUtcOffset(DateTime) could be predicted by looking at the standard UTC offset of the time zone, working out which rule was in effect for the specified DateTime, working out if that rule means that DST was being observed at the time, and adjusting the result accordingly.

As of .NET 4.6, it appears this will no longer be the case – not in a straightforward manner. One aspect of inflexibility in TimeZoneInfo is being eliminated: the inability to change standard offset. In the past, if a time zone changed its understanding of “standard time” (as the UK did between 1968 and 1971, for example), that couldn’t be cleanly represented in the TimeZoneInfo data model, leading to some very odd data with “backward” DST offsets to model the situation as nearly as possible.

Now, it seems that each adjustment rule also “knows” the difference between its standard offset and that of the zone as a whole. For the most part, this is a good thing. However, it’s a pain for anyone who works with TimeZoneInfo.AdjustmentRule directly, as the information simply isn’t available on the rule. (This is only a CTP of .NET 4.6, of course – it might become available before the final release.)

Fortunately, one can infer the information by asking the zone itself for the UTC offset of one arbitrary point in the adjustment rule, and then compare that with what you’d predict using just the properties of TimeZoneInfo and AdjustmentRule (taking into account the fact that the start of the rule may already be in DST). So long as the rule performs its internal calculations correctly (and from what I’ve seen, it’s now a lot better than it used to be, though not quite perfect yet) we can predict the result of GetUtcOffset for all other DateTime values.
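
In case it’s useful, here’s the sort of inference I mean – very much a sketch of my own, relying on the behaviour described above rather than on any documented API:

using System;

static class AdjustmentRuleInference
{
    // Hypothetical helper: work out how far a rule's standard offset differs
    // from the zone's base offset, by comparing what the zone actually says
    // with what the rule's public properties would predict.
    public static TimeSpan InferStandardOffsetDelta(
        TimeZoneInfo zone, TimeZoneInfo.AdjustmentRule rule)
    {
        // One arbitrary point within the rule...
        DateTime sample = rule.DateStart.AddDays(7);
        TimeSpan actual = zone.GetUtcOffset(sample);

        // ...and the prediction from the exposed properties alone.
        TimeSpan predicted = zone.BaseUtcOffset;
        if (zone.IsDaylightSavingTime(sample))
        {
            predicted += rule.DaylightDelta;
        }
        return actual - predicted;
    }
}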

It’s not clear to me why the information isn’t just exposed with a new property on the rule, however. It’s a change in terms of what’s available, sure – but anyone using the new implementation directly would have to know about the change anyway, as the results of using the exposed data no longer match the results of GetUtcOffset.

Example 3: PersianCalendar and leap years

If you thought the previous two examples were obscure, you’ll love this one.

PersianCalendar is changing in .NET 4.6 to use a more complicated leap year formula. The currently documented formula is:

A leap year is a year that, when divided by 33, has a remainder of 1, 5, 9, 13, 17, 22, 26, or 30.

So new PersianCalendar().IsLeapYear(1) has always returned true – until now. It turns out that Windows 10 is going to support the Persian Calendar (also known as the Solar Hijri calendar) in certain locales, and it’s going to do so “properly” – by which I mean, with a more complex leap year computation. This is what’s known as the “astronomical” Persian calendar and it follows the algorithm described in Calendrical Calculations. The BCL implementation is going to be consistent with that Windows calendar, which makes sense.
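
A quick way to see the change on any given framework is to compare the documented “simple” rule with whatever the installed PersianCalendar actually does. On .NET 4.5 and below I’d expect this loop to print nothing; on .NET 4.6 it should flag year 1, among others.

using System;
using System.Globalization;

class PersianLeapYearCheck
{
    static void Main()
    {
        var calendar = new PersianCalendar();
        int[] simpleRemainders = { 1, 5, 9, 13, 17, 22, 26, 30 };

        for (int year = 1; year <= 100; year++)
        {
            // The "simple" documented rule...
            bool simple = Array.IndexOf(simpleRemainders, year % 33) >= 0;
            // ...versus the installed BCL implementation.
            bool bcl = calendar.IsLeapYear(year);
            if (simple != bcl)
            {
                Console.WriteLine($"Year {year}: simple={simple}, BCL={bcl}");
            }
        }
    }
}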

It’s worth noting that this calendar has the same leap year pattern as the “simple” one for over 320 years around the modern era (Gregorian 1800 to Gregorian 2123) so it’s really only people dealing with long-past dates in the Persian calendar who will notice the difference. Of course, I noticed because I have a unit test checking that my implementation of the Persian calendar in Noda Time is equivalent to the BCL’s implementation. It was fine until I installed Visual Studio 2015 CTP6…

As it happens, there’s another Persian calendar to consider – the “arithmetic” or “algorithmic” Persian calendar proposed by Ahmad Birashk. This consists of three hierarchical cycles of years (either cycles, subcycles and sub-subcycles or cycles, grand cycles and great grand cycles depending on whether you start with the biggest kind and work in, or start at the smallest and work out).

For Noda Time 2.0, I’m now going to support all three forms: simple (for those who’d like compatibility with the “old” BCL implementation), astronomical and arithmetic.

Conclusion

Backwards compatibility is hard. In all of these cases there are reasons for the brokenness, whether that’s compatibility trumping correctness as in the first case, or a change in behaviour as in the other two. I think for localization-related code, there’s probably a greater tolerance of change – or there should be, as localization data changes reasonably frequently.

For the second and third cases, I think it’s reasonable to say that compatibility has been broken in a reasonable cause – particularly for the second case, where the time zone data can be much more sensible with the additional flexibility of changing the UTC offset of standard time over history. It’s just a shame there’s fall-out.

The changes I would make if I were the only developer in the world would be:

  • Fix the first case either by ignoring broken comparer implementations, or by taking the hit of calling Math.Sign.
  • Improve the second case by adding a new property to AdjustmentRule and publicising its existence in large, friendly letters.
  • Introduce a new class for the third case instead of modifying the behaviour of the existing class. That would certainly be best for me – but for most users, that would probably introduce more problems than it solved. (I suspect that most users of the Persian calendar won’t go outside the “safe” range where the simple and astronomical calendars are the same anyway.)

One of the joys of working on Noda Time 2.0 at the moment is that it’s a new major version and I am willing to break 1.x code… not gratuitously, of course, but where there’s good reason. Even so, there are painful choices to be made in some cases, where there’s a balance between a slight ongoing smell or a clean break that might cause issues for some users if they’re not careful. I can only imagine what the pain is like when maintaining a large and mature codebase like the BCL – or the Windows API itself.

New blog hosting

As some of you have noticed (and let me know), my old blog hosting provider recently moved off Community Server to WordPress. I figured that as all the links were being broken anyway, now would be a good time to move off msmvps.com too. The old posts are still there, but my blog’s new home is codeblog.jonskeet.uk. Hopefully I’ve fixed (almost) all the internal links from one blog post to another, and Nick Craver has generously agreed to fix up links on Stack Overflow, too. I’ll fix up my web site references when I get the chance, and hopefully things will get back to (mostly) normal as soon as possible. Obviously there’ll be plenty of links elsewhere around the web which I can’t fix, but I suspect I’m my own primary consumer, so to speak.

There are still bound to be teething issues, commenting problems, goodness knows what – but hopefully the blog itself will be in a better state than it was before, overall.

Additionally, I’m hoping to gradually (very gradually) coalesce my online presence around the jonskeet.uk domain. I haven’t set that up at all yet, but that’s the plan.

Apologies for link breakage, and fingers crossed it’ll be relatively smooth sailing from here on.