Common mistakes in date/time formatting and parsing

There are many, many questions on Stack Overflow about both parsing and formatting date/time values. (I use the term “date/time” to mean pretty much “any type of chronological information” – dates, times of day, instants in time etc.) Given how often the same kinds of mistakes are made, I thought it would be handy to have a blog post to refer to.

This post assumes you already know the basic operations of formatting and parsing, in terms of the appropriate types to use.

Pattern woes

There are three broad classes of issue here – one of which is usually “just” a matter of carelessness, and the others of which still surprise me in terms of sheer wrongness.

Pattern capitalization issues

This is an insidious problem, because in some cases you may get the right values, but not all of the time. I suspect it usually comes up again due to copy and paste, but often from specifications rather than other code – in a specification, it’s pretty clear what "YYYY-MM-DD HH:MM:SS" means as a date/time format, but that doesn’t mean it’s the right pattern to put in code.

The main thing to do is read the documentation carefully. Of course, some platforms have clearer documentation than others, but most are at least “good enough”. For the Java APIs, the pattern specifiers are generally documented with the formatting classes themselves; for .NET’s built-in classes you want the custom date and time format strings and standard date and time format strings MSDN pages, and for Noda Time follow the various options from the text handling part of the user guide. (For other platforms, use your common sense. :)

The most common mistakes here are:

  • Using mm for months or MM for minutes, rather than vice versa. I’ve seen this mistake both ways round.
  • Using hh for “hour of day” when HH is intended. H is in the range 0-23; h is in the range 1-12. h is usually used singly (rather than requiring exactly two digits), and almost always in conjunction with an AM/PM specifier – as otherwise it’s ambiguous. H is usually used as HH, so that 5am is represented as “05” for example.
  • Using YYYY for year – in Java and Noda Time, Y is used for week-year rather than normal calendar year; it’s usually used in conjunction with “week of year” and “day of week”, but it’s much less common than yyyy.
  • Using DD for “day of month” when in Java it actually means “day of year”.
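
As a concrete illustration, here’s a minimal sketch (mine, not taken from any particular question) showing how a capitalization slip can produce plausible-looking but wrong output rather than an obvious error:

using System;
using System.Globalization;

class PatternCapitalizationDemo
{
    static void Main()
    {
        var value = new DateTime(2015, 5, 10, 16, 43, 0);

        // Correct: MM is the month, mm the minutes, HH the 24-hour clock.
        Console.WriteLine(value.ToString("yyyy-MM-dd HH:mm:ss", CultureInfo.InvariantCulture));
        // 2015-05-10 16:43:00

        // Swapped capitalization: mm formats the minutes where the month was intended,
        // and MM the month where the minutes were intended.
        Console.WriteLine(value.ToString("yyyy-mm-dd HH:MM:ss", CultureInfo.InvariantCulture));
        // 2015-43-10 16:05:00 - plausible-looking, but wrong.
    }
}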

Broad pattern incompatibilities

I’m surprised by how often I see code like this:

var text = "Tue, 5 May 2015 3:15pm";
var dateTime = DateTime.ParseExact(
    text,
    "yyyy-MM-dd'T'HH:mm:ss");

Here the pattern and the actual data are entirely different, and I get the impression that the author has copied the pattern from another piece of code without any thought about what the magic string "yyyy-MM-dd'T'HH:mm:ss" is there for.

I suspect it goes without saying for most readers, but you should never copy code from elsewhere into your own code without understanding how it works, or which parts you may potentially need to modify.

The result of this sort of error is usually a complete failure to parse, which is at least simpler to find than the “plausible but not quite correct” pattern issue.

Pattern incompatibility issues

Some developers assume that a pattern which works in Java will work in Python, or the equivalent for any other pair of platforms. Don’t make this assumption. Always read the documentation – and if you’re porting code from one platform to another, you’ll need to “decode” the pattern with one set of documentation, then “encode” it with the other.
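
As a small illustration (my own sketch; the Java equivalents mentioned in the comments are for flavour only – check the other platform’s documentation rather than trusting them), the same components are often written quite differently across platforms:

using System;
using System.Globalization;

class PortingPatternsDemo
{
    static void Main()
    {
        var value = new DateTime(2015, 5, 10, 16, 43, 0, 123);

        // .NET: fff = milliseconds, tt = AM/PM designator.
        // Java's SimpleDateFormat would use SSS and a for the same components,
        // so this pattern can't simply be copied from one platform to another.
        Console.WriteLine(value.ToString(
            "yyyy-MM-dd HH:mm:ss.fff tt", CultureInfo.InvariantCulture));
        // 2015-05-10 16:43:00.123 PM
    }
}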

Time zone issues

Understanding time zones

There are two common issues when understanding what a time zone is to start with.

The first is to assume that a UTC offset (e.g. “+8 hours”) is the same as a time zone. This is an understandable mistake, given that a lot of documentation (from organizations which really should know better) misuses the terminology. The UTC offset is the difference between UTC and local time at a particular instant – so for example, while I’m writing this, I’m in the UK time zone which is currently at UTC+1. However, in the winter (in the same time zone) it will be at UTC+0. So if you have a value of (say) “2015-05-10T16:43:00+0100” that only tells you the UTC offset – it doesn’t tell you the time zone. There may well be multiple time zones with the same offset at that particular time, but which will have different offsets at different times.
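
Here’s a small sketch of that using TimeZoneInfo (the IDs used are the Windows zone IDs for the UK and for West Central Africa at UTC+01; substitute whichever zones your platform knows about):

using System;

class OffsetIsNotAZoneDemo
{
    static void Main()
    {
        var london = TimeZoneInfo.FindSystemTimeZoneById("GMT Standard Time");
        var lagos = TimeZoneInfo.FindSystemTimeZoneById("W. Central Africa Standard Time");

        // In May 2015 both zones happen to be at UTC+1...
        var summer = new DateTimeOffset(2015, 5, 10, 16, 43, 0, TimeSpan.FromHours(1));
        Console.WriteLine(london.GetUtcOffset(summer)); // 01:00:00
        Console.WriteLine(lagos.GetUtcOffset(summer));  // 01:00:00

        // ... but they disagree in December, so "+01:00" alone doesn't identify either of them.
        var winter = new DateTimeOffset(2015, 12, 10, 16, 43, 0, TimeSpan.Zero);
        Console.WriteLine(london.GetUtcOffset(winter)); // 00:00:00
        Console.WriteLine(lagos.GetUtcOffset(winter));  // 01:00:00
    }
}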

The second mistake is to think that an abbreviation such as “EST” or “GMT” identifies a time zone. It doesn’t, in two ways:

  • A single time zone often uses multiple abbreviations over time. For example, “Pacific Time” varies between PST (Pacific Standard Time) and PDT (Pacific Daylight Time). It’s unfortunate that some people use the abbreviation for standard time even when they mean the general time zone – so even though currently (at the time of writing) Pacific Time is in PDT (UTC-7), some people would write the local time with “PST” at the end. Grr. Avoid abbreviations if you possibly can.
  • The same abbreviation may be used in multiple time zones, or even at different points in time to mean different things within the same time zone. For example, “BST” can mean British Summer Time in Europe/London (standard time of UTC+0, plus 1 hour of daylight saving time), British Standard Time in Europe/London (standard time of UTC+1, with no daylight saving time, around 1970 only) and Bougainville Standard Time in Pacific/Bougainville (UTC+11). Avoid abbreviations if you possibly can.

Using time zones in text formatting/parsing

First, you need to understand exactly what the library you’re using does with time zones, and what the types you’re using represent. One of the most common misconceptions here is with java.util.Date – this is just an instant in time, with no concept of a time zone or calendar system. The fact that the string returned from Date.toString always uses the system default time zone is unfortunately misleading in this respect, and causes developers to ask how to “convert” a Date from one time zone to another.

Next, you need to understand exactly what your data represents. In my experience, most textual data either specifies a date and/or time without a given time zone or it specifies a date and time with a UTC offset. When no time zone information is present, you may know the time zone it’s meant to refer to, or you may not. If you’re using a library which has multiple different types to represent different kinds of information (e.g. Joda Time, java.time or Noda Time) I personally find it clearest to parse to a type that closest represents the information actually stated in the text, and then convert it to something else where appropriate.
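
For example, here’s a rough sketch using Noda Time (it assumes the data happens to refer to Europe/London – substitute whatever time zone knowledge you actually have):

using System;
using NodaTime;
using NodaTime.Text;

class ParseClosestTypeDemo
{
    static void Main()
    {
        // The text states a date and time, but no offset or time zone...
        string text = "2015-05-10T16:43:00";

        // ... so parse it to LocalDateTime, the type representing exactly that information.
        var pattern = LocalDateTimePattern.CreateWithInvariantCulture("yyyy-MM-dd'T'HH:mm:ss");
        LocalDateTime local = pattern.Parse(text).Value;

        // Only then apply whatever time zone knowledge you have, as a separate, explicit step.
        DateTimeZone zone = DateTimeZoneProviders.Tzdb["Europe/London"];
        ZonedDateTime zoned = local.InZoneLeniently(zone);
        Console.WriteLine(zoned);
    }
}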

You definitely need to be aware when the parsing operation is going to impose any sort of time zone understanding on your data. This is the case with SimpleDateFormat in Java and with DateTime.ParseExact and friends in .NET. For SimpleDateFormat, unless you explicitly set a time zone (or the pattern includes a UTC offset), the system default time zone is used – this is usually not what you want. Parsing in .NET allows you to specify how you want the text to be understood, but you need to be careful. (The fact that DateTime sometimes represents a value in the system default time zone, sometimes a value in UTC, and sometimes a value with no associated time zone makes this all tricky.)
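
In .NET, for example, DateTimeStyles is where that understanding is made explicit – a quick sketch:

using System;
using System.Globalization;

class ParseStylesDemo
{
    static void Main()
    {
        string text = "2015-05-10 16:43:00";

        // Without any styles, the result has DateTimeKind.Unspecified -
        // no time zone understanding has been imposed yet.
        var unspecified = DateTime.ParseExact(
            text, "yyyy-MM-dd HH:mm:ss", CultureInfo.InvariantCulture);
        Console.WriteLine(unspecified.Kind); // Unspecified

        // If you know the text is meant to be UTC, say so explicitly.
        var utc = DateTime.ParseExact(
            text, "yyyy-MM-dd HH:mm:ss", CultureInfo.InvariantCulture,
            DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal);
        Console.WriteLine(utc.Kind); // Utc
    }
}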

Locale / culture issues

Most libraries allow you to specify which culture to use when parsing (or formatting) data. This is a two-edged sword:

  • If you’re formatting a value to be displayed directly to an end user, that’s great: they can see the month name in their own language, etc. In this situation, you’ll typically use a “standard” format (e.g. “the short date/time format”)
  • If you’re formatting or parsing a value which is designed to be machine-readable (e.g. passed to a web service) then you almost certainly want the invariant culture instead of a user-specific culture. In this situation, you’ll typically use a “custom” format (e.g. “yyyy-MM-dd’T’HH:mm:ss”) or a specific culture-invariant format.

Culture can affect several aspects of handling conversions:

  • The calendar system used (e.g. the Gregorian calendar vs an Islamic calendar)
  • The “standard” formats used (e.g. month/day/year vs day/month/year)
  • The separators used (e.g. - vs / for date separators)
  • The month and day names used
  • The number system used
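
A short sketch of the two situations (the exact output depends on the culture data installed on the machine):

using System;
using System.Globalization;

class CultureDemo
{
    static void Main()
    {
        var value = new DateTime(2015, 5, 10);

        // For humans: a "standard" format, in the user's culture.
        Console.WriteLine(value.ToString("d", CultureInfo.GetCultureInfo("en-US"))); // 5/10/2015
        Console.WriteLine(value.ToString("d", CultureInfo.GetCultureInfo("fr-FR"))); // 10/05/2015

        // For machines: a custom pattern with the invariant culture.
        Console.WriteLine(value.ToString("yyyy-MM-dd'T'HH:mm:ss", CultureInfo.InvariantCulture));
    }
}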

Converting unnecessarily

As a final common problem, you may be performing more conversions than you should be. For example, if you’ve got a DateTime field in the database but you’re passing a value as a string in your SQL parameter (you are using parameterized SQL, right?) then you probably shouldn’t be. Most platforms allow parameters to be specified as the value in a “native” representation. Likewise when you fetch a value, don’t just call toString on it and then parse the result – if the value is a date/time value, it should already be in a native representation; a simple cast (or call to the type-specific method) should be enough.
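
For example, a sketch with System.Data.SqlClient (the table and column names here are made up, but the idea is the same for any provider):

using System;
using System.Data;
using System.Data.SqlClient;

class ParameterDemo
{
    static void InsertEvent(SqlConnection connection, DateTime occurredAt)
    {
        using (var command = new SqlCommand(
            "INSERT INTO Events (OccurredAt) VALUES (@occurredAt)", connection))
        {
            // Pass the value natively - no ToString here, and no parsing when reading it back.
            command.Parameters.Add("@occurredAt", SqlDbType.DateTime2).Value = occurredAt;
            command.ExecuteNonQuery();
        }
    }
}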

Conclusion

Date/time text handling is fraught with problems, as a simple look at Stack Overflow shows. Be careful, make sure you know exactly what you’re converting from and to, and check exactly what you’re specifying vs what you’re leaving implicit.

Precedence: ordering or grouping?

As I’ve mentioned before, I’m part of the technical group looking at updating the ECMA-334 C# standard to reflect the C# 5 Microsoft specification. I recently made a suggestion that I thought would be uncontroversial, but which caused some discussion – and prompted this “request for comment” post, effectively.

What does the standard say about precedence?

The current proposed standard includes the following text:

The order of evaluation of operators in an expression is determined by the precedence and associativity of the operators (§13.4.2).

Operands in an expression are evaluated from left to right.

When an expression contains multiple operators, the precedence of the operators controls the order in which the individual operators are evaluated. [Note: For example, the expression x + y * z is evaluated as x + (y * z) because the * operator has higher precedence than the binary + operator. end note]

I like the example in the note, but I’m not keen on the rest of the wording. It’s very easy to miss the difference between operands for any given expression always being evaluated left to right, and operators being evaluated in an order determined by precedence.

I’ve always thought about precedence in terms of grouping, not ordering – I think of one operator as “binding tighter” than another rather than as being “executed before” it. When I consider precedence, I mentally apply brackets to group operators and operands together explicitly. Eric Lippert has blogged along similar lines but I wouldn’t want to put words into his mouth by suggesting he agrees with me. Interestingly, he includes:

Order of evaluation rules describe the order in which each operand in an expression is evaluated.

That’s certainly true about the order of evaluation of operands in an expression, but as seen earlier precedence is also specified in terms of “order of evaluation”. To me, that’s what makes the standard confusing.

Importantly though, when I expressed this in a meeting, smarter people than me said that they thought of precedence precisely in terms of order of evaluation. Grouping was just another way of looking at it, but a sort of secondary approach.

What does “order of evaluation” even mean?

Let’s take a closer look at the wording of the standard. It’s fairly clear what “operands in an expression are evaluated from left to right” means (ignoring the possibility that the “left” operand actually occurs to the right of the “right” operand physically due to line breaks). The left operand is completely evaluated, from start to finish, before the right operand is evaluated. Great.

But what about the “order of evaluation of operators”? Here it’s trickier. Does “evaluating” a + b include first evaluating a and b? Should “order of evaluation” mean “order of starting to evaluate”? If so, the standard would actually be inaccurate. Let’s go back to the example of x + y * z. We can view that as a sequence of steps:

  1. Evaluate x.
  2. Evaluate y.
  3. Evaluate z.
  4. Multiply the results of steps 2 and 3.
  5. Add the results of steps 1 and 4.

Note that the multiplication definitely occurs before the addition. So that looks right. But if I rewrite it just a little to give more context (sorry about the bullets; it’s the only way I could get the formatting right):

  • Evaluate x + y * z
    • 1 Evaluate x
    • 2 Evaluate y * z
      • 2a Evaluate y
      • 2b Evaluate z
      • 2c Multiply the results of steps 2a and 2b; this is the result of step 2
    • 3 Add the results of steps 1 and 2; this is the result of the expression

At that point, it’s clear that we’ve started evaluating the + operator before we’ve started evaluating the * operator.
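
A quick sketch (my own) makes the distinction concrete: the operands really are evaluated left to right, even though the multiplication completes before the addition does:

using System;

class EvaluationOrderDemo
{
    static int Log(string name, int value)
    {
        Console.WriteLine("Evaluating " + name);
        return value;
    }

    static void Main()
    {
        int result = Log("x", 1) + Log("y", 2) * Log("z", 3);
        Console.WriteLine(result);
        // Output: Evaluating x, Evaluating y, Evaluating z, then 7 -
        // operands left to right, multiplication completed before the addition.
    }
}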

Similarly, if we view it as a tree:

    +
   / \
  x   *
     / \
    y   z

… we hit the + node before we hit the * node.

So from the “starting to evaluate” perspective, precedence appears to fall apart. The ordering only makes sense when you start talking about the operator performing its duty with the already-evaluated operands – which is tricky for ??, ?. and ?: which don’t always evaluate all their operands. I suspect that’s fixable though, with careful wording – and I’m influenced by the fact that the term “precedence” is naturally ordering-related (one thing preceding another). Maybe it’s as simple as talking about the order of completing evaluation of operands.

Non-conclusion: over to you

So, what should we do in the standard? Given the range of views on the technical group, I said I’d write this blog post and canvass opinion. Readers: how do you think about precedence? How should the standard talk about precedence? Is any aspect of the existing wording (in the C# 5 specification it’s section 7.3.1; in ECMA-334 4th edition it’s section 14.2.1) particularly helpful or confusing?

The technical group is full of very smart people – all of them smarter than me and with a deeper computer science (and/or C# compiler implementation) background. That makes me simultaneously nervous of proposing changes – but also confident in my role of “interested amateur” in that if I find something confusing, I suspect some other readers will too.

I’m in no way saying it’s wrong to think of precedence in terms of ordering – albeit with a more precise definition of ordering than we’ve got now – but I’m suggesting it’s not the most helpful way of expressing it for readers. Just to be entirely clear, I’m not suggesting any sort of semantic change – if we change the wording of the standard, it would be purely about clarification, with no behavioural change.

I had originally intended to make this blog post as “on the fence” as possible, but the more I’ve looked at it, the more I’ve reinforced my original position – I can only apologise for not being terribly even-handed. I’m very happy to be corrected though, and look forward to reading plenty of comments. Don’t be shy.

Backward compatibility pain

I’ve been getting a bit cross about backward compatibility recently. This post contains two examples of backward incompatibilities in .NET 4.6, and one example of broken code which isn’t being fixed, for backward compatibility reasons.

Let me start off by saying this post is not meant to be seen as an attack on Microsoft. I suspect that some people will read that first sentence as “This post is an attack on Microsoft, but I don’t want to say so.” It really isn’t. With my naturally positive disposition, I’m going to assume that the people behind the code and decisions in the examples below are smart people who have more information than I do. Their decisions may prove annoying to me personally, but that doesn’t mean those decisions are bad ones for the world at large.

The purpose of this post is partly just because I think readers will find it interesting, and partly to show how there are different things to consider when it comes to backward compatibility in APIs.

Example 1: Enumerable.OrderByDescending is broken

OrderByDescending is broken due to an unfortunate combination of three facts:

  • IComparer.Compare is allowed to return any integer value, including int.MinValue. The return value is effectively meant to represent one of three results:
    • the first argument is “earlier than” the second (return a negative integer)
    • the two arguments are equal in terms of this comparison (return 0)
    • the first argument is “later than” the second (return a positive integer)
  • -int.MinValue (the unary negation of int.MinValue) is still int.MinValue, because the “natural” result would be outside the range of int. (Think about sbyte as being in the range -128 to 127 inclusive… what would -(-128) in sbyte arithmetic return?)
  • OrderByDescending uses unary negation to attempt to reverse the order returned by an “ascending” comparer.

I’ve blogged about this before, but for the sake of completeness, here’s an example showing that it’s broken. We use a custom comparer which delegates to a normal string comparer – but only ever returns int.MinValue, 0 or int.MaxValue. Just to reiterate, this is an entirely legitimate comparer.

using System;
using System.Collections.Generic;
using System.Linq;

class OrderByDescendingBug
{
    static void Main()
    {
        var comparer = new MaximalComparer<string>(Comparer<string>.Default);
        string[] input = { "apples", "carrots", "doughnuts", "bananas" };

        var sorted = input.OrderByDescending(x => x, comparer);
        Console.WriteLine(string.Join(", ", sorted));
    }
}

class MaximalComparer<T> : IComparer<T>
{
    private readonly IComparer<T> original;

    public MaximalComparer(IComparer<T> original)
    {
        this.original = original;
    }

    public int Compare(T first, T second)
    {
        int originalResult = original.Compare(first, second);
        return originalResult == 0 ? 0
            : originalResult < 0 ? int.MinValue
            : int.MaxValue;
    }
}

We would like the result of this program to be “doughnuts, carrots, bananas, apples” but on my machine (using .NET 4.6 from VS2015 CTP6) it’s “carrots, doughnuts, apples, bananas”.

Naturally, when I first discovered this bug, I filed it in Connect. Unfortunately, the bug has been marked as closed. This comment was logged in 2011:

Swapping arguments in the call to comparer.Compare as you point out would be the most straightforward and general way to support this. However, while well-behaved implementations of comparer.Compare should handle this fine, there may be some implementations out there with subtle bugs that are not expecting us to reverse the order in which we supply these arguments for a given list. Given our focus on runtime compatibility for the next release, we won’t be able to fix this bug in the next version of Visual Studio, though we’ll definitely keep this in mind for a future release!

Fast, backward compatible, correct – pick any two

The clean solution here – reversing the order of the arguments – isn’t the only way of correcting it. We could use:

return -Math.Sign(original.Compare(x, y));

This still uses unary negation, but now it’s okay, because Math.Sign will only return -1, 0 or 1. It’s very slightly more expensive than the clean solution, of course – there’s the call to Math.Sign and the unary negation. Still, at least it works.

What I object to here is the pandering to incorrect code (implementations of IComparer which don’t obey its contract, by making assumptions about the order in which values will be passed) at the expense of correct code (such as the example above; the use of int.MinValue is forced here, but it can crop up naturally too – in a far harder-to-reproduce way, of course). While I can (unfortunately) believe that there are implementations which really are that broken, I don’t think the rest of us should have to suffer for it. I don’t think we should have to suffer at all, but I’d rather suffer a slight piece of inefficiency (the additional Math.Sign call, which may well be JIT-compiled into a single machine instruction – I haven’t checked) than suffer the current correctness issue.
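
For completeness, one user-side workaround (my own sketch, not anything in the BCL) is to avoid the negation entirely by wrapping the comparer and then using OrderBy instead of OrderByDescending:

using System.Collections.Generic;

// Swaps the arguments rather than negating the result, so int.MinValue is handled correctly.
sealed class ReverseComparer<T> : IComparer<T>
{
    private readonly IComparer<T> original;

    public ReverseComparer(IComparer<T> original)
    {
        this.original = original;
    }

    public int Compare(T x, T y)
    {
        return original.Compare(y, x);
    }
}

// Usage: input.OrderBy(x => x, new ReverseComparer<string>(comparer))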

Example 2: TimeZoneInfo becomes smarter in .NET 4.6

A long time ago, Windows time zones had no historical information – they were just a single pair of rules about when the zone started and stopped observing daylight saving time (assuming it did at all).

That improved over time, so that a time zone had a set of adjustment rules, each of which would be in force for a certain portion of history. This made it possible to represent the results of the Energy Policy Act of 2005 for example. These are represented in .NET using TimeZoneInfo.AdjustmentRule, which is slightly under-documented and has suffered from some implementation issues in the past. (There’s also the matter of the data used, but I treat that as a different issue.)

Bugs aside, the properties of TimeZoneInfo and its adjustment rules allowed an interested developer (one wanting to expose the same information in a different form for a better date/time API, as one entirely arbitrary example) to predict the results of the calculations within TimeZoneInfo itself – so the value returned by a call to TimeZoneInfo.GetUtcOffset(DateTime) could be predicted by looking at the standard UTC offset of the time zone, working out which rule was in effect for the specified DateTime, working out if that rule means that DST was being observed at the time, and adjusting the result accordingly.

As of .NET 4.6, it appears this will no longer be the case – not in a straightforward manner. One aspect of inflexibility in TimeZoneInfo is being eliminated: the inability to change standard offset. In the past, if a time zone changed its understanding of “standard time” (as the UK did between 1968 and 1971, for example), that couldn’t be cleanly represented in the TimeZoneInfo data model, leading to some very odd data with “backward” DST offsets to model the situation as nearly as possible.

Now, it seems that each adjustment rule also “knows” the difference between its standard offset and that of the zone as a whole. For the most part, this is a good thing. However, it’s a pain for anyone who works with TimeZoneInfo.AdjustmentRule directly, as the information simply isn’t available on the rule. (This is only a CTP of .NET 4.6, of course – it might become available before the final release.)

Fortunately, one can infer the information by asking the zone itself for the UTC offset of one arbitrary point in the adjustment rule, and then compare that with what you’d predict using just the properties of TimeZoneInfo and AdjustmentRule (taking into account the fact that the start of the rule may already be in DST). So long as the rule performs its internal calculations correctly (and from what I’ve seen, it’s now a lot better than it used to be, though not quite perfect yet) we can predict the result of GetUtcOffset for all other DateTime values.
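
Something along these lines is what I have in mind – very much a heuristic sketch rather than production code, and it assumes that sampling a point shortly after the rule starts is representative:

using System;

static class AdjustmentRuleHelper
{
    static TimeSpan InferStandardOffsetDelta(TimeZoneInfo zone, TimeZoneInfo.AdjustmentRule rule)
    {
        DateTime sample = rule.DateStart.AddDays(1);

        // What the zone actually says about that point in time...
        TimeSpan actual = zone.GetUtcOffset(sample);

        // ... versus what the pre-4.6 properties alone would predict,
        // allowing for the start of the rule already being in DST.
        TimeSpan predicted = zone.BaseUtcOffset;
        if (zone.IsDaylightSavingTime(sample))
        {
            predicted += rule.DaylightDelta;
        }
        return actual - predicted;
    }
}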

It’s not clear to me why the information isn’t just exposed with a new property on the rule, however. It’s a change in terms of what’s available, sure – but anyone using the new implementation directly would have to know about the change anyway, as the results of using the exposed data no longer match the results of GetUtcOffset.

Example 3: PersianCalendar and leap years

If you thought the previous two examples were obscure, you’ll love this one.

PersianCalendar is changing in .NET 4.6 to use a more complicated leap year formula. The currently documented formula is:

A leap year is a year that, when divided by 33, has a remainder of 1, 5, 9, 13, 17, 22, 26, or 30.
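
In code form, that documented rule is just this (a sketch of the old behaviour, not of the new astronomical computation):

static class PersianLeapYears
{
    // The "simple" formula: pre-.NET 4.6 PersianCalendar.IsLeapYear behaviour.
    static bool IsSimpleLeapYear(int year)
    {
        int remainder = year % 33;
        return remainder == 1 || remainder == 5 || remainder == 9 || remainder == 13
            || remainder == 17 || remainder == 22 || remainder == 26 || remainder == 30;
    }
}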

So new PersianCalendar().IsLeapYear(1) has always returned true – until now. It turns out that Windows 10 is going to support the Persian Calendar (also known as the Solar Hijri calendar) in certain locales, and it’s going to do so “properly” – by which I mean, with a more complex leap year computation. This is what’s known as the “astronomical” Persian calendar and it follows the algorithm described in Calendrical Calculations. The BCL implementation is going to be consistent with that Windows calendar, which makes sense.

It’s worth noting that this calendar has the same leap year pattern as the “simple” one for over 320 years around the modern era (Gregorian 1800 to Gregorian 2123) so it’s really only people dealing with long-past dates in the Persian calendar who will notice the difference. Of course, I noticed because I have a unit test checking that my implementation of the Persian calendar in Noda Time is equivalent to the BCL’s implementation. It was fine until I installed Visual Studio 2015 CTP6…

As it happens, there’s another Persian calendar to consider – the “arithmetic” or “algorithmic” Persian calendar proposed by Ahmad Birashk. This consists of three hierarchical cycles of years (either cycles, subcycles and sub-subcycles or cycles, grand cycles and great grand cycles depending on whether you start with the biggest kind and work in, or start at the smallest and work out).

For Noda Time 2.0, I’m now going to support all three forms: simple (for those who’d like compatibility with the “old” BCL implementation), astronomical and arithmetic.

Conclusion

Backwards compatibility is hard. In all of these cases there are reasons for the pain – whether that’s correctness being sacrificed for compatibility, as in the first case, or compatibility being sacrificed for a change in behaviour, as in the other two. I think for localization-related code, there’s probably a greater tolerance of change – or there should be, as localization data changes reasonably frequently.

For the second and third cases, I think it’s reasonable to say that compatibility has been broken in a reasonable cause – particularly for the second case, where the time zone data can be much more sensible with the additional flexibility of changing the UTC offset of standard time over history. It’s just a shame there’s fall-out.

The changes I would make if I were the only developer in the world would be:

  • Fix the first case either by ignoring broken comparer implementations, or by taking the hit of calling Math.Sign.
  • Improve the second case by adding a new property to AdjustmentRule and publicising its existence in large, friendly letters.
  • Introduce a new class for the third case instead of modifying the behaviour of the existing class. That would certainly be best for me – but for most users, that would probably introduce more problems than it solved. (I suspect that most users of the Persian calendar won’t go outside the “safe” range where the simple and astronomical calendars are the same anyway.)

One of the joys of working on Noda Time 2.0 at the moment is that it’s a new major version and I am willing to break 1.x code… not gratuitously, of course, but where there’s good reason. Even so, there are painful choices to be made in some cases, where there’s a balance between a slight ongoing smell or a clean break that might cause issues for some users if they’re not careful. I can only imagine what the pain is like when maintaining a large and mature codebase like the BCL – or the Windows API itself.

Clean event handler invocation with C# 6

The problem

Invoking event handlers in C# has always been a bit of a pain, because an event with no subscribers is usually represented as a null reference. This leads to code like this:

public event EventHandler Foo;

public void OnFoo()
{
    EventHandler handler = Foo;
    if (handler != null)
    {
        handler(this, EventArgs.Empty);
    }   
}

It’s important to use the handler local variable, as if instead you access the field twice, it’s possible that the last subscriber will unsubscribe between the check and the invocation:

// Bad code, do not use!
if (Foo != null)
{
    // Foo could be null here, if the class is intended
    // to be used from other threads.
    Foo(this, EventArgs.Empty);
}

Now this can be simplified slightly using an extension method:

public static void Raise(this EventHandler handler, object sender, EventArgs args)
{
    if (handler != null)
    {
        handler(sender, args);
    }   
}

Then in each event, you can write a single line:

public void OnFoo()
{
    Foo.Raise(this, EventArgs.Empty);
}

However, this means having a different extension method for each delegate type. It’s not too bad if you’re using EventHandler but it’s still not ideal.

C# 6 to the rescue!

The null-conditional operator (?.) in C# 6 isn’t just for properties. It can also be used for method calls. The compiler does the right thing (evaluating the expression only once) so you can do without the extension method entirely:

public void OnFoo()
{
    Foo?.Invoke(this, EventArgs.Empty);
}

Hooray! This will never throw a NullReferenceException, and doesn’t need any extra utility classes.

Admittedly it might be nicer if you could write Foo?(this, EventArgs.Empty) but that would no longer be a ?. operator, so would complicate the language quite a bit, I suspect. The extra slight cruft of Invoke really doesn’t bother me much.

What is this thing you call thread-safe?

The code we’ve got so far is “thread-safe” in that it doesn’t matter what other threads do – you won’t get a NullReferenceException from the above code. However, if other threads are subscribing to the event or unsubscribing from it, you might not see the most recent changes for the normal reasons of memory models being complicated.

As of C# 4, field-like events are implemented using Interlocked.CompareExchange, so we can just use a corresponding Interlocked.CompareExchange call to make sure we get the most recent value. There’s nothing new about being able to do that, admittedly, but it does mean we can just write:

public void OnFoo()
{
    Interlocked.CompareExchange(ref Foo, null, null)?.Invoke(this, EventArgs.Empty);
}

with no other code, to invoke the absolute latest set of event subscribers, without throwing a NullReferenceException if there are none. Thanks to David Fowler for reminding me about this aspect.

Admittedly the CompareExchange call is ugly. In .NET 4.5 and up, there’s Volatile.Read which may do the trick, but it’s not entirely clear to me (based on the documentation) whether it actually does the right thing. (The summary suggests it’s about preventing the movement of later reads/writes earlier than the given volatile read; we want to prevent earlier writes from being moved later.)

public void OnFoo()
{
    // .NET 4.5+, may or may not be safe...
    Volatile.Read(ref Foo)?.Invoke(this, EventArgs.Empty);
}

… but that makes me nervous in terms of whether I’ve missed something. Expert readers may well be able to advise me on why this is sufficiently foolish that it’s not in the BCL.

An alternative approach

One alternative approach I’ve used in the past is to create a dummy event handler, usually using the one feature that anonymous methods have over lambda expressions – the ability to indicate that you don’t care about the parameters by not even specifying a parameter list:

public event EventHandler Foo = delegate {};

public void OnFoo()
{
    // Foo will never be null
    Volatile.Read(ref Foo).Invoke(this, EventArgs.Empty);   
}

This has all the same memory barrier issues as before, but it does mean you don’t have to worry about the nullity aspect. It looks a little odd and presumably there’s a tiny performance penalty, but it’s a good alternative option to be aware of.

Conclusion

Yup, C# 6 rocks again. Really looking forward to the final release.

C# 7 feature request #1… extension attributes

Last week I learned that using static is going to be the syntax for importing static members (including extension methods) in C# 6. That fulfils a feature request I made in September 2005 (my fourth ever blog post, as it happens). With a feature request turnaround of 10 years, I figure I should put everything I could ever want out there now… (Just kidding really – more seriously, I’m really pleased to see this change in C# 6, relative to both C# 5 and the earlier designs of C# 6.)

Last week at CodeMash, Dustin Campbell demonstrated a Roslyn DiagnosticAnalyzer for ArgumentNullException and friends, and I demonstrated one I’d written for the JetBrains InvokerParameterName attribute. The two diagnostics check largely the same thing: some parameters are “special” in that the argument for them should be a parameter in the current context, and should use the C# 6 nameof operator.

The most significant difference between them (beyond quality – it’s not hard to spot which is written by a C# team member…) is that my diagnostic spots these special parameters because they are decorated with a particular attribute, whereas Dustin’s has the relevant information hard-coded within the diagnostic. Mine can’t spot the ArgumentNullException constructor, and Dustin’s can’t spot the Preconditions.CheckNotNull method in Noda Time. It would be really nice if I could pretend that certain members of existing types had particular attributes applied to them. For example, wouldn’t it be nice if I could write something like:

using JetBrains.Annotations;

namespace System
{
    public extern partial class ArgumentNullException
    {
        public ArgumentNullException([InvokerParameterName] string paramName);
    }
}

The rules would be roughly like this:

  • A type declared with public extern partial modifiers would indicate that attributes should be applied to an existing type.
  • Each member listed would be checked for existence, and any attributes present in the declaration would be noted in the IL for the “extending” assembly.
  • A new call on Type, MethodInfo etc would be created to allow all attributes to be fetched for a member, whether “extended” or not. This would find all attributes contributed by all assemblies loaded in the current AppDomain (so you’d need to make sure that you did something to initialize any assembly containing extension attributes).
  • A similar new call would be available within Roslyn – we’d need to think carefully about which assemblies that examined, of course.

By making this an “opt-in” mechanism from the examiner perspective, it would be safe – anything dealing in security attributes would want to use the existing mechanism, for example.

This wouldn’t just be useful for diagnostic purposes, mind you. There are a number of situations where you might want to augment existing types with more metadata, whether those attributes are your own custom ones or existing ones in the framework. Want to apply an attribute-based serialization framework to a different third-party type? Sure, just use those extensions. Want to give system enums localization-oriented attributes for your own framework? No problem.

I’m not actually expecting this feature to go very far – it’s relatively niche, as well as possibly requiring CLR changes (rather than just framework and language changes). Still, having thought of it, I decided it would be odd to keep it to myself. And hey, it’s a good excuse to create the C# 7 category on my blog…

C# 6 in action

Now that the Visual Studio 2015 Preview is available and the C# 6 feature set is a bit more stable, I figured it was time to start updating the Noda Time 2.0 source code to C# 6. The target framework is still .NET 3.5 (although that might change; I gather very few developers are actually going to be hampered by a change to target 4.0 if that would make things easier) but we can still take advantage of all the goodies C# 6 has in store.

I’ve checked all the changes into a dedicated branch which will only contain changes relevant to C# 6 (although a couple of tiny other changes have snuck in). When I’ve got round to updating my continuous integration server, I’ll merge onto the default branch, but I’m in no rush. (I’ll need to work out what to do about Mono at that point, too – there are various options there.)

In this post, I’ll go through the various C# 6 features, and show how useful (or otherwise) they are in Noda Time.

Read-only automatically implemented properties (“autoprops”)

Finally! I’ve been waiting for these for ages. You can specify a property with just a blank getter, and then assign it a value from either the declaration statement, or within a constructor/static constructor.

So for example, in DateTimeZone, this:

private static readonly DateTimeZone UtcZone = new FixedDateTimeZone(Offset.Zero);
public static DateTimeZone Utc { get { return UtcZone; } }

becomes

public static DateTimeZone Utc { get; } = new FixedDateTimeZone(Offset.Zero);

and

private readonly string id;
public string Id { get { return id; } }
protected DateTimeZone(string id, ...)
{
    this.id = id;
    ...
}

becomes

public string Id { get; }
protected DateTimeZone(string id, ...)
{
    this.Id = id;
    ...
}

As I mentioned before, I’ve been asking for this feature for a very long time – so I’m slightly surprised to find myself not entirely positive about the results. The problem it introduces isn’t really new – it’s just one that I’m not used to, as I haven’t used automatically-implemented properties much in a large code base. The issue is consistency.

With separate fields and properties, if you knew you didn’t need any special behaviour due to the properties when you accessed the value within the same type, you could always use the fields. With automatically-implemented properties, the incidental fact that a field is also exposed as a property changes the code – because now the whole class refers to it as a property instead of as a field.

I’m sure I’ll get used to this – it’s just a matter of time.

Initial values for automatically-implemented properties

The ability to specify an initial value for automatically-implemented properties applies to writable properties as well. I haven’t used that in the main body of Noda Time (almost all Noda Time types are immutable), but here’s an example from PatternTestData, which is used to provide data for text parsing/formatting tests. The code before:

internal CultureInfo Culture { get; set; }

public PatternTestData(...)
{
    ...
    Culture = CultureInfo.InvariantCulture;
}

And after:

internal CultureInfo Culture { get; set; } = CultureInfo.InvariantCulture;

public PatternTestData(...)
{
    ...
}

It’s worth noting that just like with a field initializer, you can’t call instance members within the same class when initializing an automatically-implemented property.
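
A tiny example of what is and isn’t allowed (the class and member names here are made up for illustration):

class InitializerRules
{
    private const int DefaultLimit = 10;

    // Fine: constants and static members can be used in the initializer.
    public int Limit { get; set; } = DefaultLimit;

    // Won't compile: the initializer can't reference an instance member of the
    // same class, just as a field initializer can't.
    // public int Doubled { get; set; } = Limit * 2;
}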

Expression-bodied members

I’d expected to like this feature… but I hadn’t expected to fall in love with it quite as much as I’ve done. It’s really simple to describe – if you’ve got a read-only property, or a method or operator, and the body is just a single return statement (or any other simple statement for a void method), you can use => to express that instead of putting the body in braces. It’s worth noting that this is not a lambda expression, nor is the compiler converting anything to delegates – it’s just a different way of expressing the same thing. Three examples of this from LocalDateTime (one property, one operator, one method – they’re not next to each other in the original source code, but it makes it simpler for this post):

public int Year { get { return date.Year; } }

public static LocalDateTime operator +(LocalDateTime localDateTime, Period period)
{
    return localDateTime.Plus(period);
}

public static LocalDateTime Add(LocalDateTime localDateTime, Period period)
{
    return localDateTime.Plus(period);
}

becomes

public int Year => date.Year;

public static LocalDateTime operator +(LocalDateTime localDateTime, Period period) =>
    localDateTime.Plus(period);

public static LocalDateTime Add(LocalDateTime localDateTime, Period period) =>
    localDateTime.Plus(period);

In my actual code, the operator and method each only take up a single (pretty long) line. For some other methods – particularly ones where the body has a comment – I’ve split it into multiple lines. How you format your code is up to you, of course :)

So what’s the benefit of this? Why do I like it? It makes the code feel more functional. It makes it really clear which methods are just shorthand for some other expression, and which really do involve a series of steps. It’s far too early to claim that this improves the quality of the code or the API, but it definitely feels nice. One interesting data point – using this has removed about half of the return statements across the whole of the NodaTime production assembly. Yup, we’ve got a lot of properties which just delegate to something else – particularly in the core types like LocalDate and LocalTime.

The nameof operator

This was a no-brainer in Noda Time. We have a lot of code like this:

public void Foo([NotNull] string x)
{
    Preconditions.CheckNotNull(x, "x");
    ...
}

This trivially becomes:

public void Foo([NotNull] string x)
{
    Preconditions.CheckNotNull(x, nameof(x));
    ...
}

Checking that every call to Preconditions.CheckNotNull (and CheckArgument etc) uses nameof and that the name is indeed the name of a parameter is going to be one of the first code diagnostics I write in Roslyn, when I finally get round to it. (That will hopefully be very soon – I’m talking about it at CodeMash in a month!)

Dictionary initializers

We’ve been able to use collection initializers with dictionaries since C# 3, using the Add method. C# 6 adds the ability to use the indexer too, which leads to code which looks more like normal dictionary access. As an example, I’ve changed a “field enum value to delegate” dictionary in TzdbStreamData from

private static readonly Dictionary<TzdbStreamFieldId, Action<Builder, TzdbStreamField>> FieldHandlers =
    new Dictionary<TzdbStreamFieldId, Action<Builder, TzdbStreamField>>
{
   { TzdbStreamFieldId.StringPool, (builder, field) => builder.HandleStringPoolField(field) },
   { TzdbStreamFieldId.TimeZone, (builder, field) => builder.HandleZoneField(field) },
   { TzdbStreamFieldId.TzdbIdMap, (builder, field) => builder.HandleTzdbIdMapField(field) },
   ...
}

to:

private static readonly Dictionary<TzdbStreamFieldId, Action<Builder, TzdbStreamField>> FieldHandlers =
    new Dictionary<TzdbStreamFieldId, Action<Builder, TzdbStreamField>>
{
    [TzdbStreamFieldId.StringPool] = (builder, field) => builder.HandleStringPoolField(field),
    [TzdbStreamFieldId.TimeZone] = (builder, field) => builder.HandleZoneField(field),
    [TzdbStreamFieldId.TzdbIdMap] = (builder, field) => builder.HandleTzdbIdMapField(field),
...
}

One downside of this is that the initializer will now not throw an exception if the same key is specified twice. So whereas the bug in this code is obvious immediately:

var dictionary = new Dictionary<string, string>
{
    { "a", "b" },
    { "a", "c" }
};

if you convert it to similar code using the indexer:

var dictionary = new Dictionary<string, string>
{
    ["a"] = "b",
    ["a"] = "c",
};

… you end up with a dictionary containing just a single entry (“a” mapped to “c”), with no exception to alert you to the duplicate key.

To be honest, I’m now pretty used to the syntax which uses Add – so even though there are some other dictionaries initialized with collection initializers in Noda Time, I don’t think I’ll be changing them.

Using static members

For a while I didn’t think I was going to use this much – and then I remembered NodaConstants. The majority of the constants here are things like MillisecondsPerHour, and they’re used a lot in some of the core types like Duration. The ability to add a using directive for a whole type, which imports all the members of that type, allows code like this:

public int Seconds =>
    unchecked((int) ((NanosecondOfDay / NodaConstants.NanosecondsPerSecond) % NodaConstants.SecondsPerMinute));

to become:

using NodaTime.NodaConstants;
...
public int Seconds =>
    unchecked((int) ((NanosecondOfDay / NanosecondsPerSecond) % SecondsPerMinute));

(Note that the plain using directive shown above is what the VS2015 preview accepts; the final C# 6 syntax will be using static NodaTime.NodaConstants;.) Expect to see this used a lot in trigonometry code, making all those calls to Math.Cos, Math.Sin etc a lot more readable.

Another benefit of this syntax is to allow extension methods to be imported just from an individual type instead of from a whole namespace. In Noda Time 2.0, I’m introducing a NodaTime.Extensions namespace with extensions to various BCL types (to allow more fluent conversions such as DateTimeOffset.ToOffsetDateTime()) – I suspect that not everyone will want all of these extensions to be available all the time, so the ability to import selectively is very welcome.

String interpolation

String interpolation always uses the system default culture (at least in the preview), and we don’t use that culture much in Noda Time – so string interpolation isn’t terribly useful for us. There are a few cases where it’s handy, though.

For example, consider this code:

throw new KeyNotFoundException(
    string.Format("No calendar system for ID {0} exists", id));

With C# 6 in the VS2015 preview, this has become

throw new KeyNotFoundException("No calendar system for ID \{id} exists");

Note that the syntax of this feature is not finalized yet – I expect to have to change this for the final release to:

throw new KeyNotFoundException($"No calendar system for ID {id} exists");

It’s always worth considering places where a feature could be used, but probably shouldn’t. ZoneInterval is one such place. Its ToString() method looks like this:

public override string ToString() =>
    string.Format("{0}: [{1}, {2}) {3} ({4})",
        Name,
        HasStart ? Start.ToString() : "StartOfTime",
        HasEnd ? End.ToString() : "EndOfTime",
        WallOffset,
        Savings);

I tried using string interpolation here, but it ended up being pretty horrible:

  • String literals within string literals look very odd
  • An interpolated string literal has to be on a single line, which ended up being very long
  • The fact that two of the arguments use the conditional operator makes them harder to read as part of interpolation

Basically, I can see string interpolation being great for “simple” interpolation with no significant logic, but less helpful for code like this.

Null propagation

Amazingly, I’ve only found a single place to use null propagation in Noda Time so far. As a lot of the types are value types, we don’t do a lot of null checking – and when we do, it’s typically to throw an exception if the value is null. However, this is the one place I’ve found so far, in BclDateTimeZone.ForSystemDefault. The original code is:

if (currentSystemDefault == null || currentSystemDefault.OriginalZone != local)

With null propagation we can handle “we don’t have a cached system default” and “the cached system default is for the wrong time zone” with a single expression:

if (currentSystemDefault?.OriginalZone != local)

(Note that local will never be null, or this transformation wouldn’t be valid.)

There may well be a few other places this could be used, and I’m certain it’ll be really useful in a lot of other code – it’s just that the Noda Time codebase doesn’t contain much of the kind of code this feature is targeted at.

Conclusion

What a lot of features! C# 6 is definitely a “lots of little features” release rather than the “single big feature” releases we’ve seen with C# 4 (dynamic) and C# 5 (async). Even C# 3 had a lot of little features which all served the common purpose of LINQ. If you had to put a single theme around C# 6, it would probably be making existing code more concise – it’s the sort of feature set that really lends itself to this “refactor the whole codebase to tidy it up” approach.

I’ve been very pleasantly surprised by how much I like expression-bodied members, and read-only automatically implemented properties are a win too (even though I need to get used to it a bit). Other features such as static imports are definitely welcome to remove some of the drudgery of constants and provide finer-grained extension method discovery.

Altogether, I’m really pleased with how C# 6 has come together – I’m looking forward to merging the C# 6 branch into the main Noda Time code base as soon as I’ve got my continuous integration server ready for it…

When is an identifier not an identifier? (Attack of the Mongolian Vowel Separator)

Here are a few things you may not be aware of:

  • C# identifiers can include Unicode escape sequences (\u1234 etc)
  • C# identifiers can include Unicode characters in the category “Other, formatting” (Cf) but these are ignored when comparing identifiers for equality
  • The Mongolian Vowel Separator (U+180E) has oscillated between the Cf and Zs categories a couple of times
  • .NET has its own copy of Unicode categories, separate from whatever Win32 might provide
  • Roslyn (built in .NET) uses the Unicode categories, whereas csc.exe (the “old” native C# compiler) uses either the Win32 categories or a built-in copy
  • Neither the .NET table nor the Win32 table necessarily reflects exactly what any one version of the Unicode standard says
  • Compilers can have bugs in them

Put them together, and chaos ensues!

How this all started – blame Vladimir

I started looking at this based on a discussion in our ECMA technical group meeting last week, when we were considering the normative references – and in particular, which version of Unicode we were going to target. Currently the ECMA 4th edition spec targets Unicode 4.0 and the Microsoft C# 5 specification targets Unicode 3.0. It’s not clear to me whether any compilers actually take note of this, and moving forward we’d like both the ECMA and Microsoft standards to not specify a particular version of Unicode, effectively encouraging compiler authors to use the most recent one available to them. Despite the wrinkles listed below, I think that makes the most sense for real world uses – it’s crazy to require compilers to ship with their own private copies of Unicode, effectively.

When discussing this, Vladimir Reshetnikov mentioned the Mongolian Vowel Separator (U+180E) which has had an interesting life. It was introduced in Unicode 3.0.0, when it was in the Cf category (“other, formatting”). Then in Unicode 4.0.0 it was moved into the Zs (separator, space) category. In Unicode 6.3.0 it was then moved back to the Cf category.

Of course, my natural inclination was to try to abuse this. My initial aim was to come up with code which behaved differently depending on which version of Unicode the compiler was using. It turned out to be a little more complicated than that, but we’ll assume a hypothetical compiler first, with no bugs, but which obeys whichever version of the Unicode standard we want it to. (Arguably that’s already a bug given the requirements of the current C# specs, but we’ll set that aside.)

Hypothetical example 1: valid or invalid

For simplicity, let’s start with some source code which is all in ASCII:

using System;

class MvsTest
{
    static void Main()
    {
        string stringx = "a";
        string\u180ex = "b";
        Console.WriteLine(stringx);
    }
}

If the compiler is using Unicode 6.3 or higher (or a version earlier than 4.0) then U+180E is deemed to be in the Cf category, and is therefore valid within an identifier. In that case, it’s fine for it to be escaped as per the code above. At that point, the identifier in the second line of the method is deemed to be “the same as” stringx, so the output is b.

What about a compiler using a version of Unicode between 4.0 and 6.2 (inclusive) though? At that point, U+180E is deemed to be in the Zs category, which makes it a whitespace character. Zs characters are allowed as whitespace within C# programs – but not within identifiers. Once it’s not a valid identifier – and because this isn’t within a character/string literal – it’s invalid to use the Unicode escape sequence, so the code doesn’t compile.

Hypothetical example 2: valid two different ways

We can write the same code without using an escape sequence, however. If you create a regular ASCII file like this:

using System;

class MvsTest
{
    static void Main()
    {
        string stringx = "a";
        stringAAAx = "b";
        Console.WriteLine(stringx);
    }
}

then open it up in a hex editor and replace the AAA with bytes E1 A0 8E, then you’ve got a file containing the UTF-8 representation of U+180E at the same location as we had the Unicode escape sequence in the first version.

So, a compiler which compiled the first version would still compile this version (assuming you could tell it that the source code was UTF-8), and the results would be the same – it would print b as the second statement of the method would be a simple assignment to the existing variable.

However, a compiler which treats U+180E as whitespace and would therefore treat the first program as an error would accept this program – and treat that second statement as a declaration of a second local variable (x) and assign it an initial value. You might get a warning about the variable being unused, but it’s valid C# and the output is a.

Reality: the Microsoft compilers

Whenever we talk about the Microsoft C# compiler these days, we need to distinguish between the “native” compiler (csc) and Roslyn (rcsc, although typically I just call it Roslyn).

As it’s written in native code, csc uses whatever Windows supplies for its Unicode character tables – or it embeds it directly in the executable, potentially. (I’ve been scouring MSDN to find a Win32 native function to tell me the Unicode category of a specific code point, and failed so far. It would have been useful…)

Compare that with Roslyn, which is written in C# and (as far as I’m aware) uses char.GetUnicodeCategory – which in turn uses the Unicode tables built into mscorlib.

My experiments suggest that whatever the native compiler uses to get the Unicode category has treated U+180E as a formatting character forever. At least, I’ve tried to find old machines (including VM images) which haven’t had Windows update applied since September 2013 (which is when Unicode 6.3 was published) and they all compile the first program listed above. I’m beginning to suspect that csc might actually have a copy of Unicode 3.0 built into it; it certainly treats U+180E as a formatting character, but doesn’t like either U+0600 or U+00AD within identifiers. (U+0600 wasn’t introduced until Unicode 4.0, but has always been a formatting character; U+00AD was a “dash punctuation” character in Unicode 3.0, and became a formatting character in 4.0.)

The table built into mscorlib has definitely changed over time, however. If you run a simple program such as this:

using System;

class Test
{
    static void Main()
    {
        Console.WriteLine(Environment.Version);
        Console.WriteLine(char.GetUnicodeCategory('\u180e'));
    }
}

then running under CLRv2, the result is “SpaceSeparator” whereas running on CLRv4 (at least on a recently-updated system), the result is “Format”.

Of course, Roslyn won’t run under old CLRs, but we have hope by way of csharppad.com – which runs Roslyn in an environment (of uncertain origin – Mono? I’m unsure) which prints “SpaceSeparator” for the above. Sure enough the first program fails to compile – but it’s harder to check the second program, as csharppad.com doesn’t allow you to upload source code, and copy/paste produces some odd results.

Reality: mcs (Mono C# compiler)

Mono’s compiler uses the BCL GetUnicodeCategory code too, which should make it significantly simpler to experiment – but unfortunately, the Mono parser has (at least) two bugs in it:

  • It will allow any Unicode escape sequence in an identifier, whether it’s an escape sequence for a valid identifier part or not. For example, string\u0020x = "" is valid under the Mono compiler. Filed as bug 24968. Source.
  • It doesn’t allow formatting characters within identifiers – it includes characters in classes Mn, Mc, Nd and Pc, but not Cf. Filed as bug 24969. Source.

For this reason, the first program always compiles and prints “b” whereas the second program always fails to compile, regardless of whether U+180E is treated as being in Zs or Cf.

What version is this, anyway?

Next, let’s think about the Unicode data itself. It’s not at all clear which version any particular BCL implementation is actually using. Consider this little program:

using System;

class Test
{
    static void Main()
    {
        Console.WriteLine(char.GetUnicodeCategory('\u00ad'));
        Console.WriteLine(char.GetUnicodeCategory('\u0600'));
        Console.WriteLine(char.GetUnicodeCategory('\u180e'));
    }
}

On my computer, under CLR v4 this prints “DashPunctuation, Format, Format”, and under both Mono (3.3.0) and CLR v2 it prints “DashPunctuation, Format, SpaceSeparator”.

That’s very odd. It doesn’t correspond with any version of the Unicode standard, as far as I can tell:

  • U+00AD was a Po (other, punctuation) character in Unicode 1.x, then Pd (dash, punctuation) in 2.x and 3.x, and from 4.0 onwards has been Cf.
  • U+0600 was only introduced in Unicode 4.0, and has always been Cf.
  • U+180E was introduced as Cf in 3.0, then changed to Zs in 4.0, then back to Cf in 6.3.

So there is no version where the first line agrees with either the second line or the third line. I’m basically a bit baffled by this.

What about nameof and CallerMemberName?

The names of identifiers aren’t only used for comparisons – they’re available as strings without any reflection being involved at all. From C# 5, we’ve had the CallerMemberName attribute, allowing things like:

public static void X\u0600y()
{
    ShowCaller();
}

public static void ShowCaller([CallerMemberName] string caller = null)
{
    Console.WriteLine("Called by {0}", caller);
}

And in C# 6, we can write:

string x\u0600y = "";
Console.WriteLine("nameof = {0}", nameof(x\u0600y));

What should those print? They do just print “Xy” and “xy” as the names (respectively), as if the compiler has simply thrown away the formatting character entirely. But what should they print? Bear in mind that in the second case, we could easily have used nameof(xy) and that would still have compared equal to the declared identifier.

We can’t even say “What’s the name of the member being declared?” because you can overload with “different but equal” identifiers:

public static void Xy() {}
public static void X\u0600y() {}
public static void X\u070fy() {}
...
Console.WriteLine(nameof(X\u200by));

What should that print out? I’m sure you’ll be relieved to hear that the C# team has a plan in place – but fundamentally this is one of these “no obvious right answer” scenarios. It gets even weirder when you bring the CLI specification into the mix. Section I.8.5.1 of ECMA-335 6th edition has:

Assemblies shall follow Annex 7 of Technical Report 15 of the Unicode Standard 3.0 governing the set of characters permitted to start and be included in identifiers, available online at http://www.unicode.org/unicode/reports/tr15/tr15-18.html. Identifiers shall be in the canonical format defined by Unicode Normalization Form C. For CLS purposes, two identifiers are the same if their lowercase mappings (as specified by the Unicode locale-insensitive, one-to-one lowercase mappings) are the same. That is, for two identifiers to be considered different under the CLS they shall differ in more than simply their case. However, in order to override an inherited definition the CLI requires the precise encoding of the original declaration be used.

I would love to explore the impact of this by adding a Cf character into IL, but unfortunately I haven’t worked out a way of affecting the encoding of ilasm, in order to persuade it that my hacked up IL is what I want it to be.

Conclusion

As noted before, text is hard.

It turns out that even when restricting oneself to identifiers, text is hard. Who would’ve thought?
