When is a constant not a constant? When it’s a decimal…

A comment on a Stack Overflow post recently got me delving into constants a bit more thoroughly than I have done before.

Const fields

I’ve been aware for a while that although you can specify decimal field as a const in C#, it’s not really const as far as the CLR is concerned. Let’s consider this class to start with:

class Test
{
    const int ConstInt32 = 5;
    const decimal ConstDecimal = 5;
}

Firstly, csc gives us a warning about ConstDecimal but not about ConstInt32:

Test.cs(4,19): warning CS0414: The field ‘Test.ConstDecimal’ is assigned but its value is never used

The Roslyn compiler (rcsc) doesn’t warn about either of the fields. This is just a curiosity, really – but it already shows that there’s some difference in how they’re compiled. When we delve into the IL, the difference is much more pronounced:

.class private auto ansi beforefieldinit Test
       extends [mscorlib]System.Object
{
  .field private static literal int32 ConstInt32 = int32(0x00000005)
  .field private static initonly valuetype [mscorlib]System.Decimal ConstDecimal
  .custom instance void [mscorlib]DecimalConstantAttribute::.ctor
      (uint8, uint8, uint32, uint32, uint32) = 
      ( 01 00 00 00 00 00 00 00 00 00 00 00 05 00 00 00 00 00 ) 

  // Skip the parameterless constructor

  .method private hidebysig specialname rtspecialname static 
          void  .cctor() cil managed
  {
    // Code size       12 (0xc)
    .maxstack  8
    IL_0000:  ldc.i4.5
    IL_0001:  newobj     instance void [mscorlib]System.Decimal::.ctor(int32)
    IL_0006:  stsfld     valuetype [mscorlib]System.Decimal Test::ConstDecimal
    IL_000b:  ret
  } // end of method Test::.cctor

} // end of class Test

First things to note:

ConstInt32 has the literal constraint in IL. From ECMA 335, I.8.6.1.2:

The literal constraint promises that the value of the location is actually a fixed value
of a built-in type. The value is specified as part of the constraint. Compilers are
required to replace all references to the location with its value, and the VES therefore
need not allocate space for the location. This constraint, while logically applicable to
any location, shall only be placed on static fields of compound types. Fields that are
so marked are not permitted to be referenced from CIL (they shall be in-lined to their
constant value at compile time), but are available using reflection and tools that
directly deal with the metadata.

Whereas ConstDecimal only has the initonly constraint. Again, from ECMA 335:

The init-only constraint promises (hence, requires) that once the location has been
initialized, its contents never change. Namely, the contents are initialized before any
access, and after initialization, no value can be stored in the location. The contents are
always identical to the initialized value (see §I.8.2.3). This constraint, while logically
applicable to any location, shall only be placed on fields (static or instance) of
compound types.

(Here “compound type” just means “not an array type” – although in quite a confusing manner. I would ignore it if I were you.)

Next, note that there’s an attribute (System.Runtime.CompilerServices.DecimalConstantAttribute – I’ve taken the namespace off in the listing for the sake of readability) applied to the field, which tells anything consuming the assembly that it should be a constant, and what its value is. Indeed, if you’re very careful, you can create your own “constants” like this:

[DecimalConstant((byte)0, (byte)0, (uint)0, (uint)0, (uint) 5)]
public static readonly decimal ConstDecimal;

That field declaration will be treated as a constant in other assemblies – but not within the same assembly. So printing ConstDecimal within the same assembly will result in 0 (unless you change it to another value in the static initializer) whereas printing Test.ConstDecimal in a different assembly will result in 5. (It won’t even touch the field at execution time.) I’m sure I can work out some nasty ways of abusing that, if I try hard enough.
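
To see that in action, here’s about the smallest demonstration I can think of (Test being the class declaring the field above):

// Compiled in a different assembly, this prints 5: the compiler spots the
// DecimalConstantAttribute and bakes the value into the call site, never
// reading the field. Compiled in the declaring assembly, it prints 0 - the
// field's actual value at execution time.
Console.WriteLine(Test.ConstDecimal);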

Note that the casts to uint are important – if you accidentally call the attribute constructor with a (byte, byte, int, int, int), the compiler doesn’t recognize it. (Interestingly, the latter constructor was only introduced in .NET 2.0. I’ve no idea why.)

Amusingly, you can combine the two:

[DecimalConstant((byte)0, (byte)0, (uint)0, (uint)0, (uint) 5)]
public const decimal ConstDecimal = 10;

In this case, the IL contains both DecimalConstant attributes, despite the fact that it’s only legal to apply one. (AllowMultiple is false on its AttributeUsage.) The compiler appears to pick the one derived from the constant value rather than the manually applied attribute, which is slightly disappointing, but of no real importance.

Optional parameters

In the case of const fields, we’ve really only cared about what the compiler does. Let’s try something where both the compiler and the framework can get involved: optional parameters.

Again, let’s write a little class to demonstrate how the default values of optional parameters are encoded in IL:

public class Test
{
    public void PrintInt32(int x = 10)
    {
        Console.WriteLine(x);
    }

    public void PrintDecimal(decimal d = 10m)
    {
        Console.WriteLine(d);
    }
}

The important bits of the generated IL are:

.method public hidebysig instance void PrintInt32([opt] int32 x) cil managed
{
    .param [1] = int32(0x0000000A)
    ...
} // end of method Test::PrintInt32

.method public hidebysig instance void PrintDecimal(
    [opt] valuetype [mscorlib]System.Decimal d) cil managed
{
    .param [1]
    .custom instance void [mscorlib] DecimalConstantAttribute::.ctor
    (uint8, uint8, uint32, uint32, uint32) =
    ( 01 00 00 00 00 00 00 00 00 00 00 00 0A 00 00 00 00 00 ) 
    ...
} // end of method Test::PrintDecimal

Again, we have a DecimalConstantAttribute to describe the default value for the decimal parameter, whereas the default value of the int parameter is encoded directly in the IL. If you call the method but don’t specify an argument, the compiler notes the DecimalConstantAttribute applied to the method parameter, and constructs the value in the calling code. That’s not the only way of observing the default value, however – you can do it with reflection, too:

public static void Main()
{
    var method = typeof(Test).GetMethod("PrintDecimal");
    var parameter = method.GetParameters()[0];
    Console.WriteLine("Default value: {0}", parameter.DefaultValue);
}

As you’d expect, that prints a default value of 10. There’s no direct equivalent of DefaultValue for FieldInfo – there’s GetRawConstantValue(), but that only works for constants that the CLR is directly aware of – it fails for a field like const decimal Foo = 10m, throwing an InvalidOperationException. I’ll talk more about CLR constants later.
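
Here’s a quick sketch of that failure, using the Test class from the start of the post:

using System;
using System.Reflection;
...
var flags = BindingFlags.NonPublic | BindingFlags.Static;

var intField = typeof(Test).GetField("ConstInt32", flags);
// Prints 5 - this really is a CLR constant.
Console.WriteLine(intField.GetRawConstantValue());

var decimalField = typeof(Test).GetField("ConstDecimal", flags);
// Throws InvalidOperationException - the field is initonly rather than
// literal, so the CLR has no raw constant value to hand back.
Console.WriteLine(decimalField.GetRawConstantValue());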

Now let’s try something a bit more tricksy though… C# doesn’t support DateTime literals, unlike VB – but there’s a DateTimeConstantAttribute – what happens if we try to apply that ourselves? Let’s see…

public void PrintDateTime
    ([Optional, DateTimeConstant(635443315962469079L)] DateTime date)
{
    Console.WriteLine(date);
}

So if we call PrintDateTime(), what does that print? Well (leaving aside the formatting – the examples below use the UK default formatting):

  • With csc.exe (the “old” C# compiler), with the call in the same assembly as the method declaration, it prints 01/01/0001 00:00:00
  • With csc.exe, with the call in a different assembly to the method declaration, it prints 22/08/2014 19:13:16
  • With rcsc.exe (Roslyn), it prints 22/08/2014 19:13:16 regardless of where it’s called from
  • If you call it dynamically (dynamic d = new Test(); d.PrintDateTime();) it prints 22/08/2014 19:13:16 regardless of the compiler – at least with the version of .NET I’m using. It may well vary by version.

In every case, printing out the ParameterInfo.DefaultValue gives the right answer: the framework doesn’t care whether or not the compiler understands the attribute.

In case you’re wondering why I didn’t mention this possibility for constant fields – I tried it, and it didn’t work, even in Roslyn. For some reason optional parameters are treated as more “special” than constant fields.

Having got this far, why stop with DateTime? The DateTimeConstantAttribute class derives from CustomConstantAttribute (whereas DecimalConstantAttribute just derives from Attribute). So can I introduce my own constant attributes? Noda Time seems to be an obvious candidate for this – let’s try for a LocalDateConstantAttribute:

[AttributeUsage(AttributeTargets.Parameter | AttributeTargets.Field)]
public class LocalDateConstantAttribute : CustomConstantAttribute
{
    private readonly LocalDate value;

    public LocalDateConstantAttribute(int year, int month, int day)
    {
        value = new LocalDate(year, month, day);
    }

    public override object Value { get { return value; } }
}
...
public void PrintLocalDate(
    [Optional, LocalDateConstant(2014, 8, 22)] LocalDate date)
{
    Console.WriteLine(date);
}

How does this fare? Not so well, unfortunately:
  • With a normal method call (regardless of assembly), it prints 01 January 1970 (the default value for a LocalDate in Noda Time 1.3)
  • With a dynamic method call, it prints 01 January 1970
  • With reflection (i.e. using ParameterInfo.DefaultValue), the framework does construct the appropriate LocalDate, which seems logical, as it’s presumably just using the Value property of the attribute

So, there’s still work to be done there. I think it would be feasible to do this in a general way, if it’s acceptable for an exception to be thrown if the Value property returns an incompatible type of object to the parameter type. The great thing is that Roslyn is open source, so I should be able to spike this myself, right? How hard can it be? (Cue howls of derisive laughter from the Roslyn team, who will know much better than I how hard it’s likely to really be.)

CLR constants and attribute arguments

So with constant fields, it was all down to the compiler, really. For optional parameters, it’s mostly still down to the compiler, but with framework support for reflection. What about attribute arguments? They’re the other notable aspect of C# which requires compile-time constants. For example, this is fine:

[Description("This is a constant")]

But this is not:

[Description("Initialized at " + DateTime.Now)]

Intriguingly, this is fine too:

[ContractClass(typeof(Foo))]

… despite the fact that

const Type ConstType = typeof(Foo);

isn’t valid. The only constant expression which is valid for type Type is null. So in section 17.2 of the C# 5 specification, Type is explicitly called out:

An expression E is an attribute-argument-expression if all of the following statements are true:

  • The type of E is an attribute parameter type (§17.1.3).
  • At compile-time, the value of E can be resolved to one of the following:
    • A constant value.
    • A System.Type object.
    • A one-dimensional array of attribute-argument-expressions.

(Interestingly, there’s no indication that I can see that the value of E has to be obtained via typeof, in the spec – clearly [ContractClass(Type.GetType("TodayIs" + DateTime.Today.Day))] should be invalid, but I can’t currently see which bit of the spec that violates. Something to be tightened up, potentially.)

And the “attribute parameter type” part – section 17.1.3 – looks like this:

The types of positional and named parameters for an attribute class are limited to the attribute parameter types, which are:

  • One of the following types: bool, byte, char, double, float, int, long, sbyte, short, string, uint, ulong, ushort.
  • The type object.
  • The type System.Type.
  • An enum type, provided it has public accessibility and the types in which it is nested (if any) also have public accessibility (§17.2).
  • Single-dimensional arrays of the above types.

Oops – no decimal. Note that it’s the type of E that has to be one of those types, as well as the parameter type… so it doesn’t help to have a parameter of type object and then try to pass a constant decimal value as the argument.
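
Here’s a little demonstration of that – DemoAttribute is entirely made up for the occasion:

using System;

class DemoAttribute : Attribute
{
    public DemoAttribute(object value) {}
}

[Demo(5)]        // Fine: int is an attribute parameter type, boxed to object
// [Demo(5.0m)]  // Compile-time error: decimal isn't an attribute parameter
                 // type, even though the parameter itself is object
class Demonstration {}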

Basically, attribute arguments are sufficiently low-level that the CLR itself wants to know about them – and while it has a deep knowledge of Type, string and the primitive value types, it doesn’t have much knowledge about decimal (or at least, it isn’t required to). Attribute arguments can’t use funky “custom constant” attributes to specify values like decimal or DateTime – you really are limited to the types that the CLR knows about. In a future version it’s not inconceivable that this could be broadened, but at the moment it’s pretty strict.

Conclusion

So, it turns out the idea of a constant isn’t terribly constant in itself. We have:

  • Constant fields, which are primarily a compile-time concern, and therefore language-specific.
  • Optional parameter default values, which feel like they ought to be just like constant fields (in that a value specified in one place is substituted in another) but apparently have a bit more support in the C# compiler… and more reflection support too.
  • Attribute arguments, which are the strictest form of constant I’ve found so far, in that they have to correspond to a small set of CLR “special” types.

I didn’t even talk about constant expressions (section 7.19 of the C# 5 spec) much in this post – maybe I’ll delve into those in more detail in another post.

Unlike my normal day-dreaming about changing the compiler, I think I really might have a crack at Roslyn for supporting arbitrary optional parameters – it feels like it could potentially be genuinely useful (which is also unlike most of my idle speculation).

So next time you ask yourself whether something is a constant, you should start off by asking yourself what you mean by “constant” in the first place.

The BobbyTables culture

I started writing a post like this a long time ago, but somehow never finished it.

Countless posts on Stack Overflow are vulnerable to SQL injection attacks. Along with several other users, I always raise this when it shows up – this is something that really just shouldn’t happen these days. It’s a well-understood issue, and parameterized SQL is a great solution in almost all cases. (No, it doesn’t work if you want to specify a column or table name dynamically. Yes, whitelisting is the solution there.)

The response usually falls into one of three camps:

  • Ah – I didn’t know about that. Great, I’ll fix it now. Thanks!
  • This is just a prototype. I’ll fix it for the real thing. (Ha! Like that ever happens.)
  • Well yes, in theory – but I’m just using numbers. That’s not a problem, is it?

Now personally I feel that you should just get into the habit of using parameterized queries all the time, even when you could get away without it. This post is a somewhat tongue-in-cheek counterargument to the last of these responses. If you haven’t seen Bobby Tables, you really should. It’s the best 10-second explanation of SQL injection that I’ve ever seen, and I almost always drop a link to it when I’m adding a comment on a vulnerable query on Stack Overflow.

So in honour of Bobby, here’s a little program. See if you can predict the output.

using System;
using System.Globalization;
using System.Threading;

class Test
{
    static void Main()
    {
        string sql = "SELECT * FROM Foo WHERE BarDate > '" + DateTime.Today + "'";
        // Imagine you're executing the query here...
        Console.WriteLine(sql);

        int bar = -10;
        sql = "SELECT * FROM Foo WHERE BarValue = " + bar;
        // Imagine you're executing the query here...
        Console.WriteLine(sql);
    }

    // Some other code here...
}

Does that look okay? Not great, admittedly – but not too bad, right? Well, the output of the program is:

SELECT * FROM Foo WHERE BarDate > '2014-08-08' OR ' '=' '
SELECT * FROM Foo WHERE BarValue = 1 OR 1=1 OR 1=10

Yikes! Our queries aren’t filtering out anything!

Of course, the black magic is in the “Some other code here” part:

static Test()
{
    InstallBobbyTablesCulture();
}

static void InstallBobbyTablesCulture()
{
    CultureInfo bobby = (CultureInfo) CultureInfo.InvariantCulture.Clone();
    bobby.DateTimeFormat.ShortDatePattern = @"yyyy-MM-dd'' OR ' '=''";
    bobby.DateTimeFormat.LongTimePattern = "";
    bobby.NumberFormat.NegativeSign = "1 OR 1=1 OR 1=";
    Thread.CurrentThread.CurrentCulture = bobby;
}

Neither numbers (well, negative numbers in this case) nor dates are safe. And of course if your database permissions aren’t set correctly, the queries could do a lot more than just remove any filtering. For extra fun, you can subvert some custom format strings – by changing the DateSeparator property, for example.
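
Here’s a sketch of that – the injected separator is my own invention, but the mechanism is the same as before. The trick is that in a custom format string, "/" doesn’t mean a literal slash: it means “the current culture’s date separator”.

CultureInfo evil = (CultureInfo) CultureInfo.InvariantCulture.Clone();
evil.DateTimeFormat.DateSeparator = "' OR 1=1 --";
Thread.CurrentThread.CurrentCulture = evil;

string sql = "SELECT * FROM Foo WHERE BarDate > '"
    + new DateTime(2014, 8, 22).ToString("dd/MM/yyyy") + "'";
// Prints: SELECT * FROM Foo WHERE BarDate > '22' OR 1=1 --08' OR 1=1 --2014'
Console.WriteLine(sql);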

Even in sensible cultures, if the database expects you to use . for the decimal separator and you’re in a European culture that uses , instead, do you know how your database will behave? If you sanitize your input based on the numeric value, but then that isn’t the value that the database sees due to a string conversion, how comfortable are you that your application is still safe? It may not allow direct damage, but it could potentially reveal more data than you originally expected – which is definitely a form of vulnerability.

Now the chances of me getting onto your system and installing the Bobby Tables culture – let alone making it the system default – are pretty slim, and if that happens you’ve probably got bigger problems anyway… but it’s the principle of the thing. You don’t care about a text representation of your values: you just want to get them to the database intact.
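
For completeness, here’s roughly what the parameterized version of the earlier queries might look like – a sketch assuming SQL Server, with an open SqlConnection called connection:

using System.Data;
using System.Data.SqlClient;
...
string sql = "SELECT * FROM Foo WHERE BarDate > @cutoff AND BarValue = @bar";
using (var command = new SqlCommand(sql, connection))
{
    // The values travel to the database as values, not as culture-sensitive text.
    command.Parameters.Add("@cutoff", SqlDbType.Date).Value = DateTime.Today;
    command.Parameters.Add("@bar", SqlDbType.Int).Value = -10;
    using (var reader = command.ExecuteReader())
    {
        // Use the results...
    }
}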

Parameterized SQL: just say yes.

Object pooling and thread safety

A few days ago, I was watching Dustin Campbell’s excellent talk from TechEd 2013, “Essential truths everyone should know about performance in a large managed code base”. I highly recommend it, and it made me think a bit more about Noda Time’s performance. (It doesn’t take much to make me think about Noda Time’s performance, admittedly.)

Base situation: no pooling

Most of our formatting goes through something called SteppedPattern – that’s basically one multicast delegate (with lots of actions) for formatting, and a list of delegates for parsing. The Format method was really simple before this week:

public string Format(TResult value)
{
    StringBuilder builder = new StringBuilder();
    // This will call all the actions in the multicast delegate.
    formatActions(value, builder);
    return builder.ToString();
}

Very simple – but it allocates a new StringBuilder on each call, which is somewhat unnecessary. Allocation is cheap, but not free – and string formatting is something which could reasonably happen an awful lot.

In the talk, Dustin showed an example where introducing a single object pool – also for StringBuilder – had made a marked difference. Unfortunately, in my case it’s not as simple as we might like. Or at least, I’m not confident that it is. Let’s look at the options I’ve considered so far – in no particular order.

Option 1: Lock-free using Interlocked

After construction, a formatter can be used on multiple threads, so some care is needed to make sure we don’t end up using one StringBuilder from multiple threads at a time. Interlocked.CompareExchange makes this reasonably straightforward:

private StringBuilder cachedBuilder = new StringBuilder();
...
public string Format(TResult value)
{
    // Try to borrow the cached builder, in a thread-safe way.
    // (If another thread is using the builder, this will return null.)
    StringBuilder builder =
        Interlocked.CompareExchange(ref cachedBuilder, null,
                                    cachedBuilder);
    if (builder == null)
    {
        builder = new StringBuilder();
    }
    else
    {
        builder.Length = 0;
    }
    // This will call all the actions in the multicast delegate.
    formatActions(value, builder);
    string ret = builder.ToString();
    // Whether we managed to borrow the cached builder or not,
    // we can replace it with the one we've used, now that we
    // know what to return.
    Volatile.Write(ref cachedBuilder, builder);
    return ret;
}

(See comments for discussion on the Volatile.Write call, which was previously just an assignment statement.)

I’m pretty confident that this does achieve the goal of only using the builder in one thread at a time. If an exception is thrown while formatting, that’s fine – we don’t end up putting anything back in the cache, and the next caller will allocate a new one.

I don’t believe we need to use CompareExchange a second time when writing to cachedBuilder – we’re certainly fine if it takes a while for that write to be seen by other threads, as the only downside would be not using the cached builder when we might want to.

But is that enough to make this code thread-safe? Is it possible that a write from one thread would only be seen in another thread after the second thread has borrowed the builder? Is it possible that the write to cachedBuilder actually occurs before the call to builder.ToString() completes, allowing another thread to start messing with the builder before we’ve got our result?

My guess is that it’s actually okay – but lock-free code fundamentally scares me, and I’m very cautious about writing it. (Elsewhere in Noda Time we do have some lock-free mutable data – but usually in the form of array-based caches where the element type is guaranteed to have atomic writes, and is immutable.)

So while this is “clever” and may be fine, it makes me nervous, which is not a good place to be.

Option 2: Locking

I’m much more confident about using threads when locking is involved. I find it easier to reason about what’s allowed and what’s not.

private readonly StringBuilder pooledStringBuilder = new StringBuilder();
private bool pooledBuilderInUse = false;
...

public string Format(TResult value)
{
    StringBuilder builder = null;
    lock (pooledStringBuilder)
    {
        if (!pooledBuilderInUse)
        {
            pooledBuilderInUse = true;
            builder = pooledStringBuilder;
            builder.Length = 0;
        }
    }
    try
    {
        builder = builder ?? new StringBuilder();
        // This will call all the actions in the multicast delegate.
        formatActions(value, builder);
        return builder.ToString();
    }
    finally
    {
        if (builder == pooledStringBuilder)
        {
            lock (pooledStringBuilder)
            {
                pooledBuilderInUse = false;
            }
        }
    }
}

Unlike the previous solution, this time we always know what the builder is (and never change it) – that gives us something to lock on. What we change within the lock is whether or not the builder is in use. This should be safe across multiple threads, but it’s ever so verbose – and in my benchmarks, the results weren’t terribly clear.

Option 3: ThreadStatic

The obvious alternative to sharing one builder between multiple threads is to have a single builder per thread. There are two approaches to this in .NET: [ThreadStatic] and ThreadLocal.

ThreadStatic is the older mechanism, and is simply a matter of decorating a static field with the [ThreadStatic] attribute. Each thread has a separate variable, as if by magic. So in principle, the implementation just looks like this:

[ThreadStatic]
private static StringBuilder cachedBuilder;

public string Format(TResult value)
{
    StringBuilder builder = cachedBuilder;
    if (cachedBuilder == null)
    {
        builder = new StringBuilder();
        cachedBuilder = builder;
    }
    else
    {
        builder.Length = 0;
    }
    // This will call all the actions in the multicast delegate.
    formatActions(value, builder);
    return builder.ToString();
}

Unfortunately we still need to check for nullity on every call, as any initialization expression for the variable will only be executed by the first thread which uses the class. Still, the code is reasonably simple.

There’s a big “gotcha” with this though: our code is now not re-entrant. If one SteppedPattern has a format action which calls format on another SteppedPattern, we’ll end up trying to use the same builder for both operations, resetting the builder in the middle.

Fortunately, SteppedPattern already has a method (FormatPartial) designed for calling one formatter within another. In the very few places where this is required, I’m confident that it’s being called correctly – but I’m not keen on making my code fragile like this. It’s fairly safe from third party intervention – there are very few places (if any) where formatting code calls out to any other code which could include a format call. Just writing this paragraph, I’ve come up with a couple of worrying parts (fetching values from a CultureInfo, and fetching the ID of a time zone) but I think they’re safe… the question is whether I’ve missed something similar in another case. Either way, it would have to be pretty pathological code to cause a problem, and I consider the chances of this being a problem in the real world to be essentially zero.

However, I still want to guard against accidental re-entrancy within Noda Time itself – so I’ve got a debug-only version of Format which performs a crude re-entrancy check using another [ThreadStatic] variable. (And I’ve got a debug-only test to check that the check works, of course.) So long as my unit tests all still pass in a debug configuration, I’m reasonably confident. (You can see the check and the test in the revision that added this code. It may be gone in the latest version, of course – I’m still on the fence about all of this.)
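
Conceptually, the guard is as simple as this sketch (the real code may differ in the details – FormatImpl here is a made-up name for the formatting code shown earlier):

#if DEBUG
[ThreadStatic]
private static bool inFormat;

public string Format(TResult value)
{
    if (inFormat)
    {
        throw new InvalidOperationException("Re-entrant Format call detected");
    }
    inFormat = true;
    try
    {
        return FormatImpl(value);
    }
    finally
    {
        inFormat = false;
    }
}
#endif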

There’s a second issue with [ThreadStatic], which is its level of support in the PCL. We just recently updated our PCL profiles, and I think using [ThreadStatic] would have been a problem if we’d continued to support Windows Phone 7, as that didn’t include [ThreadStatic] as far as I can tell. We’re fine in the current profile, but this is the first time I’ve knowingly added code which prevents us from going back to a WP7-enabled profile.

Option 4: ThreadLocal

ThreadLocal is a more recent solution to the thread-local-storage problem. It’s generally a cleaner solution:

  • The variable doesn’t have to be static
  • You can specify an initialization delegate to be executed on first use in any particular thread
  • It’s clear from the type itself what you’re doing, rather than using an attribute which makes a variable act completely differently from normal

That means we can have one StringBuilder per formatter per thread (if we want; we could still make it a static variable if we wanted exactly one StringBuilder for each thread), and simplify the code within the Format method:

private ThreadLocal<StringBuilder> cachedBuilder =
    new ThreadLocal<StringBuilder>(() => new StringBuilder());

public string Format(TResult value)
{
    StringBuilder builder = cachedBuilder.Value;
    builder.Length = 0;
    // This will call all the actions in the multicast delegate.
    formatActions(value, builder);
    return builder.ToString();
}

However, I haven’t tested this code, as ThreadLocal<T> requires .NET 4.0 – whereas Noda Time targets .NET 3.5.

Conclusion

You may have noticed that I haven’t provided any performance figures in this post. That’s deliberate, because the results I’ve seen have had too much variance so far. The basic upshot of this is that I don’t yet have enough evidence of the benefit to justify the complexity – it’s just one StringBuilder per format operation, after all, and there’s plenty of other work going on at the same time. I believe there is a benefit, but it’s pretty marginal – and the level of benefit depends on which of the above options is taken.

I’m intrigued by the first option though, and whether experts would consider it to be safe. I still want to see a really good specification for the .NET memory model (which I know to be stronger than the one in the ECMA CLI specification) – I know it’s something that Microsoft has wanted to get round to writing up, but I don’t know if it’s actually happened… I fully expect to see contradictory opinions in comments :)

New blog hosting

As some of you have noticed (and let me know), my old blog hosting provider recently moved off Community Server to WordPress. I figured that as all the links were being broken anyway, now would be a good time to move off msmvps.com entirely. The old posts are still there, but my blog’s new home is codeblog.jonskeet.uk. Hopefully I’ve fixed (almost) all the internal links from one blog post to another, and Nick Craver has generously agreed to fix up links on Stack Overflow, too. I’ll fix up my web site references when I get the chance, and hopefully things will get back to (mostly) normal as soon as possible. Obviously there’ll be plenty of links elsewhere around the web which I can’t fix, but I suspect I’m my own primary consumer, so to speak.

There are still bound to be teething issues, commenting problems, goodness knows what – but hopefully the blog itself will be in a better state than it was before, overall.

Additionally, I’m hoping to gradually (very gradually) coalesce my online presence around the jonskeet.uk domain. I haven’t set that up at all yet, but that’s the plan.

Apologies for link breakage, and fingers crossed it’ll be relatively smooth sailing from here on.

Micro-optimization: the surprising inefficiency of readonly fields

Introduction

Recently I’ve been optimizing the heck out of Noda Time. Most of the time this has been a case of the normal measurement, find bottlenecks, carefully analyse them, lather, rinse, repeat. Yesterday I had a hunch about a particular cost, and decided to experiment… leading to a surprising optimization.

Noda Time’s core types are mostly value types – date/time values are naturally value types, just as DateTime and DateTimeOffset are in the BCL. Noda Time’s types are a bit bigger than most value types, however – the largest being ZonedDateTime, weighing in at 40 bytes in an x64 CLR at the moment. (I can shrink it down to 32 bytes with a bit of messing around, although it’s not terribly pleasant to do so.) The main reason for the bulk is that we have two reference types involved (the time zone and the calendar system), and in Noda Time 2.0 we’re going to have nanosecond resolution instead of tick resolution (so we need 12 bytes just to store a point in time). While this goes against the Class Library Design Guidelines, it would be odd for the smaller types (LocalDate, LocalTime) to be value types and the larger ones to be reference types. Overall, these still feel like value types.

A lot of these value types are logically composed of each other:

  • A LocalDate is a YearMonthDay and a CalendarSystem reference
  • A LocalDateTime is a LocalDate and a LocalTime
  • An OffsetDateTime is a LocalDateTime and an Offset
  • A ZonedDateTime is an OffsetDateTime and a DateTimeZone reference

This leads to a lot of delegation, potentially – asking a ZonedDateTime for its Year could mean asking the OffsetDateTime, which would ask the LocalDateTime, which would ask the LocalDate, which would ask the YearMonthDay. Very nice from a code reuse point of view, but potentially inefficient due to copying data.

Why would there be data copying involved? Well, that’s where this blog post comes in.

Behaviour of value type member invocations

When an instance member (method or property) belonging to a value type is invoked, the exact behaviour depends on the kind of expression it is called on. From the C# 5 spec, section 7.5.5 (where E is the expression the member M is invoked on, and the type declaring M is a value type):

If E is not classified as a variable, then a temporary local variable of E’s type is created and the value of E is assigned to that variable. E is then reclassified as a reference to that temporary local variable. The temporary variable is accessible as this within M, but not in any other way. Thus, only when E is a true variable is it possible for the caller to observe the changes that M makes to this.

So when is a variable not a variable? When it’s readonly… from section 7.6.4 (emphasis mine):

If T is a struct-type and I identifies an instance field of that class-type:

  • If E is a value, or if the field is readonly and the reference occurs outside an instance constructor of the struct in which the field is declared, then the result is a value, namely the value of the field I in the struct instance given by E.

(There’s a very similar bullet for T being a class-type; the important part is that the field type is a value type.)

The upshot is that if you have a method call of:

int result = someField.Foo();

then it’s effectively converted into this:

var tmp = someField;
int result = tmp.Foo();

Now if the type of the field is quite a large value type, but Foo() doesn’t modify the value (which it never does within my value types), that’s performing a copy completely unnecessarily.

To see this in action outside Noda Time, I’ve built a little sample app.

Show me the code!

Our example is a simple 256-bit type, composed of 4 Int64 values. The type itself doesn’t do anything useful – it just holds the four values, and exposes them via properties. We then measure how long it takes to sum the four properties lots of times.

using System;
using System.Diagnostics;

public struct Int256
{
    private readonly long bits0;
    private readonly long bits1;
    private readonly long bits2;
    private readonly long bits3;
    
    public Int256(long bits0, long bits1, long bits2, long bits3)
    {
        this.bits0 = bits0;
        this.bits1 = bits1;
        this.bits2 = bits2;
        this.bits3 = bits3;
    }
    
    public long Bits0 { get { return bits0; } }
    public long Bits1 { get { return bits1; } }
    public long Bits2 { get { return bits2; } }
    public long Bits3 { get { return bits3; } }
}

class Test
{
    private readonly Int256 value;

    public Test()
    {
        value = new Int256(1L, 5L, 10L, 100L);
    }
    
    public long TotalValue 
    { 
        get 
        {
            return value.Bits0 + value.Bits1 + value.Bits2 + value.Bits3; 
        }
    }
    
    public void RunTest()
    {
        // Just make sure it’s JITted…
        var sample = TotalValue;
        Stopwatch sw = Stopwatch.StartNew();
        long total = 0;
        for (int i = 0; i < 1000000000; i++)
        {
            total += TotalValue;
        }
        sw.Stop();
        Console.WriteLine("Total time: {0}ms", sw.ElapsedMilliseconds);
    }
    
    static void Main()
    {
        new Test().RunTest();
    }
}

Building this from the command line with /o+ /debug- and running it (in a 64-bit CLR, but without RyuJIT) takes about 20 seconds on my laptop. We can make it much faster with just one small change:

class Test
{
    private Int256 value;

    // Code as before
}

The same test now takes about 4 seconds – a 5-fold speed improvement, just by making a field non-readonly. If we look at the IL for the TotalValue property, the copying becomes obvious. Here it is when the field is readonly:

.method public hidebysig specialname instance int64 
        get_TotalValue() cil managed
{
  // Code size       60 (0x3c)
  .maxstack  2
  .locals init (valuetype Int256 V_0,
           valuetype Int256 V_1,
           valuetype Int256 V_2,
           valuetype Int256 V_3)
  IL_0000:  ldarg.0
  IL_0001:  ldfld      valuetype Int256 Test::’value’
  IL_0006:  stloc.0
  IL_0007:  ldloca.s   V_0
  IL_0009:  call       instance int64 Int256::get_Bits0()
  IL_000e:  ldarg.0
  IL_000f:  ldfld      valuetype Int256 Test::’value’
  IL_0014:  stloc.1
  IL_0015:  ldloca.s   V_1
  IL_0017:  call       instance int64 Int256::get_Bits1()
  IL_001c:  add
  IL_001d:  ldarg.0
  IL_001e:  ldfld      valuetype Int256 Test::’value’
  IL_0023:  stloc.2
  IL_0024:  ldloca.s   V_2
  IL_0026:  call       instance int64 Int256::get_Bits2()
  IL_002b:  add
  IL_002c:  ldarg.0
  IL_002d:  ldfld      valuetype Int256 Test::’value’
  IL_0032:  stloc.3
  IL_0033:  ldloca.s   V_3
  IL_0035:  call       instance int64 Int256::get_Bits3()
  IL_003a:  add
  IL_003b:  ret
} // end of method Test::get_TotalValue

And here it is when the field’s not readonly:

.method public hidebysig specialname instance int64 
        get_TotalValue() cil managed
{
  // Code size       48 (0x30)
  .maxstack  8
  IL_0000:  ldarg.0
  IL_0001:  ldflda     valuetype Int256 Test::’value’
  IL_0006:  call       instance int64 Int256::get_Bits0()
  IL_000b:  ldarg.0
  IL_000c:  ldflda     valuetype Int256 Test::’value’
  IL_0011:  call       instance int64 Int256::get_Bits1()
  IL_0016:  add
  IL_0017:  ldarg.0
  IL_0018:  ldflda     valuetype Int256 Test::’value’
  IL_001d:  call       instance int64 Int256::get_Bits2()
  IL_0022:  add
  IL_0023:  ldarg.0
  IL_0024:  ldflda     valuetype Int256 Test::’value’
  IL_0029:  call       instance int64 Int256::get_Bits3()
  IL_002e:  add
  IL_002f:  ret
} // end of method Test::get_TotalValue

Note that it’s still loading the field address (ldflda) four times. You might expect that copying the field onto the stack once via a temporary variable would be faster, but that ends up at about 6.5 seconds on my machine.

There is an optimization which is even faster – moving the totalling property into Int256. That way (with the non-readonly field, still) the total time is less than a second – twenty times faster than the original code!
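
In sketch form, that just means giving Int256 its own totalling member and delegating to it:

public struct Int256
{
    // Fields and constructor as before...

    // Reading the fields via 'this' avoids taking a defensive copy of the
    // whole 32-byte struct for each property access.
    public long Total { get { return bits0 + bits1 + bits2 + bits3; } }
}

// And in Test:
public long TotalValue { get { return value.Total; } }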

Conclusion

This isn’t an optimization I’d recommend in general. Most code really doesn’t need to be micro-optimized this hard, and most code doesn’t deal with large value types like the ones in Noda Time. However, I regard Noda Time as a sort of "system level" library, and I don’t ever want someone to decide not to use it on performance grounds. My benchmarks show that for potentially-frequently-called operations (such as the properties on ZonedDateTime) it really does make a difference, so I’m going to go for it.

I intend to apply a custom attribute to each of these "would normally be readonly" fields to document the intended behaviour of the field – and then when Roslyn is fully released, I’ll probably write a test to validate that all of these fields would still compile if the field were made readonly (e.g. that they’re never assigned to outside the constructor).

Aside from anything else, I find the subtle difference in behaviour between a readonly field and a read/write field fascinating… it’s something I’d been vaguely aware of in the past, but this is the first time that it’s had a practical impact on me. Maybe it’ll never make any difference to your code… but it’s probably worth being aware of anyway.

Anti-pattern: parallel collections

(Note that I’m not talking about "processing collections in parallel", which is definitely not an anti-pattern…)

I figured it was worth starting to blog about anti-patterns I see frequently on Stack Overflow. I realize that some or all of these patterns may be collected elsewhere, but it never hurts to express such things yourself… it’s a good way of internalizing information, aside from anything else. I don’t guarantee that my style of presenting these will stay consistent, but I’ll do what I can…

The anti-patterns themselves are likely to be somewhat language-agnostic, or at the very least common between Java and C#. I’m likely to post code samples in C#, but I don’t expect it to be much of a hindrance to anyone coming from a background in a similar language.

Context

You have related pieces of data about each of several items, and want to keep this data in memory. For example, you’re writing a game and have multiple players, each with a name, score and health.

Anti-pattern

Each kind of data is stored (all the names, all the scores, all the health values) in a separate collection. Typically I see this with arrays. Then each time you need to access related values, you need to make sure you’re using the same index for each collection.

So the code might look like this:

class Game
{
    string[] names;
    int[] scores;
    int[] health;

    public Game() 
    { 
        // Initialize all the values appropriately
    } 

    public void PrintScores() 
    { 
        for (int i = 0; i < names.Length; i++) 
        { 
            Console.WriteLine("{0}: {1}", names[i], scores[i]); 
        } 
    } 
}

Preferred approach

The code above fails to represent an entity which seems pretty obvious when you look at the description of the data: a player. Whenever you find yourself describing pieces of data which are closely related, you should make sure you have some kind of representation of that in your code. (In some cases an anonymous type is okay, but often you’ll want a separate named class.)

Once you’ve got that type, you can use a single collection, which makes the code much cleaner to work with.

class Player
{
    public string Name { get; private set; }
    public int Score { get; set; }
    public int Health { get; set; }

    public Player(string name, int health)
    {
        Name = name;
        Health = health;
    }

    // Consider other player-specific operations
}

class Game
{
    List<Player> players;

    public Game()
    {
        // Initialize players appropriately
    }

    public void PrintScores()
    {
        foreach (var player in players)
        {
            Console.WriteLine("{0}: {1}", player.Name, player.Score);
        }
    }
}

Note how we can now use a foreach loop to iterate over our players, because we don’t need to use the same index for two different collections.

Once you perform this sort of refactoring, you may well find that there are other operations within the Game class which would be better off in the Player class. For example, if you also had a Level property, and increasing that would automatically increase a player’s health and score, then it makes much more sense for that "level up" operation to be in Player than in Game. Without the Player concept, you’d have nowhere else to put the code, but once you’ve identified the relationship between the values, it becomes much simpler to work with.

It’s also much easier to modify a single collection than multiple ones. For example, if we wanted to add or remove a player, we now just need to make a change to a single collection, instead of making sure we perform the same operation to each "single value" collection in the original code. This may sound like a small deal, but it’s easy to make a mistake and miss out on one of the collections somewhere. Likewise if you need to add another related value – like the "level" value described above – it’s much easier to add that in one place than adding another collection and then making sure you do the right thing in every piece of code which changes any of the other collections.

Summary

Any time you find yourself with multiple collections sharing the same keys (whether those are simple list indexes or dictionary keys), think about whether you could have a single collection of a type which composes the values stored in each of the original collections. As well as making it easier to handle the collection data, you may find the new type allows you to encapsulate other operations more sensibly.

Update: what about performance?

As some readers have noted, this transformation can have an impact on performance. Originally, all the scores were kept close together in memory, as were all the health values, and so on. If you perform a bulk operation on the scores (finding the average score, for example), that locality of reference can have a significant impact. In some cases that may be enough justification to use the parallel collections instead… but this should be a conscious decision, having weighed up the pros and cons and measured the performance impact. Even then, I’d be tempted to encapsulate that PlayerCollection in a separate type, allowing it to implement IEnumerable<Player> where useful, as shown in the sketch below. (If you wanted the Player to be mutable, you’d need it to be aware of PlayerCollection itself.)
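
Here’s a sketch of what I mean – the names are illustrative rather than anything I’ve actually shipped:

using System.Collections;
using System.Collections.Generic;
using System.Linq;

class PlayerCollection : IEnumerable<Player>
{
    // The parallel arrays stay together, preserving locality of reference...
    private readonly string[] names;
    private readonly int[] scores;
    private readonly int[] health;

    public PlayerCollection(string[] names, int[] scores, int[] health)
    {
        this.names = names;
        this.scores = scores;
        this.health = health;
    }

    // ...so bulk operations can still exploit the contiguous storage.
    public double AverageScore { get { return scores.Average(); } }

    // ...but callers only ever see players. Note that each Player is created
    // on demand, so mutating one doesn't write back to the arrays - hence the
    // caveat about mutability above.
    public IEnumerator<Player> GetEnumerator()
    {
        for (int i = 0; i < names.Length; i++)
        {
            yield return new Player(names[i], health[i]) { Score = scores[i] };
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}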

In almost all these anti-patterns, there will be cases where they’re the lesser of two evils – but novice developers need to be aware of them as anti-patterns to start with. As ever with performance trade-offs, I believe in first deciding on concrete performance goals, then implementing the code in the simplest possible way that meets the non-performance goals, measuring against the performance goals, and tweaking if necessary to achieve them, relying heavily on measurement.

Diagnosing Portable Class Libraries

I’ve always found Portable Class Library (PCL) configuration to be a bit of a mystery. In simple cases, it’s simple: start a new PCL project in Visual Studio, select the environments you want to support, and away you go. But what’s going on under the hood, and what do all the options mean? How do I do this without Visual Studio?

Background: supporting Universal Windows Applications in Noda Time

Recently, I had a feature request for Noda Time to support Windows Phone 8.1 and Universal Windows Applications. This is something I haven’t done before, but it doesn’t sound too hard – it’s just a matter of changing an option in Visual Studio, surely… Unfortunately it’s not that simple for Noda Time. In order to support .NET 3.5 (and some other features which aren’t available in PCLs) the core NodaTime.dll project has multiple configurations which effectively split into "Desktop" and "PCL". This has been achieved by hacking around with the .csproj file manually – nothing particularly clever, but the sort of thing which can’t be done directly in Visual Studio. This means that when I open the project properties in Visual Studio – even when I’m building a PCL version – it treats the project as a regular class library. (It confuses the heck out of ReSharper too in some cases, unfortunately.)

That’s not too hard to work around – I just need to know the name of the profile I want to support. A profile name is basically a single identifier which refers to the configuration of environments you want to support. Unfortunately, it’s pretty opaque – Noda Time 1.2 ships with “Profile2”, which means “.NET 4.0, Windows 8 (store apps), Silverlight 4, Windows Phone 7”. The simplest way to work out what profile I wanted was to create a new “regular” PCL in Visual Studio, unload the project, manually set the profile to Profile2 in the project file, reload the project, and then enable “Windows Phone 8.1”.

That then led to the next problem: Visual Studio 2013 (update 2) wanted to upgrade that sample project… because Profile2 is no longer really an option. You can’t manually set that combination in Visual Studio any more – it doesn’t display anything earlier than Silverlight 5 and Windows Phone 8. I now have two problems:

  • Given that Visual Studio changes the project file for me with a warning when I open it, how can I check what I’ve actually built against? Does msbuild silently do the same thing?
  • How can I find out what profiles are available, in order to select the most appropriate one?

Time to do some research…

Detecting a profile in a built assembly

First things first: if someone gives you a DLL, how can you detect what profile it was built against, and therefore which environments it can run in?

After hunting around in vain for a while, Petr K helped me out on Twitter: the crucial information is in the TargetFrameworkAttribute and its FrameworkName property, which is something like ".NETPortable,Version=v4.0,Profile=Profile2". That can then be used with the FrameworkName class to parse out the profile name without writing any string manipulation code.
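
In code, the detection is only a few lines – a sketch, assuming the assembly can simply be loaded from disk:

using System;
using System.Reflection;
using System.Runtime.Versioning;
...
Assembly assembly = Assembly.LoadFrom("NodaTime.dll");
var attribute = (TargetFrameworkAttribute) Attribute.GetCustomAttribute(
    assembly, typeof(TargetFrameworkAttribute));
// attribute.FrameworkName is e.g. ".NETPortable,Version=v4.0,Profile=Profile2"
var frameworkName = new FrameworkName(attribute.FrameworkName);
Console.WriteLine(frameworkName.Profile); // "Profile2"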

Then you just need to find the details of the profile…

Where are all the profiles?

Windows stores the profiles along with the reference assemblies, in the .NETPortable framework directory – on my machine, for example, that’s "C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETPortable". There are then subdirectories v4.0, v4.5 and v4.6 – although it’s not 100% clear what those refer to. Within each of those, there’s a Profile directory, which contains a bunch of Profile[n] directories. Then under each Profile[n] directory, a SupportedFrameworks directory contains XML documents describing the supported frameworks for that profile. Phew! So as an example:

C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETPortable\v4.0\Profile\Profile2

contains the profile information for Profile2, including files:

  • C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETPortable\v4.0\Profile\Profile2\SupportedFrameworks\.NET Framework 4.xml
  • C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETPortable\v4.0\Profile\Profile2\SupportedFrameworks\Silverlight.xml
  • C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETPortable\v4.0\Profile\Profile2\SupportedFrameworks\Windows 8.xml
  • C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETPortable\v4.0\Profile\Profile2\SupportedFrameworks\Windows Phone Silverlight 7.xml

Each of those XML files consists of a single Framework element, with a variety of attributes. Not all elements have the same set of attributes, although everything has an Identifier, Family, Profile, MinimumVersion and DisplayName as far as I’ve seen. Here’s an example – the "Windows Phone Silverlight 7.xml" file listed above:

<?xml version="1.0" encoding="utf-8"?>
<Framework
    Identifier="Silverlight"
    Profile="WindowsPhone*"
    MinimumVersion="4.0"
    Family="WindowsPhone"
    MinimumVisualStudioVersion="10.0"
    MaximumVisualStudioVersion="11.0"
    DisplayName="Windows Phone Silverlight"
    MinimumVersionDisplayName="7"
    PlatformArchitectures="AnyCPU" />

As noted by David Kean in comments, the "Identifier", "Profile" and "MinimumVersion" are mapped to the corresponding <TargetFramework*> properties in MSBuild; the combination of the three is unique for a target, but you need to be careful to look at all the relevant information… don’t just look at the Identifier attribute. Additionally, if there’s a MinimumVersionDisplayName, that’s probably the version you’ll be more familiar with – so in the example above, the displayed minimum version number is 7, which is what people think about for Windows Phone versions, even though the real minimum version number is 4.0 as far as the build is concerned.

Read David’s comment for more details – I don’t want to paraphrase it too much, in case I accidentally write something misleading.

What about the Xamarin frameworks?

As you can see above, Profile2 doesn’t include anything that looks like it’s a Xamarin framework. (Compare that with Profile136 for example, which includes Xamarin.iOS.xml and Xamarin.Android.xml.) But haven’t I previously said that Noda Time 1.2 works seamlessly in Xamarin.iOS and Xamarin.Android? Indeed I have. This is all very confusing, but as far as I’m aware, the Xamarin.* frameworks are only required if you want to build within Xamarin Studio and you’re not on a Windows machine. I could be wrong about that, but certainly there are some profiles which will work with some versions of Xamarin without a Xamarin-specific framework being on the list.

I would welcome more clarity on this, and if I learn any more about it (either in comments here or via other channels) I’ll update this post.

Where should the files go in NuGet packages?

The next issue to consider is that of NuGet packages. A NuGet package can contain multiple binaries for multiple target profiles – for example, the NodaTime NuGet package includes both the desktop and PCL versions. NuGet adds a reference to the right one when you install a package. How can it tell which binary is which? Through a directory structure. Here’s a piece of Noda Time’s nuspec package:

<files>
  <file src="binSigned ReleaseNodaTime.dll" target="libnet35-Client" />
  <file src="binSigned ReleaseNodaTime.pdb" target="libnet35-Client" />
  <file src="binSigned ReleaseNodaTime.xml" target="libnet35-Client" />
  <file src="binSigned Release PortableNodaTime.dll" target="libportable-win+net40+sl4+wp7" />
  <file src="binSigned Release PortableNodaTime.pdb" target="libportable-win+net40+sl4+wp7" />
  <file src="binSigned Release PortableNodaTime.xml" target="libportable-win+net40+sl4+wp7" />
  <file src="***.cs" target="src" />
</files>

As you can see, the PCL files end up in lib\portable-win+net40+sl4+wp7. That works for Profile2, but it’s not the right directory for the profile I’ll end up using. So how do I work out what directory to use in the next version? I can look at the NuGet source code, or this handy table of profiles and corresponding NuGet targets. (Interestingly, that lists the directory for Profile2 as portable-net4+sl4+netcore45+wp7, so I’m not entirely sure what’s going on. I suspect the table is more reliable than wherever I got the name from when preparing a Noda Time release.)

This all sounds a bit fiddly. Shouldn’t there be a tool to make it simple?

Yes, yes there should. As it happens, Stephen Cleary has already written one, although I didn’t realise this until I’d also written one myself. Indeed, the table linked above is the output from Stephen’s tool.

My own tool is called PclPal, and the source is available on GitHub. Don’t expect polished code (or tests) – this was very much a quick and dirty hack to make life slightly easier. It could be the basis of a more polished tool – possibly even a "PCL profile explorer" or something similar. Oh, and it requires the Roslyn End User Preview.

Currently it’s just a command line tool with three options:

  • "list" – lists the profiles on your system
  • "dll <filename>" – shows the details of the profile a particular assembly was built against
  • "show <profile name>" – shows the details of a specify profile

Note that currently I’m not doing anything in terms of NuGet package directories, although that’s probably the next step, if I take this any further. (I may just use Stephen’s tool, of course…)

Conclusion

The world of PCLs is a bit of a mess, to say the least. It’s possible that there’s lots of really clear, up-to-date documentation explaining everything somewhere – but I have yet to find it. I suspect the main problem is that this is a moving target in all kinds of ways. Versioning is never easy, and between all the different variations possible here, I’m not entirely surprised it’s confusing.

Hopefully this post will help to shed a bit more light on things – please feel very free to leave corrections, as I definitely don’t want to propagate any falsehoods. I’ve definitely learned a thing or two in the last few days, and with tools such as PclPal and Stephen’s PortableLibraryProfiles, life should be that little bit easier…
