Category Archives: Evil code

There’s a hole in my abstraction, dear Liza, dear Liza

I had an interesting day at work today. I thought my code had broken… but it turns out it was just a strange corner case which made it work very slowly. Usually when something interesting happens in my code it’s quite hard to blog about it, because of all the confidentiality issues involved. In this case, it’s extremely easy to reproduce the oddity in an entirely vanilla manner. All we need is the Java collections API.

I have a set – a HashSet, in fact. I want to remove some items from it… and many of the items may well not exist. In fact, in our test case, none of the items in the "removals" collection will be in the original set. This sounds – and indeed is – extremely easy to code. After all, we’ve got Set<T>.removeAll to help us, right?

Let’s make this concrete, and look at a little test. We specify the size of the "source" set and the size of the "removals" collection on the command line, and build both of them. The source set contains only non-negative integers; the removals collection contains only negative integers. We measure how long it takes to remove all the elements using System.currentTimeMillis(), which isn’t the world’s most accurate stopwatch but is more than adequate in this case, as you’ll see. Here’s the code:

import java.util.*;

public class Test
{
    public static void main(String[] args)
    {
        int sourceSize = Integer.parseInt(args[0]);
        int removalsSize = Integer.parseInt(args[1]);
        
        Set<Integer> source = new HashSet<Integer>();
        Collection<Integer> removals = new ArrayList<Integer>();
        
        for (int i = 0; i < sourceSize; i++)
        {
            source.add(i);
        }
        for (int i = 1; i <= removalsSize; i++)
        {
            removals.add(-i);
        }
        
        long start = System.currentTimeMillis();
        source.removeAll(removals);
        long end = System.currentTimeMillis();
        System.out.println("Time taken: " + (end - start) + "ms");
    }
}

Let’s start off by giving it an easy job: a source set of 100 items, and 100 to remove:

c:\Users\Jon\Test>java Test 100 100
Time taken: 1ms

Okay, so we hadn’t expected it to be slow… clearly we can ramp things up a bit. How about a source of one million items¹ and 300,000 items to remove?

c:\Users\Jon\Test>java Test 1000000 300000
Time taken: 38ms

Hmm. That still seems pretty speedy. Now I feel I’ve been a little bit cruel, asking it to do all that removing. Let’s make it a bit easier – 300,000 source items and 300,000 removals:

c:\Users\Jon\Test>java Test 300000 300000
Time taken: 178131ms

Excuse me? Nearly three minutes? Yikes! Surely removing 300,000 items from a set of 300,000 ought to be quicker than the million-element job we managed in 38ms? Well, it does all make sense, eventually. HashSet<T> extends AbstractSet<T>, which includes this snippet in its documentation for the removeAll method:

This implementation determines which is the smaller of this set and the specified collection, by invoking the size method on each. If this set has fewer elements, then the implementation iterates over this set, checking each element returned by the iterator in turn to see if it is contained in the specified collection. If it is so contained, it is removed from this set with the iterator’s remove method. If the specified collection has fewer elements, then the implementation iterates over the specified collection, removing from this set each element returned by the iterator, using this set’s remove method.

Now that sounds reasonable on the surface of it – iterate through the smaller collection, check for the presence in the bigger collection. However, this is where the abstraction is leaky. Just because we can ask for the presence of an item in a large collection doesn’t mean it’s going to be fast. In our case, the collections are the same size – but checking for the presence of an item in the HashSet is O(1), whereas checking in the ArrayList is O(N)… and the cost of iterating is going to be the same for each collection. Basically, by choosing to iterate over the HashSet and check for presence in the ArrayList, we’ve got an O(M * N) solution overall instead of an O(N) solution. Ouch. The removeAll method is making an "optimization" based on assumptions which just aren’t valid in this case.

Then fix it, dear Henry, dear Henry, dear Henry

There are two simple ways of fixing the problem. The first is simply to change the type of the removals collection: changing ArrayList<Integer> to HashSet<Integer> gets us back down to the 34ms range. We don’t even need to change the declared type of removals.

The second approach is to change the API we use: if we know we want to iterate over removals and perform the lookup in source, that’s easy to do:

for (Integer value : removals)
{
    source.remove(value);
}

In fact, on my machine that performs slightly better than removeAll – it doesn’t need to check the return value of remove on each iteration, which removeAll does in order to return whether or not any items were removed. The above runs in about 28ms. (I’ve tested it with rather larger datasets, and it really is faster than the dual-hash-set approach.)

However, both of these approaches require comments in the source code to explain why we’re not using the most obvious code (a list and removeAll). I can’t complain about the documentation here – it says exactly what it will do. It’s just not obvious that you need to worry about it, until you run into such a problem.

So what should the implementation do? Arguably, it really needs to know what’s cheap in each of the collections it’s dealing with. The idea of probing for performance characteristics before you decide on a strategy is completely anathema to the clean abstraction we like to associate with frameworks like the Java collections API… but maybe in this case it would be a good idea.


¹ Please perform Dr Evil impression while reading this. I’m watching you through your webcam, and I’ll be disappointed if I don’t see you put your little finger to your mouth.

“Magic” null argument testing

Warning: here be dragons. I don’t think this is the right way to check for null arguments, but it was an intriguing idea.

Today on Stack Overflow, I answered a question about checking null arguments. The questioner was already using an extension similar to my own one in MiscUtil, allowing code like this:

public void DoSomething(string name)
{
    name.ThrowIfNull("name");

    // Normal code here
}
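
For reference, such an extension method can be as simple as the sketch below – the class name here is just illustrative, and the real MiscUtil version may differ in detail:

public static class ArgumentValidationExtensions
{
    // Throws ArgumentNullException using the caller-supplied parameter name.
    // (Illustrative sketch only; not the exact MiscUtil implementation.)
    public static void ThrowIfNull<T>(this T value, string name) where T : class
    {
        if (value == null)
        {
            throw new ArgumentNullException(name);
        }
    }
}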

That’s all very well, but it’s annoying to have to repeat the name part. Now in an ideal world, I’d say it would be nice to add an attribute to the parameter and have the check performed automatically (and when PostSharp works with .NET 4.0, I’m going to give that a go, mixing Code Contracts and AOP…) – but for the moment, how far can we go with extension methods?

I stand by my answer from that question – the code above is the simplest way to achieve the goal for the moment… but another answer raised the interesting prospect of combining anonymous types, extension methods, generics, reflection and manually-created expression trees. Now that’s a recipe for hideous code… but it actually works.

The idea is to allow code like this:

public void DoSomething(string name, string canBeNull, int foo, Stream input)
{
    new { name, input }.CheckNotNull();

    // Normal code here
}

That should check name and input, in that order, and throw an appropriate ArgumentNullException – including the parameter name – if one of them is null. It uses the fact that projection initializers in anonymous types use the primary expression’s name as the property name in the generated type, and the value of that expression ends up in the instance. Therefore, given an instance created from an anonymous type initializer like the one above, we have both the name and the value despite having typed each of them only once.
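
To make that concrete, for new { name, input } the compiler generates a type broadly like the sketch below. This is illustrative only: the real type has a compiler-generated name (and generic type parameters), and it also overrides Equals, GetHashCode and ToString.

// Rough sketch of the generated anonymous type (assumes using System.IO for Stream).
internal sealed class AnonymousType1
{
    private readonly string nameField;   // real backing fields have compiler-mangled names
    private readonly Stream inputField;

    // The constructor parameter names match the property names – which is
    // exactly what the null-checking code below relies on.
    public AnonymousType1(string name, Stream input)
    {
        this.nameField = name;
        this.inputField = input;
    }

    // Property names are taken from the expressions used in the initializer.
    public string name { get { return nameField; } }
    public Stream input { get { return inputField; } }
}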

Now obviously this could be done with normal reflection – but that would be slow as heck. No, we want to effectively find the properties once, and generate strongly typed delegates to perform the property access. That sounds like a job for Delegate.CreateDelegate, but it’s not quite that simple… to create the delegate, we’d need to know (at compile time) what the property type is. We could do that with another generic type, but we can do better than that. All we really need to know about the value is whether or not it’s null. So given a "container" type T, we’d like a bunch of delegates, one for each property, returning whether that property is null for a specified instance – i.e. a Func<T, bool>. And how do we build delegates at execution time with custom logic? We use expression trees…

I’ve now implemented this, along with a brief set of unit tests. The irony is that the tests took longer than the implementation (which isn’t very unusual) – and so did writing it up in this blog post. I’m not saying that it couldn’t be improved (and indeed in .NET 4.0 I could probably make the delegate throw the relevant exception itself) but it works! I haven’t benchmarked it, but I’d expect it to be nearly as fast as manual tests – insignificant in methods that do real work. (The same wouldn’t be true using reflection every time, of course.)

The full project including test cases is now available, but here’s the (almost completely uncommented) "production" code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Linq.Expressions;

public static class Extensions
{
    public static void CheckNotNull<T>(this T container) where T : class
    {
        if (container == null)
        {
            throw new ArgumentNullException("container");
        }
        NullChecker<T>.Check(container);
    }

    private static class NullChecker<T> where T : class
    {
        private static readonly List<Func<T, bool>> checkers;
        private static readonly List<string> names;

        static NullChecker()
        {
            checkers = new List<Func<T, bool>>();
            names = new List<string>();
            // We can’t rely on the order of the properties, but we
            // can rely on the order of the constructor parameters
            // in an anonymous type – and that there’ll only be
            // one constructor.
            foreach (string name in typeof(T).GetConstructors()[0]
                                             .GetParameters()
                                             .Select(p => p.Name))
            {
                names.Add(name);
                PropertyInfo property = typeof(T).GetProperty(name);
                // I’ve omitted a lot of error checking, but here’s
                // at least one bit…
                if (property.PropertyType.IsValueType)
                {
                    throw new ArgumentException
                        ("Property " + property + " is a value type");
                }
                ParameterExpression param = Expression.Parameter(typeof(T), "container");
                Expression propertyAccess = Expression.Property(param, property);
                Expression nullValue = Expression.Constant(null, property.PropertyType);
                Expression equality = Expression.Equal(propertyAccess, nullValue);
                var lambda = Expression.Lambda<Func<T, bool>>(equality, param);
                checkers.Add(lambda.Compile());
            }
        }

        internal static void Check(T item)
        {
            for (int i = 0; i < checkers.Count; i++)
            {
                if (checkers[i](item))
                {
                    throw new ArgumentNullException(names[i]);
                }
            }
        }
    }
}
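
As an illustration of the kind of test involved – a hypothetical NUnit-style test, not one of the original project’s tests:

// Hypothetical test sketch (assumes using NUnit.Framework).
[Test]
public void NullSecondArgumentIsReportedByName()
{
    string first = "hello";
    string second = null;
    var exception = Assert.Throws<ArgumentNullException>(
        () => new { first, second }.CheckNotNull());
    Assert.AreEqual("second", exception.ParamName);
}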

Oh, and just as a miracle – the expression tree worked first time. I’m no Marc Gravell, but I’m clearly improving :)

Update: Marc Gravell pointed out that the order of the results of Type.GetProperties isn’t guaranteed – something I should have remembered myself. However, the order of the constructor parameters will be the same as in the anonymous type initialization expression, so I’ve updated the code above to reflect that. Marc also showed how it could almost all be put into a single expression tree which returns either null (for no error) or the name of the "failing" parameter. Very clever :)

API design: choosing between non-ideal options

So, UnconstrainedMelody is coming on quite nicely. It now has quite a few useful options for flags enums, "normal enums" and delegates. However, two conflicting limitations leave me with a couple of options, neither of which is ideal. (Related answers on Stack Overflow have suggested alternative approaches, too.)

Currently, most of the enums code is in two classes: Flags and Enums. Both are non-generic: the methods within them are generic methods, so they have type parameters (and constraints). The main benefit of this is that generic type inference only applies to generic methods, and I definitely want that for extension methods and anywhere else it makes sense.

The drawback is that properties can’t be generic. That means my API is entirely expressed in terms of methods, which can be a pain. One way to work around this is to have a generic type for the properties to live in. This adds confusion and guesswork – which call lives where?

To recap, the options are:

// Option 1 (current): all methods in a nongeneric class:
// Some calls which are logically properties end up
// as methods…
IList<Foo> foos = Enums.GetValues<Foo>();
// Type inference for extension methods
// Note that we couldn’t have a Description property
// as we don’t have extension properties
string firstDescription = foos[0].GetDescription();
        
// Option 2: Use just a generic type:
// Now we can use a property…
IList<Foo> foos = Enums<Foo>.Values;
// But we can’t use type inference
string firstDescription = Enums<Foo>.GetDescription(foos[0]);
        
// Option 3: Use a mixture (Enums and Enums<T>):
IList<Foo> foos = Enums<Foo>.Values;
// All looks good…
string firstDescription = foos[0].GetDescription();
// … but the user has to know when to use which class

All of these are somewhat annoying. If we only put extension methods into the nongeneric class, then I guess users would never need to really think about that – they’d pretty much always be calling the methods via the extension method syntactic sugar anyway. It still feels like a pretty arbitrary split though.
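
To illustrate, that split might look roughly like the sketch below. The member names mirror the snippet above; where T : struct stands in here for the enum-only constraint that the real library enforces, and the bodies are simplified implementations rather than UnconstrainedMelody’s actual code.

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Reflection;

// Nongeneric class: extension methods only, so callers get type inference.
public static class Enums
{
    public static string GetDescription<T>(this T value) where T : struct
    {
        // Reads a [Description] attribute if present, falling back to the name.
        FieldInfo field = typeof(T).GetField(value.ToString());
        var description = (DescriptionAttribute) Attribute.GetCustomAttribute(
            field, typeof(DescriptionAttribute));
        return description == null ? value.ToString() : description.Description;
    }
}

// Generic class: anything that is logically a property lives here.
public static class Enums<T> where T : struct
{
    public static IList<T> Values
    {
        get { return Enum.GetValues(typeof(T)).Cast<T>().ToList(); }
    }
}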

Any thoughts? Which is more important – conceptual complexity, or the idiomatic client code you end up with once that complexity has been mastered? Is it reasonable to make design decisions like this around what is essentially a single piece of syntactic sugar (extension methods)?

(By the way, if anyone ever wanted justification for extension properties, I think this is a good example… Description feels like it really should be a property.)

Evil Code of the Day: variance and overloading

(Note that this kind of breakage was mentioned a long time ago in Eric Lippert’s blog, although not in this exact form.)

Whenever a conversion becomes available where it wasn’t before, overload resolution can change its behaviour. From C# 1 to C# 2 this happened due to delegate variance with method group conversions – now the same thing is true for generic variance for interfaces.

What does the following code print?

using System;
using System.Collections.Generic;

class Base
{
    public void Foo(IEnumerable<string> strings)
    {
        Console.WriteLine("Strings");
    }
}

class Derived : Base
{
    public void Foo(IEnumerable<object> objects)
    {
        Console.WriteLine("Objects");
    }
}

class Test
{
    static void Main()
    {
        List<string> strings = new List<string>();
        new Derived().Foo(strings);
    }
}

The correct answer is “it depends on which version of C# and .NET framework you’re using.”

If you’re using C# 4.0 and .NET 4.0, then IEnumerable<T> is covariant: there’s an implicit conversion from IEnumerable<string> to IEnumerable<object>, so the derived overload is used.

If you’re using C# 4.0 but .NET 3.5 or earlier then the compiler still knows about variance in general, but the interface in the framework doesn’t have the appropriate metadata to indicate it, so there’s no conversion available, and the base class overload is used.

If you’re using C# 3.0 or earlier then the compiler doesn’t know about generic variance at all, so again the base class overload is used.

So, this is a breaking change, and a fairly subtle one at that – and unlike the method group conversion change in C# 2, this one doesn’t produce a warning from the .NET 4.0 beta 1 compiler. I’ll edit this post when there’s an appropriate Connect ticket about it…

In general though, I’d say it’s worth avoiding overloading a method declared in a base class unless you really have to. In particular, overloading it using the same number of parameters but more general ones seems to be a recipe for unreadable code.
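
If you do end up with overloads like these, a caller can at least pin down which one runs by casting the receiver – casting the argument wouldn’t help here, because the covariant conversion would still make the derived overload applicable:

Derived d = new Derived();
d.Foo(strings);            // "Objects" with C# 4.0 and .NET 4.0; "Strings" otherwise
((Base) d).Foo(strings);   // always "Strings": only the base class method is considered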