Corner cases in Java and C#

Every language has a few interesting corner cases – bits of surprising behaviour
which can catch you out if you’re unlucky. I’m not talking about the kind of thing
that all developers should really be aware of – the inefficiencies of repeatedly
concatenating strings, etc. I’m talking about things which you would never suspect
until you bump into them. Both C#/.NET and Java have some oddities in this respect,
and as most are understandable even to a developer who is used to the other, I thought
I’d lump them together.

Interned boxing – Java 1.5

Java 1.5 introduced autoboxing of primitive types – something .NET has had
from the start. In Java, however, there’s a slight difference – the boxed
types have been available for a long time, and are proper named reference
types just as you’d write elsewhere. In this example, we’ll look at
int boxing to java.lang.Integer. What would you
expect the results of the following operation to be?

Object x = 5;
Object y = 5;
boolean equality = (x==y);

Personally, I’d expect the answer to be false. We’re testing for reference equality
here, after all – and when you box two values, they’ll end up in different boxes,
even if the values are the same, right? Wrong. Java 1.5 (or rather, Sun’s current
implementation of Java 1.5) has a sort of cache of interned values between -128 and
127 inclusive. The language specification explicitly states that programmers shouldn’t
rely on two boxed values of the same original value being different (or being the
same, of course). Goodness only knows whether or not this actually yields performance
improvements in real life, but it can certainly cause confusion. I only ran into it
when I had a unit test which incorrectly asserted reference equality rather than
value equality between two boxed values. The tests worked for ages, until I added
something which took the value I needed to test against above 127.
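
For contrast, .NET makes no such promise in either direction, and on the CLR implementations I've tried each boxing conversion produces a brand new box, so reference equality fails even for small values. This sketch just demonstrates the behaviour I've observed, not anything the C# specification guarantees:

```csharp
using System;

object x = 5;
object y = 5;

// Two separate boxing conversions give two distinct objects on the CLR,
// so reference equality is false even though the values match
Console.WriteLine(ReferenceEquals(x, y));

// Value equality still works: Int32.Equals compares the boxed values
Console.WriteLine(x.Equals(y));
```

Note that `x == y` here would also be a reference comparison, because the static types are `object`.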

Lazy initialisation and the static constructor – C#

One of the things which is sometimes important about the pattern I usually use when
implementing a singleton
is that it’s only initialised when it’s first used – or is it? After a newsgroup
question asked why the supposedly lazy pattern wasn’t working, I investigated a little,
finding out that there’s a big difference between using an initialiser directly on
the static field declaration, and creating a static constructor which assigns the value.
Full details are on my beforefieldinit page.

The old new object – .NET

I always believed that using new with a reference type would give me
a reference to a brand new object. Not quite so – the overload for the String
constructor which takes a char[] as its single parameter will return
String.Empty if you pass it an empty array. Strange but true.
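
A quick sketch to see this in action. The reference check reflects implementation behaviour I've observed on the runtimes I've tried, rather than anything the documentation promises:

```csharp
using System;

string fromEmptyArray = new string(new char[0]);
string fromNonEmpty = new string(new[] { 'h', 'i' });

// The empty-array case hands back the interned String.Empty
// rather than a brand new instance
Console.WriteLine(ReferenceEquals(fromEmptyArray, string.Empty));

// A non-empty array really does give a new string with the expected contents
Console.WriteLine(fromNonEmpty);
```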

When is == not reflexive? – .NET

Floating point numbers have been
the cause of many headaches over the years. It’s relatively well known that “not a number” is not equal
to itself (i.e. if x=double.NaN, then x==x is false).
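
A tiny demonstration; note that double.IsNaN is the way to test for NaN, precisely because == can't:

```csharp
using System;

double x = double.NaN;

// NaN compares unequal to everything, itself included
Console.WriteLine(x == x);          // False
Console.WriteLine(x != x);          // True

// The supported way to test for NaN
Console.WriteLine(double.IsNaN(x)); // True
```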

It’s slightly more surprising when two values which look like they really, really should be equal just
aren’t. Here are a couple of sample programs:

using System;

public class Oddity1
{
    public static void Main()
    {
        double two = double.Parse("2");
        double a = double.Epsilon/two;
        double b = 0;
        Console.WriteLine(a == b);
        Console.WriteLine(Math.Abs(b-a) < double.Epsilon);
    }
}

On my computer, the above (compiled and run from the command line) prints out True twice.
If you comment out the last line, however, it prints False – but only under .NET 1.1.
Here’s another:

using System;

class Oddity2
{
    static float member;

    static void Main()
    {
        member = Calc();
        float local = Calc();
        Console.WriteLine(local == member);
        member = local;
    }

    static float Calc()
    {
        float f1 = 2.82323f;
        float f2 = 2.3f;
        return f1*f2;
    }
}
This time it prints out True until you comment out the last
line, which changes the result to False. This occurs on both .NET 1.1 and 2.0.

The reason for these problems is really the same – it’s a case of when the JIT decides to
truncate the result down to the right number of bits. Most CPUs work on 80-bit floating point
values natively, and provide ways of converting to and from 32 and 64 bit values. Now, if you
compare a value which has been calculated in 80 bits without truncation with a value which has
been calculated in 80 bits, truncated to 32 or 64, and then expanded to 80 again, you can run
into problems. The act of commenting or uncommenting the extra lines in the above changes what
the JIT is allowed to do at what point, hence the change in behaviour. Hopefully this will
persuade you that comparing floating point values directly isn’t a good idea, even in cases
which look safe.
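
The same principle bites in ordinary 64-bit arithmetic too, with no extended-precision registers involved: the doubles nearest to 0.1 and 0.2 don't sum to the double nearest to 0.3. A tolerance comparison is safer, although the tolerance has to be chosen to suit the problem; the 1e-9 here is purely for illustration:

```csharp
using System;

double sum = 0.1 + 0.2;

// Exact equality fails: the rounding errors don't cancel out
Console.WriteLine(sum == 0.3);                 // False

// A tolerance sized sensibly for the values involved succeeds
Console.WriteLine(Math.Abs(sum - 0.3) < 1e-9); // True
```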

That’s all I can think of for the moment, but I’ll blog some more examples as and when I see/remember
them. If you enjoy this kind of thing, you’d probably like
Java Puzzlers
– whether or not you use Java itself. (A lot of the puzzles there map directly to C#, and even those which
don’t are worth looking at just for getting into the mindset which spots that kind of thing.)

A short case study in LINQ efficiency

I’ve been thinking a bit about how I’d use LINQ in real life (leaving DLinq and XLinq alone for the moment). One of the examples I came up with is a fairly common one – trying to find the element in a collection which has the maximum value for a certain property. Note that quite often I don’t just need to know the maximum value of the property itself – I need to know which element had that value. Now, it’s not at all hard to implement that in “normal” code, but using LINQ could potentially make the intention clearer. So, I tried to work out what the appropriate LINQ expression should look like.

I’ve come up with three ways of expressing what I’m interested in in LINQ. For these examples, I’ve created a type NameAndSize which has (unsurprisingly) properties Name (a string) and Size (an int). For testing purposes, I’ve created a list of these items, with a variable list storing a reference to the list. All samples assume that the list is non-empty.

Sort, and take first element

(from item in list
orderby item.Size descending
select item).First();

This orders by size descending, and takes the first element. The obvious disadvantage is that before finding the first element (which is the only one we care about) we have to sort all the elements – nasty! Assuming a reasonable sort, this operation is likely to be O(n log n).

Subselect in where clause

(from item in list
where item.Size==list.Max(x=>x.Size)
select item).First();

This goes through the list, finding every element whose size is equal to the maximum one, and then takes the first of those elements. Unfortunately, the comparison calculates the maximum size on every iteration. This makes it an O(n^2) operation.

Two selects

int maxSize = list.Max(x=>x.Size);
NameAndSize max = (from item in list
where item.Size==maxSize
select item).First();

This is similar to the previous version, but solves the problem of the repeated calculation of the maximum size by doing it before anything else. This makes the whole operation O(n), but it’s still somewhat dissatisfying, as we’re having to iterate through the list twice.

The non-LINQ way

NameAndSize max = list[0];
foreach (NameAndSize item in list)
if (item.Size > max.Size)
max = item;

This keeps a reference to the “maximum element so far”. It only iterates through the list once, and is still O(n).


Now, I’ve written a little benchmark which runs all of these except the “subselect” version, which was just too slow to run sensibly by the time I’d made the list large enough to get sensible results for the other versions. Here are the results using a list of a million elements, averaged over five runs:
Sorting: 437ms
Two queries: 109ms
Non-LINQ: 38ms

After tripling the size of the list, the results were:
Sorting: 1437ms
Two queries: 324ms
Non-LINQ: 117ms

These results show the complexities being roughly as predicted above, and in particular show that it’s definitely cheaper to only iterate through the collection once than to iterate through it twice.

Now, this query is a fairly simple one, conceptually – it would be a shame if LINQ couldn’t cope with it efficiently. I suspect it could be solved by giving the Max operator another parameter, which specified what should be selected, as well as what should be used for comparisons. Then I could just use list.Max(item => item.Size, item=>item). At that stage, the only loss in efficiency would be through invoking the delegates, which is a second-order problem (and one which is inherent in LINQ). Fortunately, the way LINQ works makes this really easy to try out – just write an extension class:

static class Extensions
{
    public static V Max<T,V>(this IEnumerable<T> source,
                             Func<T,int> comparisonMap,
                             Func<T,V> selectMap)
    {
        int maxValue = 0;
        V maxElement = default(V);
        bool gotAny = false;
        using (IEnumerator<T> enumerator = source.GetEnumerator())
        {
            while (enumerator.MoveNext())
            {
                T sourceValue = enumerator.Current;
                int value = comparisonMap(sourceValue);
                if (!gotAny || value > maxValue)
                {
                    maxValue = value;
                    maxElement = selectMap(sourceValue);
                    gotAny = true;
                }
            }
        }
        if (!gotAny)
        {
            throw new EmptySequenceException();
        }
        return maxElement;
    }
}
This gave results of 57ms and 169ms for the two list sizes used earlier – not quite as fast as the non-LINQ way, but much better than any of the others – and by far the simplest to express, too.
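
To make the idea concrete, here's a self-contained sketch of that operator: written as a plain generic method rather than an extension method to keep it compact, with InvalidOperationException standing in for EmptySequenceException and KeyValuePair standing in for NameAndSize:

```csharp
using System;
using System.Collections.Generic;

// One delegate picks the comparison key, the other picks what to return
static V MaxBy<T, V>(IEnumerable<T> source,
                     Func<T, int> comparisonMap,
                     Func<T, V> selectMap)
{
    int maxValue = 0;
    V maxElement = default(V);
    bool gotAny = false;
    foreach (T item in source)
    {
        int value = comparisonMap(item);
        if (!gotAny || value > maxValue)
        {
            maxValue = value;
            maxElement = selectMap(item);
            gotAny = true;
        }
    }
    if (!gotAny)
    {
        throw new InvalidOperationException("Sequence was empty");
    }
    return maxElement;
}

var list = new List<KeyValuePair<string, int>>
{
    new KeyValuePair<string, int>("small", 1),
    new KeyValuePair<string, int>("big", 10),
    new KeyValuePair<string, int>("medium", 5)
};

// One pass through the list, and we get the element itself rather than just its size
KeyValuePair<string, int> max = MaxBy(list, item => item.Value, item => item);
Console.WriteLine(max.Key); // big
```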

Lessons learned

  • You really need to think about the complexity of LINQ queries, and know where they will be executed (I suspect that DLinq would have coped with the “subselect” version admirably).
  • There’s still some more work to be done on the standard query operators to find efficient solutions to common use cases.
  • Even if the standard query operators don’t quite do what you want, it can be worthwhile to implement your own – and it’s not that hard to do so!

Problems posting comments?

Some of you may have been frustrated by how hard it is to post comments to this blog – the human verification step seems to be a bit off. I’m trying to find out what’s going wrong, but until it’s fixed, please just mail me with the comment, including your name and the optional URL you want on the comment, and I’ll post it as soon as I get time. I’d far rather take the time to do that than lose the comments!

Nasty generics restrictions

So, I caved and finally downloaded the LINQ preview. Obviously it’s fairly heavily genericised (if that’s even a word) and I decided to push it a little. Nothing particularly heavy – just an interesting bit of functional programming.

It’s easy to do a query which returns a string property from an object:

var items = new[]
{
    new {Data = "First"},
    new {Data = "Second"},
    new {Data = "Third"}
};

IEnumerable<string> query = from item in items
                            select item.Data;

foreach (string s in query)
{
    Console.WriteLine (s);
}

The above prints out:

First
Second
Third

It’s not particularly hard to return a string from an object, having performed an operation on that string, where the operation is also specified by
the object:

var items = new[]
{
    new {Data = "First", Operation = (Func<string,string>) (s => s+s)},
    new {Data = "Second", Operation = (Func<string,string>) (s => s.Substring(1))},
    new {Data = "Third", Operation = (Func<string,string>) (s => s)}
};

IEnumerable<string> query = from item in items
                            select item.Operation(item.Data);

foreach (string s in query)
{
    Console.WriteLine (s);
}

The first operation is plain concatenation; the second takes the first letter off the string, and the third is the identity operation. The above prints out:

FirstFirst
econd
Third

So, the next thing I wanted to try was performing the operation twice, using the output of the first as the input of the second. I could have just used item.Operation(item.Operation(item.Data)) but where would the fun be in that? Instead, I wanted to have an operator whose parameters were a transformation from one type to the same type and an initial value, returning the same type. I’d hoped it would be as simple as Func<Func<T,T>,T,T> doItTwice = (op, input) => op(op(input));. After all, the implementation we’ve given doesn’t care in the slightest what the type involved is. Unfortunately, .NET generics don’t allow that kind of thing as far as I can tell – because it doesn’t know whether T is going to be a reference type or a value type, it can’t create the appropriate concrete implementation. Changing the declaration to use string instead of T everywhere works fine, but it’s much less elegant.
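
What C# does allow is expressing the same idea as a generic method, where the type parameter is fixed at each call site rather than being part of a delegate-typed value. A sketch (written with today's C# syntax for brevity):

```csharp
using System;

// The generic-method form of doItTwice: T is bound per call, so the
// runtime always knows whether it's dealing with a reference or value type
static T DoItTwice<T>(Func<T, T> op, T input)
{
    return op(op(input));
}

// "ab" -> "abab" -> "abababab"
Console.WriteLine(DoItTwice<string>(s => s + s, "ab"));

// 3 -> 30 -> 300 (works for value types too)
Console.WriteLine(DoItTwice<int>(x => x * 10, 3));
```

Of course, this isn't a first-class value the way a delegate variable would be, which was the point of the original complaint.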

The interesting thing is that (lambda functions aside) I believe that would be possible in Java. Java generics are much weaker in terms of implementation, but allow some things to be expressed which you just can’t do in .NET. (The reverse is true too, of course, partly because .NET generics can involve value types and Java generics can’t.) So, to put on my best whiny voice – why can’t we have the best of both worlds? I understand that it probably makes things a bit harder in terms of implementation, but come on, that’s the kind of thing which only has to be worked out once, by one team – whereas hundreds of thousands of developers are going to be actually using it.

The good news is that playing with lambda functions is still fun, just like I expected it to be.

Improvement to extension method syntax

I’m blogging this before my sleep-deprived mind (3 hours in the last 33, and I’m getting up for the day in an hour or so) loses it.

One of the things I don’t like about the proposed extension methods is the way the compiler is made aware of them – on a namespace basis. “Using” directives are very commonly added for any namespace used in a class, and quite often I wouldn’t want the extension methods within that namespace to be applied. I propose a directive such as using static System.Query.Sequence; instead, analogous to the static imports of Java 1.5 (except without importing single members, and without requiring the “.*” part). This would make it clearer what you were actually trying to do.

LINQ, DLinq, XLinq and C# 3.0

Some readers may already be aware of Project LINQ – .NET Language Integrated Query. There have been various posts about it (and in particular the effect it has on C# itself) on the C# newsgroup, and many of those have involved a certain amount of speculation. This is understandable, as it’s still very much under development, and although a sample initial implementation has been available since the PDC, I’d be surprised if things didn’t change.

I haven’t downloaded the PDC implementation, and don’t intend to for a while. I have, however, finally had a chance to read the specs and overviews of LINQ, DLinq, XLinq and C# 3.0. I haven’t pored over them, and again don’t intend to just yet – I believe that features like these are best investigated in anger. Once I have a use for them, I’ll probably look more closely at them. However, here is my (literally – I’m typing in a plane here) 34,000 foot view of what’s going on. This won’t include any code samples, not even ones copied from the documentation, so you may well find it useful to read the Microsoft documents before proceeding much further.

The basic premise

Imperative programming languages such as C, C++, Java, C#, VB and VB.NET have traditionally made it hard to work with relational databases, and made it possibly even harder to work with XML. The latter should come as a surprise in a way – XML was designed much later than SQL, and should have benefitted from a lot of hindsight in terms of technology design. Don’t get me wrong – I’m not anti-XML per se, but the APIs for working with it have generally sucked, particularly the W3C ones. There are half-decent APIs available in .NET, and various open source libraries in Java-land such as Dom4J and JDom which improve the situation, but don’t feel like they’ve quite cracked it.

Both XML and relational databases have what is commonly called an impedance mismatch with object-oriented languages – they just don’t think about things in the same way. Database tables are related to each other rather than entities being composed of each other, and object identity and equality pose really significant problems when trying to map data from one paradigm to the other. Just as with XML manipulation, there have been various attempts to solve (or at least greatly help) the mismatch problem, often with libraries or tools called Object Relational Mappers (ORM). There are many different ORM tools available – probably more for Java than .NET, possibly due to the longer timescale of availability of Java. Beyond the sphere of Java and .NET, I only know of one other ORM tool, which is ActiveRecord for Ruby, usually used with the Rails framework for web applications. I’m sure there are plenty of others available though – no need to comment on my ignorance here!

The Powers-That-Be in the Java world are trying to semi-standardise ORM with the EJB 3.0 specification, which I believe is currently at the public review stage. In theory, this should mean that marking up some Java classes with annotations (attributes to .NET folks) will allow the same classes to be used with multiple ORMs. I suspect that this facility won’t be used for multiple-ORM support very often, as you tend to need to know which ORM you’re targeting for other reasons, but it does mean that the bigwigs of the ORM world have got together to try to work out at least a common way of talking about ORM. I should say at this point that the only ORM I’ve had any real experience with is Hibernate, which is generally excellent, although rough around the edges in a few places. It has a .NET equivalent, NHibernate, which I haven’t used.

So, what does this have to do with LINQ? Well, all of the above projects and tools have a problem – you don’t tend to get much language support for them, as they’ve had to work with the language features available to them, rather than directing the future of the languages they target. LINQ, on the other hand, has added many new features to C# (and VB 9.0) which should greatly add to the potential safety and efficiency of the solution. LINQ is neither XML-specific nor SQL-specific, but instead introduces various language features which target general querying, with DLinq and XLinq hooking those up to SQL and XML respectively. It is worth noting at this point that LINQ itself doesn’t try to be an ORM system at all – only half of DLinq is particularly LINQ-related, and that’s (naturally) the querying side. The update side of things requires no new language features, and looks somewhat like using Hibernate with annotations, at least at first glance.

The language features

So, what new language features does LINQ effectively require? I’m only going to cover the C# side of things here – VB 9.0 has gained some features supporting XLinq (of somewhat dubious value at a first glance, but I’ll let VB fans work out whether they’re actually good or not), but I won’t address how the new VB looks.

  • Lambda expressions: these look pretty powerful and (more importantly) useful – and not just as part of LINQ. Whether expression trees (where a lambda expression ends up as data rather than compiler code) will be good or not (at least outside LINQ, which I suspect pretty much requires them) remains to be seen. Again though, there’s a lot of potential.
  • Extension methods: ouch. I can see why it’s being done, and I can see why it will be potentially very useful, but I suspect it will be abused hideously. The worst thing is, I can see times when I might well abuse it and like it at the time – System.String doesn’t have a method which would be handy? Heck, make it an extension. Fine at write-time, but not so fun at maintenance time. This is just a gut feeling at this stage, but I’m frankly a little worried. If my team were using C# 3.0, I’d want any language extensions to be code-reviewed by two people (other than the original developer) rather than just one (or if pair programming was going on, at least one extra pair of eyes).
  • Object initializers: yes, yes, yes. Great stuff.
  • Anonymous types: these could be interesting. They certainly make sense in LINQ, but they potentially have use outside it too. How many times have you had local variables which were essentially linked to each other, but that relationship could only be expressed through naming? I’m thinking of situations like searching for the minimum element in a collection (which admittedly LINQ would make a piece of cake anyway) – you typically want to remember the minimum value and the index of the element which had that value. Coupling the two into a real type is too much effort, but with an anonymous type? There are possibilities there. I’ll have to see how it reads and how maintainable it is before I can really pass judgement.
  • Implicitly typed arrays: yes and no. Part of me screams that it’ll make things easier – and part screams that it’ll make things harder to read at a glance. I think “use with care” is the watch-phrase here.
  • Implicitly typed local variables: Hmm. Very much “use with care”. I don’t want to see variables being implicitly typed all over the place, as they are in the specifications. They’re obviously necessary for anonymous types, but I’m not sure about their use outside that situation. There’s been a fair amount of discussion about this on the newsgroups, with no clear “winning” side. I think we’re all guessing really – we need to use this in anger, and wait for a year or two to see what the maintenance implications are.
  • Query expressions themselves: not sure. I can see why it’s nice to have them in the language – particularly having gone through an HQL (Hibernate Query Language) run/see error/debug/repeat cycle a few times, but at the same time it feels like it’s going a bit overboard. I think I’ll need to see real examples from real code in both query syntax and “dot notation” (calling normal methods) before making up my mind on this. I should note that this attitude is significantly more “friendly” towards query expressions than my first reactions, which were along the lines of “no, no, no, get that SQL out of my C# code.” That suggests I may well come to like it over time – but I’m not quite there yet.
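
As a sketch of that minimum-value-and-index scenario from the anonymous-types bullet, using LINQ to Objects operators (names invented for the example):

```csharp
using System;
using System.Linq;

int[] values = { 5, 2, 8, 1, 9 };

// Pair each value with its index, then pick the smallest. The anonymous
// type keeps the two related locals together without declaring a named type.
var smallest = values
    .Select((value, index) => new { Value = value, Index = index })
    .OrderBy(pair => pair.Value)
    .First();

Console.WriteLine(smallest.Value); // 1
Console.WriteLine(smallest.Index); // 3
```

(As the bullet notes, the sort here is overkill for a minimum; it's the pairing via an anonymous type that's the point.)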


I’m worried about C# expanding this quickly. I don’t know what the C# 3.0 timeframe is, but C# 2.0 isn’t out of the door yet, and we don’t know how developers are going to cope with generics in the real world yet. Introducing several new and pretty major language features at this stage seems premature – although of course they’re not really being introduced just yet. Java has tended to go in the opposite direction, only allowing change very slowly. This hasn’t always had positive results (there are some aspects of generics in Java which are truly awful compared with .NET generics – although it has its advantages) but it has generally given a lot of time for people to think about things and give feedback. Hopefully the reason for the C# 3.0 draft specs being made available at this stage is to get as much feedback as possible as early as possible.

Having said I’m worried about C# growing too quickly, there are ways in which I wish it would grow which don’t seem to be addressed at all – including ones which are present in prototype form in MS research labs (Spec# in particular). Things I’d like to see:

  • Simpler property definition, for properties which really are just setting variables and returning them – making thread-safety available by specifying a lock to apply would be nice too, if it didn’t interfere with the simplicity too much.
  • Design-by-contract – not so much for the “provability” side of things (which I’m sure would be great too, but which I have no direct experience of) but more for getting rid of all the irritating code I need to write just to verify arguments etc. This is an ideal target for machine-generated code. Proving the result side of the contract would be great too – not just “this parameter must not be null” but “the result of this operation is never null”.
  • Aspect-oriented programming – in a very similar vein to design-by-contract, I’m sure that AOP could have great benefits for cross-cutting concerns, and would work much better as a language construct than in libraries which need to do nasty code manipulation.

I’m sure there are more that I’ve thought of over the years, but these are my biggest gripes at the moment. Compared with the changes which are being made, they’re possibly relatively small, too. You can bet that I’ll be asking the C# team about the possibility of their inclusion while I’m at the MVP summit! (Don’t expect any results to be posted here though – I’m afraid it’ll almost certainly all be under NDA.)


However far away C# 3.0 may be, it has great promise – as well as a few big holes which the over-zealous developer wishing to use new features wherever possible may end up falling into. We’ll see how things shape up over time. My battery is running low, so until I’m near power again, goodbye…

Overuse of regular expressions

I’ve been having a discussion on the C# newsgroup about the best way of searching for multiple strings in a single string.

The question posed was how to find out whether any of the strings “something1”, “something2” and “something3” were contained within another string. The suggested response was to use regular expressions. Personally, I think this is using a sledgehammer to crack a nut – and a dangerous sledgehammer at that.

Now, I want to make it perfectly clear from the start that I have nothing against regular expressions – I think they’re very powerful, and can save a lot of very complex code when they’re used in the right place. I just don’t think this is the right place.

The simplest way (to my mind) of checking for the presence of multiple strings is to use String.IndexOf multiple times. Very simple to follow and easy to modify. It seems a lot less risky to me than checking the regular expression “something1|something2|something3”. (Yes, in this case you could actually use “something[123]” but that’s not actually much easier to read, and I doubt that these are the real strings the original poster had in mind.) The regular expression way will certainly work (and efficiency hasn’t been brought up as a significant issue – I don’t even know which way is faster), but I believe it’s harder to understand and maintain. Why? Because you have to know more information to understand it. Not only do you have to have that extra knowledge, but you have to apply it too. The vertical bars in that string have a special meaning – you have to spot that to start with. Then you have to do the splitting up mentally to see what’s actually being searched for (rather than seeing the separate items being searched for as completely separate strings).

Reading that example isn’t too bad – it’s harder than just repeated IndexOf (and thus already not as good a solution in my view), but it’s not awful. However, think of maintenance. Suppose someone wanted to change it to look for “some+thing” – they’d have to know that “+” is a special character too, and how to escape it. With IndexOf, the only escaping to worry about is C#’s own string literal escaping; with regular expressions, you need to escape from the point of view of both C# and the regular expression language.

My contention is that there’s no point in putting this extra burden on the maintenance engineer (whether it happens to be the original author of the code or not) when there’s a simpler solution which doesn’t involve any special meanings for anything in the string beyond what’s mandated by using C# in the first place.
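
For illustration, a minimal sketch of the IndexOf approach (haystack and needle strings invented for the example):

```csharp
using System;

string haystack = "this contains something2 in the middle";
string[] needles = { "something1", "something2", "something3" };

// Plain IndexOf: no metacharacters, so the strings searched for
// are exactly the strings written down
bool found = false;
foreach (string needle in needles)
{
    if (haystack.IndexOf(needle) >= 0)
    {
        found = true;
        break;
    }
}
Console.WriteLine(found); // True
```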

Unfortunately, this argument seems to be hard to accept…

This is far from the first time I’ve seen this urge to use regular expressions for no good reason, unfortunately. If it had been, I probably wouldn’t be blogging about it. (For a really big discussion about it, see this thread from 2002. Over 1600 posts, and that’s not including those from one of the major posters in the thread!)

A couple of years back there seemed to be an urge in the .NET community to use reflection to solve every problem under the sun in a similar kind of fashion to the urge to use regular expressions here. Like regular expressions, reflection can be very powerful when used carefully to solve problems which would otherwise have very complex solutions, if indeed any solutions at all – but it only needs to be used occasionally.

It would be really nice if all developers considered the people who might have to read their code (and what they might have to do to maintain it) before committing anything to source control. It’s not that I think people can’t understand regular expressions – I just think it’s extra work which can be avoided in many situations (like finding one string inside another, or replacing one substring with another). There’s no benefit to introducing the complexity, so why do it?