Category Archives: C#

Extension methods and .NET 2.0

Update: as Daniel Moth has pointed out in the comments, it is possible to use extension methods in Orcas projects targeting .NET 2.0 by introducing your own System.Runtime.CompilerServices.ExtensionAttribute class. See Daniel’s blog post for more details.
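
For anyone who wants to try it, here’s a minimal sketch of the trick (see Daniel’s post for the full story – the extension method below is just made up for illustration). You declare the attribute yourself, in the right namespace, and because a .NET 2.0 project doesn’t reference System.Core, the C# 3 compiler happily uses your version:

using System;

// Declaring this ourselves lets the compiler mark extension methods
// without the project referencing System.Core (and therefore .NET 3.5).
namespace System.Runtime.CompilerServices
{
    [AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class | AttributeTargets.Method)]
    public sealed class ExtensionAttribute : Attribute
    {
    }
}

namespace MyProject
{
    // A perfectly ordinary extension method – the compiler applies
    // ExtensionAttribute to it on our behalf.
    public static class StringExtensions
    {
        public static bool IsNullOrEmptyText(this string text)
        {
            return string.IsNullOrEmpty(text);
        }
    }
}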

One of the neat things about Visual Studio 2008 is that you can target .NET 2.0, 3.0 and 3.5. I’m hoping to start using it professionally shortly after it’s released – assuming stability and performance aren’t an issue, of course. The great thing is that some of the C# 3 features don’t require any framework support at all (and none of them require CLR support, as far as I’m aware). So, I’ll be able to use anonymous types, local variable type inference, object initializers, collection initializers and automatic properties. Sure, there’s no LINQ, but I can take that hit.
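
Just to make that concrete, here’s the sort of code which compiles fine in a VS2008 project targeting .NET 2.0, because every feature involved is pure compiler work (the types and values are obviously just invented for the example):

using System;
using System.Collections.Generic;

class Person
{
    // Automatic properties: the compiler generates the backing fields.
    public string Name { get; set; }
    public int Age { get; set; }
}

class CSharp3OnDotNet2
{
    static void Main()
    {
        // Local variable type inference plus an object initializer.
        var jon = new Person { Name = "Jon", Age = 31 };

        // Collection initializer.
        var people = new List<Person> { jon };

        // Anonymous type – again, generated entirely by the compiler.
        var summary = new { jon.Name, Count = people.Count };

        Console.WriteLine("{0}: {1}", summary.Name, summary.Count);
    }
}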

But what about extension methods? As far as I can figure out, they only require one feature from the framework: an attribute to decorate a method to say that it’s an extension method. If we could provide that as a compile-time option, we could use extension methods in .NET 2.0 projects, assuming I haven’t forgotten something important.

Now, I know there’s a lot of controversy about whether extension methods are really a good idea in the first place – but I for one would welcome the opportunity to use them in .NET 2.0 projects. My wild guess is that I won’t be able to use .NET 3.5 until mid-2009, and that a lot of other developers will be in the same boat. I’m sure the transition from 2.0 to 3.5 will be faster than 1.1 to 2.0 – and I doubt that many people will target 3.0 at all – but it’ll still be far from instantaneous.

The obvious downside of this would be everyone creating their own attributes and using them throughout their projects. However, it wouldn’t take long to remove them when the project moved on to .NET 3.5 – delete the attribute and remove the compiler option, then fix the compilation errors.

It’s too late to suggest this for the Orcas timeframe, really, which is a shame. I wish I’d thought of it earlier – although I suspect it’s been suggested before, so it probably wouldn’t make any odds. Anyway, what do you all think?

Is C# 3 too big to learn from scratch?

I’ve been looking at C# 3 in a fair amount of detail recently, and likewise going over the features of C# 2. (I hope to be able to be less coy about all this soon.) I’m beginning to think that while it’s all great for existing C# 1 and C# 2 developers, I feel sorry for someone wanting to learn C# 3 from scratch. It’s becoming quite a big language – and of course the framework is big and getting bigger (more on that in another post).

I’m generally a fan of small languages whose functionality is provided by libraries. This is still the case with C#, but the compiler is now being smarter in various ways to allow for a lot of the neat features in C# 3.

It’s often been said in the newsgroups (usually when someone has been moving from another language to C#) that C# itself only takes a few days to learn, but the framework takes a lot longer. The “framework takes longer” part is still true, but what about learning C# in a few days? I’m sure it depends on previous experience: someone coming from a functional background is likely to find the C# 2/3 changes to do with anonymous methods and lambda expressions significantly easier than someone moving from C or Java.

The interesting thing about the new features in C# 3 is that aside from query expressions, they’re really fairly easy to describe. There’s a big difference between reading the description of something and really “getting” the feature – and I’m not even talking about best practices and applicability, just sheer “understanding what’s going on when you see it in use”. Generics in C# 2 work almost the other way round – they’re quite complicated to describe in detail, but you can largely get on with using them and deal with details later. You end up with surprises such as the lack of type parameter covariance, but there will always be new things to learn with virtually any feature.
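
As a quick illustration of that particular surprise (the types here are arbitrary), this is the kind of thing which catches people out:

using System.Collections.Generic;

class CovarianceSurprise
{
    static void Main()
    {
        List<string> strings = new List<string>();
        strings.Add("hello");

        // Arrays have always been covariant (with a run-time check on writes)...
        object[] objectArray = new string[] { "hello" };

        // ...but generic types aren't, so this line simply doesn't compile:
        // IList<object> objects = strings;

        System.Console.WriteLine(objectArray.Length + strings.Count);
    }
}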

So, if you were learning C# from scratch, would you be daunted? As a rough indicator (and one which genuinely doesn’t have much to do with my writing at the moment) how big a book do you think it would take to learn C# (without Windows Forms, etc – just enough of the core libraries to understand iterator blocks, IDisposable etc)? I suspect it would be hard to do it any sort of justice in less than about 700 pages, which is a pretty off-putting size (at least for me).

I’m not sure whether it’s useful for an incoming professional developer to start off learning C# 1, then learn 2 when he’s comfortable with 1, and then 3 afterwards. It would be quite hard to be productive either working on new code or maintaining old code without being able to understand the syntax that colleagues are using.

Maybe there are enough existing C# developers and enthusiastic newbies who are willing to make such a significant commitment. Maybe I’m completely wrong about how hard it is – let’s face it, it’s always hard to gauge the difficulty involved in learning something after you already know it. I am concerned though. How does everyone else feel?

Smart enumerations

This afternoon, my team leader checked with me that there really was no way of telling when the current iteration of a foreach loop is the last one. I confirmed the situation, and immediately thought, “Well, why isn’t there a way?” I know that you can’t tell without peeking ahead, but surely there’s a simple way of doing that in a general purpose fashion…

About 15 minutes later, SmartEnumerable<T> was born, or at least something with the same functionality. It chains whatever enumeration you give it (in the same way as a lot of the LINQ calls do) but adds extra information about whether this is the first and/or last element in the enumeration, and the notional index of the element. An example will probably make this clearer. Here’s some example code:

using System;
using System.Collections.Generic;

using MiscUtil.Collections;

class Example
{
    static void Main(string[] args)
    {
        List<string> list = new List<string>();
        list.Add("a");
        list.Add("b");
        list.Add("c");
        list.Add("d");
        list.Add("e");
        
        foreach (SmartEnumerable<string>.Entry entry in
                 new SmartEnumerable<string>(list))
        {
            Console.WriteLine ("{0,-7} {1} ({2}) {3}",
                               entry.IsLast  ? "Last ->" : "",
                               entry.Value,
                               entry.Index,
                               entry.IsFirst ? "<- First" : "");
        }
    }
}

The output is as follows:

        a (0) <- First
        b (1)
        c (2)
        d (3)
Last -> e (4)

I’m pretty pleased with that – but annoyed with myself for not thinking of doing it before. I’m pretty shocked that I haven’t seen it elsewhere; the code behind it is really straightforward. Anyway, it’s now part of my Miscellaneous Utilities library, so feel free to have at it.
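
For the curious, the core of it looks something like this – a sketch along the same lines rather than the exact MiscUtil source. The trick is simply to read one element ahead of the one being yielded, so that by the time an entry is returned we already know whether anything follows it:

using System.Collections;
using System.Collections.Generic;

public class SmartEnumerable<T> : IEnumerable<SmartEnumerable<T>.Entry>
{
    readonly IEnumerable<T> source;

    public SmartEnumerable(IEnumerable<T> source)
    {
        this.source = source;
    }

    public IEnumerator<Entry> GetEnumerator()
    {
        using (IEnumerator<T> iterator = source.GetEnumerator())
        {
            if (!iterator.MoveNext())
            {
                yield break;
            }

            bool isFirst = true;
            int index = 0;
            T current = iterator.Current;

            // Buffer one element ahead: when MoveNext fails, the buffered
            // element must be the last one.
            while (iterator.MoveNext())
            {
                yield return new Entry(isFirst, false, current, index++);
                current = iterator.Current;
                isFirst = false;
            }
            yield return new Entry(isFirst, true, current, index);
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    public class Entry
    {
        readonly bool isFirst;
        readonly bool isLast;
        readonly T value;
        readonly int index;

        internal Entry(bool isFirst, bool isLast, T value, int index)
        {
            this.isFirst = isFirst;
            this.isLast = isLast;
            this.value = value;
            this.index = index;
        }

        public bool IsFirst { get { return isFirst; } }
        public bool IsLast { get { return isLast; } }
        public T Value { get { return value; } }
        public int Index { get { return index; } }
    }
}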

Of course, if any of you cunning readers have seen the same thing elsewhere, feel free to indicate just how ignorant I am…

Visual Studio 2008 (Orcas) Beta 2 Released

Somehow I’d managed to miss this announcement, as had some other people on the C# newsgroup, so I thought it would be worth posting here too. Visual Studio 2008 (Orcas) beta 2 has been released, and it’s now feature complete, apparently. Finally I get to use automatic properties! This time I’m going to take the risk of installing it onto my home laptop “properly” as opposed to using a Virtual PC – after backing up, of course.

I’ve been downloading via the MSDN subscriptions file transfer manager, and have been getting a great transfer rate – better than the download sites listed below, so if you’re an MSDN subscriber, I’d try that way first.

Related links:

Overloading == to return a non-boolean

Were you aware that you could overload == to return types other than boolean? I certainly wasn’t until I started reading through the lifted operators part of the C# 2 specification. It’s quite bizarre – here it is in action:

using System;

class Test
{
    public static string operator== (Test t1, Test t2)
    {
        return "Fish?";
    }
    
    public static string operator!= (Test t1, Test t2)
    {
        return "Not a fish?";
    }

    static void Main()
    {
        Test a = new Test();
        Test b = new Test();
        Console.WriteLine (a==b);
    }
}

That ends up printing “Fish?” to the console. Strange but true. When I asked about this on the newsgroup, one poster said that he did use this functionality for an ORM type of system – his == operator on two expressions would return another expression which represented the test for equality between the other two expressions. I can’t say I much like this idea, although I could see where he was coming from. (The C# spec does specifically discourage this sort of thing, which is at least a start).
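
To give a flavour of what that poster was describing (all the type names here are invented – I haven’t seen his actual code), the idea is roughly that == on two query expressions builds a new expression representing the comparison instead of evaluating to a bool:

using System;

abstract class QueryExpression
{
    public static QueryExpression operator ==(QueryExpression left, QueryExpression right)
    {
        return new ComparisonExpression(left, right, "=");
    }

    public static QueryExpression operator !=(QueryExpression left, QueryExpression right)
    {
        return new ComparisonExpression(left, right, "<>");
    }

    // Overloading == without these produces compiler warnings, hence the stubs.
    public override bool Equals(object obj) { return ReferenceEquals(this, obj); }
    public override int GetHashCode() { return base.GetHashCode(); }
}

class ColumnExpression : QueryExpression
{
    public readonly string Name;
    public ColumnExpression(string name) { Name = name; }
}

class ComparisonExpression : QueryExpression
{
    public readonly QueryExpression Left, Right;
    public readonly string Operator;

    public ComparisonExpression(QueryExpression left, QueryExpression right, string op)
    {
        Left = left;
        Right = right;
        Operator = op;
    }
}

class OrmStyleDemo
{
    static void Main()
    {
        // "Age = MinimumAge" represented as data rather than evaluated to a bool.
        QueryExpression condition = new ColumnExpression("Age") == new ColumnExpression("MinimumAge");
        Console.WriteLine(condition is ComparisonExpression);  // Prints: True
    }
}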

So, dear readers, have any of you done this, and if so, why?

Why hasn’t Microsoft bought JetBrains yet?

For those of you who aren’t aware, JetBrains is the company behind IntelliJ IDEA, the Java IDE which I’ve heard amazing things about (I’ve tried it a couple of times but never got into it – I think I need an expert sitting beside me to point out the cool stuff as I go) and ReSharper, the incredibly useful (although somewhat resource hungry) add-in to Visual Studio that turns it into a respectable IDE.

What would happen if Microsoft bought JetBrains?

I’m sure that killing off the reportedly best Java IDE would do .NET no harm (even if it would be a fairly cruel thing to do, and still leave other perfectly good IDEs in the Java space), and surely they could use the ideas and experience of the company to improve Visual Studio significantly. I strongly suspect that tighter integration could make all the ReSharper goodness available with less performance overhead, and while it’s no doubt too late now, wouldn’t it have been wonderful for all of those features to be available in Orcas?

Anyway, just a thought.

Writing is hard work: what I’ve been up to recently…

Just a brief note to explain what I’ve been up to recently (and why I’ve got about four fun blog posts which I haven’t had time to write up yet). I’m wildly pleased to say that I’m currently writing a C# book for Manning (the same folks who published Groovy in Action).

I can’t give any more details at the moment, but hopefully as we get closer to publication I can give more details about not just the content but the writing process and anything interesting I’ve discovered while writing it. (Heck, there’s never enough room for everything you might want to include in a book – there’ll no doubt be plenty of left-overs to go round :)

Anyway, it’s hard work but incredibly rewarding. 28 hour days would be really welcome right now, admittedly, but the buzz is fantastic.

Non-volatile reads and Interlocked, and how they interact

Recently (May 2007) there’s been a debate on the microsoft.public.dotnet.framework newsgroup about the memory model, non-volatile variables, the Interlocked class, and how they all interact. Consider the following program:

Update! I screwed up the code, making all of the combinations possible accidentally. The new code is now in the post – the first few comments are based on the original code.

using System;
using System.Threading;

class Program
{
    int x;
    int y;
    
    void Run()
    {
        ThreadStart increment = IncrementVariables;
        new Thread(increment).Start();
        
        int a = x;
        int b = y;
        
        Console.WriteLine ("a={0}, b={1}", a, b);
    }
    
    void IncrementVariables()
    {
        Interlocked.Increment(ref y);
        Interlocked.Increment(ref x);
    }
    
    static void Main()
    {
        new Program().Run();
    }
}

The basic idea is that two variables are read at “roughly the same time” as they’re being incremented on another thread. The increments are each performed using Interlocked.Increment, which introduces a memory barrier – but the variables are read directly, and they’re not volatile. The question is what the program can legitimately print. I’ve put the reads into completely separate statements so that it’s crystal clear what order the IL will put them in. That unfortunately introduces two extra variables, a and b – think of them as “the value of x that I read” and “the value of y that I read” respectively.

Let’s consider the obvious possible values first:

a=0, b=0

This is very straightforward – the variables are read before the incrementing thread has got going.

a=0, b=1

This time, we read the value of x (and copied the value into a) before the incrementing thread did anything, then the values were incremented, and then we read the value of y.

a=1, b=1

This time, the incrementing thread does all its work before we get round to reading either of the variables.

So far, so good. The last possibility is the tricky one:

a=1, b=0

This would, on first sight, appear to be impossible. We increment y before we increment x, and we read x before we read y – don’t we? That should prevent this situation.

My contention is that there’s nothing to prevent the JIT from reordering the reads of x and y, effectively turning the middle bit of the code into this:

int b = y;
int a = x;
        
Console.WriteLine ("a={0}, b={1}", a, b);

Now that code could obviously show “a=1, b=0” by reading y before the increments took place and x afterwards.

The suggestion in the discussion was that the CLR had to honour the interlocked contract by effectively treating all access to these variables as volatile, because they’d been used elsewhere in an Interlocked call. I maintain that’s not only counter-intuitive, but would also require (in the case of public variables) all assemblies which might possibly use Interlocked with the variables to be scanned, which seems infeasible to me.
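
As an aside – and this is my own sketch rather than anything from the thread – if you actually want to rule out the surprising result, the reading side has to enforce its own ordering, for instance by making the fields volatile or using Thread.VolatileRead. Something like this (replacing the Run method above) should do it, although as ever with the memory model I’m happy to be corrected:

void Run()
{
    ThreadStart increment = IncrementVariables;
    new Thread(increment).Start();

    // Volatile reads have acquire semantics, so the read of y can't be
    // reordered to happen before the read of x.
    int a = Thread.VolatileRead(ref x);
    int b = Thread.VolatileRead(ref y);

    Console.WriteLine ("a={0}, b={1}", a, b);
}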

So, what do you all think? I’ll be mailing Joe Duffy to see if he can give his somewhat more expert opinion…

The CCR (Concurrency and Coordination Runtime) is out – apparently!

It seems that the CCR I’d been waiting for is now out, and has been for a while. Unfortunately, as far as I can tell it’s only available as part of the Microsoft Robotics Studio. It falls under the same licence as the Robotics Studio, and can be used in commercial apps (for a fee, I believe) – but why on earth isn’t it available as a standalone download with a simple free licence? Robotics Studio 1.0 is a nearly 50MB download, which is absurd if you’re only after the CCR.

It’s little wonder that when searching for the CCR the most common hits are still for the Channel9 videos. I only found the download due to a helpful post in the newsgroups, and I already knew about the existence of the CCR beforehand. Anyone who is interested in threading but happened to miss the earlier videos would be forgiven for never having heard of the CCR.

Can you imagine if MS had bundled WCF or WPF as part of an “online games SDK” or something similar? It’s bizarre.

Come on, MS. Give the CCR a proper home, a proper download, and I suspect it’ll get some real momentum.

Visual Studio 2005 vs Eclipse – again, but this time with ReSharper…

A while ago, I wrote a blog entry about how Eclipse and Visual Studio 2005 stacked up against each other. Some of my quibbles with Visual Studio 2005 were due to my ignorance, but some were significant (from my point of view) missing features. Well, since then many people have recommended that I take a look at ReSharper. I requested a 30-day eval licence (as well as requesting a full free MVP licence by email – those people at JetBrains are very nice, thank you very much!), installed it, and after a quick look decided it would be worth revisiting my previous blog entry to see just how many of the problems have been solved by ReSharper.

  • “Open Type” in Eclipse: yup, solved by ReSharper with “GoTo Type” – shortcut Ctrl-N
  • “Open Resource” in Eclipse: ditto, “GoTo File” – shortcut Ctrl-Shift-N
  • Overloads in Intellisense: amazingly, yes! I really hadn’t expected this one
  • “Organise imports” in Eclipse: yup, “Optimize Usings” – shortcut Ctrl-Alt-O
  • Unit test integration: see later
  • Refactoring: well, Extract Method still isn’t as smart as Eclipse, but there’s a lot more available than in vanilla VS2005. Only time will tell how much is really useful in practice
  • Navigational Hyperlinks: after my original blog entry was posted, readers explained the VS2005 equivalents. ReSharper provides some more navigation options though, including Ctrl+Click to navigate to the declaration, which will be handy for my muscle memory.
  • SourceSafe integration: not an issue for me now, as I use Subversion. AnkhSvn is pretty good, but I haven’t used it enough to really compare it with Subclipse (or Subversive)
  • Structural differences: no changes that I’m aware of
  • Compile on save: again, no change. After a couple of years with Eclipse, I’m really going to miss this.
  • Combined file and class browser: no changes that I’m aware of. I suspect the ReSharper guys could do this one. Maybe the next release? :)

I said I’d come back to unit testing. Previously, I’ve used TestDriven.NET, but I’ve never been particularly happy with the lack of real IDE integration. It’s not too bad for NUnitGUI to come up in a new window, but it’s not as nice as the JUnit support in Eclipse. ReSharper has the same kind of integration as Eclipse, which is very welcome. I gather you need a different plug-in to run VSTS test cases, but that doesn’t bother me. There’s one thing which TestDriven.NET gives me which I’ll miss, however – NCover integration. Maybe I need to look into writing an NCover plugin for ReSharper… it would certainly come in handy. (I should point out at this stage that I don’t have the equivalent in Eclipse.) Of course, there’s nothing to stop me from using TestDriven.NET and ReSharper at the same time, but they are both commercial products (for non-personal use at least). I’m not sure that NCover integration is enough to make it worth buying TestDriven.NET if you’ve decided to get ReSharper.

So, why wouldn’t you get ReSharper? Well, it hurts performance a bit. I can’t say by how much, or whether it depends on the size of the solution. I’ve only tried it on a small solution, and only briefly at that. It’s definitely more sluggish than normal, but far from unusably so – the features above will more than make up for it, I’m sure, and those are only the ones I happened to highlight in my Eclipse comparison. There are many more features, and it looks pretty tweakable. (It does some extra highlighting in the editor that I’ve turned off, for example.)

In short, it looks fabulous so far. I’m very much looking forward to using it more, and hopefully persuading colleagues (and those who hold the purse strings, of course) of its value. As a mark of how much I expect to use it, I’m not changing the shortcut keys to match those in Eclipse. If I expected to just use it myself, I’d do so – but as I expect to see it in use on colleagues’ machines, I’d rather use the default shortcuts where possible.