Category Archives: Books

Recent activities

It’s been a little while since I’ve blogged, and quite a lot has been going on. In fact, there are a few things I’d have blogged about already if it weren’t for “things” getting in the way.

Rather than writing a whole series of very short blog posts, I thought I’d wrap them all up here…

C# in Depth: next MEAP drop available soon – Code Contracts

Thanks to everyone who gave feedback on my writing dilemma. For the moment, the plan is to have a whole chapter about Code Contracts, but not to include a chapter about Parallel Extensions. My reasoning is that Code Contracts really changes the feel of the code, making it almost like a language feature – and its applicability is almost ubiquitous, unlike PFX.
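
To give a flavour of why it feels that way, here's a minimal sketch of the kind of code Code Contracts leads to. The Contract.Requires/Ensures calls are the real System.Diagnostics.Contracts API; the Truncate method itself is just an invented example, and the ccrewrite tool has to be enabled for the checks to actually run.

using System.Diagnostics.Contracts;

public static class TextUtil
{
    // Invented example method, purely to show the shape of contract code.
    public static string Truncate(string text, int maxLength)
    {
        // Preconditions: enforced at execution time by the binary rewriter (ccrewrite)
        Contract.Requires(text != null);
        Contract.Requires(maxLength >= 0);
        // Postcondition: the result never exceeds the requested length
        Contract.Ensures(Contract.Result<string>().Length <= maxLength);

        return text.Length <= maxLength ? text : text.Substring(0, maxLength);
    }
}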

I may write a PFX chapter as a separate download, but I’m sensitive to those who (like me) appreciate slim books. I don’t want to “bulk out” the book with extra topics.

The Code Contracts chapter is in the final stages before becoming available to MEAP subscribers. (It’s been “nearly ready” for a couple of weeks, but I’ve been on holiday, amongst other things.) After that, I’m going back to the existing chapters and revising them.

Talking in Dublin – C# 4 and Parallel Extensions

Last week I gave two talks in Dublin at Epicenter. One was on C# 4, and the other on Code Contracts and Parallel Extensions. Both are now available in a slightly odd form on the Talks page of the C# in Depth web site. I no longer write “formal” PowerPoint slides, so the downloads are for simple bullet points of text, along with silly hand-drawn slides. No code yet – I want to tidy it up a bit before including it.

Podcasting with The Connected Show

I recently recorded a podcast episode with The Connected Show. I’m “on” for the last two-thirds of the show – about an hour of me blathering on about the new features of C# 4. If you can understand generic variance just by listening to me talking about it, you’re a smart cookie ;)

(Oh, and if you like it, please express your amusement on Digg / DZone / Shout / Kicks.)
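
For anyone who'd rather see generic variance than just hear about it, here's a tiny C# 4 sketch of my own (not taken from the podcast): IEnumerable<T> is covariant and Action<T> is contravariant in .NET 4, so both assignments below compile.

using System;
using System.Collections.Generic;

class VarianceDemo
{
    static void Main()
    {
        // Covariance: a sequence of strings can be treated as a sequence of objects
        IEnumerable<string> strings = new List<string> { "x", "y" };
        IEnumerable<object> objects = strings;

        // Contravariance: a delegate accepting any object can stand in
        // for a delegate accepting a string
        Action<object> printAnything = o => Console.WriteLine(o);
        Action<string> printString = printAnything;

        printString("variance in action");
        foreach (object o in objects)
        {
            Console.WriteLine(o);
        }
    }
}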

Finishing up with Functional Programming for the Real World

Well, this hasn’t been taking much of my time recently (I bowed out of all the indexing etc!) but Functional Programming for the Real World is nearly ready to go. Hard copy should be available in the next couple of months… it’ll be really nice to see how it fares. Much kudos to Tomas for all his hard work – I’ve really just been helping out a little.

Starting on Groovy in Action, 2nd edition

No sooner does one book finish than another one starts. The second edition of Groovy in Action is in the works, which should prove interesting. To be honest, I haven’t played with Groovy much since the first edition of the book was finished, so it’ll be interesting to see what’s happened to the language in the meantime. I’ll be applying the same sort of spit and polish that I did in the first edition, and asking appropriately ignorant questions of the other authors.

Tech Reviewing C# 4.0 in a Nutshell

I liked C# 3.0 in a Nutshell, and I feel honoured that Joe asked me to be a tech reviewer for the next edition, which promises to be even better. There’s not a lot more I can say about it at the moment, other than it’ll be out in 2010 – and I still feel that C# in Depth is a good companion book.

MoreLINQ now at 1.0 beta

A while ago I started the MoreLINQ project, and it gained some developers with more time than I’ve got available :) Basically the idea is to add some more useful LINQ extension methods to LINQ to Objects. Thanks to Atif Aziz, the first beta version has been released. This doesn’t mean we’re “done” though – just that we think we’ve got something useful. Any suggestions for other operators would be welcome.
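
To give a flavour of the kind of operator involved, here's a sketch in the usual LINQ to Objects style – a lazily evaluated extension method with eager argument validation. (This is purely illustrative; don't take it as MoreLINQ's actual implementation.)

using System;
using System.Collections.Generic;

public static class MoreEnumerable
{
    // Yields elements up to and including the first one matching the predicate.
    public static IEnumerable<T> TakeUntil<T>(this IEnumerable<T> source,
                                              Func<T, bool> predicate)
    {
        if (source == null) throw new ArgumentNullException("source");
        if (predicate == null) throw new ArgumentNullException("predicate");
        return TakeUntilImpl(source, predicate);
    }

    private static IEnumerable<T> TakeUntilImpl<T>(IEnumerable<T> source,
                                                   Func<T, bool> predicate)
    {
        foreach (T item in source)
        {
            yield return item;
            if (predicate(item))
            {
                yield break;
            }
        }
    }
}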

Manning Pop Quiz and discounts

While I’m plugging books etc, it’s worth mentioning the Manning Pop Quiz – multiple choice questions on a wide variety of topics. Fabulous prizes available, as well as one-day discounts:

  • Monday, September 7th: 50% off all print books (code: pop0907)
  • Monday, September 14th: 50% off all ebooks (code: pop0914)
  • Thursday, September 17th: $25 for the C# in Depth, 2nd Edition MEAP print version (code: pop0917) + C# Pop Quiz question
  • Monday, September 21st: 50% off all books (code: pop0921)
  • Thursday, September 24th: $12 for the C# in Depth, 2nd Edition MEAP ebook (code: pop0924) + another C# Pop Quiz question

Future speaking engagements

On September 16th I’m going to be speaking to Edge UG (formerly Vista Squad) in London about Code Contracts and Parallel Extensions. I’m already very much looking forward to the Stack Overflow DevDays London conference on October 28th, at which I’ll be talking about how humanity has screwed up computing.

Future potential blog posts

Some day I may get round to writing about:

  • Revisiting StaticRandom with ThreadLocal<T> (there’s a rough sketch just after this list)
  • Volatile doesn’t mean what I thought it did
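
As a rough preview of the first of those, this is the sort of thing I have in mind – a sketch only, using .NET 4's ThreadLocal<T>, with an invented class name and an illustrative seeding strategy:

using System;
using System.Threading;

public static class ThreadLocalRandom
{
    private static int seed = Environment.TickCount;

    // One Random per thread, each with a different seed, so no locking is needed
    private static readonly ThreadLocal<Random> localRandom =
        new ThreadLocal<Random>(() => new Random(Interlocked.Increment(ref seed)));

    public static Random Instance
    {
        get { return localRandom.Value; }
    }

    public static int Next(int minValue, int maxValue)
    {
        return Instance.Next(minValue, maxValue);
    }
}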

There’s a lot more writing than coding in that list… I’d like to spend some more time on MiniBench at some point, but you know what deadlines are like.

Anyway, that’s what I’ve been up to and what I’ll be doing for a little while…

The “dream book” for C# and .NET

This morning I showed my hand a little on Twitter. I’ve had a dream for a long time about the ultimate C# book. It’s a dream based on Effective Java, which is my favourite Java book, along with my experiences of writing C# in Depth.

Effective Java is written by Josh Bloch, who is an absolute giant in the Java world… and that’s both the problem and the opportunity. There’s no-one of quite the equivalent stature in the .NET world. Instead, there are many very smart people, a lot of whom blog and some of whom have their own books.

There are "best practices" books, of course: Microsoft’s own Framework Design Guidelines, and Bill Wagner’s Effective C# and More Effective C# being the most obvious examples. I’m in no way trying to knock these books, but I feel we could do even better. The Framework Design Guidelines (also available free to browse on MSDN) are really about how to create a good API – which is important, but not the be-all-and-end-all for many application developers who aren’t trying to ship a reusable class library and may well have different concerns. They want to know how to use the language most effectively, as well as the core types within the framework.

Bill’s books – and many others which cover the core framework, such as CLR via C#, Accelerated C# 2008 and C# 3.0 in a Nutshell – give plenty of advice, but often I’ve felt it’s a little one-sided. Each of these books is the work of a single person (or brothers, in the case of Nutshell). Reading them, I’ve often wanted to present a different point of view – or alternatively, to give a hearty "hear, hear." I believe that a book giving guidance would benefit greatly from being more of a conversation: where the authors all agree on something, that’s great; where they differ, it would be good to hear about the pros and cons of the various approaches. The reader can then weigh up those factors as they apply to each particular real-world scenario.

Scope

So what would such a book contain? Opinions will vary of course, but I would like to see:

  • Effective ways of using language features such as lambda expressions, generic type inference (and indeed generics in general), optional parameters, named arguments and extension methods. Assume that the reader knows roughly what C# does, but give some extra details around things like iterator blocks and anonymous functions.
  • Guidance around class design (in a similar fashion to the FDG, but with more input from others in the community)
  • Core framework topics (again, assume the basics are understood):
    • Resource management (disposal etc)
    • Exceptions
    • Collections (including LINQ fundamentals)
    • Streams
    • Text (including internationalization)
    • Numeric types
    • Time-related APIs
    • Concurrency
    • Contracts
    • AppDomains
    • Security
    • Performance

I would prefer to avoid anything around the periphery of .NET (WPF, WinForms, ASP.NET, WCF) – I believe those are better handled as separate topics in their own right.

Obstacles and format

There’s one big problem with this idea, but I think it may be a saving grace too. Many of the leading authors work for different publishers. Clearly no single publisher is going to attract all the best minds in the C# and .NET world. So how could this work in practice? Well…

Imagine a web site for the book, paid for jointly by all interested publishers. The web site would be the foremost delivery mechanism for the content, both to browse and probably to download in formats appropriate for offline reading (PDF etc). The content would be edited in a collaborative style obviously, but exactly how that would work is a detail to be thrashed out. If you’ve read the annotated C# or CLI specifications, they have about the right feel – opinions can be attributed in places, but not everything has a label.

Any contributing publisher could also take the material and publish it as hard copy if they so wished. Quite how this would work – with potentially multiple hard copy editions of the same content – would be interesting to see. There’s an argument against hard copy ever appearing at all, though: it would be immovable. I’d like to see this work evolve as new features appear and as more best practices are discovered. Publishers could monetize the web site via adverts, possibly according to how much they’re kicking into the site.

I don’t know how the authors would get paid, admittedly, and that’s another problem. Would this cannibalize the sales of the books listed earlier? It wouldn’t make them redundant – certainly not for the Nutshell type of book, which teaches the basics as well as giving guidance. It would hit Effective C# harder, I suspect – and I apologise to Bill Wagner in advance; if this ever takes off and it hurts his bottom line, I’m very sorry – I think it’s in a good cause though.

Dream Team

So who would contribute to this? Part of me would like to say "anyone and everyone" in a Wikipedia kind of approach – but I think that practically, it makes sense for industry experts to take their places. (A good feedback/comments mechanism for anyone to use would be crucial, however.) Here’s a list which isn’t meant to be exhaustive, but would make me happy – please don’t take offence if your name isn’t on here but should be, and I wouldn’t expect all of these people to be interested anyway.

  • Anders Hejlsberg
  • Eric Lippert
  • Mads Torgersen
  • Don Box
  • Brad Abrams
  • Krzysztof Cwalina
  • Joe Duffy
  • Vance Morrison
  • Rico Mariani
  • Erik Meijer
  • Don Syme
  • Wes Dyer
  • Jeff Richter
  • Joe and Ben Albahari
  • Andrew Troelsen
  • Bill Wagner
  • Trey Nash
  • Mark Michaelis
  • Jon Skeet (yeah, I want to contribute if I can)

I imagine "principal" authors for specific topics (e.g. Joe Duffy for concurrency) but with all the authors dropping in comments in other places too.

Dream or reality?

I have no idea whether this will ever happen or not. I’d dearly love it to, and I’ve spoken to a few people before today who’ve been encouraging about the idea. I haven’t been putting any work into getting it off the ground – don’t worry, it’s not been delaying the second edition of C# in Depth. One day though, one day…

Am I being hopelessly naïve to even consider such a venture? Is the scope too broad? Is the content valuable but not money-making? We’ll see.

Tricky decisions… Code Contracts and Parallel Extensions in C# in Depth 2nd edition

I’d like some feedback from readers, and I suspect my blog is the simplest way to get it.

I’m currently writing chapter 15 of C# in Depth, tentatively about Code Contracts and Parallel Extensions. The problem is that I’m 15 pages in, and I haven’t finished Code Contracts yet. I suspect that a typesetter moving the listings around could shorten it a little, but I’m still concerned. With the amount I’ve still got to write, Code Contracts is going to end up at 20 pages, and I expect Parallel Extensions may be 25. That makes for a pretty monstrous chapter for non-language features.

I’d like to present a few options:

  1. Keep going as I am, and take the hit of having a big chapter. I’m not going into huge amounts of detail anyway, but the bigger point is to demonstrate how code isn’t what it used to be. We’re no longer writing a simple series of statements to be executed in order. Code Contracts changes this dramatically with the binary rewriter, and Parallel Extensions changes the execution model – ironically making it easier to write asynchronous code as if it were executed sequentially (there’s a small sketch of what I mean just after this list).
  2. Try to whittle the material down to my original target of around 35 pages. This means it’ll be a really cursory glance at each of the technologies – I’m unsure of how useful it would be at all at that point.
  3. Don’t even claim to give enough information to really get people going with the new technologies, but possibly introduce extra ones as well, such as PostSharp. Build the theme of "you’re not writing C# 1 any more" in a stronger sense – zoom back to show the bigger picture while ignoring the details.
  4. Separate them into different chapters. At this point half the new chapters would be non-language features, which isn’t great for the focus of the book… but at least they’d be a more reasonable size.
  5. Ditch the chapters from the book completely, possibly writing them as separate chapters to be available as a mini-ebook companion to the book. (We could possibly include them in the ebook version.) This would make the second edition more focused again and possibly give me a bit more space when revising earlier chapters. However, it does mean there’d only be two full-size new chapters for the second edition. (There’ll be a new "wrapping up" chapter as well for a sense of closure, but I’m not generally counting that.)
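
To illustrate the "sequential-looking parallel code" point from option 1, here's a trivial Parallel Extensions sketch – my own toy example, nothing to do with the book's actual listings:

using System;
using System.Linq;
using System.Threading.Tasks;

class ParallelSketch
{
    static void Main()
    {
        // Parallel.For: the body reads like an ordinary loop, but iterations
        // are scheduled across threads by the Task Parallel Library
        Parallel.For(0, 10, i => Console.WriteLine("Processing {0}", i));

        // PLINQ: the same query shape as LINQ to Objects, plus AsParallel()
        long sumOfSquares = Enumerable.Range(1, 1000)
                                      .AsParallel()
                                      .Select(x => (long) x * x)
                                      .Sum();
        Console.WriteLine(sumOfSquares);
    }
}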

Other suggestions are welcome, of course. I’m not going to claim that we’ll end up doing whatever is suggested here, but I’m sure that popular opinion will influence the final decision.

Thoughts?

Faking COM to fool the C# compiler

C# 4 has some great features to make programming against COM components bearable – fun and exciting, even. In particular:

  • PIA linking allows you to embed just the relevant bits of the Primary Interop Assembly into your own assembly, so the PIA isn’t actually required at execution time
  • Named arguments and optional parameters make life much simpler for APIs like Office which are full of methods with gazillions of parameters
  • "ref" removal allows you to pass an argument by value even though the parameter is a by-reference parameter (COM only, folks – don’t worry!)
  • Dynamic typing allows you to remove a load of casts by converting every parameter and return type of "object" into "dynamic" (if you’re using PIA linking)

I’m currently writing about these features for the book (don’t forget to buy it cheap on Friday), but I’m not really a COM person. I want to be able to see these compiler features at work against a really simple type. Unfortunately, these really are COM-specific features… so we’re going to have to persuade the compiler that the type really is a COM type.

I got slightly stuck on this at first, but thanks to the power of Stack Overflow, I now have a reasonably complete demo "fake" COM type. It doesn’t do a lot, and in particular it doesn’t have any events, but it’s enough to show the compiler features:

using System;
using System.Runtime.InteropServices;

// Required for linking into another assembly (C# 4)
[assembly:Guid("86ca55e4-9d4b-462b-8ec8-b62e993aeb64")]
[assembly:ImportedFromTypeLib("fake.tlb")]

namespace FakeCom
{
    [Guid("c3cb8098-0b8f-4a9a-9772-788d340d6ae0")]
    [ComImport, CoClass(typeof(FakeImpl))]
    public interface FakeComponent
    {
        object MakeMeDynamic(object arg);
        
        void Foo([Optional] ref int x,
                 [Optional] ref string y);
    }
 
    [Guid("734e6105-a20f-4748-a7de-2c83d7e91b04")]
    public class FakeImpl {}
}

We have an interface representing our COM type, and a class which the interface claims will implement it. Fortunately the compiler doesn’t actually check that, so we can get away with leaving it entirely unimplemented. It’s also worth noting that our optional parameters can be by-reference parameters (which you can’t normally do in C# 4) and we haven’t given them any default values (as those are ignored for COM anyway).

This is compiled just like any other assembly:

csc /target:library FakeCom.cs

Then we get to use it with a test program:

using FakeCom;

class Test
{
    static void Main()
    {
        // Yes, that is calling a "constructor" on an interface
        FakeComponent com = new FakeComponent();
        
        // The boring old fashioned way of calling a method
        int i = 0;
        string j = null;
        com.Foo(ref i, ref j);
        
        // Look ma, no ref!
        com.Foo(10, "Wow!");
        
        // Who cares about parameter ordering?
        com.Foo(y: "Not me", x: 0);

        // And the parameters are optional too
        com.Foo();
        
        // The line below only works when linked rather than
        // referenced, as otherwise you need a cast.
        // The compiler treats it as if it both takes and
        // returns a dynamic value.
        string value = com.MakeMeDynamic(10);
    }
}

This is compiled either in the old "deploy the PIA as well" way (after adding a cast in the last line):

csc /r:FakeCom.dll Test.cs

… or by linking the PIA instead:

csc /l:FakeCom.dll Test.cs

(The difference is just using /l instead of /r.)

When the test code is compiled as a reference, it decompiles in Reflector to this (I’ve added whitespace for clarity):

private static void Main()
{
    FakeComponent component = (FakeComponent) new FakeImpl();

    int x = 0;
    string y = null;
    component.Foo(ref x, ref y);

    int num2 = 10;
    string str3 = "Wow!";
    component.Foo(ref num2, ref str3);

    string str4 = "Not me";
    int num3 = 0;
    component.Foo(ref num3, ref str4);

    int num4 = 0;
    string str5 = null;
    component.Foo(ref num4, ref str5);

    string str2 = (string) component.MakeMeDynamic(10);
}

Note how the compiler has created local variables to pass by reference; any changes to the parameter are ignored when the method returns. (If you actually pass a variable by reference, the compiler won’t take that away, however.)

When the code is linked instead, the middle section is the same, but the construction and the line calling MakeMeDynamic are very different:

private static void Main()
{
    FakeComponent component = (FakeComponent) Activator.CreateInstance(Type.GetTypeFromCLSID
        (new Guid("734E6105-A20F-4748-A7DE-2C83D7E91B04")));

    // Middle bit as before

    if (<Main>o__SiteContainer6.<>p__Site7 == null)
    {
        <Main>o__SiteContainer6.<>p__Site7 = CallSite<Func<CallSite, object, string>>
            .Create(new CSharpConvertBinder
                       (typeof(string), 
                        CSharpConversionKind.ImplicitConversion, false));
    }
    string str2 = <Main>o__SiteContainer6.<>p__Site7.Target.Invoke
        (<Main>o__SiteContainer6.<>p__Site7, component.MakeMeDynamic(10));
}

The interface is embedded in the generated assembly, but with a slightly different set of attributes:

[ComImport, CompilerGenerated]
[Guid("C3CB8098-0B8F-4A9A-9772-788D340D6AE0"), TypeIdentifier]
public interface FakeComponent
{
    object MakeMeDynamic(object arg);
    void Foo([Optional] ref int x, [Optional] ref string y);
}

The class isn’t present at all.

I should point out that doing this has no practical benefit in real code – but the ability to mess around with a pseudo-COM type rather than having to find a real one with the exact members I want will make it a lot easier to try a few corner cases for the book.

So, not a terribly productive evening in terms of getting actual writing done, but interesting nonetheless…

Books going cheap

I’m delighted to say that Manning is having a promotional week, with a one-day discount voucher on each of the books I’ve been working on. Here’s the list of what’s going cheap when:

This seems an appropriate time to mention that the first new content from the 2nd edition of C# in Depth became available under MEAP over the weekend. I’m looking forward to getting feedback on it.

I’ll be tweeting the relevant code each morning as well. Go nuts :)

Dynamic type inference and surprising possibilities

There have been mutterings about the fact that I haven’t been blogging much recently. I’ve been getting down to serious work on the second edition of C# in Depth, and it’s taking a lot of my time. However, I thought I’d share a ghastly little example I’ve just come up with.

I’ve been having an email discussion with Sam Ng, Chris Burrows and Eric Lippert about how dynamic typing works. Sam mentioned that even for dynamically bound calls, type inference can fail at compile time. This can only happen for type parameters where none of the dynamic values contribute to the type inference. For example, this fails to compile:

static void Execute<T>(T item, int value) where T : struct {}

dynamic guid = Guid.NewGuid();
Execute("test", guid);

Whatever the value of guid is at execution time, this can’t possibly manage to infer a valid type argument for T. The only type which can be inferred is string, and that’s not a value type. Fair enough… but what about this one?

static void Execute<T>(T first, T second, T third) where T : struct {}

dynamic guid = Guid.NewGuid();
Execute(10, 0, guid);
Execute(10, false, guid);
Execute("hello", "hello", guid);

I expected the first call to compile (but fail at execution time) and the second and third calls to fail at compile time. After all, T couldn’t be both an int and a bool, could it? And then I remembered implicit conversions… what if the value of guid isn’t actually a Guid, but some struct which has implicit conversions from int, bool and string? In other words, what if the full code actually looked like this:

using System;

public struct Foo
{
    public static implicit operator Foo(int x)
    {
        return new Foo();
    }

    public static implicit operator Foo(bool x)
    {
        return new Foo();
    }

    public static implicit operator Foo(string x)
    {
        return new Foo();
    }
}

class Test
{
    static void Execute<T>(T first, T second, T third) where T : struct {}

    static void Main()
    {
        dynamic foo = new Foo();
        Execute(10, 0, foo);
        Execute(10, false, foo);
        Execute("hello", "hello", foo);
    }
}

Then T=Foo is a perfectly valid inference. So yes, it all compiles – and the C# binders even get it all right at execution time. So much for any intuition I might have about dynamic typing and inference…

No doubt I’ll have similar posts about new C# 4 features occasionally… but they’re more likely to be explanations of misunderstandings than deep insights into a correct view of the language. Those end up in the book instead :)

RFC: C# in Depth 2nd edition, proposed changes and additions

As I’ve mentioned in passing before now, I’ve started working on the 2nd edition of C# in Depth, to roughly coincide with the release of C# 4 and Visual Studio 2010. So far I’ve just been thinking about what should be added and what should be changed or removed. That’s what I’d like to share today, and I’m really eager for feedback. In particular, I’d like to think of three audiences:

  • Developers (possibly hobbyists) who have read one introductory C# book (Head First C#, Microsoft Visual C# 2008 Step by Step or something similar) and are now looking to follow it up. I’m thinking of people with less than a year of full-time development in C#, and not a lot of experience in similar languages. Is C# in Depth too “hard” for this audience? If so, should I try to make it more accessible for them, or is the material inherently too advanced? What might attract such a person to the book in terms of the table of contents?
  • Reasonably experienced developers who know C# 2 and 3 already, but haven’t read C# in Depth. They may have heard of it before (I’ve been very pleased in terms of word-of-mouth for the first edition, so thanks to everyone who’s contributed to that!) but not had much real reason to buy another book when they already have something like C# 3.0 in a Nutshell. Now that C# 4 is around, they may be interested in buying something which will explain the new features, and they might as well get a different book rather than just a fresh edition of something they already own.
  • Existing readers of C# in Depth. If you already own the first edition, what might persuade you to buy the second edition? How much “new” stuff is required? Would you rather get “more bang for the buck” from a significantly thicker book, or does the relative slimness of C# in Depth hold real appeal? (I know I’m a fan of slim books, but I don’t know quite how important it is. The second edition certainly will be thicker than the first as I’m unlikely to remove much, but I’m sure there’s scope for varying just how much we add.)

Here’s a draft table of contents – only down to headings within a chapter, but with some notes. Notes about planned changes appear under the relevant headings.

Part 1: Preparing for the journey

Chapter 1: The changing face of C# development
  • Evolution in action: examples of code change
    I won’t be able to include the C# 4 features by just evolving the existing code… at least, not the big ones. Options:
    • Stick with the existing example for 1 to 2 and 2 to 3, but change to a different problem for 4
    • Try to work out a different example to show all of 1 to 2, 2 to 3 and 3 to 4
    • Stick with the existing example, but take it in a different direction for 4, rather than just rewriting the existing code
  • A brief history of C# (and related technologies)
    Update this with what’s been happening since 2008 and possibly slim down what’s already there
  • The .NET platform
    Add the new versions, including .NET 3.5 SP1, and introduce the DLR in the terminology part.
  • Fully functional code in snippet form
    Nothing new to add about Snippy, but there may be another tool to talk about as well.
  • Summary
Chapter 2: Core foundations: building on C# 1
  • Delegates
  • Type system characteristics
  • Value types and reference types
  • Beyond C# 1: new features on a solid base
    Introduce interface/delegate variance and the dynamic type.
  • Summary

Part 2: Solving the issues of C# 1

Chapter 3: Parameterized typing with generics

  • Why generics are necessary
  • Simple generics for everyday use
  • Beyond the basics
    Add a tip and example about using generic static methods in a non-generic type to take advantage of type inference when creating instances of generic types (a sketch of the idea follows this chapter’s bullets).
  • Advanced generics
  • Generic collection classes
    It would probably make sense to include HashSet<T> here, and possibly any new collection classes in .NET 4.0. Version warnings would be given, of course!
  • Limitations of generics in C# and other languages
    Refer to part 4 in terms of variance, and possibly trim down the coverage of alternative approaches.
    Add an example of Marc Gravell’s work on generic operators, or at least refer to it. Possibly mention the idea of static interfaces. Both give food for thought, but aren’t really part of C#.
  • Summary
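
For reference, the sort of tip and example I have in mind there looks like this – a sketch with an invented Pair type, showing how a generic static method in a non-generic class lets type inference do the work that a constructor can’t:

public static class Pair
{
    // Type inference works for method type arguments, but not for constructors,
    // so a factory method saves spelling out <string, int> etc.
    public static Pair<TFirst, TSecond> Of<TFirst, TSecond>(TFirst first, TSecond second)
    {
        return new Pair<TFirst, TSecond>(first, second);
    }
}

public sealed class Pair<TFirst, TSecond>
{
    private readonly TFirst first;
    private readonly TSecond second;

    public Pair(TFirst first, TSecond second)
    {
        this.first = first;
        this.second = second;
    }

    public TFirst First { get { return first; } }
    public TSecond Second { get { return second; } }
}

class PairDemo
{
    static void Main()
    {
        // Inferred as Pair<string, int> – no explicit type arguments needed
        var pair = Pair.Of("age", 42);
        System.Console.WriteLine("{0} = {1}", pair.First, pair.Second);
    }
}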
Chapter 4: Saying nothing with nullable types
  • What do you do when you just don’t have a value?
  • System.Nullable<T> and System.Nullable
  • C# 2’s syntactic sugar for nullable types
  • Novel uses of nullable types
    Possibly include Marc Gravell’s teaser about calling all the Object methods on a null value (there’s a quick sketch after this chapter’s bullets).
  • Summary
    Make sure the summary is typeset as 4.5 instead of as part of 4.4. Oops!
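
For reference, my reconstruction of that teaser (treat the details as illustrative) is below: the methods Nullable<T> overrides are safe to call on a null value, but GetType() boxes first and so throws.

using System;

class NullableTeaser
{
    static void Main()
    {
        int? x = null;

        Console.WriteLine(x.HasValue);      // False
        Console.WriteLine(x.GetHashCode()); // 0 – overridden by Nullable<T>, no exception
        Console.WriteLine(x.ToString());    // "" – likewise, no exception

        // GetType() isn't overridden, so the value is boxed first;
        // a null nullable boxes to a null reference, so this would throw:
        // Console.WriteLine(x.GetType());  // NullReferenceException
    }
}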
Chapter 5: Fast-tracked delegates
  • Saying goodbye to awkward delegate syntax
  • Method group conversions
  • Covariance and contravariance
    Mention generic delegate variance in C# 4, referring to part 4.
  • Inline delegate actions with anonymous methods
  • Capturing variables in anonymous methods
  • Summary
Chapter 6: Implementing iterators the easy way
  • C# 1: the pain of handwritten iterators
  • C# 2: simple iterators with yield statements
  • Iteration in the real world
    Renamed section, but keep first example – possibly make it shorter.
    Add iteration over lines in a file.
    Add a generating iterator, referring to LINQ.
  • Pseudo-synchronous code with the Concurrency and Coordination Runtime
    Fix sample code and explanation, with full article on the web
  • Summary
Chapter 7: Concluding C# 2: the final features
  • Partial types
  • Static classes
  • Separate getter/setter property access
  • Namespace aliases
  • Pragma directives
  • Fixed-size buffers in unsafe code

Part 3: New title TBD

Chapter 8: Cutting fluff with a smart compiler
  • Automatically implemented properties
  • Implicit typing of local variables
  • Simplified initialization
  • Implicitly typed arrays
  • Anonymous types
  • Summary
Chapter 9: Lambda expressions and expression trees
  • Lambda expressions as delegates
  • Simple examples using List<T> and events
  • Expression trees
    Expand for new expression support in .NET 4.0 (clearly labeled as such)
  • Changes to type inference and overload resolution
    Check if/how this has changed in C# 4.0 spec
  • Summary

 

Chapter 10: Extension methods
  • Life before extension methods
  • Extension method syntax
  • Extension methods in .NET 3.5
  • Usage ideas and guidelines
    Update advice here and give some more examples.
  • Summary
Chapter 11: Query expressions and LINQ to Objects

Look at changing how the diagrams are formatted in this chapter.

  • Introducing LINQ
  • Simple beginnings: selecting elements
  • Filtering and ordering a sequence
  • Let clauses and transparent identifiers
  • Joins
  • Groupings and continuations
  • New section: extending LINQ to Objects
    • Advice for writing your own extension methods
    • Plug for MoreLINQ and other libraries
  • Summary
Chapter 12: LINQ beyond collections
  • LINQ to SQL
    Despite the rumours of its doom, I think this is the best place to start.
  • Translations using IQueryable and IQueryProvider
  • LINQ to DataSet
  • LINQ to XML
  • LINQ to Entities
    Again, not much depth – still a whirlwind tour.
  • Third-party LINQ (renamed from LINQ beyond .NET 3.5)
    • Update state of play with third-party providers
    • Example of PushLINQ – still using simple delegates, but against a different interface
    • Remove Parallel Extensions – moved to part 4
  • Summary
Chapter 13: Removed (new chapter at end of part 4)

Part 4: C# 4 (full title TBD)

Chapter 13: Dynamic binding in a static language
  • Introduction to the DLR, IronPython, IronRuby
  • Calling dynamically – the dynamic keyword
  • Reacting dynamically – implementing IDynamicObject
  • Applications for dynamic code
  • Summary
Chapter 14: More minor tweaks
  • Named and optional arguments
  • Variance of interfaces and generic delegates
  • Simplifying COM interoperability
  • Summary
Chapter 15: Major new features of .NET 4.0

Even though these don’t affect the language directly, they change the “shape” of code, the way we think about problems. I’m hoping there will be more…

  • Bullet-proofing your code with Code Contracts
  • Simplifying concurrency with Parallel Extensions
  • F# (?)
  • Summary
Chapter 16: Whither now?
  • Educated guesses about C# 5
  • The Renaissance developer
  • Until we meet again
Appendix: LINQ standard query operators

So, how does that sound? What else needs changing from the first edition? Are there any particular sections which would benefit from a neat example? Have I missed anything important that you’d want to see in part 4?

Right now I’ve got a lot of flexibility – so please, I’d much rather hear ideas now than later.

 

Current book project: Real World Functional Programming

Now that it’s all official, I can reveal that since the tail end of last year I’ve been helping out with Real World Functional Programming by Tomáš Petříček (and me, ish).

I’m doing the same kinds of things I did with Groovy in Action, coming to the text with fresh eyes: I’m technically competent but don’t have any expertise in the subject matter. This means I can see it as a “real” reader would, challenge the assumptions, point out places which need more explanation etc. I’m also polishing the language a bit and adding a few informative notes. I suspect by the end there won’t be a single paragraph that I’ve written in its entirety, but there’ll also be few paragraphs which I haven’t touched at all.

It’s exciting to get a chance to work on the book, and I hope I’m improving it. Check it out, join the MEAP, and give us feedback :)

I have a couple of other books on the back-burner at the moment too; hopefully I’ll be able to give more details reasonably soon.

New version of Data Structures and Algorithms book now online

Some of you may remember an earlier post about a free Data Structures and Algorithms book which I’m occasionally helping out with in terms of editing.

I’ve just been told that a new version has recently been uploaded – please check it out. I have to admit that I haven’t actually done any work on this for a while… maybe I’ll get round to it in the New Year. Happy Christmas reading :)

The Snippy Reflector add-in

Those of you who’ve read C# in Depth will know about Snippy – a little tool which makes it easy to build complete programs from small snippets of code.

I’m delighted to say that reader Jason Haley has taken the source code for Snippy and built an add-in for Reflector. This will make it much simpler to answer questions like this one about struct initialization, where you really want to see the IL generated for a snippet. Here’s a screenshot to show what it does:

This is really cool – if you want to dabble to see what the C# compiler does in particular situations, check it out. It comes as just the DLL, or a zipped version. Thanks for putting in all this work, Jason :)

Update: Jason now has his own (more detailed) blog entry too.