Category Archives: Wacky Ideas

Group pipelining returns: new and improved design

Last night’s blog post provoked a flurry of emails between myself and Marc Gravell. Looking back, trying to base the pipeline on a pseudo-asynchronous version of IEnumerable<T> was a mistake. We’ve now got a much more attractive interface to write extensions against:

 

public interface IDataProducer<T>
{
    event Action<T> DataProduced;
    event Action EndOfData;
}

Why is this so much better? A few reasons:

  1. Acting on all the data is much easier: just subscribe to the events. No need for a weird ForEach.
  2. It fits the normal processing model much better – we tend to want to process the data itself, not keep track of whether more elements are on their way.
  3. It allows unsubscription when we’re not interested in any more data.
  4. It allows multiple subscribers.
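To make those points concrete, here's a minimal sketch of what a consumer looks like against this interface. The SimpleProducer class and its Produce/End methods are invented here purely for demonstration – they're not part of the design:

```csharp
using System;

// Interface as above, repeated so this snippet stands alone.
public interface IDataProducer<T>
{
    event Action<T> DataProduced;
    event Action EndOfData;
}

// Hypothetical trivial producer, just to exercise the interface.
public class SimpleProducer<T> : IDataProducer<T>
{
    public event Action<T> DataProduced;
    public event Action EndOfData;

    public void Produce(T item)
    {
        if (DataProduced != null) DataProduced(item);
    }

    public void End()
    {
        if (EndOfData != null) EndOfData();
    }
}

class Demo
{
    static void Main()
    {
        var producer = new SimpleProducer<string>();
        int count = 0;
        // Acting on all the data really is just a subscription - no ForEach needed.
        producer.DataProduced += item => count++;
        producer.EndOfData += () => Console.WriteLine("Saw {0} items", count);

        producer.Produce("first");
        producer.Produce("second");
        producer.End();   // prints "Saw 2 items"
    }
}
```

Multiple aggregators subscribing to the same producer is just multiple `+=` calls, which is exactly what makes the multi-pipeline grouping below possible.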

The last point is very, very interesting. It means we can implement GroupWithPipeline (current new name – I suspect it won’t last forever though) to take multiple pipelines, producing a KeyValueTuple with, say, three values in, each of different types, still strongly typed. If we don’t care about the type safety, we can use “params…” to deal with as many pipelines as we want – the client then needs to cast, which isn’t ideal, but isn’t too bad.

As an idea of what I mean by this, consider this call to find the Max, Min and Average values, all without buffering:

 

var query = someSequenceOfOrders
                .GroupWithPipeline (entry => entry.Customer,
                                    seq => seq.Min(entry => entry.OrderSize),
                                    seq => seq.Average(entry => entry.OrderSize),
                                    seq => seq.Max(entry => entry.OrderSize))
                .Select(x => new { Customer = x.Key,
                                   Min = x.Value1,
                                   Average = x.Value2,
                                   Max = x.Value3 });
                                   
foreach (var result in query)
{
    Console.WriteLine ("Customer {0}: {1}/{2}/{3}",
                       result.Customer,
                       result.Min,
                       result.Average,
                       result.Max);
}

We specify three different pipelines, all of which will be applied to the same sequence of data. The fact that we’ve specified OrderSize three times is unfortunate – a new overload to transform the entries passed to the pipeline is probably in order – but it’s all doable.

This sort of “in pipeline” multiple aggregation is very, very cool IMO. It’s turned the whole idea from “interesting” to “useful enough to get into MiscUtil”.

I haven’t actually written Min, Max or Average yet – although Marc has, I believe. (We’re collaborating and sharing source, but not working off a common source control system yet. It’s all a bit ad hoc.) What I know he’s done – which is possibly even more useful than all of this to start with – is to use expression trees to implement generic maths. This is only checked at execution time, which is unfortunate, but I don’t believe that will be a problem in real life.

The upshot is that the above code will work with any type with appropriate operators defined. No need for loads of overloads for decimal, long, int, float, double etc – it will just work. If you’re worried about performance, you can relax – it performs very, very well, which was a bit of a surprise to both of us.
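The core of the trick can be sketched in a few lines: build an expression tree for the operator and compile it once per closed type. (Operator<T> is just my name for the sketch – I'm not claiming this is Marc's actual code:)

```csharp
using System;
using System.Linq.Expressions;

// Compiles "a + b" for T once per closed type. If T has no suitable
// + operator, this throws at execution time rather than compile time -
// hence "only execution time checked".
static class Operator<T>
{
    public static readonly Func<T, T, T> Add = CreateAdd();

    static Func<T, T, T> CreateAdd()
    {
        ParameterExpression a = Expression.Parameter(typeof(T), "a");
        ParameterExpression b = Expression.Parameter(typeof(T), "b");
        return Expression.Lambda<Func<T, T, T>>(Expression.Add(a, b), a, b).Compile();
    }
}

class Demo
{
    static void Main()
    {
        // The same code works for any type with a suitable operator.
        Console.WriteLine(Operator<int>.Add(2, 3));       // 5
        Console.WriteLine(Operator<long>.Add(40L, 2L));   // 42
    }
}
```

Because the compiled delegate is cached in a static field, the expression-tree cost is paid once per type; after that it's an ordinary delegate invocation, which is why the performance turns out fine.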

More of that in another post though… I wanted to share the new design because it’s so much nicer than the deeply complicated stuff I was working with yesterday.

Don’t call us, we’ll call you – push enumerators

Update: I’ve got a new and simpler design now. I’m leaving this in for historical interest, but please see the entry about the new design for more recent information.

This post is going to be hard to write, simply because I can’t remember ever writing quite such bizarre code before. When I find something difficult to keep in my own head, explaining it to others is somewhat daunting, especially when blogging is so much more restrictive than face-to-face discussion. Oh, and you’ll find it hard if you’re not familiar with lambda expression syntax (x => x.Foo etc). Just thought I’d warn you.

It’s possibly easiest to explain with an example. It’s one I’m hoping to use in a magazine article – but I certainly won’t be trying to explain this in the article. Imagine you’ve got a lot of log entries on disk – by which I mean hundreds of millions. You certainly don’t want all of that in memory. However, each of the log entries contains a customer ID, and you want to find out for each customer how many entries there are. Here’s a LINQ query which would work but be horribly inefficient, loading everything into memory:

var query = from entry in logEntryReader
            group entry by entry.Customer into entriesByCustomer
            let count = entriesByCustomer.Count()
            orderby count descending
            select new { Customer = entriesByCustomer.Key, Count = count };

Now, it’s easy to improve this somewhat just by changing the “group entry by” to “group 1 by” – that way the entries themselves are thrown away. However, you’ve still got some memory per entry – a huge great enumeration of 1s to count after grouping.

The problem is that you can’t tell “group … by” how to aggregate the sequence associated with each key. This isn’t just because there’s no syntax to express it – it’s to do with the nature of IEnumerable itself. You see, the “pull” nature of IEnumerable is a problem. While a thread is waiting for more data, it will just block. Normally, an aggregator (like Count) just picks data off a sequence until it reaches the end, then returns the result. How can that work when there are multiple sequences involved (one for each customer)?

There are three answers to this:

1) Write your own group-and-count method. This is pretty straightforward, and potentially useful in many situations. It’s also fairly easy to understand. You just iterate through a sequence, and keep a dictionary of key to int, increasing each key’s count as you see elements. This is the pragmatic solution when faced with a specific problem – but it feels like there should be something better, something that lets us group and then specify the processing in terms of standard query operators.

2) Create a new thread and producer/consumer style IEnumerable for each key. Clearly this doesn’t scale.

3) Invert control of enumerations: put the producer in the driving seat instead of the consumer. This is the approach we’re talking about for the rest of the post.
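For comparison, option 1 – the pragmatic hand-rolled version – really is just a few lines of dictionary bookkeeping:

```csharp
using System;
using System.Collections.Generic;

static class Grouping
{
    // Counts elements per key in a single pass, keeping only one int
    // per distinct key rather than buffering any of the elements.
    public static Dictionary<TKey, int> GroupAndCount<TElement, TKey>(
        IEnumerable<TElement> source,
        Func<TElement, TKey> keySelector)
    {
        var counts = new Dictionary<TKey, int>();
        foreach (TElement element in source)
        {
            TKey key = keySelector(element);
            int count;
            counts.TryGetValue(key, out count); // leaves count as 0 for a new key
            counts[key] = count + 1;
        }
        return counts;
    }
}

class Demo
{
    static void Main()
    {
        string[] customers = { "alice", "bob", "alice", "alice" };
        var counts = Grouping.GroupAndCount(customers, c => c);
        foreach (var pair in counts)
        {
            Console.WriteLine("{0}: {1}", pair.Key, pair.Value);
        }
    }
}
```

It works, and scales fine – but every new aggregation (sum, max, average…) means writing another bespoke method like this, which is exactly the itch option 3 tries to scratch.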

A word on the term “asynchronous”

I don’t know whether my approach could truly be called asynchronous. What I haven’t done is make any of the code thread-aware at all, or even thread-safe. All the processing of multiple sequences happens in a single thread. I also don’t have the full BeginXXX, EndXXX using IAsyncResult pattern. I started down that line, but it ended up being a lot more complicated than what I’ve got now.

I’m pretty sure that what I’ve been writing is along the lines of CSPs (Communicating Sequential Processes) but I wouldn’t in any way claim that it’s a CSP framework, either.

However, you may find that it helps to think about asynchronous APIs like Stream.BeginRead when looking at the rest of the code. In particular, reading a stream asynchronously has the same “say I’m interested in data, react to data, request some more” pattern.

Keeping the Count aggregator in mind, what we want to do is maintain a private count, and request some data. When we get called back to say there is more data, we increment our count (ignoring the data) and request some more. When we are told that there’s no more data, we can return the count.

With that said, here’s the interface for what I’ve called IPushEnumerator. The name is open for change – I’ve been through a few options, and I’m still not comfortable with it. Please feel free to suggest another one! Note that there isn’t an IPushEnumerable – again, I started off with one, but found it didn’t make sense. Maybe someone smarter than me will come up with a way of it fitting.

IPushEnumerator

/// <summary>
/// An enumerator which works in an inverse manner – the consumer requests
/// to be notified when a value is available, and then the enumerator
/// will call back to the consumer when the producer makes some data
/// available or indicates that all the data has been produced.
/// </summary>
public interface IPushEnumerator<T>
{
    /// <summary>
    /// Fetch the current value. Not valid until
    /// a callback with a True argument has occurred,
    /// or after a callback with a False argument.
    /// </summary>
    T Current { get; }

    /// <summary>
    /// Requests notification (via the specified callback) when more
    /// data is available or when the end of the data has been reached.
    /// The argument passed to the callback indicates which of these
    /// conditions has been met – logically the result of MoveNext
    /// on a normal IEnumerator.
    /// </summary>
    void BeginMoveNext(Action<bool> callback);
}

That bit is relatively easy to understand. I can ask to be called back when there’s data, and typically I’ll fetch the data within the callback and ask for more.

So far, so good. But what’s going to create these in the first place? How do we interface with LINQ? Time for an extension method.

Enumerable.GroupWithPush

I wanted to create an extension to IEnumerable<T> which had a “LINQ feel” to it. It should be quite like GroupBy, but then allow the processing of the subsequences to be expressed in a LINQ-like way. (Actual C# query expressions aren’t terribly useful in practice because there isn’t specific syntax for the kind of operators which turn out to be useful with this approach.) We’ll want to have type parameters for the original sequence (TElement), the key used for grouping (TKey) and the results of whatever processing is performed on each sequence (TResult).

So, the first parameter of our extension method is going to be an IEnumerable<TElement>. We’ll use a Func<TElement,TKey> to map source elements to keys. We could optionally allow an IEqualityComparer<TKey> too – but I’m certainly not planning on supporting as many overloads as Enumerable.GroupBy does. The final parameter, however, needs to be something to process the subsequence. The first thought would be Func<IPushEnumerator<TElement>,TResult> – until you start trying to implement the extension method or indeed the delegate doing the processing.

You see, given an IPushEnumerator<TElement> you really don’t want to return a result. Not just yet. After all, you don’t have the data yet, just a way of being given the data. What you want to return is the means of the caller obtaining the result after all the data has been provided. This is where we need to introduce a Future<T>.

Future<T>

If you don’t know about the idea of a future, it’s basically an IOU for a result. In proper threading libraries, futures allow the user to find out whether a computation has completed or not, wait for the result etc. My implementation of Future<T> is not that smart. It’s not smart at all. Here it is:

/// <summary>
/// Poor-man’s version of a Future. This wraps a result which *will* be
/// available in the future. It’s up to the caller/provider to make sure
/// that the value has been specified by the time it’s requested.
/// </summary>
public class Future<T>
{
    T value;
    bool valueSet = false;

    public T Value 
    {
        get
        {
            if (!valueSet)
            {
                throw new InvalidOperationException("No value has been set yet");
            }
            return value;
        }
        set
        {
            valueSet = true;
            this.value = value;
        }
    }
}
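Its use is exactly the IOU pattern described: take the IOU now, cash it in later. (Future<T> is repeated here only so the snippet stands alone.)

```csharp
using System;

// Future<T> exactly as defined above, repeated so this compiles alone.
public class Future<T>
{
    T value;
    bool valueSet = false;

    public T Value
    {
        get
        {
            if (!valueSet)
            {
                throw new InvalidOperationException("No value has been set yet");
            }
            return value;
        }
        set
        {
            valueSet = true;
            this.value = value;
        }
    }
}

class Demo
{
    static void Main()
    {
        var future = new Future<int>();
        // Reading future.Value at this point would throw InvalidOperationException.
        future.Value = 42;                 // the producer eventually delivers
        Console.WriteLine(future.Value);   // 42
    }
}
```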

With this in place, we can reveal the actual signature of GroupWithPush:

public static IEnumerable<KeyValuePair<TKey, TResult>> GroupWithPush<TElement, TKey, TResult>
    (this IEnumerable<TElement> source,
     Func<TElement, TKey> mapping,
     Func<IPushEnumerator<TElement>, Future<TResult>> pipeline)

I shall leave you to mull over that – I don’t know about you, but signatures of generic methods always take me a little while to decode.

The plan is to then implement extension methods on IPushEnumerator<T> so that we can write code like this:

var query = logEntryReader.GroupWithPush(entry => entry.Customer,
                                         sequence => sequence.Count());

foreach (var result in query)
{
    Console.WriteLine ("Customer {0}: {1} entries",
                       result.Key,
                       result.Value);
}

Okay, so how do we implement these operators? Let’s give an example – Count being a pretty simple case.

Implementing Count()

Let’s start off by looking at a possible Count implementation for a normal sequence, to act as a sort of model for the implementation in the weird and wacky land of futures and push enumerators:

public static int Count<T>(IEnumerable<T> source)
{
    int count = 0;
    foreach (T item in source)
    {
        count++;
    }
    return count;
}

Now, we’ve got two problems. Firstly, we’re not going to return the count – we’re going to return a Future. Secondly, we certainly can’t use foreach on an IPushEnumerator<T> – the whole point is to avoid blocking while we wait for data. However, the concept of “for each element in a sequence” is useful – so let’s see whether we can do something similar with another extension method, then come back and use it in Count.

Implementing ForEach()

Warning: this code hurts my head, and I wrote it. Even the idea of it hurts my head a bit. The plan is to implement a ForEach method which takes two delegates – one which is called for each item in the enumerator, and one which is called after all the data has been processed. It will return without blocking, but it will call BeginMoveNext first, using a delegate of its own. That delegate will be called when data is provided, and it will in turn call the delegates passed in as parameters, before calling BeginMoveNext again, etc.

Ready?

public static void ForEach<T>(this IPushEnumerator<T> source, 
                              Action<T> iteration,
                              Action completion)
{
    Action<bool> moveNextCallback = null;
            
    moveNextCallback = dataAvailable =>
         {
             if (dataAvailable)
             {
                 iteration(source.Current);
                 source.BeginMoveNext(moveNextCallback);
             }
             else
             {
                 completion();
             }
         };

    source.BeginMoveNext(moveNextCallback);
}

What I find particularly disturbing is that moveNextCallback is self-referential – it calls BeginMoveNext passing itself as the parameter. (Interestingly, you still need to assign it to null first, otherwise the compiler complains that it might be used without being assigned. I seem to remember reading a blog post about this before now, and thinking that I’d never ever run into such a situation. Hmm.)

Nasty as ForEach is in terms of implementation, it’s not too bad to use.

Implementing Count() – the actual code

The translation of the original Count is now relatively straightforward. We prepare the Future wrapper for the result, and indicate that we want to iterate through all the entries, counting them and then setting the result value when we’ve finished (which will be long after the method first returns, don’t forget).

public static Future<int> Count<T>(this IPushEnumerator<T> source)
{
    Future<int> ret = new Future<int>();
    int count = 0;

    source.ForEach(t => count++, 
                   () => ret.Value = count);
    return ret;
}

We’re nearly there now. All we need to do is complete the original GroupWithPush method:

Implementing GroupWithPush

There are three phases to GroupWithPush, as mentioned before: pushing the data to the consumers (creating those consumers as required based on the keys we see); telling all the consumers that we’ve finished; retrieving the results. It’s probably easiest just to show the code – it’s actually not too hard to understand.

public static IEnumerable<KeyValuePair<TKey, TResult>> GroupWithPush<TElement, TKey, TResult>
    (this IEnumerable<TElement> source,
     Func<TElement, TKey> mapping,
     Func<IPushEnumerator<TElement>, Future<TResult>> pipeline)
{
    var enumerators = new Dictionary<TKey, SingleSlotPushEnumerator<TElement>>();
    var results = new Dictionary<TKey, Future<TResult>>();
    // Group the data, pushing it to the enumerators at the same time.
    foreach (TElement element in source)
    {
        TKey key = mapping(element);
        SingleSlotPushEnumerator<TElement> push;
        if (!enumerators.TryGetValue(key, out push))
        {
            push = new SingleSlotPushEnumerator<TElement>();
            results[key] = pipeline(push);
            enumerators[key] = push;
        }
        push.Push(element);
    }
    // Indicate to all the enumerators that we’ve finished
    foreach (SingleSlotPushEnumerator<TElement> push in enumerators.Values)
    {
        push.End();
    }
    // Collect the results, converting Future<T> into T for each one.
    foreach (var result in results)
    {
        yield return new KeyValuePair<TKey, TResult>(result.Key, result.Value.Value);
    }
}

I haven’t introduced SingleSlotPushEnumerator before, but as you can imagine, it’s an implementation of IPushEnumerator, with Push() and End() methods to provide data or indicate the end of the data stream. It’s not terribly interesting to see, in my view.
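For the curious, here's a minimal sketch of what such a class might look like – just the obvious single-value, single-callback implementation, with the interface repeated so it compiles on its own:

```csharp
using System;

// IPushEnumerator<T> as defined earlier in the post.
public interface IPushEnumerator<T>
{
    T Current { get; }
    void BeginMoveNext(Action<bool> callback);
}

// Minimal sketch: one current value, one pending callback.
// Push delivers a value to the waiting consumer; End signals completion.
public class SingleSlotPushEnumerator<T> : IPushEnumerator<T>
{
    T current;
    Action<bool> pendingCallback;

    public T Current { get { return current; } }

    public void BeginMoveNext(Action<bool> callback)
    {
        pendingCallback = callback;
    }

    public void Push(T value)
    {
        current = value;
        Action<bool> callback = pendingCallback;
        pendingCallback = null; // the consumer re-registers inside the callback
        callback(true);
    }

    public void End()
    {
        Action<bool> callback = pendingCallback;
        pendingCallback = null;
        callback(false);
    }
}

class Demo
{
    static void Main()
    {
        var push = new SingleSlotPushEnumerator<int>();
        int count = 0;
        Action<bool> callback = null;
        callback = dataAvailable =>
        {
            if (dataAvailable)
            {
                count++;
                push.BeginMoveNext(callback);
            }
            else
            {
                Console.WriteLine("Counted {0} items", count);
            }
        };
        push.BeginMoveNext(callback);

        push.Push(10);
        push.Push(20);
        push.Push(30);
        push.End();   // prints "Counted 3 items"
    }
}
```

Note the careful clearing of pendingCallback before invoking it – the callback immediately calls BeginMoveNext again, so the field has to be free for re-registration. A production version would also want to check that a callback is actually pending before Push or End fires it.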

Conclusion

So, that’s what I’ve been looking at and thinking about for the last few evenings. I’ve implemented quite a few of the standard query operators, although not all of them are worth doing. I’m not currently viewing this as anything more than an interesting exercise, partly in terms of seeing how far I can push the language, but if anyone thinks it’s worth pursuing further (e.g. as a complete implementation as far as sensibly possible, either in MiscUtil or on SourceForge) I’d be very happy to hear your ideas. Frankly, I’d be glad and slightly surprised just to find out that anyone made it this far.

Oh, exercise for the reader – draw out a sequence diagram of how all this behaves :)

Wacky Ideas 3: Object life-cycle support

No, don’t leave yet! This isn’t another article about non-deterministic finalization, RAII etc. That’s what we almost always think of when someone mentions the object life-cycle, but I’m actually interested in the other end of the cycle – the “near birth” end.

We often take it as read that when an object’s constructor has completed successfully, the object should be ready to use. However, frameworks and technologies like Spring and XAML often make it easier to create an object and then populate it with dependencies, configuration etc. Yes, in some cases it’s more appropriate to have a separate configuration class which is used for nothing but a bunch of properties, and then the configuration can be passed into the “real” constructor in one go, with none of the readability problems of constructors taking loads of parameters. It’s all a bit unsatisfactory though.

What we most naturally want is to say, “Create me an empty X. Now configure it. Now use it.” (Okay, and as an obligatory mention, potentially “Now make it clean up after itself.”)

While configuring the object, we don’t want to call any of the “real” methods which are likely to want to do things. We may want to be able to fetch some of the configuration back again, e.g. so that some values can be relative to others easily, but we don’t want the main business to take place. Likewise, when we’ve finished configuring the object, we generally want to validate the configuration, and after that we don’t want anyone to be able to change the configuration. Sometimes there’s even a third phase, where we’ve cleaned up and want to still be able to get some calculated results (the byte array backing a MemoryStream, for instance) but not call any of the “main” methods any more.

I’d really like some platform support for this. None of it’s actually that hard to do – just a case of keeping track of which phase you’re in, and then adding a check to the start of each method. Wouldn’t it be nicer to have it available as attributes though? Specify the “default phase” for any undecorated members, and specify which phases are valid for other members – so configuration setters would only be valid in the configuration phase, for instance. Another attribute could dictate the phase transition – so the ValidateAndInitialize method (or whatever you’d call it) would have an attribute stating that on successful completion (no exceptions thrown) the phase would move from “configure” to “use”.
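For contrast, the manual version – the boilerplate the attributes would replace – looks something like this (all names invented for illustration):

```csharp
using System;

// Hand-rolled phase tracking: each method opens with a phase check,
// and the transition happens explicitly in ValidateAndInitialize.
public enum Phase { Configuring, Usable }

public class ManuallyPhasedSample
{
    Phase phase = Phase.Configuring;
    string connectionString; // stand-in for some piece of configuration

    void RequirePhase(Phase required)
    {
        if (phase != required)
        {
            throw new InvalidOperationException(
                "Operation only valid in phase " + required + "; currently in " + phase);
        }
    }

    public string ConnectionString
    {
        get { return connectionString; }      // readable in any phase
        set
        {
            RequirePhase(Phase.Configuring);  // setters only while configuring
            connectionString = value;
        }
    }

    public void ValidateAndInitialize()
    {
        RequirePhase(Phase.Configuring);
        if (connectionString == null)
        {
            throw new InvalidOperationException("ConnectionString must be set");
        }
        phase = Phase.Usable; // the transition the attribute would declare
    }

    public void DoSomething()
    {
        RequirePhase(Phase.Usable);
        Console.WriteLine("Working with " + connectionString);
    }
}
```

It works, but every member carries a line of ceremony, and nothing stops you forgetting one – which is precisely what the declarative attribute version would fix.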

Here’s a short code sample. The names and uses of the attributes could no doubt be improved, and if there were only a few phases which were actually useful, they could be named in an enum instead, which would be neat.

[Phased(defaultRequirement=2, initial=1)]
class Sample
{
    IAuthenticator authenticator;
    
    public IAuthenticator Authenticator
    {
        [Phase(1)]
        [Phase(2)]
        get
        {
            return authenticator;
        }
        [Phase(1)]
        set
        {
            authenticator = value;
        }
    }
    
    [Phase(1)]
    [PhaseTransition(2)]
    public void ValidateAndInitialize()
    {
        if (authenticator==null)
        {
            throw new InvalidConfigurationException("I need an authenticator");
        }
    }
    
    public void DoSomething()
    {
        // Use authenticator, assuming it's valid
    }
    
    public void DoSomethingElse()
    {
        // Use authenticator, assuming it's valid
    }
}

Hopefully it’s obvious what you could and couldn’t do at what point.

This looks to me like a clear example of where AOP should get involved. I believe that Anders isn’t particularly keen on it, and when abused it’s clearly nightmarish – but for certain common things, it just makes life easier. The declarative nature of the above is simpler to read (IMO – particularly if names were used instead of numbers) than manually checking the state at the start of each method. I don’t know if any AOP support is on the slate for Java 7 – I believe things have been made easier for AOP frameworks by Java 6, although I doubt that any target just Java 6 yet. We shall have to see.

One interesting question is whether you’d unit test that all the attributes were there appropriately. I guess it depends on the nature of the project, and just how thoroughly you want to unit test. It wouldn’t add any coverage, and would be hard to exhaustively test in real life, but the tests would be proving something…

Wacky Ideas 2: Class interfaces

(Disclaimer: I’m 99% sure I’ve heard someone smarter than me talking about this before, so it’s definitely not original. I thought it worth pursuing though.)

One of the things I love about Java and C# over C/C++ is the lack of .h files. Getting everything in the right place, only doing the right things in the right files, and coping with bits being included twice etc is a complete pain, particularly if you only do it every so often rather than it being part of your everyday life.

Unfortunately, as I’ve become more interface-based, I’ve often found myself doing effectively the same thing. Java and C# make life a lot easier than C in this respect, of course, but it still means duplicating the method signatures etc. Often there’s only one implementation of the interface – or at least one initial implementation – but separating it out as an interface gives a warm fuzzy feeling and makes stubbing/mocking easier for testing.

So, the basic idea here is to extract an interface from a class definition. In the most basic form:

class interface Sample
{
    public void ThisIsPartOfTheInterface()
    {
    }
    
    public void SoIsThis()
    {
    }
    
    protected void NotIncluded()
    {
    }
    
    private void AlsoNotIncluded()
    {
    }
}

So the interface Sample just has ThisIsPartOfTheInterface and SoIsThis even though the class Sample has the extra methods.

Now, I can see a lot of cases where you would only want part of the public API of the class to contribute to the interface – particularly if you’ve got properties etc which are meant to be used from an Inversion of Control framework. This could either be done with cunning keyword use, or (to make fewer syntax changes) a new attribute could be introduced which could decorate each member you wanted to exclude (or possibly include, if you could make the default “exclude” with a class-level attribute).

So far, so good – but now we’ve got two types with the same name. What happens when the compiler runs across one of the types? Well, here’s the list of uses I can think of, and what they should do:

  • Variable declaration: Use the interface
  • Construction: Use the class
  • Array declaration/construction: Use the interface (I think)
  • typeof: Tricky. Not sure. (Note that in Java, we could use Sample.class and Sample.interface to differentiate.)
  • Type derivation: Not sure. Possibly make it explicit: “DerivedSample : class Sample” or “DerivedSample : interface Sample”
  • Generics: I think this would depend on the earlier “not sure” answers, and would almost certainly be complicated

As an example, the line of code “Sample x = new Sample();” would declare a variable x of the interface type, but create an instance of the concrete class to be its initial value.

So, it’s not exactly straightforward. It would also violate .NET naming conventions. Would it be worth it, over just using an “Extract Interface” refactoring? My gut feeling is that there’s something genuinely useful in here, but the complications do seem to overwhelm the advantages.

Perhaps the answer is not to try to have two types with the same name (which is where the complications arise) but to be able to explicitly say “I’m declaring interface ISample and implementing it in Sample” both within the same file. At that point it may be unintuitive to get to the declaration of ISample, and seeing just the members of it isn’t straightforward either.

Is this a case where repeating yourself is fundamentally necessary, or is there yet another way of getting round things that I’m missing?

Wacky Ideas 1: Inheritance is dead, long live mix-ins!

(Warning: I’ve just looked up “mix-in” on Wikipedia and their definition isn’t quite what I’m used to. Apologies if I’m using the wrong terminology. What I think of as a mix-in is a proxy object which is used to do a lot of the work the class doing the mixing says it does, but preferably with language/platform support.)

I’ve blogged before about my mixed feelings about inheritance. It’s very useful at times, but the penalty is usually very high, and if you’re going to write a class to be derived from, you need to think (and document) about an awful lot of things. So, how about this: we kill off inheritance, but make mix-ins really easy to write. Oh, and I’ll assume good support for closures as well, as a lot can be done with the Strategy Pattern via closures which would otherwise often be done with inheritance.

So, let’s make up some syntax, and start off with an example from the newsgroups. The poster wanted to derive from Dictionary<K,V> and override the Add method to do something else as well as the normal behaviour. Unfortunately, the Add method isn’t virtual. One poster suggested hiding the Add method with a new one – a solution I don’t like, because it’s so easy for someone to break encapsulation by using an instance as a plain Dictionary<K,V>. I suggested re-implementing IDictionary<K,V>, having a private instance of Dictionary<K,V> and making each method just call the corresponding one on that, doing extra work where necessary.
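Just to show the tedium, here's a fragment of that hand-written proxy approach – two members of what would be a dozen or more (this sketch doesn't claim to implement the full IDictionary<K,V> interface; names are mine):

```csharp
using System.Collections.Generic;

// A fragment of the manual proxy approach: every member has to be
// forwarded by hand, even though only Add does anything interesting.
public class CountingDictionary<K, V>
{
    readonly IDictionary<K, V> inner = new Dictionary<K, V>();

    public int AddCount { get; private set; }

    public void Add(K key, V value)
    {
        AddCount++;               // the extra work we actually wanted
        inner.Add(key, value);
    }

    public bool Remove(K key)
    {
        return inner.Remove(key); // pure forwarding, repeated for every member
    }

    public int Count { get { return inner.Count; } }
}
```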

Unfortunately, that’s a bit ugly, and for interfaces with lots of methods it can get terribly tedious. Instead, suppose we could do this:

using System.Collections.Generic;

class FunkyDictionary<K,V> : IDictionary<K,V>
{
    IDictionary<K,V> proxyDictionary proxies IDictionary<K,V>;

    void IDictionary<K,V>.Add(K key, V value)
    {
        // Do some other work here

        proxyDictionary.Add(key, value);

        // And possibly some other work here too
    }
}

Now, that’s a bit simpler. To be honest, that kind of thing would cover most of what I use inheritance for. (Memo to self: write a tool which actually finds out how often I do use inheritance, and where, rather than relying on memory and gut feelings.) The equivalent of having an abstract base class and overriding a single method would be fine, with a bit of care. The abstract class could still exist and claim to implement the interface – you just implement the “missing” method in the class which proxies all the rest of the calls.

The reason it’s important to have closures (or at least delegates with strong language support) is that sometimes you want a base class to be able to very deliberately call into the derived class, just for a few things. For those situations, delegates can be provided. It achieves the same kind of specialization as inheritance, but it makes it much clearer (in both the base class and the “derived” one) where the interactions are.
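A tiny example of what I mean – instead of a template method that a derived class overrides, the "base" class takes a delegate for the one step that varies (names invented for illustration):

```csharp
using System;

// Strategy-via-delegate: the hook that inheritance would provide through
// a virtual method is passed in explicitly instead, so the interaction
// point between "base" and "derived" behaviour is visible in one place.
public class Pipeline
{
    readonly Func<int, int> transform; // the customizable step

    public Pipeline(Func<int, int> transform)
    {
        this.transform = transform;
    }

    public int Process(int input)
    {
        int validated = Math.Max(0, input); // fixed "base class" behaviour
        return transform(validated);        // the single explicit call-out
    }
}

class Demo
{
    static void Main()
    {
        var doubler = new Pipeline(x => x * 2);
        Console.WriteLine(doubler.Process(3));   // 6
        Console.WriteLine(doubler.Process(-5));  // 0
    }
}
```

Reading Pipeline, there's no doubt about where the specialization happens – compare that with auditing a class hierarchy for every virtual call site.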

One point of interest is that without any inheritance, we lose the benefits of a single inheritance tree – unless object becomes a general “any reference”, which is mostly what it’s used for. Of course, there are a few methods on System.Object itself which we’d lose. Let’s look at them. (Java equivalents aren’t listed separately, but Java-only methods are marked.)

  • ToString: Not often terribly useful unless it’s been overridden anyway
  • GetHashCode/Equals: Over time I’ve been considering that it may have been a mistake to make these generally available anyway; when they’re not overridden they tend to behave very differently to when they are. Wrapping the existing behaviour wouldn’t be too hard when wanted, but otherwise make people use IEquatable<T> or the like
  • GetType: This is trickier. It’s clearly a pretty fundamental kind of call which the CLR will have to deal with itself – would making it a static (natively implemented) method which took an object argument be much worse?
  • MemberwiseClone: This feels “systemy” in the same way as GetType. Could something be done such that you could only pass in “this”? Not a terribly easy one, unless I’m missing something.
  • finalize (Java): This could easily be handled in a different manner, similar to how .NET does.
  • wait/notify/notifyAll (Java): These should never have been methods on java.lang.Object in the first place. .NET is a bit better with the static methods on the Monitor class, but we should have specific classes to use for synchronization. Anyway, that’s a matter I’ve ranted about elsewhere.

 

What are the performance penalties of all of this? No idea. Because we’d be using interfaces instead of concrete classes a lot of the time, there’d still be member lookup even if there aren’t any virtual methods within the classes themselves. Somehow I don’t think that performance will be the reason this idea is viewed as a non-starter!

Of course, all of this mix-in business relies on having an interface for everything you want to use polymorphically. That can be a bit of a pain, and it’s the subject of the next article in this series.

Wacky Ideas – Introduction

I’ve been having a few wacky ideas recently, and I think it’s time to put them to virtual paper. They’re mostly around how we think about OO, and how future languages and platforms could do things. I very much doubt that any of them are new. I suspect they’ve been mulled over by people who really know how to think about these things, and then write papers about them. Probably using TeX. I’m not going to that much effort, so there will be several things I haven’t thought through at all. I won’t go so far as to say that’s your job, but knowing my readership you’re likely to come up with loads of things I’d never considered anyway.

Most are likely to be phrased in C#/.NET terms, but they’re likely to apply to Java anyway. Some may have a few better fits in one language than another – I’ll point them out when I think of them.

I don’t necessarily think these are good ideas. Some are probably stinkers. Some may well be useful. Some may even occur one day. Some are bound to exist already in languages I don’t know, of which there are many. Almost all of them are likely to introduce new syntax (or take some away) which makes them non-starters for many scenarios. Don’t take it all too seriously, but I hope you have fun.