Critical dead-weight

We’re all familiar with the idea of a technology achieving critical mass: having enough users (subscribers, customers, whatever the appropriate metric might be) to keep it alive and useful. This morning I was considering the idea of critical dead-weight: having enough users etc to keep the technology alive long past its natural lifetime.

Examples of technologies we might like to kill

  • SMTP: I suspect that completely preventing spam and other abuses (while maintaining a lot of the benefits we currently enjoy) would be difficult even with a modern protocol design, but the considerations behind a messaging system created today would be completely different to those which shaped SMTP.
  • NNTP: I still prefer using a dedicated newsreader for newsgroups instead of the kind of web forum which seems fairly pervasive these days. The simple support for offline reading and deferred posting, the natural threading (even in the face of missing articles) and the nature of purpose-built applications all appeal to me. However, as with SMTP, there are various concerns which just weren’t considered in the original design.
  • HTML: In some ways HTML itself isn’t the biggest problem I see here (although making the markup language stricter to start with might have helped) – it’s the fact that browsers have always supported broken HTML. Figures are often quoted in discussions of browser (and particularly rendering engine) implementations, claiming to show just what proportion of browser code is dedicated to displaying invalid HTML in a reasonably pleasant way. I don’t recall the exact figures, and I suspect many are pulled out of thin air, but it’s a problem nonetheless. Folks who know more about the world of content itself are in a better position to comment on the core usefulness of HTML.
  • HTTP: Okay, this one is slightly tenuous. There are definitely bits of HTTP which could have been more sensibly defined (I seem to recall that the character encoding used when handling encoded bits of URL such as %2F etc is poorly specified, for example) but there are bigger issues at stake. The main thing is to consider whether the “single request, single response” model is really the most appropriate one for the modern web. It makes life more scalable in many ways, but even so it has various downsides. 
  • IPv4: This is one area where we already have a successor: IPv6. However, we’ve seen that the transition to IPv6 is happening at a snail’s pace, and there is already much criticism of this new standard, even before most of us have got there. I don’t profess to understand the details of the debate, but I can see why there is concern about the speed of change.
  • Old APIs (in Windows, Java etc): I personally feel that many vocal critics of Windows don’t take the time to appreciate how hard it is to maintain backwards compatibility to the level that Microsoft manages. This is not to say they do a perfect job, but I understand it’s a pretty nightmarish task to design a new OS when you’re so constrained by history. (I’ve read rumours that Windows 7 will tackle backward compatibility in a very different way, meaning that to run fully natively vendors will have to recompile. I guess this is similar to how Apple managed OS X, but I don’t know any details or even whether the rumours are accurate.) Similarly, Java now has hundreds or thousands of deprecated methods – and .NET has plenty, too. At least there is a road towards planned obsolescence on both platforms, but it takes a long time to reach fruition. (How many of the deprecated Java APIs have actually been removed?)
  • Crufty bits of programming languages: Language designers aren’t perfect. It would be crazy to expect them to be able to look back 5, 10, 15 years later and say “Yes, I wouldn’t change anything in the original design.” I’ve written before about my own view of C# language design mistakes, and there are plenty in Java as well (more, in fact). Some of these can be mitigated by IDEs – for instance, Eclipse can warn you if you try to access a static member through a variable, as if it were an instance member (see the sketch just after this list). However, it’s still not as nice as having a clean language to work with. Again, backward compatibility is a pain…
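
As a concrete illustration of that last wart – a minimal Java sketch of my own, not code from any real project:

    public class StaticGotcha {
        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread();
            // This looks as though it pauses t, but sleep() is static: the call
            // compiles as Thread.sleep(1000) and pauses the *current* thread.
            // Eclipse can be configured to flag exactly this misleading usage.
            t.sleep(1000);
        }
    }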

Where is the dead-weight?

There are two slightly different kinds of dead-weight here. The first is a communications issue: if two people currently use a certain protocol to communicate (e.g. SMTP) then in most cases both parties need to move to a particular new option before all – or sometimes any – of its advantages can be seen.

The other issue can be broadly termed backward compatibility. I see this as slightly different to the communications issue, even though that can cover some of the same bases (where one protocol is backwardly compatible with another, to some extent). The core problem here is “We’ve got a lot of stuff for the old version”, where the “stuff” can be code, content or even hardware. The cost of losing all of that existing stuff is usually greater (at least in the short to medium term) than the benefits of whatever new model is being proposed.

What can be done?

This is where I start running out of answers fast. Obviously having a transition plan is important – IPv6 is an example where it at least appears that the designers have thought about how to interoperate with IPv4 networks. However, it’s another example of the potential cost of doing so – just how much are you willing to compromise an ideal design for the sake of a simplified transition? Another example would be generics: Java generics were designed so that the existing collection classes could have generics retrofitted without backward compatibility issues, and without requiring a transition to new classes. The .NET approach was very different – ArrayList and List<T> certainly aren’t interchangeable, for example – but this allows for (in my view) a more powerful design of generics in .NET.
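
To make that trade-off concrete, here is a minimal Java sketch (illustrative code of my own, using only standard library classes) of what the retrofit means in practice: the raw type ArrayList and the parameterized ArrayList<Integer> are the same class at runtime, so pre-generics code keeps working against generic collections – at the cost of type errors being deferred until a value is actually used:

    import java.util.ArrayList;
    import java.util.List;

    public class ErasureDemo {
        // A pre-generics signature: code like this still compiles and runs unchanged.
        static void legacyAdd(List list) {
            list.add("not a number"); // unchecked call - the compiler merely warns
        }

        public static void main(String[] args) {
            List<Integer> numbers = new ArrayList<>();
            legacyAdd(numbers); // the very same object passes freely as a raw List
            System.out.println(numbers.size()); // prints 1 - the rogue element is in
            // Integer n = numbers.get(0); // only here would it fail, with a ClassCastException
        }
    }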

There are some problems which are completely beyond my sight at the moment. I can’t imagine SMTP being replaced in the next 5 years, for instance – which means its use is likely to grow rather than shrink (although probably not across the board, demographically speaking). Surely that means in 5 years’ time it’ll be even further away from replacement. However, I find it very hard to imagine that humankind will still be using SMTP in 200 years. It would be pretty sad for us if that were to be the case, certainly. I find myself considering the change to be inevitable and inconceivable at the same time.

Some technologies are naturally replaceable – or can gradually begin to gather dust without that harming anyone. But should we pay more attention to the end of a technology’s life right from the beginning? How can we design away from technological lock-in? In particular, can we do so while still satisfying the business analysts who tend to like the idea of locking users in? Open formats and protocols etc are clearly part of the consideration, but I don’t think they provide the whole picture.

Transition is often painful for users, and it’s almost always painful to implement too. It’s a natural part of life, however – isn’t it time we got better at it?

6 thoughts on “Critical dead-weight”

  1. I sent this suggestion to Google a while back; seeing as you are going there, I thought it might be worth mentioning:

    01: I join Google’s new anti-spam mailing system.
    02: Someone on my known list emails me, it comes straight through.
    03: Someone not on my list emails me, and they get a popup window asking them for their Google account user/pass.
    04: $1 is then moved from their account into a holding account, and their email is delivered to me.

    Then either

    A: I see the new email; it is not spam, so I mark it as legitimate, and the sender gets their money back.

    B: I see the new email is spam, so I mark it as such. Google splits the $1 50/50 with me.

    Rather than try to stop spam, it might be a good idea to make people pay for it; at least I will get paid for reading emails about how unsatisfied my wife is :-)

  2. I think you identified the chief impediment to open standards in the technology industry today when you mentioned the business analysts, and that’s the question I don’t have any answers for. Most of the standards on your list are classic attempts at open standards, mostly deriving from the whole RFC process (if I’m not mistaken).

    Modern technologies, or at least my exposure to them, seem to be far more driven by particular commercial interests whose primary goal is to get the marketing appeal of an open standard with the lock-in of a closed standard.

    Then again, I may be over-generalizing what you refer to as a technology. :)

  3. My personal favorite in that list is Old/Deprecated APIs. I’d prefer that APIs like gets, strcpy, etc. be moved to a DLL called banned/deprecated.dll. It won’t happen, though, because it would cause too many problems.

  4. My favorite example: how much bandwidth is wasted sending all binary email attachments as BASE64, just because some mainframes in the 1970s couldn’t handle 8-bit data?

    “My favorite example: how much bandwidth is wasted sending all binary email attachments as BASE64, just because some mainframes in the 1970s couldn’t handle 8-bit data?”

    Most MTAs these days support 8BITMIME, so the answer to that should be (and is) “hardly any”.
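
    (For scale, where Base64 is still used: every 3 input bytes become 4 output characters, a fixed overhead of roughly 33%. A quick sketch with the standard java.util.Base64 encoder, purely illustrative:)

        import java.util.Base64;

        public class Base64Overhead {
            public static void main(String[] args) {
                byte[] raw = new byte[300]; // 300 arbitrary binary bytes
                String encoded = Base64.getEncoder().encodeToString(raw);
                // 3 bytes -> 4 characters: 300 bytes become 400 characters
                System.out.println(raw.length + " -> " + encoded.length()); // 300 -> 400
            }
        }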
