David Louis Edelman

Building the Perfect User Interface (Part 1)

When I set out to create the world for my Jump 225 Trilogy, as I’ve written elsewhere, I started with a few technological principles:

  1. Imagine that we have virtually inexhaustible sources of energy.
  2. Imagine that we have virtually unlimited computing power.
  3. Imagine that enough time has passed to allow the scientists to adequately take advantage of these things.

I discovered that starting from these basic principles, there are almost unlimited possibilities. You can easily have a world that’s intermeshed with virtual reality. You can create vast computational systems that have billions and billions of self-directing software programs. You can have pliable architecture that automatically adjusts to fit the needs of the people using it. And so on. It’s actually fairly easy to figure out a technological solution to just about any problem if you don’t have those constraints.

[Illustration: science fiction machine by Frank Paul]

The interesting questions in such a world, then, are questions of interface. You don’t bother to discuss if you can accomplish your goal anymore, because the answer is almost always “yes.” You just need to know how you’re going to accomplish it, and who’s going to pay for it, and what happens when your perfectly achievable goal clashes with someone else’s perfectly achievable goal.

In other words: you’re at point A. You’d like to be at point B. How do you go about getting there?

Note that when I’m talking about user interface, I’m not talking about how you actually get from point A to point B. The interesting thing about this whole new science of interface is that it doesn’t really matter. We can treat all kinds of science and engineering as a simple black box and just skip right over it. What I’m really concerned with at the moment is how human beings translate their desires into actions in the physical world. How do you tell the black box you want to go from point A to point B?
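
If you wanted to put it in programmer’s terms, a minimal sketch might look something like this (the go function here is a made-up stand-in of my own, not any real API): the mechanism stays hidden, and the only thing left to design is the shape of the request.

    # A hypothetical black box. How it moves you is deliberately opaque;
    # the only visible part is how you phrase the request.
    def go(origin: str, destination: str) -> None:
        """Stand-in for all the hidden science and engineering."""
        print(f"You are now at {destination} (you were at {origin}).")

    # The whole "user interface" is this one line: how the desire gets expressed.
    go("point A", "point B")

All the interesting questions live in that last line.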

It seems like a ridiculously easy question, but it turns out it’s not. Let’s just take a very simple example of a black box that we all know: the toaster. You might think we already have the perfect user interface for toasting bread. You stick bread in a toaster. There’s one big lever that turns the sucker on, and a dial that tells you how dark you want the toast. How can you improve on that?

Well, wait just a second — the desire we’re trying to fulfill here is to take ordinary bread and turn it into toast. And if you think of user interface as the way you go about accomplishing this, the user interface for toasting bread is much more complicated than you might think.

You need to buy a machine to do the toasting, and you need to plug that machine into a power socket. (The right kind of socket for your part of the world.) And not only do you need a bulky machine that takes up counter space, but you need a dedicated machine that really does nothing else but toast bread and the very small number of specialty foods designed to fit in toaster slots. If you’re trying to toast bread in my house, you need to know that the toaster and the microwave are plugged into the same outlet, and using them at the same time will blow the fuse. You need to experiment with every new toaster you buy to find exactly the right setting — and yet, chances are that you burn toast at least once every couple months. How inefficient is all that?

[Illustration: science fiction machine by Charles Schneeman]

So going back to our world with inexhaustible energy and computing power, how would you want to make toast? Would you want to put the piece of bread on a plate, push a button, and instantly have perfect toast? Would you want to bio-engineer a plant that grows perfect toast? Or no, let’s take it a step further — do you want the interface to anticipate that you’re going to want toast and have it already prepared for you? Hell, let’s take it one last step: do you want to just imagine that you’re eating toast through some nanotechnological neural manipulation, when you’re really just eating a hunk of tank-grown nutritional protein supplement?

The science of user interface is a fairly recent branch of knowledge. I’m not sure when it first came into being, but until I find evidence to the contrary, I’m going to guess that the origin of the concept of user interface roughly coincided with the creation of the microprocessor. Why then? Maybe it’s because that’s the point in human history where technology disappeared from sight. You may not understand how the combustion engine works or how the ENIAC computer computes, but at least you can look at it. You can actually see how the controls you have interact with the mechanics of the thing. But a microprocessor — well, pop open the chassis of your computer and look at it sometime. It just sits there. (That spinning thing on top is just a fan to disperse the heat.) For all intents and purposes, it is a black box to you and me. Suddenly we can leave the engineering to the engineers and think about that black box from a whole other level.

So I’ve been thinking a lot about user interface, and I’ve been writing a lot of random stuff in random draft WordPress posts. Now I’m going to try to write it all down in some kind of cohesive order. Here are the main questions I’m hoping to explore over the next however-long-it-takes:

  1. What exactly do you mean by user interface? (this article)
  2. What’s wrong with the user interfaces we’ve got now?
  3. Do we need user interface at all?
  4. A quick overview of bad science fiction user interfaces and why they would never work in the real world
  5. What makes the perfect user interface?

(For the insanely curious: the first illustration for this article is by illustrator Frank Paul, and according to this page, dates back to before the invention of the ENIAC; the second illustration, by Charles Schneeman, dates back to Astounding Science Fiction in 1941, according to this page.)

Update 1/31/08: Here’s part 2.

Update 2/10/08: Here’s part 3.

Comments

  1. dr_mandrake on January 31, 2008 at 4:54 pm

    Interesting stuff – I look forward to more of these posts :)
    As for the science of user-interfaces, I think it’s older than you suppose. I’m thinking specifically of typewriters. Maybe you can easily discern the physical link between the key and the head, but people spent a long time determining the best way to lay out the keys, and that was back in the 1860s. Just rambling, but food for thought maybe…

  2. David Louis Edelman on January 31, 2008 at 6:35 pm

    Good point. Perhaps the typewriter manufacturers were the first usability experts…?

  3. ElaraSophia on February 1, 2008 at 5:09 am

    I was thinking about the difference between designing a method of handling a tool to accomplish a task — for example, designing a sword handle, or a medical scalpel, or a telescope — and designing a user interface. I think a valid origin for the concept of user interface could be when we first started trying to make machines that could think. Having the equivalent of a brain outside our own and needing to give it direct instructions on how to think. Thinking about it like that, the first example I can think of would be Charles Babbage’s analytical engine.
