Friday, January 27, 2012

(empo-tymshft) Why Bad Predictions Happen (the "five computers" prediction)

Mark J. Fletcher recently wrote a blog post on bad technology predictions. While the bulk of his post dealt with a bad prediction about E911, he started off with a classic bad prediction attributed to Thomas Watson the elder.

"I think there is a world market for maybe five computers." - Thomas Watson, chairman of IBM, 1943.

Now some people like to look at bad predictions and laugh at how stupid the predictor was. In my case, I'd rather put myself back in the shoes of the person who made the bad prediction, and try to understand what motivated that person to think as he or she did.

I've even done this to myself, including the time in 2009 when I was forced to retract my prediction that Oracle and Sun would not collaborate until 2010 or 2011, that is, until after final approval of the Oracle-Sun merger.

But let's look at the environment in which Thomas Watson made his oft-repeated prediction...if he made it. There's no evidence that Thomas Watson the elder actually said such a thing; perhaps the statement, or a similar statement, was made by Douglas Hartree or Howard Aiken or Thomas Watson Jr., and the statement might have been made not in 1943, but in 1951 or 1952 or 1953.

So let's assume that SOMEONE made a statement, perhaps in the early 1950s, claiming that a few computers could take care of the world's computing needs. How could someone have such a misguided conception?

Well, actually, it was pretty easy. Remember that at the time, computers were big hunks of metal that were pretty much huge calculating machines. (Actually, the computers of today are also huge calculating machines; it's just that today's computers calculate Farmville crop yields instead of missile path projections.) These huge calculating machines were expensive, hard to maintain (all the bugs, you know), and required great effort to feed data in and get results out.

If this is the way in which a person of 1953 thought of the computer, then it would be extremely difficult to conceive of something radically different.

In that year of 1953, a young boy named Albert Gore, Jr. was five years old. Now even though Gore's father was a powerful man - he had just been elected to the U.S. Senate in 1952 - it's ludicrous to imagine Senator Gore getting a computer for his young son. Where would the Gore family put their computer? Where would they get someone to build the raised floor and the cooling system? How long would it take the kid to figure out how to input stuff into the computer? Where would the family find a machine-language coder to create programs for the kid to run on the computer?

And, come to think of it, why would you want to create programs for a kid? Computers were good for doing complex military calculations, and for undertaking tasks such as compiling census data for the United States. Only an idiot would take the time to program a computer to play solitaire or something like that.

It wasn't until 1954 that computers were really seen outside of government circles. In December 1954, John Hancock (the life insurance people) got their hands on a state-of-the-art IBM 650 computer. Early IBM 650 computers were extremely powerful for their time, with memory capacities of 2,000 words (or about 10,000 characters) - not enough for anyone to manage a simulated farm. And even if it had much more memory, a speed of 22-33 operations per second meant that you couldn't do all that much.
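To put those numbers in perspective, here's a rough back-of-the-envelope sketch in Python (purely illustrative; it assumes the 650's convention of packing five alphanumeric characters, at two decimal digits each, into its 10-digit words, and it reuses the speed figure cited above):

    # Rough arithmetic on the early IBM 650's capacity, using the figures above.
    WORDS = 2000             # drum memory of an early IBM 650, in 10-digit words
    CHARS_PER_WORD = 5       # assumed: 2 decimal digits per alphanumeric character
    OPS_PER_SECOND = 30      # roughly the 22-33 operations per second cited above

    total_chars = WORDS * CHARS_PER_WORD
    print(f"Storage: about {total_chars:,} characters")        # about 10,000
    hours = 1_000_000 / OPS_PER_SECOND / 3600
    print(f"One million operations: about {hours:.0f} hours")  # about 9 hours

In other words, the whole machine held roughly two pages of text and would need the better part of a working day to grind through a million operations.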

Yet even the (to modern eyes) meager capabilities of the IBM 650 resulted in a revolution in computing that couldn't have been predicted back in 1953. By 1962, IBM 650 computers were flying off the shelves, and an amazing 2,000 computers had been sold.

And all of that happened before computers became more powerful, and much smaller, and much easier to use. By the time I used my first computer in the 1970s - a machine with a single-line display that was dedicated to running BASIC programs - computers were able to do all sorts of things that people in the early 1950s just could never envision.

And it just went on from there.

How could a person in 1953 have predicted that within 50 years, hundreds of millions of people would own "computers" that didn't require special cooling, a special room, or punched card entry? Could a person of 1953 have imagined a strange cross between a typewriter and a television - a COLOR television - with a display that looked like a piece of paper? And could the person in 1953 imagine that a computer user of 2003 would effortlessly combine text with pictures, send the result to another computer thousands of miles away, and enable a different person in a different location to see that text-picture combination within seconds?

In fact, the person of 1953 would probably say, "That's not a computer!" It's a...it's a...well, what is it?

Perhaps these little tiny things aren't computers, but computer terminals that tie into computers elsewhere, and those are the REAL computers. And as the pendulum swings toward cloud computing, John Battelle has revisited the early prediction:

The march to cloud computing and the rush of companies building brands and services where both enterprises and consumers can park their compute needs is palpable. And over the next ten or so years, I wonder if perhaps the market won’t shake out in such a way that we have just a handful of “computers” – brands we trust to manage our personal and our work storage, processing, and creation tasks. We may access these brands through any number of interfaces, but the computation, in the manner Watson would have understood it, happens on massively parallel grids which are managed, competitively, by just a few companies.

Instead of having just a few computers at selected government agencies, we might just have computers at Facebook, Google, and a few other places. Battelle doesn't believe that Facebook will make the cut; he thinks that the five companies might be

...Google, Amazon, Microsoft, Apple, and...IBM.

But if he's wrong, I'm certainly not going to slam him for his inaccurate prediction.