Future Tense: Alan, We Hardly Knew Ye

Alan Turing should have been knighted. He should have been Sir Alan Turing. Instead he was prosecuted for being homosexual and committed suicide in despair. The British government conveniently forgot that Turing was the genius behind the Allies’ code-breaking efforts during WWII. The “Ultra Secret” is generally credited as the single most important advantage the Allied Forces had against the Axis powers, to the point that Eisenhower was sometimes reading Hitler’s mail even before Hitler did.

Fifty-five years after Turing’s death, in response to an Internet campaign, the British government finally got around to acknowledging Alan Turing’s contributions and apologizing for its failure to honor him appropriately. Sorry, guys, but an apology does not erase an egregious wrong.

Alan Turing laid the foundation of modern computing—programmable machines, the separation of software from hardware—the algorithm. Every CPU that you use, your phone, your netbook, your laptop, your desktop, your Xbox or Wii—every single one is a kind of “Turing machine.” It uses a program to process information.

Alan Turing, the father of modern computing

A computer program is nothing more than a list of instructions that the computer automatically follows. The computer does exactly what each instruction demands, nothing more, nothing less. There is no DWIM (Do What I Mean) instruction.
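
To see how little is going on under the hood, here is a minimal sketch in Python. The rule table is a toy illustration, with names invented for this example rather than Turing's own notation: a miniature Turing machine that adds one to a binary number by doing nothing except looking up the next instruction and obeying it.

    # A miniature Turing machine: it mechanically follows a table of
    # instructions, nothing more, nothing less. The rule table below is a
    # toy example (all names invented for illustration) that adds one to a
    # binary number written on the tape.

    def run_turing_machine(program, tape_input, state="start"):
        """Follow the instruction table until the machine reaches 'halt'."""
        tape = dict(enumerate(tape_input))  # sparse tape; missing cells are blank
        head = 0
        while state != "halt":
            symbol = tape.get(head, "_")
            write, move, state = program[(state, symbol)]  # look up the one rule
            tape[head] = write
            head += 1 if move == "R" else -1
        cells = range(min(tape), max(tape) + 1)
        return "".join(tape.get(i, "_") for i in cells).strip("_")

    # Instruction table: (state, symbol read) -> (symbol to write, move, next state)
    increment = {
        ("start", "0"): ("0", "R", "start"),  # scan right to the end of the number
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("_", "L", "carry"),  # hit the blank; back up and add one
        ("carry", "1"): ("0", "L", "carry"),  # 1 plus carry: write 0, keep carrying
        ("carry", "0"): ("1", "L", "halt"),   # absorb the carry and stop
        ("carry", "_"): ("1", "L", "halt"),   # carried past the left edge
    }

    print(run_turing_machine(increment, "1011"))  # prints 1100

The machine has no idea it is doing arithmetic; the “addition” exists only in the rule table we wrote for it.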

Today’s machines seem to have gained an extraordinary level of “intelligence,” but the truth is they’re still just following orders, processing lists of instructions—longer lists, and a lot more of them, running a thousand times faster than thirty years ago. It’s still the same essential process, diddling ones and zeros.

Alan Turing was no doubt aware of the speed limitations imposed by computers that used electrical relays. Had he not died at the age of 41 in 1954, he most certainly would have lived long enough to marvel at the power of an Apple ][ or a TRS-80. Alan Turing should have had a chance to see the technology that he helped create.

I’m sure he would have laughed to have seen “ELIZA,” a primitive computer program that mimicked human interaction. ELIZA was a simple set of string-processing rules. The program picked out key words from sentences and responded from a finite set of templates: “Tell me more about <keyword>.” “How do you feel about <keyword>?” And if it couldn’t find a keyword: “Why do you say that?” Or simply, “I see.”
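
For the curious, the whole trick fits in a few lines of Python. This is a toy reconstruction of the scheme described above, not Weizenbaum’s original script; the keyword list is an invented stand-in, while the response templates are the ones quoted in the text.

    # A toy ELIZA: scan the input for a keyword and echo it back inside a
    # canned template; otherwise fall back to a stock non-answer.
    import random

    KEYWORDS = ["mother", "father", "computer", "dream"]  # invented sample list
    TEMPLATES = ["Tell me more about {}.", "How do you feel about {}?"]
    FALLBACKS = ["Why do you say that?", "I see."]

    def eliza_reply(sentence):
        """Return a canned response built from the first keyword found."""
        for word in sentence.lower().split():
            word = word.strip(".,!?")
            if word in KEYWORDS:
                return random.choice(TEMPLATES).format(word)
        return random.choice(FALLBACKS)

    print(eliza_reply("I had a strange dream last night."))
    # e.g. "Tell me more about dream." (or the other template, at random)

That is the entire “intelligence”: a keyword match and a fill-in-the-blank sentence.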

Interacting with ELIZA is enough like a real conversation to be startling, at least until you begin to sense the underlying algorithms. But what if an ELIZA-like program were capable of very sophisticated conversation, rationally dissecting ideas, introducing new thoughts, and making connections that are outside the specifics of what you’ve typed? At what point do the underlying algorithms disappear so completely that you feel you’re conversing with a real person?

A sample conversation with ELIZA

Alan Turing postulated that when you couldn’t tell the difference between a real person and a machine, the machine was thinking. We call that the Turing test, and while it’s not the last word on machine intelligence, it continues to be a good place to start the conversation.

No software yet written has passed the Turing test—and based on the evidence, quite a few people posting their thoughts on the Internet probably couldn’t either. (But that’s a different rant. The Internet has already disproved that old saying that a million monkeys typing at a million keyboards for a million years would reproduce the works of Shakespeare. LOLcats, maybe. But Shakespeare, no.)

Many people believe that the development of a genuine intelligence engine is very likely the next step in the evolution of the mind. Not necessarily the human mind, but the idea of mind itself.

Myself, I suspect that there’s a big clue to be found in the Turing test, in the idea of conversation, because Alan Turing seems to have hit on an existential truth—that our thinking exists in our conversations. Even more profound, we define who we are in our conversations about ourselves. (Answer this question: “Who are you?” Your answer is your definition of yourself, your conversation about your place in the world.)

Philosopher John Searle’s Chinese Room argument

Some contemporary philosophers assert that language is where the mind occurs—language gives us the ability to conceptualize elements of our environment and manipulate the concepts from a specific perspective.

So, by that definition, intelligence is not simply language-processing ability—language itself creates a specific kind of intelligence, allowing the mind to process possibilities without needing actual physical objects to manipulate. Language also allows us to store, transmit, and share intelligence.

Yes, our computers are capable of storing, transmitting, and sharing large amounts of raw information. Our computers can manipulate that information by whatever rules we give them—but the gap between that and actual intelligence based on conceptualizing and understanding remains the difficult and elusive part of the problem, one of the most challenging tasks in computer science.

There's still a lot of work to be done. We may still be one or a dozen breakthroughs away from true machine intelligence. Or to put it another way … the road to HAL is paved with good inventions.

David Gerrold is a Hugo and Nebula award-winning author. He has written more than 50 books, including “The Man Who Folded Himself” and “When HARLIE Was One,” as well as hundreds of short stories and articles. His autobiographical story “The Martian Child” was the basis of the 2007 movie starring John Cusack and Amanda Peet. He has also written for television, including episodes of Star Trek, Babylon 5, Twilight Zone, and Land of the Lost. He is best known for creating tribbles, sleestaks, and Chtorrans. In his spare time, he redesigns his website, www.gerrold.com.
