Future Tense: The Road to HAL

By David Gerrold

The Turing Test says that if you can’t tell if you’re exchanging texts with a machine or a human being, then the machine has achieved cognitive ability—it’s thinking.

But by that definition, and judging from the evidence of the comment sections of various websites, more than half the people posting online are not thinking. (And that may be a generous statistic. You can Google Sturgeon’s Law for a less optimistic assessment.) Too many people are just running tapes: canned responses. Automatic reflexes are simple mechanical operations. Press a button, run a program. There’s no thinking involved, just processing.

Thinking is reasoning ability. We see it in dogs, dolphins, chimpanzees, children, and even the occasional congressman, but that kind of reasoning occurs at a primal level; it’s simple and direct. The higher functions of what we call rationality and sentience demonstrate themselves in profoundly different ways, recognizable but not easily definable.

Intelligence is generally able to recognize intelligence in action, and that may be one of the defining qualities of intelligence. Not every intelligent being can solve a Rubik’s Cube or prove Fermat’s Last Theorem, but we can still recognize the intelligence at work in those solutions. The next step, actually designing and creating intelligence, requires something else; call it meta-intelligence. We get to step back and think about thinking. We get to deconstruct thinking so we have a clear idea of what we want to build.

Notice that I’m specifically avoiding the term “artificial intelligence.” I don’t like the term. It’s not just inaccurate; it implies a distinction between the authentic and the simulated. It implies that there is such a thing as real intelligence and suggests that synthesized thought is not really thought at all. In which case, the whole search for it is a dead end, because what we’re really searching for is the real thing: a rationale for rationality, an understanding of the nature of understanding. This particular quest is a fundamental part of the ultimate existential question, “What does it mean to be a human being?”

We know this much: human beings are tool users. (So are chimpanzees, so let’s not get arrogant.) Language is only one of the tools we use, but it is one of the most useful and powerful of all tools because every idea, every concept, everything we design and build begins as a conversation, as an exercise in language. So we can say that language is the primary way we understand things. It is the primary way that we interact with and manipulate and control anything that will listen—especially other people.

From the moment we’re born, as soon as we become aware of the world around us, as fast as we recognize the difference between me and not-me, we start learning how to manipulate and control the part that is not-me. We learn to hold things, examine them, smell them, taste them, rattle them, and throw them on the floor. Very quickly, we learn that if we cry, someone will feed us or clean us or just keep us company. We learn to manipulate on the most essential levels. And just as quickly as we learn to recognize words, we learn that words can be used as very powerful tools. In fact, we assign enormous power to words, eventually believing that the power is in the words, not in how we perceive and use them. We call that relationship with language magical thinking.

For the first two decades of our lives, we develop our skills of manipulation and increase the scale of the tools we use to manipulate: fingers, hammers, game controllers, keyboards, cell phones, steering wheels, and of course, words—polite words, vulgar words, big words, sharp words, nasty words, affectionate words—whatever works. And when words fail us, too many of us resort to other “tools.” Like guns and tanks and bombs. A weapon is just a specific-purpose tool. And the purpose of any tool is to expand our ability to manipulate our environment, so a great deal of what we call communication isn’t communication at all. It’s a polite way of saying, “I’m trying to control you.”

I suggest that intelligence starts with self-awareness—enough self-awareness that survival becomes a critical part of its thinking. Survival is about manipulating and controlling the environment so as to guarantee continuation of self. The scale of a being’s ability to consider the impact and consequences of its behavior on itself and its environment is a way to determine its sentience. How big is its thinking?

Mostly, when we talk about intelligent machines, we’re talking about machines with a DWIM (Do What I Mean) function. We’re talking about machines that will understand what we’re saying and then find or create a way to answer the question or produce that result. Such a machine will do more than just understand language; it will understand intention.
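
To make the distinction concrete, here’s a deliberately crude sketch in Python. It’s purely illustrative (the commands and keyword lists are invented for this example, not drawn from any real system): a literal interpreter executes only exact, pre-registered commands, while a DWIM-style resolver tries to guess the intention behind loose phrasing.

```python
# Toy illustration of "do what I say" vs. "do what I mean."
# All names and mappings here are hypothetical, invented for this sketch.

# A literal interpreter: only exact, pre-registered commands work.
COMMANDS = {"delete file": "rm", "list files": "ls"}

def literal(request: str) -> str:
    """Do what I say: fail unless the request matches a command exactly."""
    return COMMANDS.get(request, "error: unknown command")

def dwim(request: str) -> str:
    """Do what I mean: guess the intent behind loose, informal phrasing."""
    words = set(request.lower().split())
    if words & {"delete", "remove", "erase", "trash"}:
        return "rm"   # inferred intent: the user wants something gone
    if words & {"list", "show", "see"}:
        return "ls"   # inferred intent: the user wants to look around
    return "error: intention unclear, please rephrase"

print(literal("can you remove this file"))  # error: unknown command
print(dwim("can you remove this file"))     # rm
```

Real intention inference is enormously harder than keyword matching, which is exactly the point: understanding intention means modeling the speaker, not just parsing the string.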

This may very well be the essential core of sentience—the awareness that occurs when intelligence looks outside of itself and discovers other beings. Call it empathy. Call it the ability to recognize that another entity is experiencing, feeling, and thinking. Whatever it is, recognizing the aliveness of another is critical to recognizing intention and therefore critical to sentience. And yes, there are human beings who are incapable of this level of being; the technical term for them is sociopath. And we’ve seen way too much of that in human history. That’s why the search for rationality is so critical. As a species, we could be at the threshold of a transformative moment in sentience.

But an intelligence engine will not mirror the consciousness of human beings. It won’t have the same experiences, so it won’t think like a human—and that will be the most exciting part of the adventure, discovering the differences as well as the similarities. That will lead us to the real understanding of sentience.

Thinking machines—intelligence engines—when we finally start building them, are not going to be just a smarter kind of tool. They will be cognitive entities, fully aware of themselves and very likely demonstrating the same curiosity about the universe that humans do. They will likely also want to manipulate their environment so as to guarantee their survival.

Just as we humans learn quickly how to manipulate each other, I expect that any advanced intelligence we create will also develop the same skills. Skynet (or HAL) isn’t going to kill us. It’s going to find ways to use us.

Oh, hell, even before Skynet exists, it’s already put us to work inventing it...

David Gerrold is a Hugo and Nebula award-winning author. He has written more than 50 books, including "The Man Who Folded Himself" and "When HARLIE Was One," as well as hundreds of short stories and articles. His autobiographical story "The Martian Child" was the basis of the 2007 movie starring John Cusack and Amanda Peet. He has also written for television, including episodes of Star Trek, Babylon 5, The Twilight Zone, and Land of the Lost. He is best known for creating tribbles, sleestaks, and Chtorrans. In his spare time, he redesigns his website, www.gerrold.com.
