Future Tense: Thinking About Thinking


Last year, IBM announced that it had built a computer that exceeds the neural capacity of the cortex of a cat.

My first thought on hearing this news was that the world does not need a computer that is snotty, stubborn, and coughs up hairballs on the couch.  (I already have a computer like that, including the hairballs—one of these days, I just gotta clean the fan.)  But fortunately, that was not IBM’s goal.

That same press release went on to say that IBM eventually wants to build a computer that simulates and emulates the abilities of a human brain for sensation, perception, action, interaction and cognition.

And once they accomplish that, why stop there?  If you can build a machine that matches the cortical ability of a human, why not keep going and build a machine that exceeds that by ten times, or a hundred, or as far as you can go before the limitations of the physical universe kick in?

Somewhere along that journey, such a construction may even achieve sentience.  Whatever form that sentience takes, some of it will be familiar and most of it will be astonishing.  The first time I tackled this subject (When HARLIE Was One¹, Ballantine Books, 1972), I assumed that when we built such a machine, it would be a self-programming problem-solving device and that our job would be to teach it.

I assumed then that it would analyze everything we taught it to see if what we were presenting was factual, logical, rational, and accurate to the behavior of the physical universe it could observe.  Such a machine would possibly discard most of human philosophy and all of human theology as lacking any referents in the observable world.  I expect that it would develop a very secular and pragmatic world-view.  Or maybe not.  I expect to live long enough to be surprised, whatever happens.

I expect sentience in computers will be assembled from a variety of necessary cognitive functions.

First, a sentient entity needs the ability to recognize patterns, not only within data, but also in the stimuli it perceives.  Today’s computers are already pretty good at many kinds of pattern recognition.  Bar-code scanners are ubiquitous.  OmniPage Pro can scan and recognize text.  Dragon NaturallySpeaking brings speech recognition to the home computer.  Picasa 3.5 does facial recognition.  Even more sophisticated algorithms have been applied to industrial, forensic, and financial uses.  Computer technology will continue to advance in this area, no question.

But these kinds of pattern recognition are built into the software.  The software is designed to analyze and recognize that specific set of patterns.  A genuinely cognitive entity has to be able to recognize patterns within information even when it hasn’t been trained to do so.  It will have to recognize recurring features and analogous conditions.

Computers solve problems differently than human beings.  Think about a jigsaw puzzle.  We can easily design software that takes a brute-force approach and tests every piece of the puzzle against every other piece.  And with enough processing power, it will sort through a billion puzzle pieces much faster than any human can do it.

But that’s not how a human being approaches the problem.  A human being looks for edge pieces, looks for pieces with matching colors, looks for picture clues on the pieces.  Okay, we could add that kind of pattern recognition to our software to speed things up, but it’s still brute force; it lacks the leap of insight.  (And this is where I have to ask, do human beings really have leaps of insight?  Or are we just not recognizing another one of the mechanisms of our own internal software at work?)
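To make the contrast concrete, here’s a toy sketch.  (The piece format and the tab-matching rule are my own invention for illustration, not anything anyone has actually built.)  Brute force tests every piece against every other piece; the “human” shortcut notices a feature first and indexes on it, so each piece finds its neighbor in one lookup.

```python
from itertools import combinations

# Toy model: each piece is (id, left_tab, right_tab); two pieces fit
# when one piece's right tab matches the other's left tab.
pieces = [(i, i - 1, i) for i in range(6)]  # a 6-piece strip, in order

def fits(a, b):
    # a's right tab matches b's left tab
    return a[2] == b[1]

# Brute force: test every piece against every other piece.
brute = [(a[0], b[0]) for a, b in combinations(pieces, 2)
         if fits(a, b) or fits(b, a)]

# Human-style shortcut: index pieces by their left tab once,
# then each piece looks up its single matching neighbor directly.
by_left = {p[1]: p for p in pieces}
clever = [(p[0], by_left[p[2]][0]) for p in pieces if p[2] in by_left]

print(sorted(brute) == sorted(clever))  # prints True: same matches, far fewer tests
```

Both approaches find the same five joins, but the brute-force version makes fifteen pairwise tests while the indexed version makes six lookups.  The gap only widens as the pile of pieces grows.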

Real thinking—real pattern recognition—is about recognizing that all the different objects are actually pieces of a puzzle.  This requires the ability to synthesize concepts.  This requires creativity.

I think creativity is one of those words—like talent—that confuses the issue more than it illuminates it.

My experience of creativity is that you start with something and then see how much you can add to it.  How many different ways can you mess it up, diddle with it, tweak it?  How can you turn it inside out and upside-down?  How can you distort it beyond recognition?  What are you still curious about?  What other questions can you still ask?  What jumps out at you?  What’s hiding from you?  What Photoshop filters and plug-ins and effects can you add?  What happens if you sample this track, loop it, slow it down, invert it, and change the beat and the key and add this other piece to it?

The first steps toward that kind of creativity in software have already been demonstrated.  It’s called ‘evolutionary programming.’  The software tests multiple random algorithms, compares their results, selects out those that are useless, and combines bits and pieces of the algorithms that approach the desired result, then starts over.  Mutations can be added along the way.  The software continues to breed new generations of algorithms until the desired result is achieved or even surpassed.
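The loop I’ve just described can be sketched in a few lines.  (The target, the fitness measure, and all the parameters below are arbitrary choices of mine for the sake of illustration, not any particular research system: the “desired result” is simply a string of twenty ones.)

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

TARGET = [1] * 20  # the "desired result"

def fitness(genome):
    # How close is this candidate to the desired result?
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def crossover(a, b):
    # Combine bits and pieces of two surviving candidates.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Mutations added along the way: occasionally flip a bit.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=200):
    # Start with random candidates.
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break  # desired result achieved
        # Select out the useless half; breed the rest into a new generation.
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

A random candidate scores about ten out of twenty; after a couple hundred generations of selection, crossover, and mutation, the population closes in on the target.  The program never “understands” the goal; it just keeps what works.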

Evolutionary software has designed some bizarre-looking but extremely efficient radio antennas.  It has also generated some extremely powerful, but otherwise incomprehensible sorting algorithms.  (Human beings like things to be neat and orderly.  Evolution doesn’t care.)

This isn’t creativity; it’s focused synthesis.  “How can I change this?”  It may be that this is a key model for how human beings solve problems.  We test possibilities.  We try things out.  We fit pieces together.  We discard the parts of the puzzle that don’t fit.  We build on the parts that do.  And like the machine, we go through the steps faster than we notice what we’re doing.  What do we get when we apply evolution for a specific result?  Larger, sweeter strawberries and corn; more playful dogs; faster horses; fatter cows.

This has implications that extend far beyond antennas and sorting algorithms, or strawberries and horses.  What happens when we apply evolutionary software to reinventing evolutionary software?  What happens when we think about thinking for the purpose of reinventing thinking?  What do we become?  Or even more important, what do our machines become?

Synthesis is adaptive, self-modifying, evolutionary—and right now, today, the key piece that’s missing is self-awareness.  Add that and it becomes transformational.  What result am I producing?  What am I doing that produces that result?  What result would I rather have?  Who do I have to become to produce that result?  How do I reinvent myself?  That particular set of questions represents one of the highest levels of sentience.  It’s the consideration of “Who am I?” and “Who do I want to be?”

A zen-master of my acquaintance says there’s only one answer to that question: “Who’s asking?”  Because only a sentient being worries about such a question.

So when IBM finally turns on its machine with the ultra-human cortex, if it asks, “Who am I?” or “Who are you?” we may finally have someone or something—a partner in sentience—with whom to share our existential dilemma.

¹ That same novel also introduced the concept of the computer virus to the world.  I am profoundly sorry.


David Gerrold is a Hugo and Nebula award-winning author. He has written more than 50 books, including "The Man Who Folded Himself" and "When HARLIE Was One," as well as hundreds of short stories and articles. His autobiographical story "The Martian Child" was the basis of the 2007 movie starring John Cusack and Amanda Peet. He has also written for television, including episodes of Star Trek, Babylon 5, Twilight Zone, and Land Of The Lost. He is best known for creating tribbles, sleestaks, and Chtorrans. In his spare time, he redesigns his website, www.gerrold.com.
