It only took about 65 years, but a computer program posing as a 13-year-old boy became the first to pass the Turing Test, which was designed to gauge a machine's ability to exhibit intelligent behavior that's equal to or indistinguishable from that of a real human. A chatbot called Eugene Goostman accomplished the feat during the Turing Test 2014 event held at the Royal Society in London over the weekend.
Google has purchased DeepMind and confirmed the deal to the technology website Re/code, though it declined to reveal how much it paid. According to the report, an initial offer of $400 million was made for the London-based company.
Intel is placing its bets on more than just the company's top-notch fabrication facilities; it apparently has a stake in creating future generations of robot overlords as well. Less than a month ago, Intel unveiled a new research project designed to make technology smart enough to learn its user's personal quirks and adapt accordingly; then just last week, Intel researchers published a proposal for a new neuromorphic chip design -- hardware that mimics the human brain.
True artificial intelligence (AI) is one of the many Holy Grails of computing, and on a much smaller scale, AI is what separates potentially awesome games from really crappy ones. John McCarthy, a math geek born in Boston, coined the term "Artificial Intelligence" at a conference at Dartmouth College way back in 1956 and was a major pioneer in the field of AI research. He passed away this week at the age of 84, Stanford Engineering said in a Twitter post.
The Terminator movies are entertaining and all, but they forget to point out one important fact in the midst of all the cybernetic shotgunning: if Skynet is ever going to actually become self-aware, it'll probably require a drastic change in the way computers process information. Hey, James Cameron, don't sweat it. IBM has your back. The company just announced it's created a series of prototype "chips designed to emulate the brain's abilities for perception, action and cognition." We suspect they'll also be the key to the eventual robot revolution.
The Turing Test says that if you can’t tell if you’re exchanging texts with a machine or a human being, then the machine has achieved cognitive ability—it’s thinking.
But by that definition, and judging from the evidence of the comment sections of various websites, more than half the people posting online are not thinking. (And that may be a generous estimate. You can Google Sturgeon's Law for a less optimistic assessment.) Too many people are just running tapes: canned responses. Automatic reflexes are simple mechanical operations. Press a button, run a program. There's no thinking involved, just processing.
Thinking is reasoning ability. We see it in dogs, dolphins, chimpanzees, children, and even the occasional congressman, but that reasoning occurs at a primal level: it's simple and direct. The higher functions of what we call rationality and sentience manifest themselves in profoundly different ways, recognizable but not easily definable.
Intelligence is generally able to recognize intelligence in action, and that may be one of the defining qualities of intelligence. Not every intelligent being can solve a Rubik's cube or prove Fermat's last theorem, but we can still recognize the intelligence at work in those solutions. The next step, actually designing and creating intelligence, requires something else; call it meta-intelligence. We get to step back and think about thinking. We get to deconstruct thinking so we have a clear idea of what we want to build.
The term "artificial intelligence," however, is inaccurate.
Alan Turing should have been knighted. He should have been Sir Alan Turing. Instead he was prosecuted for being homosexual and committed suicide in despair. The British government conveniently forgot that Turing was the genius behind the Allies’ code-breaking efforts during WWII. The “Ultra Secret” is generally credited as the single most important advantage the Allied Forces had against the Axis powers, to the point that Eisenhower was sometimes reading Hitler’s mail even before Hitler.
Fifty-five years after Turing’s death, in response to an Internet campaign, the British government finally got around to acknowledging Alan Turing’s contributions and apologizing for its failure to honor him appropriately.
Sorry, guys, but an apology does not erase an egregious wrong.
Having apparently run out of actual people to talk to, New Scientist has posted an interview with Elbot, the chatbot that won this year's Loebner Prize for artificial intelligence. Modeled on the Turing Test, the prize is awarded to whichever bot fools the most of its 12 judges into thinking it's a real person.
Elbot successfully convinced three judges that it was not a chatbot, but rather a human being pretending to be a robot.
Confused? Check out this excerpt from the interview:

New Scientist: You and your creator won $3,000 in prize money. How do you plan to use the money?
Elbot: As I always say, it’s hard to keep a 600-pound robot down, unless you use gravity.
With natural, sensible dialogue like that, I don't know how any of the judges could have failed to be fooled. On that note, if anyone is in need of a quick buck, we suggest entering a chatbot next year that pretends to be a man banging his head against the keyboard.
Anyone who wants to can chat with Elbot; give it a try and tell us what you think after the break.
Intel just passed its 40th anniversary, and the nostalgic occasion had CTO Justin Rattner musing about the future of technology. He foresees new breakthroughs in medical technology, specifically nanoscale chips capable of moving through our bodies. He also predicts more realistic robotic intelligence and a blurring of the line between humans and machines. Chuckle if you will, but in his 35 years at Intel, Rattner has witnessed some pretty amazing advances in technology, many of which Intel was at the forefront of. When Intel's first microprocessor debuted in 1971, it contained about 2,300 transistors; that count has since ballooned to over 820 million, and the personal computer has become ubiquitous in our everyday lives.

If Moore's law holds true, and we have no reason to think otherwise, the future may indeed be a very different reality from what we understand today. According to Rattner, "In the next 40 years, computer chips will extend beyond our computers and phones, as people want to become more entrenched in virtual worlds and computers learn to react to our motions and thoughts."
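As a rough sanity check on those transistor figures, here's a minimal Python sketch that works out how many doublings separate the 2,300-transistor chip of 1971 from the 820-million-transistor parts mentioned above; the 2008 endpoint is an assumption pegged to Intel's 40th anniversary, not a figure from Rattner.

import math

# Figures cited above: Intel's first microprocessor (1971) held roughly
# 2,300 transistors; the post puts current chips at over 820 million.
transistors_1971 = 2300
transistors_2008 = 820_000_000  # endpoint year assumed: Intel's 40th anniversary

# How many doublings does that growth represent, and how often did they occur?
doublings = math.log2(transistors_2008 / transistors_1971)
years_per_doubling = (2008 - 1971) / doublings

print(f"doublings: {doublings:.1f}")                    # about 18.4
print(f"years per doubling: {years_per_doubling:.1f}")  # about 2.0

That works out to roughly one doubling every two years, which is about what Moore's law predicts, so Rattner's numbers hang together.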
So what do you think the future holds? Hit the jump and let us know!