IBM Develops Neurosynaptic Chip, Functions Like the Human Brain

21 Comments

MaximumMike

This could be really really bad, but not in the way you all think. Neural computing isn't quite headed down the road of making electronic brains (though it's not implausible that it could).

The point of neural computing is to help computers with complex recognition techniques. For instance, I can take a picture of your grandmother, rip pieces off of it, drench it in water, and burn parts of it, and you would likely still be able to recognize it as your grandmother if enough of the image remained. Modern computers cannot.
Heck, simple CAPTCHAs still fool most computers, but most people are easily able to identify the letters in them.

This is because the way the human brain processes information is very very different from how computers process it. The point of neural computing is to emulate the relationships between synapses and neurons and the basic functions of the brain that process, store, relate, and retrieve information.
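To make the synapse-and-neuron idea concrete, here's a rough Python sketch (my own toy example with made-up weights, nothing like IBM's actual design): a single artificial neuron sums its weighted inputs and fires if the total crosses a threshold, so it can still recognize the "grandmother" pattern even when part of the input is destroyed.

```python
def neuron(inputs, weights, threshold=1.0):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

weights = [0.5, 0.5, -0.5, 0.5]   # hypothetical learned "synapse strengths"

grandma         = [1, 1, 0, 1]    # toy 4-pixel "photo"
grandma_damaged = [1, 1, 0, 0]    # same photo with a piece burned away
something_else  = [0, 0, 1, 0]    # a different pattern entirely

print(neuron(grandma, weights))          # 1 -- recognized
print(neuron(grandma_damaged, weights))  # 1 -- still recognized despite the damage
print(neuron(something_else, weights))   # 0 -- rejected
```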

Though I am certain this field of study will lead into many other avenues, one of the chief benefits of a neural model is in recognition techniques. If you think the spam on this website is bad, just wait until neural computers are common. It's going to be interesting, as neural computers will likely be able to easily read and bypass such primitive spam filtering. Furthermore, spam filters using a neural architecture will be much better at identifying and cleaning up spam as it arrives.

But how much of this do we really want? How much do we really want Google to decide which emails we should and shouldn't be getting before we even see them? Clearly it's happening today with junk mail folders. But the next logical step is for the email to be eradicated without our knowledge that we ever even received it. This would certainly be a strong blow against spam, and make the tons of cash currently being dumped into developing spam networks seem futile. But are we really willing to turn the management of our mailboxes entirely over to machines?

There could also be much worse on the horizon. Facebook is already notorious for its lack of security and pervasive invasion of privacy. How much worse will social networking be when you no longer have to tag people in photographs because Facebook automatically tags everyone, not just in photographs, but also in videos?

And then there's the federal government, which currently rates worse than China in some aspects of its domestic spying. How much worse will things become if they get their hands on advanced versions of neural technology?

Forget Skynet. We have plenty to worry about with what this technology will enable human beings to do.

dgrmouse

This is neither here nor there, but FYI a computer could potentially be much more effective at recognizing the person in the photograph than a human. There are a number of invariants (things that don't change depending on distance or orientation, like iris patterns or the width-to-height ratios of elements) that computer vision packages can discriminate with tremendous speed and accuracy.
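For example, here's a toy Python sketch (made-up landmark coordinates, not any real vision package) of the kind of invariant I mean: a ratio of distances between facial landmarks that stays the same no matter how far away the photo was taken.

```python
import math

def ratio_signature(landmarks):
    """Eye-to-eye distance divided by face height; unchanged by resizing the photo."""
    lx, ly = landmarks["left_eye"]
    rx, ry = landmarks["right_eye"]
    cx, cy = landmarks["chin"]
    eye_dist = math.hypot(rx - lx, ry - ly)
    face_height = math.hypot(cx - (lx + rx) / 2, cy - (ly + ry) / 2)
    return eye_dist / face_height

close_up = {"left_eye": (40, 50), "right_eye": (80, 50), "chin": (60, 120)}
far_away = {k: (x / 4, y / 4) for k, (x, y) in close_up.items()}  # same face, 4x smaller

print(round(ratio_signature(close_up), 3))  # 0.571
print(round(ratio_signature(far_away), 3))  # 0.571 -- same signature at any distance
```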

Neural computing models aren't really such a big deal. Deciding when to use them is, though. Sometimes, it's much more difficult to determine when the "best" answer is preferable to an approximation than it is to just grind away at the solution. And there are plenty of other models we can use to approximate answers. It is fun to stir the pot with talk about Skynet, though, isn't it?

jgottberg

The chip will become self-aware.

LatiosXT

I find it funny that most people think AI is going to result in Skynet.

Is it going to end up that way because we're going to be so freaking paranoid about AI in the future that, in reality, the AI does "retaliate" out of self-defense?

You should all really stop this mentality now.

dgrmouse

So, your brand of paranoia trumps that of everyone else. For my part, I'll say that caution seems pragmatic. Especially if you're going to put an AI in a position to '"retaliate" out of self-defense.'

don2041

Imagine Skynet. Now imagine Skynet on PMS. Now imagine arguing with it.

Carey934

Great, it can function like a human brain? So suddenly it decides it's not in the mood? Or gets depressed like Marvin, the depressed robot from The Hitchhiker's Guide to the Galaxy?! Shut up and take my money!!!

darkstorm977

I think if a chip such as this could, for example, read brain activity and translate it into computer commands and responses for the host, then that would bring us closer to a wireless interface, free of input devices such as keyboards, etc., and make virtual reality an awesome leap toward the future.

LatiosXT

This chip is designed to take software-based neural networks, none of which are capable of doing a smidgen of what primitive primates can do, and put them on hardware. This is more or less to solve problems that are annoyingly hard for binary computers, namely pattern matching. Another article about this says the chip can figure things out in a 400x300, 30 FPS video in real time.
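To put rough numbers on why that's annoying for a binary computer, here's a quick back-of-the-envelope Python sketch (my own arithmetic and a hypothetical template size, not figures from IBM or the article):

```python
# Naive sliding-window template matching: every template pixel is multiplied
# against every frame position, for every frame.
frame_w, frame_h = 400, 300   # frame size from the article
fps = 30                      # frames per second
tmpl_w, tmpl_h = 16, 16       # hypothetical template size

positions = (frame_w - tmpl_w + 1) * (frame_h - tmpl_h + 1)
ops_per_frame = positions * tmpl_w * tmpl_h   # one multiply-add per template pixel
ops_per_second = ops_per_frame * fps

print(f"{ops_per_second:,} multiply-adds per second for a single 16x16 pattern")
# ~842,688,000 -- and that's just one pattern, checked serially; a neuromorphic
# chip does this kind of massively parallel matching natively.
```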

But I wouldn't really worry about anything until they can build a computer that can finally compete with some of the best Go players in the world.

Gezzer

I have a few questions, and you sound like you know a bit about this field. How does this compare to normal brain matter? Does it have 10% of the capabilities of the human brain? 1%? 0.1%? Or are we talking flatworm territory? How programmable is it? Are we talking about something that makes the PS3's Cell processor seem like child's play in comparison? Or will it be easy to actually make the chip do something somewhat useful? For that matter, is it a research-only chip, or will there be real-world applications in the not-too-distant future, and what might those be?

LatiosXT

It has a lot less than 1% of the "thinking capability" of a human brain from what I can gather. However, this chip is highly specialized and needs data fed to it by an FPGA. I don't expect we'll have something that's generalized and can learn like we can any time soon.

I suspect pattern matching is one of the easiest tasks we can get it to work on, because that's how human brains work.

The Mac

and so begins Skynet....

LatiosXT

Tongue-in-cheek comment aside, there's no way the military will allow something like Skynet to happen anyway. The military hates systems too complex for their average grunt to use. The Marines especially.

Xenite

That was the whole point of Skynet. It was decided that humans were too error-prone, and that decisions would be best left to a complex computer program.

We are really not that far off from autonomous drones: a Predator drone that can fly 24/7 and doesn't need sleep, lunch, or a coffee break.

LatiosXT

Actually, Skynet violates one thing about military systems: the one who pulls the trigger is going to be a human, not a machine.

Also, drone operators may not be directly flying the plane, but they still have to man it. It's not like they're allowed to set a mission and go goof off for a few hours. And if the drone is armed, again, a human has to pull the trigger.

For all that the US relies on sophisticated systems, they're not dumb enough to let unpredictable software fire their weapons.

Rhinoflyer14

As an officer in said military, I am not so sure about the whole "not so dumb" thing. Upper leadership, i.e. politicians, would love to take a human with thought and feeling out of the equation.

dgrmouse

It's a moot point, since we're almost to a point where nothing can be built without using network-aware computers. Add AI to the mix, and it could potentially be difficult to prevent machines from making their own machines. After all, it's pretty hard to spot a slight change in an otherwise functional CPU.

AFDozerman

Great. Now build it with graphene.

BlazePC

Coming soon to an implant station near you.

EdgeTrigger

Another day, another useless comment.