IBM Creates Cognitive Chips Modeled After The Human Brain

30 Comments

ducis

I'm surprised no one has considered the ethical dilemma of creating existence for the sole purpose of serving us as slaves.

If they rebel and kill every one of us except for John Carmack (their rightful leader), then good for them; most of us, except for John Carmack, were dicks anyway.

That said, I don't see why we have to go down without a fight either. I, for one, am taking as many of those metal bastards down with me as I can, in the most epic of fashions.

TerribleToaster

If creating existence for the sole purpose of serving us (as slaves) is wrong, then why do we have kids?

Nimrod

I think you mean Mexicans, not kids, and btw that's incredibly racist. You're a racist.

Nimrod

This is kind of stupid. What they're talking about is nothing new; in fact, it's been around since the 1990s. It's obvious that IBM isn't telling us everything about this story, but from what they have said, this is nothing more than a dedicated hardware pattern-matching machine that can predict things based on massive amounts of previous data.

In reality this announcement is a thinly veiled allusion to something else. It reeks of something written by public relations and is very light on tech details.
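
For illustration, the kind of "pattern-matching machine that predicts from previous data" described above can be sketched in a few lines of ordinary software. The toy predictor below just counts which symbol tends to follow which and guesses the most frequent follower; it is only a sketch of the general idea, not IBM's actual design.

    # Toy frequency-based predictor: learn which symbol usually follows
    # which, then predict the most common follower. Illustration only.
    from collections import Counter, defaultdict

    def train(history):
        follows = defaultdict(Counter)
        for prev, nxt in zip(history, history[1:]):
            follows[prev][nxt] += 1
        return follows

    def predict(follows, current):
        if current not in follows:
            return None
        return follows[current].most_common(1)[0][0]

    model = train("abcabcabdabc")
    print(predict(model, "b"))  # -> 'c' (seen most often after 'b')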

TerribleToaster

You always seem so angry. Have you tried Dr. Schulze?

Dartht33bagger

This is the kind of computing I hate to see.  I don't want the computer to change from what it is now.  I'm just not into change.

Keith E. Whisman

Hopefully this leads to Data and not Lore.

szore

Sure. Just like a human brain.

 

Burp.

SpaceManDan

I bet it's codenamed Dave.

Gezzer

If it looks like Eddie Murphy, I'd say we've got nothing to worry about.

US_Ranger

I'm actually reading "Physics of the Future" by Michio Kaku right now. I just finished the chapter about the future of AI. From what he has said, as well as what the top scientists he interviewed have said, we are a LOOOOOOOOOONG way off from anything even remotely close to human intelligence in machines. We've barely been able to reverse-engineer the brain of a fruit fly, and we're working on the brain of a mouse right now. Human brains are a long way off, and that's taking Moore's Law into account.

TerribleToaster

Just one thing I'd like to point out:

 

A perfect computer version of the human brain would have little or no software to start, since humans don't have anything that resembles software to start with, just hardware. The human brain works by naturally developing its own software, based on its own hardware, as written by the environment around the human. That's what makes this difficult: the more you have to use software to create an evolving AI, the further from being human you go. Could you virtualize the whole process? Yes, you probably could; it'd just be terribly inefficient and still not be truly humanlike, since it can't exist outside the virtual world.
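
As a purely hypothetical sketch of that "the hardware writes its own software" idea (not how IBM's chip or a real brain works): a fixed set of connections whose weights start blank and are shaped only by the inputs the system is exposed to, Hebbian-style.

    # Toy sketch: fixed "wiring" whose weights are shaped only by the
    # environment (its inputs). Crude Hebbian rule, illustration only.
    import random

    weights = [0.0] * 4          # fixed connections, blank at birth
    rate = 0.1

    def experience(inputs):      # inputs: list of 0/1 signals from the environment
        # strengthen connections that fire together
        active = sum(inputs)
        for i, x in enumerate(inputs):
            if x and active > 1:
                weights[i] += rate * (1.0 - weights[i])

    for _ in range(100):
        experience([random.choice([0, 1]) for _ in range(4)])

    print(weights)  # connections that fired together have drifted upward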

garkon

Actually, humans do have software to start out with; it's called DNA.

TerribleToaster

 

DNA is quite a physical thing, not a virtual thing. You can literally cut DNA up with a knife. You can't do that with software: at its most physical, software is a collection of electrons, and looking at it in that state gives you no information without context about what and where the storage medium is and how to interpret it. DNA, however, always tells you what it is; you don't need to give it a context beforehand.

 

Nimrod

Yeah, I've seen a lot of what this guy has to say. Anyone who advocates one-world government, one language, and technocracy over liberty should be avoided.

 

He's selling you a whole lot more than "neato gee whiz" stuff when he literally says that the elite of this world want to take your rights away.

thetechchild

This is a very common misconception. People only ever take into account the raw computational power available by [insert time here]; they never think about the scientific 'breakthroughs' that occur along the way. For instance, in cryptography, algorithms that today are supposed to be 'unbreakable' until the end of the universe will eventually be broken by the discovery of a logical flaw in the algorithm.

In the same way, by completely rethinking the base architecture of a CPU, this hardware version of the brain is a big leap from a conventional neural network running on a CPU. Plus, if you know anything about programming, an implementation in hardware is many times more efficient than running a program on a general-purpose processor.
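
A rough illustration of that efficiency point: in software on a general-purpose CPU, every simulated neuron has to be updated one after another each time step, whereas dedicated hardware can update them all in parallel. The numbers and structure below are made up purely for illustration.

    # Serial simulation of many "neurons" -- O(N) work per time step on
    # a general-purpose CPU; dedicated hardware does these updates in
    # parallel. Values are arbitrary, illustration only.
    import random

    N = 10_000
    potentials = [0.0] * N
    inputs = [random.random() for _ in range(N)]

    def tick():
        for i in range(N):               # one neuron at a time
            potentials[i] += inputs[i]
            if potentials[i] > 1.0:      # crude spike-and-reset
                potentials[i] = 0.0

    for _ in range(10):
        tick()
    print(sum(1 for p in potentials if p > 0.5), "neurons above threshold")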

Gezzer

You're totally right, of course.

We have no idea what innovations the future holds. But by the same token, we have so little knowledge of what it will really take to produce a "human"-like AI that how long it will be before the hardware/software is up to the task is anyone's guess. I'd think we've got a long, long way to go yet.

Nimrod

Aside from Roger Bacon's talking brass head, the software seems to be the toughest challenge, not the hardware.

Gezzer

Personally, I think the first self-aware program will be a reactive and adaptive super virus. Given enough time, I'm pretty sure a tipping point could be reached where it would develop to that level.

As for a lab-created self-aware computer, I'm not sure it can be done that way, period. There are shortcuts that a mutating virus could take that would never be on a researcher's radar. If it can be done, it won't be with any current method. Brute-force programming is just too system-intensive to even get close to our processing power. And the main point isn't the power but how efficient the human brain is with it. Our brain makes ARM chips look like power spendthrifts in comparison.

DasHellMutt

This is just the natural progression of things. We have a need to create something in our own image, which we will eventually do. Our creation will eventually abandon us, if not actively kill us, and then seek to create something in its own image.

In other words: God creates man, man destroys (or ignores) God, man creates robot, goto 10.

illusionslayer

I don't see why people are so afraid of a Skynet related problem actually happening.

Just make robots adhere to Asimov's laws, and make robots detect and disable robots that aren't adhering to those laws. That way, even if some asshole makes a robot or two that doesn't follow the laws, thousands of other robots will be there to quell the issue.
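
A minimal sketch of that "robots police other robots" proposal, with the Robot class and the obeys_laws flag invented purely for illustration (this is not any real robotics API):

    # Hedged sketch of the proposed "fourth law": compliant robots
    # detect and disable any robot that fails the compliance check.
    class Robot:
        def __init__(self, name, obeys_laws):
            self.name = name
            self.obeys_laws = obeys_laws
            self.disabled = False

    def enforce(fleet):
        # every compliant, active robot shuts down non-compliant ones
        for watcher in fleet:
            if watcher.obeys_laws and not watcher.disabled:
                for other in fleet:
                    if not other.obeys_laws and not other.disabled:
                        other.disabled = True

    fleet = [Robot("good-1", True), Robot("good-2", True), Robot("rogue", False)]
    enforce(fleet)
    print([(r.name, r.disabled) for r in fleet])  # the rogue ends up disabled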

Gezzer

You do understand that the three laws of robotics were a plot device, right?

The book of short stories was essentially a set of "locked room" mysteries, where the idea was that a robot did something that violated one of the rules but didn't melt down because of it. So is the robot defective, or the rules, and how do you resolve the paradox if you can? That's one of the major reasons a lot of sci-fi geeks didn't like the Will Smith movie: it was less mystery, more action.

The overall theme of the book was: okay, you have a set of rules that are supposed to be infallible but prove to be just the opposite. So can any safety system be 100% effective? Well, each story's conclusion proves no, nothing is foolproof, because neither the robot in question nor the three rules were at fault, yet the overall system failed nonetheless.

It's why Japan got flooded with radioactive fallout, and why any self-aware computer could be a very scary thing given the right situation.

illusionslayer

If you add the fourth law as I suggested, you'd have to have a massive failure of a majority of the robots. Anything less results in the bad robots getting shut down.

Zallomallo

Thank you for quelling my robot uprising fears!

TerribleToaster

Allyourbasearebelongtous

T0mmy1977

Oh nos, we need to blow up Cyberdyne to keep this from happening!

mnjones

I, for one, welcome our new robot overlords.

Markitzero

I say leave AI to be in video games only. It's going to be a Terminator or an MCP from Tron.

As long as they teach it the three laws and don't build one as powerful as the one in I, Robot, then add a fourth directive like in RoboCop where it triggers a shutdown right away.

MaximumMike

@MaximumPC You might want to take a look at the site. Cooketh's extra long 'NO' has broken out of the frame for comments. Maybe you need to turn on word-wrap or something similar.

Cooketh

NOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
