Fast Forward: Error-Proof Processors

10 Comments

bautrey

To push back on your statement at the top: I believe Palem offers a different possibility for the future by purposely designing circuits to be erroneous, building them on probabilistic switches (PCMOS). By giving up a small amount of correctness, it is possible to save a great deal of energy and run much faster, and the lost correctness can be compensated for by algorithms, Palem says. This idea works because the error rate of a switch and the energy it consumes are linked by thermodynamic laws and the physics of how a switch operates.
I found this article in IEEEXtreme. If you have access to it, it's a good but complex read. Most college students should have access to it.

- K. Palem and A. Lingamneni, "What to do about the end of Moore's law, probably!" in Proc. 49th ACM/EDAC/IEEE Design Automation Conference (DAC), San Francisco, CA, 2012, pp. 924-929.

What a PCMOS is: http://en.wikipedia.org/wiki/PCMOS

(I only know this because I did a paper on different possibilities of future technologies relating to Moore's Law)
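Palem's tradeoff can be sketched with a quick Monte Carlo simulation. Everything below is illustrative: the voltage/error-rate pairs are made-up numbers, not measured PCMOS parameters, and the quadratic energy scaling is just the familiar CV² switching-energy rule of thumb.

```python
import random

def noisy_bit(bit, p_error, rng):
    """Return the bit, flipped with probability p_error (a probabilistic switch)."""
    return bit ^ (1 if rng.random() < p_error else 0)

def simulate(p_error, n_trials=100_000, seed=42):
    """Fraction of correct outputs for a single probabilistic switch."""
    rng = random.Random(seed)
    correct = sum(noisy_bit(1, p_error, rng) == 1 for _ in range(n_trials))
    return correct / n_trials

# Energy per switching event scales roughly with V^2; the error probability
# rises steeply as the supply voltage drops toward the thermal noise floor.
# These (voltage, error-rate) pairs are hypothetical, for illustration only.
for v, p in [(1.0, 1e-6), (0.7, 1e-3), (0.5, 0.05)]:
    energy = v ** 2  # relative energy, normalized to V = 1.0
    print(f"V={v:.1f}  relative energy={energy:.2f}  accuracy={simulate(p):.4f}")
```

The point of the sketch: halving the voltage cuts per-switch energy to a quarter while accuracy only falls a few percent, which is the regime where Palem argues error-tolerant algorithms can absorb the loss.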

Also relating to error, this article shows how even a small amount of error can be amplified into big erroneous results that affect everyday users.
http://en.wikipedia.org/wiki/Pentium_FDIV_bug
(I'm not trying to relate this to the previous article; I just thought it was interesting.)
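For the curious, the widely cited trigger case for the FDIV bug can be reproduced (correctly) on any modern machine. The operand pair below is the commonly quoted example, and the flawed-chip result is taken from public reports of the bug, not measured here.

```python
# Widely cited FDIV trigger case: the correct quotient vs. the value the
# flawed Pentium FPU reportedly returned. A relative error near 6e-5 sounds
# tiny, but it corrupts digits that spreadsheets actually display.
x, y = 4195835.0, 3145727.0
correct = x / y                  # ~1.3338204...
reported_flawed = 1.33373906822  # value widely reported for the buggy chips
print(correct)
print(abs(correct - reported_flawed) / correct)
```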


Hey.That_Dude

Bringing hardware into the realm of "Garbage In, Garbage Out."
I just hope ECC doesn't become the new cache for ICs.


vrmlbasic

Was the blurb on the main page for this article just meant to be a political jab?

I ask because "trickle down" in technology has always worked.


USraging

Less accurate calculations can lead to faster processing. There is a threshold to how many bits can be recreated.


jgottberg

I thought I read some time back that all that error correcting took away clock cycles from actual processing, which caused slower performance. That slower performance is one of the reasons SPARC didn't take off... But that was years ago... Perhaps higher core counts and faster speeds have eliminated that concern.


TheZomb

This type of error correction uses its own hardware, separate from the processing a normal processor does. Think of each ECC module as its own core that does only error correction on the specific path it's watching; then imagine a processor covered in these things to make sure data from every source is correct. There are some concerns when adding that much hardware between points, like propagation delay (each processor component adds delay), which accumulates, but as long as the total stays below the clock cycle there is no difference.
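The kind of checker described above can be illustrated in software. Real processors typically use wider SECDED codes in dedicated logic, but a minimal Hamming(7,4) sketch shows the core idea: redundant parity bits let the module locate and flip a single corrupted bit on its path without re-requesting the data.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits [d1, d2, d3, d4] into 7 bits with 3 parity bits."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    # Standard bit positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    """Return (corrected codeword, error_position); position 0 means no error."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3      # syndrome = 1-indexed error position
    if pos:
        c[pos - 1] ^= 1             # flip the single bad bit back
    return c, pos

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[4] ^= 1                   # simulate a soft error (bit flip) in transit
fixed, pos = hamming74_correct(corrupted)
print(pos, fixed == word)           # prints "5 True": locates and repairs bit 5
```

The few XOR layers in the checker are exactly the propagation-delay cost mentioned above: correction is essentially free as long as that combinational logic settles within one clock cycle.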


MaximumMike

What you can deduce from the article is that as transistors get smaller (and they are unthinkably small now), things like minor magnetic fluctuations and solar radiation can corrupt data more frequently. Your circuit design might be flawless, but these things can throw a monkey wrench into the mechanism. So, as we get smaller and smaller in the pursuit of speed, we will also have to slow down to make sure our computations are not only fast, but also correct.

At one time, SPARC processors ruled the mainframe world and x86 processors were unheard of in that world. When you're talking about big machines like that, reliability becomes more important. But what Intel showed the world was that faster chips that were a little less reliable would work just as well, with an increase in performance and at a lower price. Inevitably, this strategy buried Sun. It was embarrassing when desktop processors started becoming faster than SPARC processors. But for whatever reason, Sun failed to innovate competitively on the performance of their processors.

Honestly I haven't kept up with Sun since they lost their predominance in that market, other than the Oracle acquisition which I'm sure no one missed. I really hadn't heard anything about SPARC processors in years and had assumed that they had all but vanished. And from the sound of the article, that's pretty close to true.

But I would imagine that when things get small enough there will be a threshold beyond which modern architectures become too unreliable to market without increased error protection. Whoever holds the rights to all that SPARC technology when that threshold is reached will likely stand to make a substantial sum. Or, if Oracle (and I'm assuming they still own the technology) is willing to start endeavoring down that road early, you may find SPARC processors making a big comeback.


jgottberg

Same here, I haven't kept up on that in quite some time... I guess you could say I didn't want to waste my clock cycles on it :)

This new proc sounds awesome with 16 cores and 24MB of cache... I kind of think their time has passed now, though, except maybe in exceptionally specialized applications. Otherwise, if I had to build out a high-density server array for heavy computations, I would be inclined to do so with an x86 architecture running Windows HPC, so that off-the-shelf components could be used and a familiar interface could drive it. There are cost savings in doing that on a lot of levels.


MaximumMike

I agree.


MaximumMike

Would appreciate more articles like this one.
