ARM Denies Claims Of Working On 128-bit Chips

22 Comments

whr4usa

The 64-bit chips we use today are not even truly 64-bit yet, at least not for x86 or ARM. Itanium is dying, PPC is dying and never reached true 64-bit to the best of my knowledge, and though I think SPARC did, that one is pseudo-open-sourced and Oracle-encumbered.

Why the hell would fabless ARM be playing with 128-bit architectures when fab-owning Intel isn't even using true 64-bit in its most advanced chips, commercially available or not (Westmere-EX, Xeon Phi, many-core/serverNUC)?
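If "not truly 64-bit" means address width, that part is easy to check: today's x86-64 parts implement far fewer than 64 physical and virtual address bits. A rough probe of that (GCC/Clang on x86 only, purely illustrative):

```c
/* Rough sketch: ask an x86-64 CPU how many address bits it actually
 * implements (GCC/Clang only; assumes CPUID leaf 0x80000008 exists). */
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 0x80000008 not supported");
        return 1;
    }
    unsigned int phys_bits = eax & 0xFF;         /* physical address bits */
    unsigned int virt_bits = (eax >> 8) & 0xFF;  /* linear (virtual) address bits */
    printf("physical address bits: %u\n", phys_bits);  /* typically well under 64 */
    printf("virtual  address bits: %u\n", virt_bits);  /* commonly 48 */
    return 0;
}
```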

Hey.That_Dude

I think what you mean to say is that it allows 64b for certain functions with the ability to run 32b instructions otherwise. Seeing as most code is still 32b, that sounds like a smart idea from Intel. It is still a 64-bit chip; there is no argument to be made to the contrary (although if you want to play that game, I could say there are no 64b ARM chips either).
Honestly, it comes down to the market. The market is still transitioning over to 64b, which is why Itanium died. In a few more years we'll have 64b-only CPUs. Then a few years after that we'll have 128b procs... except they'll only support certain functions in 128b and otherwise run at 64b for compatibility reasons, and so on and so forth.
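You can see the "64-bit chip, mostly 32b code" situation from the compiler side: the same source built for each mode just sees different pointer sizes (a tiny sketch, assuming GCC-style -m32/-m64 flags):

```c
/* Same source, two data models: the hardware is 64-bit either way,
 * but a 32-bit build sees 32-bit pointers and a 4 GiB address space.
 *   cc -m64 ...  -> pointers are 8 bytes
 *   cc -m32 ...  -> pointers are 4 bytes (needs 32-bit libs installed) */
#include <stdio.h>

int main(void) {
    printf("sizeof(void *) = %zu bytes\n", sizeof(void *));
    printf("sizeof(long)   = %zu bytes\n", sizeof(long)); /* 8 on LP64, 4 on LLP64/32-bit */
    return 0;
}
```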

AFDozerman

What would be the point? An ARM-based VLIW/EPIC? That would be a terrible idea. One real possibility would be something like AMD's 128-bit FPU: they could deny that they're working on a 128-bit CPU yet still be working on one where PART of it is 128b.
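That is already how x86 works today: the SIMD/FP registers are at least 128 bits wide even though the CPU is "64-bit". A quick SSE2 sketch (x86-only, purely illustrative):

```c
/* The XMM registers used here are 128 bits wide, even on a "64-bit" CPU:
 * one instruction adds two 64-bit lanes at once. (SSE2, x86/x86-64 only.) */
#include <emmintrin.h>
#include <stdio.h>
#include <stdint.h>

int main(void) {
    __m128i a = _mm_set_epi64x(10, 20);     /* pack two 64-bit ints into one 128-bit register */
    __m128i b = _mm_set_epi64x(1, 2);
    __m128i sum = _mm_add_epi64(a, b);      /* single 128-bit-wide add */

    int64_t out[2];
    _mm_storeu_si128((__m128i *)out, sum);  /* out = {22, 11} (low lane first) */
    printf("%lld %lld\n", (long long)out[0], (long long)out[1]);
    return 0;
}
```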

vrmlbasic

When are they going to design, and their chip fabricators start to make, ARM chips based on the "wundermaterial" that is graphene? That stuff made a big splash several years ago, earned its discoverers a Nobel Prize in a category that might still be worth something (i.e. not Peace), and hasn't really been heard from since.

Hey.That_Dude

Graphene doesn't have a band gap. It's only useful if you can use it for the wiring between transistors, and as of yet there isn't a good way of doing that in silicon easily, if at all.
Then there's the problem of fusing it with gold so that we can have gold contacts that don't break or oxidize.
As far as I know they are using it for something, but traditional ICs aren't one of those uses.

John Pombrio

Today, it is all about computation speed vs power savings. Does 128 bit addressing save on power? I didn't think so.

Vano

What exactly is the point of going to 128-bit? We are probably 100 years away from reaching the limits of 64-bit. Am I wrong?

John Pombrio

Nope, you are correct. Why on Earth does a low-powered chip need 128-bit bus addressing capabilities? More cores would do a lot more good than an increase in bus size. 128-bit also would not help at all with lowering power requirements, which is the biggest issue right now.
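Rough math on the addressing side (assuming a 16 GiB device today and capacity doubling every couple of years; both numbers are ballpark guesses, not a forecast):

```c
/* Back-of-the-envelope: how long until 64-bit addressing runs out?
 * Assumes 16 GiB today and memory doubling every ~2 years.
 * Build with the math library, e.g. "cc ... -lm". */
#include <stdio.h>
#include <math.h>

int main(void) {
    double full_64bit = pow(2.0, 64);             /* 2^64 bytes = 16 EiB ~ 1.8e19 bytes */
    double today      = 16.0 * pow(2.0, 30);      /* 16 GiB */
    double doublings  = log2(full_64bit / today); /* 64 - 34 = 30 doublings */
    printf("2^64 bytes  = %.3g bytes\n", full_64bit);
    printf("doublings   = %.0f\n", doublings);
    printf("~years left = %.0f (at 2 years per doubling)\n", doublings * 2.0);
    return 0;
}
```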

llmoore

ARM may not feel a 128-bit chip is really needed at this time, but we all know how the future changes. We could be processing two 32-bit instructions at once with a 128-bit processor, speeding up the processing. I would never admit to my future plans if I were developing such a processor; Intel sure wouldn't.

Hey.That_Dude

Or you could do that with two 32-bit cores... also, 2*32=64, so I'm not sure what you're talking about. If you mean multiplication (or other operations that need special registers), there are special registers built in for results wider than 32b, so the product can be pushed back into the 32b space afterward or the control registers set appropriately.
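The multiply case is easy to sketch in C: a 32x32 multiply can produce up to 64 result bits, which is why 32-bit cores keep a wide product register (e.g. the HI/LO pair on MIPS):

```c
/* A 32x32-bit multiply can need up to 64 bits of result, which is why
 * 32-bit cores keep a wide product register pair even without 64b ALUs. */
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    uint32_t a = 0xFFFFFFFFu, b = 0xFFFFFFFFu;
    uint64_t product = (uint64_t)a * b;      /* widen first so the full result survives */
    uint32_t lo = (uint32_t)product;         /* what fits back into a 32-bit register */
    uint32_t hi = (uint32_t)(product >> 32); /* the "overflow" half */
    printf("product = %" PRIu64 " (hi=0x%08" PRIX32 ", lo=0x%08" PRIX32 ")\n",
           product, hi, lo);
    return 0;
}
```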

llmoore

I have programmed some 64-bit RISC processors and they allowed execution of two 32-bit instructions in one cycle. The compiler would optimize the compiled code by pairing these instructions together, and it showed in the run time of most programs compared to unoptimized code.
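For illustration, one way a wider register can carry two 32-bit operations at once is SIMD-within-a-register (SWAR); this is only a sketch of the general idea, not necessarily how those RISC chips paired instructions:

```c
/* SWAR sketch: two independent 32-bit additions done in one 64-bit
 * operation, with masking so carries can't cross lane boundaries.
 * Illustration only: real dual-issue RISC cores pair whole instructions
 * in hardware rather than packing data like this. */
#include <stdint.h>
#include <stdio.h>

static uint64_t add_two_lanes(uint64_t x, uint64_t y) {
    const uint64_t MSB = 0x8000000080000000ULL; /* top bit of each 32-bit lane */
    uint64_t sum = (x & ~MSB) + (y & ~MSB);     /* add low 31 bits; carry stays in-lane */
    return sum ^ ((x ^ y) & MSB);               /* patch each lane's top bit back in */
}

int main(void) {
    uint64_t x = ((uint64_t)100 << 32) | 7;     /* lanes: {100, 7} */
    uint64_t y = ((uint64_t)23  << 32) | 5;     /* lanes: {23, 5}  */
    uint64_t r = add_two_lanes(x, y);
    printf("high lane = %u, low lane = %u\n",   /* expect 123 and 12 */
           (unsigned)(r >> 32), (unsigned)(r & 0xFFFFFFFFu));
    return 0;
}
```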

Hey.That_Dude

Yes, there are quite a few 64b cores that can split their resources and run two independent threads (seeing as most programs are still stuck in 32b mode and benefit from this). However, that means the core is really two logical cores instead of one, and it still loses to a true 32b dual-core in most cases. Thus, a single 128b core will never be as good as four 32b cores doing their native operations. It might even take more die space to build a single full 128b core that can emulate four 32b cores than to build four full 32b cores (I'd almost be willing to bet money on that, seeing as there aren't any 64b cores that emulate four 16b ones).

Besides, there isn't a need for 128b archs when we're not even pushing the upper limit of 64b yet. That's like building "The Car of the Future, today!" You'll almost always be wrong.

Insurgence

They are working on one; it would only be logical. But to prevent doing harm to their current products, they are denying it. Plus, as long as they have work to do between here and then, they need to put the majority of their focus on what's next so they can keep competing.

Hey.That_Dude

Working on it, yeah, sure. "Doing harm" to their products... in like 10 years, maybe. There aren't enough benefits to move to a 128b arch; 64b is just fine for now. After all, they're hardly out the door with 64b, and they're nowhere close to saturating the 2^64 memory addresses on ARM chips (even if you dedicate half of the addresses to RAM, that's 2^63, and there isn't a product on the market that can provide that kind of density, not even DDR4).

Insurgence

The harm would be done through marketing, since marketers always like to use hype and buzzwords. Face it, the majority of people who buy a phone are not tech savvy. They are average users who will buy what sounds better, not what is better or good enough.

But don't forget that there are actually benefits to going to a higher bit width beyond just memory capacity. The only problem is that, as long as legacy components are available, software developers will program to the lowest common denominator. There's a reason a lot of software does not truly use 64-bit architectures but is just compiled as 64-bit if it is compatible. If it were just memory capacity they were running into, they could just do another PAE.

Hey.That_Dude

There are benefits other than increased memory. However, the benefits aren't yet big enough to offset the increased logic complexity and the timing issues. Maybe soon, but not now and not within the next few years.
If you really want to get picky, I could label any 64b proc as "128b" because there is at least one 128-bit data bus inside it. I could play with a lot of words like that, but it would be false advertising.

Drew7

128-bit isn't needed for those things. 64-bit can handle facial recognition and fingerprint scans just fine, especially if you're talking about multi-core chips, which most are these days. Hell, I bet a multi-core 32-bit chip could do it.

Cy-Kill

If ARM doesn't do it, some other CPU company will, which means that ARM will lose out.

Hey.That_Dude

I doubt there are very many companies that are actually anywhere close to a 128b architecture. It is extremely hard to get good timing, and the logic behind making a 64b arch just as fast as 32b is mind-boggling. (Please note that by speed I mean clock for clock, not GHz.) So I can't even imagine a use for 128b right now (there isn't even enough RAM density in next-gen DDR4 to get out of the 64b address space, let alone 128b). We have a ways to go to get to that point.

kixofmyg0t

Well, considering there are only really two CPU architectures... x86 and ARM... ARM doesn't have anything to worry about now, do they?

LatiosXT

In addition to MIPS and POWER, there's also a bunch of 8-bit and 16-bit architectures that ARM competes with in the low-power embedded devices world.

AFDozerman

MIPS and POWER are both legitimate threats on the low and high end.