Report: Roadblock Ahead for Multi-Core Processors





 Intel is making a step in the right direction with integrated memory controllers? I mean... I'm no AMD fanboy, but I have a four- or five-year-old AMD Athlon processor that has an integrated memory controller. I think this is one of the main reasons AMD had slower-clocked processors than Intel that still performed much faster. That's not true nowadays, though, with Intel's large cache sizes, which AMD finally started to implement with its latest processors.

In the hands of a master, any object can become a field-improvised, lethal weapon.



I didn't know AMD had IMCs


but as far as more work per cycle, that depends on a number of factors. 


Keith E. Whisman

The problem is with the way memory is installed and used on PCs.


Look at the gobs of memory bandwidth that video cards get. Now compare: even the fastest memory available on an X58 board, in the highest-performance setup you can buy, is just a small percentage of what a video card has.

What needs to be done is a different interface. We need to quit looking at RAM as something that has to come on a module card.

Perhaps we need to go back to the old-fashioned way of individual square memory chips that you plug into your motherboard. Or memory could come in a package that looks like a CPU, with lots of pins on one side, plugging into a ZIF socket. Those pins would offer the bandwidth needed, and the chip would sit as close as possible to the CPU.

I get the feeling the memory makers would resist such changes like the plague. Even if we had DDR5 modules, the bandwidth would still be a hindrance because of the form factor.


We need to change the way we look at memory, and the way we buy memory. It needs to change in order for computer technology to evolve.

I believe this and have believed this for a long time. Just look at your video card and imagine what if your CPU had access to that much memory bandwidth.

Someone is holding our computers back on purpose. Video cards don't have these limitations. Why should our motherboards be held back?


Everyone needs to get pissed off. You should be pissed. Get mad. Get upset. The only way we are going to see any change is if we show how pissed we are. We are the consumers, and consumers drive technology development. So get mad. 



ZIF would probably be nice; that's probably the reason why newer RAM types have more pins.


as for closer physical location, that would probably make virtually no difference; placement based on better airflow would be better 



"is just a small percentage of what a video card has."

A typical setup with a pair of 333 MHz DDR2 sticks has a theoretical maximum throughput of over 10 GB/s.
In the Xbox 360, the maximum theoretical throughput between the GPU and the (shared) memory is around 22 GB/s.

In a Radeon 4850 it is around 63 GB/s, or roughly 5 times the bandwidth available from dual-channel 400 MHz DDR2. That is very significant, but I wouldn't go so far as to call roughly 20% a small percentage.

Also, the GPU and CPU communicate over a relatively slow PCI Express pipe (8 GB/s). What matters is not so much the raw bandwidth between processor and RAM as the fact that GPU RAM usage and CPU RAM usage are isolated and don't interfere with one another.
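The arithmetic behind those figures can be sanity-checked with a quick sketch. The clock rates and bus widths below are the commonly cited specs for these parts (DDR transfers twice per clock; the Radeon 4850's reference GDDR3 runs at 993 MHz on a 256-bit bus), not numbers taken from the post itself:

```python
# Back-of-the-envelope check of the bandwidth figures above.
# Theoretical peak = transfer rate x bus width (bytes) x channels.

def peak_gbps(mega_transfers_per_sec, bus_width_bits, channels=1):
    """Theoretical peak bandwidth in GB/s (decimal, 10^9 bytes)."""
    return mega_transfers_per_sec * 1e6 * (bus_width_bits // 8) * channels / 1e9

# DDR2 at a 333 MHz clock = 667 MT/s, 64-bit bus, dual channel
ddr2_667 = peak_gbps(667, 64, channels=2)   # ~10.7 GB/s -> "over 10 GB/s"

# DDR2 at a 400 MHz clock = 800 MT/s, dual channel
ddr2_800 = peak_gbps(800, 64, channels=2)   # ~12.8 GB/s

# Radeon 4850: 993 MHz GDDR3 (1986 MT/s effective) on a 256-bit bus
hd4850 = peak_gbps(1986, 256)               # ~63.6 GB/s

print(f"DDR2-667 dual channel: {ddr2_667:.1f} GB/s")
print(f"DDR2-800 dual channel: {ddr2_800:.1f} GB/s")
print(f"Radeon 4850 GDDR3:     {hd4850:.1f} GB/s")
print(f"CPU-to-GPU bandwidth ratio: {ddr2_800 / hd4850:.0%}")
```

Running this puts dual-channel DDR2-800 at about one fifth of the 4850's memory bandwidth, which is where the "roughly 5 times" comparison comes from.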

Considering that JEDEC is responsible for maintaining both the DDR and GDDR standards, and that memory chips for CPUs and GPUs are often designed and manufactured by the same companies (say, Qimonda), I doubt there is any foot-dragging on the part of memory manufacturers.



Small niggle, but we will probably never see DDR beyond 3; it's just not practical. What practical difference is there between DDR4 and DDR2 x2?


More than likely (if things don't change as you suggested) it would be motherboards with DDR2 + DDR3.



DDR4 vs DDR2 x2?


no offense but you need to study how memory works.


and memory bandwidth is lagging behind, although at least Intel is heading in the right direction with integrated memory controllers 



From a logical standpoint, four synchronous rows of RAM aren't really all that better than two asynchronous rows. But more than that, you have to explain to the consumer why they need 4, 5, 6, 7, or 8 pieces of RAM just for it to be "efficient".



 By the time they update or write new programs, recompile everything to use 16 cores, assuming I need 16 cores, assuming I can afford 16 cores, assuming they have redesigned mobos to hold 16 cores, and if GPUs haven't replaced 16 cores... I think they will have sussed out the problem by then.


"There's no time like the future."
