Memory is just one of those weird things in computers where, even if you improve it, its returns rapidly diminish. In any case, I stumbled upon a video that YouTube decided to throw into my recommendations. In summary, it's a primer on the differences between DDR and GDDR memory. Of course, most of us know the bare basics, so skip to 2:10 if you're in a hurry.
And he brought up a point that most publications I've found don't actually mention: DDR memory is specialized for low-latency, access-heavy tasks, because general-purpose applications tend to make small but numerous memory requests. Which explains why I included "(and CPUs and GPUs)" in my title. A CPU is heavily generalized and has to handle everything. While we tend to think of CPUs as being about computing and such, here's another thing to consider: they also have to service the hardware.
When you type something on your keyboard, it raises an interrupt on your CPU. The CPU then has to switch gears, service the keyboard, and switch back, among other things. All this switching adds up to a bunch of small but numerous memory accesses, and a consumer computer still has to service these hardware requests regardless of what you're doing. This may be part of why faster memory doesn't necessarily benefit a system much: the CPU still has to do housekeeping that involves stopping what it's doing, servicing the request, and returning. It's most likely also why server racks lack the inputs and outputs you'd find on a regular computer: those I/O devices can eat into CPU time on a machine that's already bombarded with network requests.
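You can get a feel for the small-versus-bulk distinction with a toy sketch. This is not a real memory benchmark; Python's per-operation interpreter overhead just stands in for the per-request latency cost, and the sizes are made up for illustration:

```python
# Toy illustration: moving the same amount of data as many tiny
# requests vs. one bulk request. The fixed cost paid on every request
# is what a latency-optimized part is built to minimize.
import time

TOTAL = 8 * 1024 * 1024  # 8 MiB of data to move (arbitrary)
CHUNK = 64               # tiny, cache-line-sized requests

src = bytearray(TOTAL)
dst = bytearray(TOTAL)

# Many small requests: the overhead is paid once per chunk.
t0 = time.perf_counter()
for off in range(0, TOTAL, CHUNK):
    dst[off:off + CHUNK] = src[off:off + CHUNK]
small_t = time.perf_counter() - t0

# One bulk request: the overhead is paid once.
t0 = time.perf_counter()
dst[:] = src
bulk_t = time.perf_counter() - t0

print(f"{TOTAL // CHUNK} small copies took {small_t:.4f}s")
print(f"1 bulk copy took {bulk_t:.4f}s")
```

Both loops move identical bytes; the only difference is how many requests it takes, which is exactly the axis DDR and GDDR optimize differently.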
So what does this have to do with the GPU and GDDR? For all intents and purposes, a video card is a self-contained computer in and of itself, but its job is to do math. It doesn't know what to do with a keyboard interrupt, or what a USB controller is. All it knows is math and a few other logic functions. Which leads to what the video said about GDDR memory: it's built for fast transfer rates. Most memory traffic to a graphics card is textures and bulk data sets, and the GPU itself is very good at pushing out high volumes of data. This is also why memory speed matters so much for integrated graphics.
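Some back-of-the-envelope arithmetic shows why raw bandwidth dominates for texture-sized transfers. The two bandwidth figures below are rough illustrative assumptions, not datasheet values for any specific part:

```python
# Back-of-the-envelope sketch: time to move one bulk asset at two
# assumed bandwidths. The 25 and 200 GB/s figures are illustrative
# stand-ins for "DDR-ish" and "GDDR-ish", not real datasheet numbers.

def transfer_ms(size_bytes: float, bandwidth_gb_s: float) -> float:
    """Milliseconds to move size_bytes at bandwidth_gb_s gigabytes/second."""
    return size_bytes / (bandwidth_gb_s * 1e9) * 1e3

# One uncompressed 4K RGBA texture: 3840 * 2160 pixels * 4 bytes each.
texture = 3840 * 2160 * 4  # ~33 MB

print(f"{transfer_ms(texture, 25):.2f} ms at  25 GB/s")
print(f"{transfer_ms(texture, 200):.2f} ms at 200 GB/s")
```

With payloads this size, shaving microseconds off per-request latency barely registers; multiplying the transfer rate does.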
Plus, GPUs don't deal with paging or virtual memory in the same way a CPU does. I'm going out on a limb here, but what's loaded into a GPU stays in the same spot for as long as it's useful. The GPU doesn't have to worry about address table translations, or about where something landed in memory because ASLR is in play. So there's little point in a GPU memory controller obsessing over how fast it can service a single request. It should, of course, service requests fast enough, but that shouldn't be its main focus, since transfer requests tend to be fewer in number.
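For reference, here's the translation step a CPU-side MMU conceptually performs on every access. This is a toy single-level model with made-up table contents; real MMUs use multi-level page tables plus a TLB cache, but the point stands that it's a lookup sitting in front of every memory reference:

```python
# Toy model of virtual-to-physical address translation.
# Page size matches the common 4 KiB x86 default; the table is made up.
PAGE_SIZE = 4096

# Virtual page number -> physical frame number (a one-level "page table").
page_table = {0: 7, 1: 3, 2: 42}

def translate(vaddr: int) -> int:
    """Split a virtual address into page number + offset, look up the frame."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[vpn]  # a missing entry here would be a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # virtual page 1, offset 0x234 -> frame 3
```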
I also recall from my half-assed side research that there are some fundamental differences, like DDR3 only having 8-bit prefetches while GDDR5 has 64-bit... or something. But that's lower-level stuff.
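To make "prefetch" a little more concrete: with an 8n prefetch, each access internally fetches eight beats' worth of data per data pin, so the bytes moved per burst scale with both bus width and prefetch depth. The arithmetic below uses the commonly cited DDR3 figures (8n prefetch, 64-bit module bus); treat them as illustrative:

```python
# Rough prefetch arithmetic: bytes moved by one burst, given the bus
# width in bits and the prefetch depth (beats per access).

def bytes_per_burst(bus_width_bits: int, prefetch_n: int) -> int:
    return bus_width_bits // 8 * prefetch_n

# A DDR3 module's 64-bit bus with 8n prefetch moves 64 bytes per
# burst -- conveniently, the size of one CPU cache line.
print(bytes_per_burst(64, 8))
```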