Graphics Guru Richard Huddy Leaves Intel to Rejoin AMD

19 Comments

tatsujin

You're doing it wrong, AMD. You need to focus on reducing power consumption and increasing performance per thread/core - which you literally suck at.

Looking forward to Broadwell and Skylake. Oh, and I don't need my GPU inside my processor. I'll be replacing my two GTX 560 Ti SCs with a single GTX 880 SC later this year.

PCLinuxguy

You do realize Intel has been doing this for some time now as well, right (though Intel's IGP isn't as strong as AMD's)?

vrmlbasic

Someone missed the HSA memo. :)

wumpus

Personally I think AMD needs to get on the HSA bandwagon, but that might be more up to Microsoft.

Never mind the GPU on the same silicon as the CPU (I'm not against it, so long as it doesn't eat into the bandwidth), but how about a few Bobcat/Jaguar cores on the chip? They're what, 3 mm² a pop? Drop a four-pack or two on a Steamroller processor and you might have enough cycles to compete with Intel.

No, I don't know where the bandwidth will come from. Ask Nvidia about their stacked RAM :) Yes, you still need to fix the power per thread. So said Saint Seymour, and it still is true.

LatiosXT

AMD created the HSA standard.

AFDozerman

Can't wait to see what these recently hinted-at post-Excavator APUs are going to be like. AMD now has a lot of major talent behind their CPU and GPU divisions. Things are really starting to look good for the red team after all these years, although Devil's Canyon may lure me away before then.

MAIZE1951

Now if only Intel would integrate NVIDIA graphics with its CPUs; then Intel could seriously compete with AMD's APUs.

LatiosXT

Except NVIDIA is already their competitor in a lot of fields, and consumers would lose anyway since there'd be less competition.

They could always just take their high-end iGPUs and figure out a way to stuff them into lower end SKUs. What I'd give to see an Iris Pro on a dinky Atom.

AFDozerman

Why would you want to see that? It would just look like the APUs in the PS4 and XBone minus a few cores.

LatiosXT

Because why not? Why should the higher end iGPUs reside on high end chips?

vrmlbasic

Because you've laughed at Mantle (using the same logic 3dfx used to laugh at HW T&L...), and technology similar to Mantle would be required to prevent the anemic Atom from bottlenecking even that iGPU.

LatiosXT

I never said Mantle was a terrible idea. I was skeptical that it would deliver like AMD promised, and so far it hasn't. And few people actually tested CPU-dependent rendering with it. You're supposed to test for CPU bottlenecking by lowering the details to the minimum, thereby taking the GPU bottleneck out of the equation as much as possible.

And yes, the Atom Z3000 may be anemic compared to higher-end processors, but it's no slouch either. I have a consistently pleasant experience on my Win8 tablet. I mean, hell, I was able to play Shadowrun Returns for a good while on it without any hiccups or slowdowns. Sure, it's not some gaming juggernaut like Metro LL, but for what it is, that's plenty for me.

Heck, it runs Half-Life 2 just fine.

EDIT:
It seems like some of you think I hate AMD. I don't. I just don't consider them worthwhile for my builds. AMD's hardware design goals as of late seem to be maximum performance at all costs. Which is fine and all, but I like my parts to be efficient: I want parts with as high a performance-per-watt (and per-degree) ratio as possible. And beyond that, I consider the scenarios in which I'll actually use the computer.

AMD's designs also focus a lot on compute performance, which is good and all, but that only really affects those who need compute performance. Since I'm not running things that crunch numbers all day, whether that be F@H, video editing, or something else, I have no need for AMD's compute performance. Compute performance also doesn't automatically equate to general performance.

Since the most demanding thing I'll do with my computer is run games, which still appear to favor single-threaded performance, I'll go with that. And since the rest of my tasks are event-driven, I want a processor that sips power and outputs little heat when idle.

Sorry, but AMD doesn't fit my requirements. And honestly, if AMD wants to be taken seriously in mobile, they need to catch up on efficiency very quickly. They're up against well-entrenched opponents: Qualcomm (especially Qualcomm), Intel, and NVIDIA.

btdog

It's funny you say that because I thought you really did hate AMD. I completely understand your rationale and you have legitimate points. I try to offer an AMD option when someone is open to the idea because:
a) We can all chant that we need competition, but if we don't support the little guy, then they don't have a chance.
b) On the lower end, AMD offers some very compelling components at reasonable prices. When I build an AMD-based computer, I'm usually pleasantly surprised by the results.

MAIZE1951

I do use an AMD A6-5400K APU for my HTPC, with a Ceton PCIe card for cable TV, and it has far better video (probably because of the video scaling and other settings) than the cable company's tuner on my 50-inch plasma.

vrmlbasic

Didn't you just ask on the monitor thread why we can't think beyond gaming? :)

Compute performance doesn't equal general performance, but I hope that someday it will. That hope might be in vain, since multithreaded software development is still in its infancy and AMD's Bulldozer architecture has never, at least on Windows, been reliably used the way AMD designed it to be. But since independent developers can play with OpenCL without relying on Intel or M$ to facilitate it, I believe there's greater hope on that front.

I'm hoping that the era of "dual threaded" games, that ancient UE3 idea, will be put out to pasture by the AMD-based gaming consoles...someday lol.

...the worst part about the Qualcomm rivalry is that AMD sold them their mobile tech :(

LatiosXT

There's no magical component or design decision that will be a universal performance booster.

With regard to the general populace who use computers, most of the applications they run are event-driven: they're waiting for something to happen, and when that something happens, the action taken is very short and the system returns to an idle state. Multithreading isn't going to improve a word processor or an internet browser; they're both waiting on painfully slow I/O (one taking forever, the other taking a very long time in computer terms).

Multithreaded systems only work well when there's a bunch of independent data that's known up front, along with the parameters, and neither will change. There aren't many design scenarios where that happens in typical consumer use. If you don't know what kind of data you're working on, how can you effectively determine whether the pieces are independent of each other or not? I believe this is related to the halting problem.
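
A minimal C++ sketch of that "easy" case — hypothetical code, not from anyone in the thread: the input is known up front, each thread owns a disjoint slice, and nothing changes mid-run, so it parallelizes cleanly.

```cpp
// Hypothetical illustration of the "independent data, known up front" case:
// each thread sums its own slice, so no coordination is needed mid-run.
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1'000'000, 1);                  // fixed, known input
    const std::size_t n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<long long> partial(n, 0);
    std::vector<std::thread> workers;

    for (std::size_t t = 0; t < n; ++t) {
        workers.emplace_back([&, t] {
            // Disjoint slice per thread: no locks, no shared writes.
            const std::size_t begin = data.size() * t / n;
            const std::size_t end   = data.size() * (t + 1) / n;
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0LL);
        });
    }
    for (auto& w : workers) w.join();

    std::cout << std::accumulate(partial.begin(), partial.end(), 0LL) << "\n";
}
```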

I believe games are not going to reach a scalable point because the data a game works on is interdependent, and because games are real-time (more or less), event-driven software. How can the renderer start if the physics routine hasn't finished modeling the next frame, or the AI routine isn't done simulating the next action? Sure, you could render ahead of time, but then I'm seeing obsolete data, which I don't want. And do you want the game resolving these things ahead of time, before it has a chance to deal with user input? Because then you introduce wasted computation, which is totally undesirable. Either way, boosting performance through multithreading requires highly predictable programs. Games are not predictable.
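
To make the dependency argument concrete, here's a hypothetical C++ sketch (stand-in stage names, not real engine code) of why the stages inside one frame end up serial:

```cpp
// Hypothetical sketch of the per-frame dependency chain being described:
// each stage consumes the previous stage's output, so within one frame the
// work is serial no matter how many cores are available.
#include <iostream>

struct Input      { int key; };
struct WorldState { int tick; };
struct Frame      { int id; };

Input      poll_input()                       { return {1}; }           // stand-in stages
WorldState step_ai(const Input& in)           { return {in.key}; }
WorldState step_physics(const WorldState& ws) { return {ws.tick + 1}; }
Frame      render(const WorldState& ws)       { return {ws.tick}; }
void       present(const Frame& f)            { std::cout << "frame " << f.id << "\n"; }

int main() {
    for (int frame = 0; frame < 3; ++frame) {
        Input in       = poll_input();       // can't act before the player does
        WorldState ai  = step_ai(in);        // AI depends on fresh input
        WorldState sim = step_physics(ai);   // physics depends on the AI's decisions
        Frame out      = render(sim);        // renderer depends on the sim result
        present(out);                        // only then does it reach the screen
    }
}
```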

Also, the benchmarks I've found for the FX-8350 on Linux machines show results similar to those on Windows. So unless you want to blame the Linux Foundation for also nerfing AMD CPUs or not understanding them (and I believe there are a bunch of smart guys there), there's just something fundamentally wrong.

And AnandTech pointed out in their GTX 750 Ti review that NVIDIA managed to achieve a better performance-per-watt ratio by increasing the number of control units. Their read on NVIDIA's approach is that with more control units each governing fewer SPs, the GPU can fine-tune which SP clusters are active and which aren't. Otherwise, you'd have scenarios where an SP cluster is on but isn't being used to its fullest. This is effectively the opposite of the approach AMD has taken.

Also "designed for this many cores" doesn't mean how many threads of execution it has. I wrote software for a Cortex M3, so it should only have one thread of execution right? No. The software I was writing for it had over 8.

tl;dr: use cases determine which aspects of a CPU are more worthwhile than others, and most people's use case isn't compute.

vrmlbasic

Except that multithreading does improve a word processor or an internet browser. Even when it doesn't provide performance increases per se, it does prevent browser lag when the UI is kept wholly separate from any demanding or otherwise time-consuming (e.g. disk I/O) operations. With JavaScript being what it is (and now featuring bona fide multithreading itself, via web workers), it should never have been on the same thread as the rest of the browser :(
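
A rough C++ sketch of that idea — a hypothetical stand-in for what a browser or word processor does: the blocking work runs on a worker thread so the "UI" loop never stalls.

```cpp
// Rough, hypothetical sketch of "keep the UI off the slow path": the blocking
// work (here a fake disk read) runs on a worker thread while the "UI" loop
// keeps ticking instead of freezing.
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

std::string slow_disk_read() {
    std::this_thread::sleep_for(std::chrono::milliseconds(500));  // pretend I/O
    return "file contents";
}

int main() {
    auto pending = std::async(std::launch::async, slow_disk_read);

    // "UI loop": stays responsive while the read is in flight.
    while (pending.wait_for(std::chrono::milliseconds(50)) != std::future_status::ready)
        std::cout << "UI still responsive...\n";

    std::cout << "loaded: " << pending.get() << "\n";
}
```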

It's not just the OS developers holding back Bulldozer with crappy scheduling; the compiler makers and other "low-level" software developers have also shot the Bulldozer design in the foot, with lapses such as not accommodating its FPU design.

"I believe games are not going to reach a scalable point because the data a game works on relies on each other, along with the fact that games are real-time (more or less), event driven software. How can the renderer start if it the physics routine hasn't finished modeling the next frame or the AI routine done simulating the next action?"
If games don't become scalable, then the next 5-10 years of gaming are really going to suck for console users ;)

To answer the question in short: you subdivide those routines. The era of the "two-thread game", as brought about by UE3, is outmoded.
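
A hypothetical C++ sketch of what "subdividing those routines" could look like: the physics stage still finishes before rendering would start, but inside the stage, entities that don't interact get updated in parallel.

```cpp
// Hypothetical sketch of "subdividing the routines": the physics stage still
// completes before rendering starts, but inside the stage, entities that
// don't interact can be updated on separate threads.
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Entity { float x = 0, vx = 1; };

void update_chunk(std::vector<Entity>& world, std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        world[i].x += world[i].vx * dt;        // each entity only touches itself
}

void physics_step(std::vector<Entity>& world, float dt) {
    const std::size_t n = std::max<std::size_t>(1, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < n; ++t)
        workers.emplace_back(update_chunk, std::ref(world),
                             world.size() * t / n, world.size() * (t + 1) / n, dt);
    for (auto& w : workers) w.join();          // the frame still waits here
}

int main() {
    std::vector<Entity> world(10'000);
    for (int frame = 0; frame < 3; ++frame)
        physics_step(world, 1.0f / 60.0f);     // rendering would run after this returns
}
```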

LatiosXT

Multithreading doesn't help an I/O-bound task, and any lag (which I'm going to assume means GUI hiccups) might just be the OS deciding it's not important to schedule a GUI refresh, especially when the process has been bombarded with higher-priority tasks. If you want a GUI that's responsive all the time, go to Apple; they make sure it's a high-priority task. In any case, there's one thing that makes multithreading very hard: resource management. If you can't figure out why that is, then oh well.
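
And a minimal, hypothetical C++ example of why resource management is the hard part: the moment two threads share state, every access needs coordinating, and that coordination is where the pain lives.

```cpp
// Hypothetical, minimal example of the resource-management problem: once two
// threads share state, every access has to be coordinated (locks, ordering,
// contention), and that is what makes multithreaded code hard.
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    long counter = 0;
    std::mutex m;

    auto work = [&] {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(m);  // without this, the result is garbage
            ++counter;
        }
    };

    std::thread a(work), b(work);
    a.join();
    b.join();
    std::cout << counter << "\n";                  // 200000, but only thanks to the lock
}
```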

It's also very easy to blame someone else when hardware from a company you have a hard-on for doesn't deliver as promised. The GeForce FX series could've been great, but nobody cared. Do I blame the developers or NVIDIA? (Knowing you, you'd say NVIDIA, but I could easily make a case for it being the developers' fault too.) The Cell processor was loads better than the 360's CPU, and it took a bloody miracle to get anyone on board with it. The thing with developers is that if you make them work really hard to get something simple done, they're not going to care about the part. If AMD wanted their part to perform as intended, they should supply compilers. Oh wait, GCC has had Bulldozer architecture support since 4.7, roughly two years ago. If you want to make a case for AMD, go build stuff on GCC with the proper flags and get back to me.

Oh, and Unreal Tournament 3 uses 36 threads (https://i.imgur.com/qHeIDph.png) when running a map with 32 bots. Rainbow Six: Raven Shield (the only UE2 game I have installed) used 12 or so. What was that about "two-threaded games"?

AMD has made a bunch of promises, and they either fail to deliver completely or only matter in certain circumstances.

Because, again, there's no universal magic sauce that someone can apply to a processor architecture design that improves performance on every task.

PCLinuxguy

That's one helluva guy. I can't wait to see his impact on AMD in the coming months.