AMD: Nvidia is ‘Bitter’ Over PS4 Snub

83 Comments

Laptop Adapters...

Many boys like playing games, and this one will help them to enjoy the games better!
Thanks!

JCCIII

I was happy to hear this news. We need a strong AMD for a healthy marketplace. Nvidia's success, especially with the troubles AMD is having with "Frame Rating," could deflate potential customer interest and keep even AMD's faithful away. Furthermore, ATI's parent company (Advanced Micro Devices, Inc.) has already had its share of difficult years, and I have been waiting years to invest in a shiny new product from AMD.

Given the evidence that Nvidia has been using shrewd marketing strategies, the competition needs to balance out. Even so, I still celebrate Nvidia's achievements with obvious enthusiasm. I want their success to continue, but I also want AMD to get it together in a major way. Companies are far less likely to take advantage of consumers when consumers have a reasonable choice between competitors. So, congratulations to AMD!

Frame Rating: //youtu.be/CsHuPxX8ZzQ

Joseph C. Carbone III; 6 April 2013

weap0nkil

I have no idea why people game on a console; it's like a Super Nintendo compared to PC gaming.

JCCIII

There are compelling reasons to play console games, and there are good reasons to play on consoles.

Consoles are limited, and their makers definitely make it hard on the gaming community when they fight for supremacy with big dollars. Blinded by self-indulgence in that struggle, they set the Grim Reaper loose on PC gaming, wanting to capture not the spirit of fun and friendliness, but money.

With Sony's move toward a more developer-friendly PS4, plus the awesomeness of the PS Vita with its indie trend and its role as a literal PS4 extension (much as Nvidia's Project SHIELD is to the PC), PC enthusiasts can embrace the neatness.

Try Super Stardust Delta on the PS Vita, then sit next to my last rig with its out-of-control GTX 480s, and decide which one you would actually play. The difference: mind-consuming distractions, inherent performance problems, and relentless discomfort on one side; quiet, peaceful, and conveniently portable on the other.

My latest endeavor is three Titans. I hope to finally experience PC gaming the way I have always envisioned it, but that's my game theory.

Sincerely,
Joseph C. Carbone III; 7 April 2013

limitbreaker

Really? Because I love the Super Nintendo!

JCCIII

The Nintendo 3DS XL is a fun machine, especially with kids.

roleki

Maybe someone here has the answer - would the shared memory architecture in the PS4 prove a significant barrier to porting games to/from x86 code?

Hey.That_Dude

Not unless they implement a custom architecture for the console, and that's not very likely. What is more likely is that ports will run slower on current APU/integrated solutions. That's just the way of things. However, as we move to DDR4 and approach the 3800 MHz barrier, those problems will go away.

John Pombrio

Oh, and AMD made the whole deal bitterly cheap, hoping to make money in the second generation of the new console with a cheaper, more efficient version of the same chip. Good luck with that, AMD; your track record with "deals" has not exactly been stellar.

hypersonic

AMD could have been a little greedy and dropped their price a hair under Nvidia's offer rather than way, way under, for God's sake! And speaking of God, he knows how badly they need it!

vrmlbasic

Too true: GlobalFoundries and, if memory serves, selling Snapdragon to Qualcomm.

BTW, if the PS4 and the Xbox 720 both use all-AMD tech, what truly differentiates them?

Peanut Fox

I didn't know Snapdragon came from AMD. I know after they bought ATI they sold off what became Adreno.

Man, have they got to be kicking themselves.

limitbreaker

That's what I'm wondering. Sony obviously knows that AMD is working with Microsoft and will keep itself an edge to differentiate; I'm just hoping that being first out isn't the only edge they're planning on. There's probably a backroom deal to make sure the competitor isn't getting superior gear. Personally, I'm more comfortable with the PlayStation after seeing how Microsoft nickel-and-dimed everyone with the Xbox, all while cutting corners and shipping defective products to their loyal customers.

AFDozerman

AMD IS BUTTHURT, NVIDIA IS BUTTHURT, HALF THE READERS ON THIS ARTICLE ARE BUTTHURT- WHY IS EVERYONE SO GODDAMN MAD???????????????

Peanut Fox

Because when you plunk down $99-$500 for a graphics card, you need to justify that you picked the right team, and anyone who didn't pick the same team as you has clearly made the wrong choice.

Hey.That_Dude

THE POINT EVERYONE IS MISSING IS THAT THIS IS X86!!!!!!!!!
Translation... Emulators will (should) run NATIVELY! YAY!

The Mac

hmm....that's a very good point...

no more shitty PS emulators....

DarkMatter

~nVidia... Get over it! Seriously....

devin3627

8GB GDDR5 = 4K resolution textures

vrmlbasic

Reading the article, does this mean that AMD has finally reached the APU "enlightenment" it has been talking about for years: that the calculations the GPU excels at (floating point) and the CPU comparatively fails at can now be done on the GPU, with the results sent back to the CPU, at no performance loss?

The article seems to imply that the PS4 setup allows both the CPU and GPU to access the same memory locations, eliminating the redundancy of having to move data from one part of system RAM to another, which is currently gimping Trinity.

If so, this could be pumping out floating-point calculations faster than a desktop processor alone could. It would make perfect sense, given the unified system RAM zipping along at GDDR5 speeds: while the CPU's memory controller might not be able to rocket along at 5+ GHz speeds, the GPU's can, and if it doesn't have to wait for data to be moved around in RAM before it can start working on it...
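
For what it's worth, the redundant copy vrmlbasic is describing is easy to sketch in plain C. Nothing below is real PS4 or driver code; gpu_kernel_launch() is a hypothetical stand-in and the buffer size is arbitrary. The point is only to show which steps disappear once both the CPU and the GPU can address the same buffer.

/* Conceptual sketch: discrete-memory upload vs. unified-memory hand-off. */
#include <stdlib.h>
#include <string.h>

#define N (1u << 20)   /* a million floats of, say, physics data */

/* Discrete-memory style: the CPU fills a buffer, then the whole thing is
   duplicated into a second region that the GPU can see. */
static void discrete_path(float *staging, float *gpu_visible) {
    for (size_t i = 0; i < N; i++) staging[i] = (float)i;
    memcpy(gpu_visible, staging, N * sizeof(float));   /* the redundant hop */
    /* gpu_kernel_launch(gpu_visible);  -- hypothetical */
}

/* Unified-memory style: the CPU writes once and the GPU is handed the same
   pointer, so the second allocation and the memcpy simply go away. */
static void unified_path(float *shared) {
    for (size_t i = 0; i < N; i++) shared[i] = (float)i;
    /* gpu_kernel_launch(shared);  -- hypothetical, GPU reads in place */
}

int main(void) {
    float *a = malloc(N * sizeof(float));
    float *b = malloc(N * sizeof(float));
    if (!a || !b) return 1;
    discrete_path(a, b);   /* two buffers, one copy */
    unified_path(a);       /* one buffer, no copy */
    free(a); free(b);
    return 0;
}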

H1N1theI

Not exactly.

It's a horrible idea for more "powerful" setups.

As of now, GPUs are made for vertex and floating-point operations but are usually restricted to 16-32-bit operations, while the CPU is churning out 128+ bit floats.

The problem with shared RAM is that your framebuffer sits in the same place as your code, heap, and stack, and is accessible from regular code. One piece of wrong pointer arithmetic and you might see your screen turn purple, not to mention the crazy behavior if you accidentally "segfault."

It's a novel idea, and it seems great for consoles, but it's a horrible idea to implement on desktops. We have separate memory for a reason.

Also, BTW, most rendering libraries load textures directly into GDDR, not system RAM, but most of the operations are done with pointers to locations inside the heap.

GDDR is also vastly different from DDR, which means it's not a good idea to compare the clock speeds of the two directly.
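
To make H1N1theI's stray-write scenario concrete, here is a toy C sketch; it is purely illustrative, the layout and numbers are invented, and no real console maps memory this naively. When pixels and ordinary game data live in one address space, a single bad index is all it takes.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define FB_PIXELS (1920 * 1080)   /* pretend 1080p framebuffer, 32-bit pixels */

int main(void) {
    /* One arena standing in for unified memory: pixels first, game data after. */
    uint32_t *unified = malloc((FB_PIXELS + 1024) * sizeof(uint32_t));
    if (!unified) return 1;
    uint32_t *framebuffer = unified;               /* [0, FB_PIXELS) */
    uint32_t *game_data   = unified + FB_PIXELS;   /* everything after the pixels */

    /* One off-by-a-lot pointer bug in "game code"... */
    ptrdiff_t bad_index = -42;
    game_data[bad_index] = 0xFFFF00FFu;   /* ...and a pixel near the bottom of the
                                             screen silently turns purple. */

    printf("corrupted pixel index: %td, value now 0x%08X\n",
           (ptrdiff_t)(FB_PIXELS + bad_index),
           (unsigned)framebuffer[FB_PIXELS + bad_index]);
    free(unified);
    return 0;
}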

vrmlbasic

Pointers and such have been able to do what you say for decades now and we've worked with it. We've lived under the threat of the segfault for decades and we've more than survived: we've thrived (couldn't resist).

In theory, anything running on the CPU can access nigh on any part of RAM (stopped only by largely logical measures). The APU's GPU is "on" the same chip as the CPU, so I don't see why it should have some artificial restriction on access for reasons of "safety" that have long been dealt with. AMD maintains that the CPU and GPU are one on the APU, and this equal access must come with that, as AMD claimed was their goal (I'll try to source that in the interim).

The GPU may be designed for <=64-bit FP calcs, but that doesn't mean it can't perform 128-bit calcs if it came to that. We've been doing calcs that exceed the "convenient" word size of the calculating chip for eons now.

Can a rendering library put textures directly into the GPU's memory? How do the textures get from the disk to the GPU without several stops in the main RAM?

H1N1theI

Perhaps I'm old school and prefer to have my framebuffer and textures located AWAY from system RAM, but you're saying we should loosen the divide between graphics memory and regular memory. I'm fine with that on a console, where it actually makes sense (a lot of iGPUs already do the same thing, but with a dedicated area assigned from boot). But really, a GPU doesn't *need* access to regular RAM, and a CPU doesn't really *need* access to GPU RAM (direct access, I mean, not through abstracted stuff).

Speaking of APUs, I frown on the idea, mostly because you're packing a float co-processor instead of a larger, beefier video card. But in this case it's fine, because it would be roughly the same performance anyway (small on-die GPU vs. small off-die GPU, your pick).

And RAM isn't everything. So what if you finally manage a 1080p framebuffer with 65K textures if you're at 12 FPS and can't even run a proper beam trace?

LatiosXT

Someone's butt hurt they didn't win a contract in the console wars. Someone else is butt hurt because they're losing in their primary market.

beta212

http://www.geekosystem.com/wp-content/uploads/2012/06/v40g6.gif

AFDozerman

Soooo...does this mean AMD's Linux drivers are going to improve?

Budman_NC

I have to laugh every time I see that. :-)
Priceless
+1

gothliciouz

Couldn't care less about specs, I wanna see results!

raymondcarver

Exactly... Until then, it's the whiz-bang console in the sky.

DR_JDUBZ

AMD's APUs have never been able to amount to a real desktop CPU; Nvidia shouldn't be bitter. Nvidia should stick to desktops. 8GB of GDDR5 is useless even in most desktop usage, and doesn't necessarily equate to good graphics, just higher resolutions and, in some cases, better textures.

fung0

Sony's choice of AMD is completely logical, but not really about performance.

While it has tended to lag Intel and Nvidia on raw horsepower (by a fairly small margin), AMD, and especially the Radeon technology it acquired from ATI, has excelled at minimizing power consumption. Check the specs on recent graphics cards: AMD's cards are dramatically more efficient than Nvidia's.

On a PC, this isn't such a big deal. Serious gamers just want maximum throughput, and are happy to shell out for huge power supplies and advanced cooling. So Nvidia tends to rule. But in a compact console, heat is a major concern, reducing reliability and necessitating noisy cooling solutions. Advantage: AMD.

raymondcarver

I'm no longer willing to shell out for an extra, giant power supply...

captainjack

Threads like this one are the reason why I read the comments at MPC. BAM! Knowledge! KA-POW! Facts! It's awesome to read, and I'm just over here taking notes.

AFDozerman

As far as not amounting to a desktop CPU: APUs were never designed for the same workloads, so you're really comparing apples to orangutans here. The A in APU stands for accelerated, a reference to the fact that they were really designed for GPGPU workloads, and in that respect APUs can destroy even a high-end desktop with discrete graphics in a lot of cases.

When running the CPU and GPU in tandem, producing numbers approaching a teraflop, the bottleneck becomes getting data to the GPU. Actually, one of AMD's biggest performance issues is that modern RAM can't move fast enough to feed the unit as a whole; when the CPU and GPU get wound up all the way, even GDDR5 is going to have trouble keeping up. As for the amount of RAM, I see 8 gigs getting used up pretty quickly. Just wait as the player plays through one level while the next level loads in the background, so that when level X is done it's an immediate transition to level Y. On top of that, in three or four years games will have progressed to the point that they use that much RAM. This isn't yesterday's games we're shooting for, here.


kiaghi7

Even beyond that, the GDDR5 memory they use does indeed have very good bandwidth, but at present the system RAM and video RAM are not the bottleneck, which is why DDR3 is still perfectly viable. Storage, and especially optical media, has long since been the slowdown. More RAM and/or faster RAM won't make the load time from the ODD/HDD any better; even the most mediocre RAM can readily run circles around a disk read.

What they also leave out is that modern gaming computers can use quad-channel DDR3, and even "last gen" computers (CPU/MOBO) are sporting triple-channel DDR3.

The CPU is a laptop "leftover" right off the shelf with a "NEW AND IMPROVED" sticker slapped on a Bobcat architecture.

So AMD acting like it is doing something "revolutionary" is disingenuous, and while he should have just remained quiet, the Nvidia exec was completely right. The specs of the PS4 aren't even really up to par with LAST-generation PCs (CPU/GPU). It's clearly designed with off-the-shelf parts, and the "custom" part of the Jaguar is the renaming of an antiquated architecture. The entire reason for all of it is to avoid the PS3 debacle, where the console cost more to make than it sold for (even at its $600+ launch price!), so Sony was losing its butt on consoles for years.

Remember before it launched, when Sony was touting how the PS3 was more powerful than PCs? Its GPU was derived from the GeForce 7800, yet the GeForce 8800 series came out before the PS3 did and was more powerful in every measure...

As per my original assessment at the announcement of the PS4, it's a lobotomized LAST GENERATION (at best) PC. It uses a repackaged Radeon 7850 or 7870 GPU, which isn't bad, but it's not even close to top of the line even when it came out. It's a mid-level card from the word go...

At the end of the day, the Nvidia execs just need to be quiet and let reality play itself out. At the same time, AMD execs need to slow down with the self-gratification; they can go blind like that...

limitbreaker

.

kiaghi7

Awww it's so CUTE that you would come back at a later date and DELETE your post to try and hide the fact that you've been wrong THE ENTIRE TIME...

The Mac

That one was always blank. I noticed it before.

HVDynamo

It would appear that you don't know much about the hierarchy of memory in a computer either. DDR3 isn't sufficient for a good graphics card, and if you don't believe me, check out some reviews of cheaper cards and compare the DDR3 versions to the GDDR5 versions of the same card.

Processors have onboard cache that runs at full processor speed, but when what they need isn't in there they have to go to main memory, which is much slower and makes the processor wait. Much the same happens when what is needed isn't in RAM and has to come from the even slower hard disk. Faster memory can make a huge difference, and the hard-disk slowdown really only affects load times; beyond that, everything is loaded into RAM for access. So faster RAM access means the processor spends less time waiting for something, and in graphics processing that has a huge benefit.

As for the sharing of memory, I think it can be a real benefit if it's taken advantage of. Also, an advantage of consoles is not having to program for different architectures and performance levels; with one hardware design that everything runs on, it's possible to really optimize your code for that system and get more out of it than you would on a comparable PC. That said, I am still predominantly a PC gamer and don't have much interest in buying one of these, but I don't think they made a bad decision.
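
A quick way to feel the hierarchy HVDynamo is describing is to walk the same array twice, once cache-friendly and once not, and time it. This is just a rough desktop experiment in plain C (the array size and stride are arbitrary), nothing console-specific:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64u * 1024 * 1024)   /* 64M ints, ~256 MB: far bigger than any cache */

/* Touch every element exactly once, either sequentially (stride 1, mostly cache
   hits) or with a large stride (mostly cache misses), and return elapsed seconds. */
static double walk(const int *a, size_t stride) {
    clock_t t0 = clock();
    volatile long sum = 0;
    for (size_t start = 0; start < stride; start++)
        for (size_t i = start; i < N; i += stride)
            sum += a[i];
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    int *a = calloc(N, sizeof *a);
    if (!a) return 1;
    printf("sequential walk: %.2f s\n", walk(a, 1));
    printf("strided walk:    %.2f s\n", walk(a, 4096));
    free(a);
    return 0;
}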

kiaghi7

Excuse me? Which is precisely why, to this very day, ATi is using DDR3 in some of its graphics cards? And GDDR3 was in fact designed by ATi... Even the forthcoming 8400 and 8570 will have DDR3 variants... Yeah, sit down, child...

http://en.wikipedia.org/wiki/GDDR3
http://en.wikipedia.org/wiki/Comparison_of_AMD_graphics_processing_units

Indeed, RAM is not as fast as the processor; no one ever suggested otherwise, so why you said it makes one wonder whether you're aware that water is wet... You may not know it yet, but that's the entire point of having cache: so the CPU isn't left waiting on slower components. Your pretending to know what you're talking about is exposed when you say "the hard disc slowdown really only [affects] load times."

You don't seem to understand that the entire game isn't in RAM, and as such plenty of less immediately pertinent data is being shuffled back and forth to storage, and that is the largest bottleneck in a computer to this day. Google "virtual memory"... I don't have the time to teach you CPT257, so today I'll just teach a man to fish for himself...

Now let's look at that utterly USELESS "statistic" ([8GB of GDDR5, 176GB/s raw memory bandwidth]): raw bandwidth by itself means less than nothing, since it in no way demonstrates what the machine can actually DO with that data in that interval.

The GeForce GTX 680 at factory spec can move 192.2 GB/s by itself... That's not even touching on the rest of the computer, mind you, and that's not even the very bleeding edge of current technology. The Titan has a 50% wider bus and 288.4 GB/s of memory bandwidth...

Now I'll let YOU decide, regardless of the memory bandwidth, which card is more capable... A FACTORY SPEC GTX680, or a Radeon 7850-7870... You can go ahead and give that Radeon 24GB of GDDR5 RAM clocked so hot it glows cherry red... And it will still be markedly inferior in every significant measure to numerous cards of TODAY, much less of what comes out in the time before the PS4 actually sees the light of day.

Both ATi and Nvidia are readying their new lines... Yet again, it's nothing more than the same situation as the claims made for the PS3 before its launch: that console was already just a glorified 7800GT, and then the 8800 came along and utterly eclipsed it before the PS3 even got boxed up for shipping.
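
For anyone who wants to sanity-check the bandwidth figures being thrown around here, peak numbers like these fall straight out of bus width times effective transfer rate. The little C snippet below only does that arithmetic, using the commonly cited specs (256-bit bus at 5500 MT/s for the PS4, 256-bit at 6008 MT/s for the GTX 680, 384-bit at 6008 MT/s for the Titan); as kiaghi7 says, it tells you nothing about what either machine can actually do with that bandwidth.

#include <stdio.h>

/* Peak memory bandwidth in GB/s: (bus width in bits / 8) bytes per transfer,
   multiplied by the effective transfer rate in MT/s, divided by 1000. */
static double peak_gbps(double bus_bits, double mtps) {
    return bus_bits / 8.0 * mtps / 1000.0;
}

int main(void) {
    printf("PS4     (256-bit, 5500 MT/s): %6.1f GB/s\n", peak_gbps(256, 5500)); /* 176.0 */
    printf("GTX 680 (256-bit, 6008 MT/s): %6.1f GB/s\n", peak_gbps(256, 6008)); /* ~192.3 */
    printf("Titan   (384-bit, 6008 MT/s): %6.1f GB/s\n", peak_gbps(384, 6008)); /* ~288.4 */
    return 0;
}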

AFDozerman

You pretty much confirmed that you are wrong. From what I saw of your sources, most of the graphics parts using DDR3 were integrated OEM parts built into the motherboard, which are severely gimped due to memory constraints. The ones that could possibly be cards were extremely low-end parts that are... you guessed it, gimped due to memory constraints.

kiaghi7

Amazing how well you counter the argument by not actually doing anything...

In other words, you're a troll who was wrong from word one. Sit down, child...

"most of the graphics parts that were using DDR3 were integrated OEM parts"

Except the ones I specifically referenced, which was the point; you merely reiterated that you don't know what you're talking about by trying to re-troll.

cpuking2010

"Excuse me!" Now watch me reference Wikipedia!

beta212

Yup, RAM is pretty much the bottleneck for graphics now; kiaghi7 seems to have been living under a rock for the past few years.

kiaghi7

Nice of you to demonstrate your claim... What with me supporting my point and all, your ignorant "one-liner" really shows how much you know!

That being barely enough to fill a single sentence...

Hey.That_Dude

kiaghi7:
No one is going to take your claim seriously if all you have to back it up with is Wikipedia. What you do is quote the people Wikipedia quotes.

As for the ODD/HDD being a bottleneck: yes, they are at load time, which is why RAM is used. RAM holds what is important for the CPU, since having that much on-chip cache would make chips cost in the tens of thousands of dollars. After load, they aren't bottlenecks. With more RAM there are fewer bottlenecks, because you can stream from the ODD/HDD as you start to run out of buffered instructions/info in RAM (then cycle out old info in RAM for new info from disk... it's quite smart).

Sources: Me.
Now go learn some programming. Try C. It's a good language to start with and gives you some appreciation for just how file I/O works.
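
In the spirit of that suggestion, about the smallest honest C example of file I/O is streaming a file from disk into a RAM buffer in chunks, which is essentially what a level load is doing. The file name here is made up:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    FILE *f = fopen("level01.dat", "rb");   /* hypothetical asset file */
    if (!f) { perror("fopen"); return 1; }

    char buf[64 * 1024];                    /* 64 KB read buffer */
    size_t total = 0, n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        total += n;                         /* a game would parse/decompress here */

    printf("read %zu bytes from disk into RAM\n", total);
    fclose(f);
    return 0;
}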

kiaghi7

Child, I pointed to Wikipedia because those articles are actually referenced and have a bibliography at the bottom to verify against, straight from ATi's own documentation.

But I expect nothing less from an ignorant troll to say "WIKIPEDIA MEENZ UR WRONG!!"...

Meanwhile, you've not demonstrated anything that counters what I've said... In fact, your first-semester CPT234 knowledge of C++ is likely what's motivating you to pretend you know what you're talking about, and yet you still haven't shown how I'm wrong in any way...

How about this: rather than ADMITTING I AM RIGHT in your opening lines, only to then pretend like you're refuting what I said, why not just remain silent and/or troll elsewhere?

Hey.That_Dude

Little one, and I mean LITTLE, I have my credentials from people who make the CPUs that you use. Attacking me personally is difficult, especially considering that you don't even know who I am. If you look at my Info, I've told you the areas in which I specialize. Then, if you thought about it, you would have realized that I've learned MUCH more than C++.
Now on to the actual point of the post:
You're not outright wrong; you're just not saying it well. That was my point the first time, and you obviously missed it. I tried to tell you at the start of that comment that you weren't using your sources correctly. Then I told you that your bottleneck is almost solved except on the initial game load.
Not wrong, just not expressed well. Unfortunately, you got angry and pissy and had a little fit.
THAT is why you lose. Even if you win, you still lose. Now chill out.
