CES 2014: Gaming at 12K with Nvidia! [Video]

37 Comments


Hey.That_Dude

Pretty... But I really like that last comment from Gordon. Completely unnecessary, but funny nonetheless.


Avery

Remember, you're watching a "Maximum PC" video. "Suck it, consoles" is completely justified and completely tongue-in-cheek!


vrmlbasic

Unnecessary? "What could BE more necessary?!" ;)

Maybe it's the microphone, my speakers, or the fact that the first time I saw this video was right after watching a lackluster episode of TBBT, but I can't help thinking that MPC's deputy editor here sounds kind of similar to Dr. Leonard Hofstadter, lol.


yammerpickle2

I'm glad Nvidia had the display. I've been saving up for a long time to buy 4K upgrades, but I still haven't seen what I'm looking for in a 4K monitor setup.


PhaQue

Too bad the demo racing game is pure crap and has nonexistent three-screen support. They should have used iRacing or Assetto Corsa.


ApathyCurve

Absolutely. iRacing's multi-monitor support is the best in the industry, hands down. It's the model every game developer should be emulating.


Baer

I had a chance to see and play with it at CES. Amazing!!!!


vrmlbasic

Why is ASUS putting such huge bezels on its gaming monitors?

The utter ridiculousness of needing four Titans to power this really makes me hope that AMD's claims about Mantle giving one card "the power of 10" are fulfilled (unlike its CPU architecture's potential, which still goes unfulfilled courtesy of "WinTel" keeping them down), as one air-cooled R9 290 is a lot more feasible than four water-cooled Titans.


maleficarus™

One card with the power of 10? LOL. Please don't tell me you believe this crap.


vrmlbasic

I take it with a grain of salt.

...though compared to my current GPU setup, it isn't too far off the mark; I've learned that the amount of VRAM is pretty darned important.


maleficarus™

The only things that matter are the resolution and AA settings you use. If you play at 1080p or less, 2GB is fine!


LatiosXT

Their CPU design, going by the released block diagrams, is flawed to begin with for single-threaded performance. Multi-threaded performance, if you take the block diagrams at face value, paints a different picture.

Also, AMD's CES press conference said BF4 performed at least 45% faster with Mantle than without. So assuming the Titan and R9 290X are about equal, you still need a minimum of three cards to achieve the same performance.


vrmlbasic

We still don't have all that much software that recognizes the differently sized FP units or some of AMD's nifty extensions like FMA4. Flawed though it may be, it doesn't help that Intel's compilers don't clue software in on AMD's quirks, or that Windows' process scheduler still doesn't put related threads on the same module, as was intended, or restrict heavily single-threaded processes to their own module :(

I'm pretty sure that, by this point after the Pentium 4's release, Windows had bent over backwards to cater to that failed architecture and had added "awareness" of Hyper-Threading.

I don't think Battlefield 4 is the game to evaluate Mantle on, since support is being added retroactively. The game doesn't seem all that optimized anyway.


LatiosXT

AMD released a hotfix to address Windows' scheduling issue and it barely improved a thing. The fact is a module contains two integer units, and a thread of mostly integer operations can only run on one of them. Besides, even under Linux the results are fairly consistent with Windows-based benchmarks (sucks at single-threading, works wonders when multi-threading). If you want to blame Intel's compilers, then AMD should make their own. They have, practically, full access to the x86 architecture.

Funny you also mention the Pentium 4, because Bulldozer and its successors fall into the exact same pitfalls that made that processor suck (a deeply pipelined architecture that needs to be clocked faster to make up for poor throughput).

And are we just nitpicking here about BF4's performance on Mantle? Because I have no reason to believe both DX and OGL carry a 90% overhead cost. Take what's available, not some marketing fluff. If AMD really said Mantle will give one card the power of ten, then they have to prove it in the real world.

And yes, I want a real-world program. AMD can create anything they want internally for marketing purposes. It's like how nobody really took NVIDIA's FCAT seriously, because how do we know it isn't gimping AMD cards?


vrmlbasic

AMD did indeed promise that 10 times the draw calls, etc., could be made with Mantle. I do believe that BF4 is going to be horridly optimized and not a real "exemplar" for Mantle: currently the game doesn't even take full advantage of the consoles' hardware (to quote the DICE developer interview at Tom's) and as such doesn't run in Full HD on them. If it still isn't optimized for those, its bread-and-butter platforms, then I doubt it'll be truly optimized for the comparatively "niche" Mantle market.

That hotfix was a kludge and did nothing to actually get Windows to properly schedule processes and threads for Bulldozer/Piledriver. Microsoft would have to do that, and so far they haven't shown any signs of wanting to "play ball". I'm not ready to agree that the P4 and Bulldozer wandered into the same pit, but it's a fact that Microsoft did all it could to pull the P4 out of its pit while doing precious little to design its software to work with AMD's hardware.


LatiosXT

AMD made a lot of promises. It delivered on very few of them. Also, draw calls are a CPU thing, and offering 10 times better draw-call performance may not increase the GPU's apparent performance in the same way. Quoting from this post: https://stackoverflow.com/questions/4853856/why-are-draw-calls-expensive

"The main reason to make fewer draw calls is that graphics hardware can transform and render triangles much faster than you can submit them. If you submit few triangles with each call, you will be completely bound by the CPU and the GPU will be mostly idle. The CPU won't be able to feed the GPU fast enough"

And if you're making enough draw calls that a sufficiently powerful CPU can feed the GPU 100%, having the capability to do more doesn't mean anything performance-wise. All this really does is help the lower-end CPUs that can't keep up with their more powerful brethren. In the case of the PS4/Xbox One, it's sorely needed, since Jaguar is a very weak CPU compared to, say, an FX-8350.

...And AMD is playing Pentium 4 with its architecture. How do they make their products more competitive? Release faster-clocked parts. In four generations of Core, Intel has only bumped its fastest available part by 0.4GHz (and if you exclude the earliest generation, 0.1GHz). Compare that to AMD, which in two generations had to jump from 3.5GHz on the Phenom II to 4.7GHz (5.0GHz turbo) on the fastest available FX processor.

EDIT: Actually, the point about draw calls makes Mantle moot for 12K-resolution rendering. Resolution is a fillrate issue, not a draw-call one.
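For illustration, here's a minimal C++/OpenGL sketch (not from the video or this thread) of the draw-call trade-off quoted above. It assumes a GL context, a compiled shader with a "model" uniform, and a VAO already exist; the names (vao, modelLoc, transforms) are purely illustrative.

```cpp
// Sketch only: contrasts per-object submission with a single instanced call.
// Assumes GLEW/OpenGL 3.1+, an active context, and pre-created GL objects.
#include <GL/glew.h>
#include <array>
#include <vector>

// Naive path: one draw call per object. The CPU pays validation and
// submission cost N times, which is the overhead Mantle-style APIs target.
void draw_naive(GLuint vao, GLint modelLoc,
                const std::vector<std::array<float, 16>>& transforms,
                GLsizei vertsPerMesh) {
    glBindVertexArray(vao);
    for (const auto& m : transforms) {
        glUniformMatrix4fv(modelLoc, 1, GL_FALSE, m.data());
        glDrawArrays(GL_TRIANGLES, 0, vertsPerMesh);  // N calls for N objects
    }
}

// Batched path: per-instance transforms live in a buffer the shader reads,
// so the CPU submits once no matter how many objects are on screen.
void draw_instanced(GLuint vao, GLsizei vertsPerMesh, GLsizei instanceCount) {
    glBindVertexArray(vao);
    glDrawArraysInstanced(GL_TRIANGLES, 0, vertsPerMesh, instanceCount);
}
```

Either way the GPU shades the same pixels, which is why more draw-call headroom helps a weak CPU keep the GPU fed but does nothing for a fillrate-bound case like 12K.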


vrmlbasic

AMD is making its CPUs run faster to make up the performance gap, just like it is taking its GPUs to ridiculous temperatures to gain performance. I don't contest that.

I do think it takes more than just jacking up the clock speed to count as "playing Pentium 4". The Pentium 4's architecture was flawed beyond any software fix, whereas Piledriver's architecture doesn't seem to be flawed beyond a software fix; we just haven't gotten that software fix yet. Personally, I don't mind that AMD has to hurtle along at a breakneck 4GHz while Intel can achieve similar performance in the same price range by puttering along at a much slower speed. What we learned from the Pentium 4 era is that clock speed isn't worth much as a performance measurement between unlike chips.


AFDozerman

I don't know what the deal was, but those patches mentioned earlier were gimped.

http://techreport.com/review/21865/a-quick-look-at-bulldozer-thread-scheduling/2

Manual thread scheduling can give upwards of 30% over unpatched Windows 7 and 10-20% over poorly optimized manual thread scheduling:

"These results couldn't be much more definitive. In every case but one, distributing the threads one per module, and thus avoiding sharing, produces roughly 10-20% higher performance than packing the threads together on two modules."

I have reproduced these results on my own computer with a motherboard that allows me to disable/enable cores and modules arbitrarily.
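For anyone who wants to try the same experiment, here's a rough C++ sketch of what "manual thread scheduling" can look like on Windows. It's my own illustration, not the TechReport harness, and it assumes the common mapping where logical CPUs 2n and 2n+1 share a module (which is exactly the numbering question raised further down the thread), plus MSVC's std::thread, whose native_handle() is a Win32 HANDLE.

```cpp
// Sketch only: pin one worker thread per module instead of letting two
// workers share a module's front end and FPU.
#include <windows.h>
#include <cstdio>
#include <thread>
#include <vector>

static void busy_work(int id) {
    // Stand-in for an integer/FP-heavy worker; swap in a real benchmark kernel.
    volatile double acc = 0.0;
    for (long i = 0; i < 100000000L; ++i) acc += i * 0.5;
    std::printf("worker %d done\n", id);
}

int main() {
    const int workers = 4;  // four threads on a four-module FX part
    std::vector<std::thread> pool;
    for (int i = 0; i < workers; ++i) {
        pool.emplace_back(busy_work, i);
        // Assumed mapping: logical CPU 2*i is the first core of module i,
        // so this spreads the threads one per module.
        DWORD_PTR mask = DWORD_PTR(1) << (2 * i);
        SetThreadAffinityMask(pool.back().native_handle(), mask);
    }
    for (auto& t : pool) t.join();
    return 0;
}
```

Comparing runtimes with masks of 2*i (one per module) versus packing them onto two modules is essentially the experiment the TechReport article describes.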


vrmlbasic

If only Microsoft would get with the program. That article pins a lot of hope on Windows 8 fixing things, but we're up to 8.1 now and I haven't heard of Microsoft moving to fix its scheduler further.

I'm not sure that the article's idea of having Bulldozer/Piledriver piggyback on Hyper-Threading support is the best one, as it wouldn't give the OS the awareness to schedule two threads that use the same resources on the same module. I'm also not sure it would give the OS the awareness to give intensive, especially FLOP-intensive, threads an entire module.

Does every pair of consecutive cores in Windows' nomenclature even represent a module? Is module 1 made up of what Windows calls CPU 0 and CPU 1?


AFDozerman

All I know is that the new resource manager calls a single module one core. Does that mean that it is properly scheduling threads? No. We have no way to be sure without some proper testing.
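One way to take some of the guesswork out (a sketch of my own, not something from the thread): ask Windows for its processor topology and print which logical CPUs it groups into each reported core. On a Bulldozer/Piledriver box with the updated scheduling support, each "core" should show up with two logical processors if Windows is treating a module like an SMT pair.

```cpp
// Sketch only: enumerate RelationProcessorCore entries via the Win32
// topology API and print the logical-CPU mask for each reported core.
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD len = 0;
    GetLogicalProcessorInformation(nullptr, &len);  // query required buffer size
    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
        len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    if (!GetLogicalProcessorInformation(info.data(), &len)) return 1;

    int core = 0;
    for (const auto& e : info) {
        if (e.Relationship != RelationProcessorCore) continue;
        // Each set bit in ProcessorMask is a logical CPU belonging to this core.
        std::printf("core %d -> logical CPU mask 0x%llx\n",
                    core++, (unsigned long long)e.ProcessorMask);
    }
    return 0;
}
```

If CPU 0 and CPU 1 land in the same mask, then the "module 1 = CPU 0 & CPU 1" assumption above holds on that machine.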


AFDozerman

Well, anything that helps...

Now we just need to convince the world to manually schedule their threads and we'll see Bulldozer matching Sandy Bridge and Piledriver matching Ivy!!!

Too bad no one knows this is even possible. It could really shed some light on the ignorance people have about the whole Bulldozer line.


RUSENSITIVESWEETNESS

Remember when a 1024x768 flat screen LCD was over a grand? And that was cheap?


AFDozerman

How does this compare to the 12K setup Microsoft/AMD did that one time with a single 7970? What were the graphics settings, and how much more resource-intensive is Project CARS than Dirt 3?

Also, "suck it consoles"; best video ending comment ever.


vrmlbasic

I agree, "Suck it consoles!" was hilarious.

Also, watching this after slogging through an episode of The Big Bang Theory (remember when that show was funny?) was kinda weird.


AFDozerman

Yeah, that's one of those shows, like Weeds, that should have ended while it was still good but didn't.

Also, dear MPC: can we get some email alerts for comment responses? It gets annoying to keep checking.


squarebab

Gordon is the lone star of Maximum PC. Without his personality bashing its way through all the hype of the tech world, I don't think MaxPC and its podcast would be all that interesting.

"Suck it, consoles!"


John Pombrio

Crreerrriipppees! Four water-cooled Titans just to run this bad boy. MPC seriously needs to start evaluating the feasibility of adding a 4K monitor to our rigs.


Ghost XFX

...And the PC Master Race continues its reign.


Obsidian

And THIS is the reason the Oculus Rift is such a big leap forward.

Even if the Oculus Rift ends up being $500 or $600, it's a fraction of this price, and the user should only need one graphics card for an arguably better experience, due to the 3D enhancement offered by stereoscopic viewing.

Each Titan is still selling at $1,000, and this experience is brought to gamers with four of them. You'll need at least a 1000-watt power supply to get those to work, probably a 1200-watt unit to be safe.

We're talking over $7000 in hardware to get this experience, including the monitors.

Impressive, but, like you wrote, the price hurts.

Thanks for the coverage!


John Pombrio

The Rift is a terrific idea and eventually its time will come. But getting a device that can be worn for hours at a time is a serious leap in technology, and frankly, I still don't see it happening anytime soon. GREAT for demos and short play sessions, but doing a raid in WoW? Don't think so, not yet anyway.


USraging

If only I had a 4K monitor to watch this video on.


Mark17

The video's only 1080p, though. These guys are slackin'. C'mon guys, where's the 12K direct-feed video?


KCooper

I found this on the Wikipedia article about the petabyte the other day:

The 2009 movie Avatar is reported to have taken over 1 petabyte of local storage at Weta Digital for the rendering of the 3D CGI effects.

***

kc


Nixon

I still really want to play Project Cars. All I need is a steering wheel. :(


TheITGuy

Same here! I have one and follow the group on Steam, but haven't been able to do any of the testing. Looks great though.


LatiosXT

And to think that kind of fillrate was just being tapped 10 years ago on a GeForce FX 5900.


AFDozerman

What do you mean by that fillrate?