Ivy Bridge-E Review: Core i7-4960X

44 Comments

varnium

I wonder if there's a list of the best processors in the world that would help readers decide which ones to buy. Perhaps some kind of list like the one at http://www.worldrankingsystem.com ?

smarties

I use http://www.passmark.com/index.html which has lists comparing the performance of CPUs, GPUs, RAM, etc. according to benchmark results. It is very handy for distinguishing between products and finding out which ones deliver the highest performance, and it also provides lists of the best value per unit of benchmark performance.

Wadres

I currently use my rig only for gaming. I'm planning to upgrade from an i7 4770K (motherboard: MSI Z87-GD65 Gaming) to an i7 4960X. The whole reason for this upgrade is that I recently bought a couple of Titan Blacks for a dual-SLI configuration... I thought the 4770K would bottleneck the Titan Blacks, but now that I've seen this review I don't know what to do...

Should I or should I not upgrade to Ivy Bridge-E? Will the 4770K @ 4.2GHz be enough for dual-SLI Titan Blacks? I use a lot of antialiasing and I play on max settings, please help!

andru

One thing that I didn't see mentioned was whether the native PCIe 3.0 support is any advantage over the PCIe 3.0 we already saw on almost every LGA2011 board.

tsoucerso

Don't think Ivy Bridge-E is some leftover Ivy Bridge part. The new chip obviously doesn't include integrated graphics and is a native hexa-core part. The original Sandy Bridge-E hexa-core was a native octo-core with two of the cores switched off.

Baer

OK, so the announcement of the new Rampage IV Black Edition, the latest update and upgrade for the new 4900 series, plus the chip itself makes it a no-brainer. Unless money is the issue, this is the way to go for a maxed-out rig. Yes, some of the pluses are not huge, but they are there, and BIOS updates etc. will continue to make it better for those that want the best of the best.

Ninjawithagun

Definitely a no-brainer NOT TO UPGRADE if you already own an X79 mobo and CPU. Upgrading to the 4930K or 4960X and buying an antiquated-socket motherboard only makes sense if one is upgrading from the Kentsfield or Wolfdale CPU family (pre-Sandy Bridge era).

For those who have a Sandy Bridge or higher CPU and mobo setup, stick with what you have until Broadwell is released in 2015.

Lipman42

So based on some of the comments above, I can still wait to upgrade my i7 990X, other than getting better PCIe bandwidth for 2-3 card CrossFire setups with the new tech on the motherboards, etc.?

Ninjawithagun

NO! Don't upgrade. It's a waste of money. You will gain very little for the $1500 you are thinking about spending. I was in your shoes about a year and a half ago when I upgraded from a 980X setup to my current 3930K setup, and I can tell you now it was a waste of money and a terrible investment mistake. I should have just waited for Broadwell to be released in 2015. None of the upgrade options available now will give you the boost in performance you are looking for. At best, you get an incremental increase that will leave you disappointed.

hornfire3

typo found:

"For the LGA2011 testing we used an Asus Sabertooth X79 board. For LGA1155 we used an Asus P8Z77 Premium board and for LGA1155, we used an Asus Z87 Deluxe."

The Z87 uses LGA1150, not 1155. Fix that plz.

gordonung

Corrected. 

ANGER999

I have owned an X79 Sabertooth with a 3820 (running on the x125 boot strap) for about two years now, all on air... I was able to acquire 32GB (8 x 4GB) of Kingston HyperX double-tall RAM for under $130 at the time (after rebates)... I am totally happy with my purchase, but I am confused by this build going from medium, to nonexistent, to reappearing as a performance build in August... is this review an explanation for that happening???

and... lastly... Gordon is AWESOME.

AFDozerman

I see someone was up late at night; lots of typos!

All kidding aside, great review, MPC. Thanks for all of the hard work.

gordonung

Yeah, apologies. Just bad timing on a lot of stuff, so 1 a.m. makes things rougher.

AFDozerman

It's cool. We've all been there.

Jacker

Ya, tell me about it! My mom always makes me shave her legs late at night.

AFDozerman

Lolwut?

iplayoneasy

I guess you have to ask: when you bought that 3930K, did you say to yourself, "Wow, this is slow, I can't wait until Ivy Bridge-E comes out"? I'm just gonna rock my 3930K for a while. I can only see the OCD among us upgrading.

John Pombrio

Thanks Gordo! I was considering the Ivy Bridge-E for a long time, but you made it clear that for my workload, Haswell is the way to go. I cannot ask more from a review.

gordonung

Yeah, I have to say, you gotta really need those extra cores to make it pay off. The penalties of the old chipset and the price to get to six cores are going to be hard for a lot of folks to swallow. The per-core overclocking will take much of the sting out once the utilities mature for it, though.

nbrowser

Hrm, there's an error in the processor chart... the 990X didn't do dual-channel DDR3... oh no, it did triple-channel DDR3...

gordonung

Correcting it.

limitbreaker

Intel is really sitting on its hands, sadly... They really needed to push at least 8 cores with this update, but this isn't even a big enough jump from an i7 980X, let alone a 3960X. If only AMD could compete at that level...

Ghost XFX

Woooo! Smell that bacon, Intel fans! Roll around in all that bacon and cheese! Intel loves its fans! Nothing wrong with rolling in the bacon!

...Unless you have a dog.

No, look, AMD can't even make do with more than 6 cores. Intel is doing alright here. Until programmers take advantage of 4-6 core CPUs, there's really no need to ask for more right now.

Before AMD came out with their Bulldozer architecture, my hope and wish was that they would continue to focus on 4-core performance. But AMD wanted to be first to 8 cores. Hell, at one point, they were talking 12 cores! Come to find out, they can't even do 6 cores very well. So once again, I'm hoping that on their next go-around they focus on 4-core performance only. By doing that, they may just shock themselves by realizing the goal of taking back the market.

Intel has done just that, and they've only improved over time. They have the war chest, and they can only continue doing what they're doing right now, because AMD can't compete on that level, and programmers don't seem to have the fortitude to push them all the same. Everything is about single-threaded performance, and everything above 4 cores is more or less a novelty right now.

Ninjawithagun

ZZZzzzz... snore fest... that's all we need, software developers geeking out... ugh... JUST KIDDING! I actually work with a whole team of developers and 'sadly' understand most of what you all are discussing :P

H1N1theI

To be honest, there's no real good way to get into concurrency as a developer; multithreading a piece of software is hell, I mean...

|-| 3 |_ |_. Once you get into multithreading, you get horrible nightmares about race conditions, how to evenly distribute thread loads, scalability, deadlocks, livelocks; hell, even atomic operations screw over multithreading by serializing the bloody thing. We don't exactly have a very good paradigm for concurrency: monads are arcane, Intel's IA64 dependency bit trick is trivial, and most regular languages will always screw over concurrency.
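
To make the deadlock part concrete, here's a minimal, made-up C++11 sketch (compile with -pthread); none of this is from any real codebase:

```cpp
// Classic deadlock: two threads take the same two mutexes in opposite order.
#include <mutex>
#include <thread>

std::mutex m1, m2;

void worker_a() {
    std::lock_guard<std::mutex> g1(m1);   // A holds m1...
    std::lock_guard<std::mutex> g2(m2);   // ...and waits on m2
}

void worker_b() {
    std::lock_guard<std::mutex> g1(m2);   // B holds m2...
    std::lock_guard<std::mutex> g2(m1);   // ...and waits on m1 -- both freeze forever
}

// The usual fix: acquire both mutexes in one deadlock-free step with std::lock,
// so neither thread can hold one while waiting on the other.
void worker_safe() {
    std::lock(m1, m2);
    std::lock_guard<std::mutex> g1(m1, std::adopt_lock);
    std::lock_guard<std::mutex> g2(m2, std::adopt_lock);
}

int main() {
    std::thread t1(worker_safe), t2(worker_safe); // swap in worker_a/worker_b and it may hang
    t1.join();
    t2.join();
}
```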

Also, we run into a bootstrapping paradox: in order for chips to get more cores, software must become more threaded, and in order for software to become more threaded, devs need an audience that already has more cores. Unless you're using some exotic/scalable threading design like a thread pool, or manually spawning and joining your threads (more work than it's worth; see the sketch below), we won't get scalable software.
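
By "manually spawning and joining" I mean something like this minimal C++11 sketch (hypothetical per-element work; one chunk of the data per hardware thread):

```cpp
// Spawn-and-join: split a vector into chunks, one per hardware thread, then join.
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>

void process_chunk(std::vector<int>& data, size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i)
        data[i] *= 2;                      // stand-in for real per-element work
}

int main() {
    std::vector<int> data(1000000, 1);
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    size_t chunk = data.size() / n;

    std::vector<std::thread> threads;
    for (unsigned t = 0; t < n; ++t) {
        size_t begin = t * chunk;
        size_t end = (t == n - 1) ? data.size() : begin + chunk;
        threads.emplace_back(process_chunk, std::ref(data), begin, end);
    }
    for (auto& th : threads)               // wait for every worker to finish
        th.join();
}
```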

Anyways, there are a variety of hurdles on the way to 8 cores, not just lazy programmers and bad corporate strategies.

Also, speaking of which, AMD chips apparently perform extremely well in database access and things like that, huh.

QuantumCD

Also, don't forget that not every task can be split into 8 threads. The reason I got an AMD FX in one of my PCs is that they do just as well--if not better--than a similarly priced i3 or even an i5 when you are driving all 8 cores at max. A great example is CPU rendering. Your CPU clock speed is less relevant, and the task can easily be scaled up to 90+ threads (think multiple CPUs).

Oh, and thread pools aren't _that_ much more work, depending on your language/library :) I think the problem with big software and games is that the game engines they invested millions in 3-4 years ago were targeting quad cores as the extreme high end. 8-core processors simply weren't in the mainstream. I believe the new Frostbite 3 engine can support up to 8 threads for different tasks (physics, sound, networking, etc.), however. I'm sure other game engines are on the way.

I think the real problem is that, like I said, not everything can be efficiently split. As for the nightmares they inflict on developers... I've been there too :(

H1N1theI

Implementation is one thing, but design?

How does one balance the load on a thread pool?

One blocking task, and suddenly you lose a thread from the pool for a few milliseconds; then tasks can get starved, under-executed, the pool may empty, etc.

How does one design a good thread pool? :U (I've never had much experience in concurrency)
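
For reference, the bare-bones shape of a pool is just a shared task queue, a mutex and condition variable, and N workers. A minimal C++11 sketch, for illustration only (it has exactly the starvation hazard described above if a task blocks):

```cpp
// Bare-bones thread pool: N workers pulling from one shared task queue.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(size_t n) {
        for (size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { run(); });
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            done_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }

    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mtx_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // if this blocks, the worker is lost until it returns
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mtx_;
    std::condition_variable cv_;
    bool done_ = false;
};
```

Usage is just ThreadPool pool(4); pool.submit([]{ /* work */ }); -- real pools layer work stealing, futures, and timeouts on top of this skeleton.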

AFDozerman

Just out of curiosity, how helpful are APIs like OpenMP and OpenCL for multithreading? It doesn't sound like they would be much help when your threads are doing vastly different types of work, but they could make computing large datasets with similar blocks of data much easier.

QuantumCD

OpenMP is pretty useful for threading--I have played around with it in the past. OpenCL is more for massively parallel programming and executing code on the GPU with computation kernels (similar to CUDA), which is referred to as GPGPU. OpenCL/CUDA are for small tasks that can be massively parallel, like rendering. A lot of rendering/encoding can be done with CUDA now, simply because those tasks can make use of the GPU's computational ability. More complex operations have to take place on the CPU, though, so that's why we still have CPUs instead of all GPUs.
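
To give a taste of OpenMP, here's a minimal sketch (hypothetical per-pixel example; compile with -fopenmp on GCC/Clang): one pragma splits the loop iterations across cores.

```cpp
// One pragma parallelizes the loop; each thread gets a block of iterations.
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> pixels(1000000, 1.0f);

    // No locking needed: no two iterations touch the same element.
    #pragma omp parallel for
    for (int i = 0; i < static_cast<int>(pixels.size()); ++i)
        pixels[i] = pixels[i] * 0.5f + 0.1f;   // stand-in per-pixel work

    std::printf("up to %d threads available\n", omp_get_max_threads());
}
```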

A lot of the threading I personally use is for code that is almost completely separate. Say you are going to parse a giant file. Most higher-level threading APIs simply let you "fire and forget" a method in a different thread using the OS's global thread pool. A lot of GUI applications use threading in this way to prevent the UI from locking up when doing really large operations. Game engines commonly do rendering in another thread as well (AFAIK).
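
In plain C++11 the fire-and-forget pattern looks roughly like this (a made-up file-parsing example; a detached thread stands in for a global pool):

```cpp
// Push slow work onto a background thread so the main (UI) thread stays responsive.
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<bool> parse_done(false);

void parse_giant_file(const char* path) {
    (void)path;                                        // pretend to read and parse...
    std::this_thread::sleep_for(std::chrono::seconds(2));
    parse_done = true;                                 // signal completion via atomic flag
}

int main() {
    std::thread(parse_giant_file, "huge.log").detach();  // fire and forget

    while (!parse_done) {                              // the "UI loop" keeps running
        std::cout << "still responsive...\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
    std::cout << "parse finished\n";
}
```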

I think the point is that most things can't be broken down into 8 different tasks. 2, or even 4, is more reasonable. The exceptions, as mentioned, are tasks such as rendering. Specifically raytracing, where you can raytrace each pixel on a completely different thread. Raytracing can scale up to massive numbers of threads, as each operation could take minutes on its own, depending on the number of bounces. In a 4K image for a movie or something, you have hundreds of thousands--if not millions--of pixels to raytrace per frame.

The real nightmare of threading is communication and data sharing. That's when you have to start locking your data, etc. etc. Say you have a bit of data storing the result of a computation. If you have two threads trying to write to it at the same time, it can result in a race and produce undefined results. You use mutexes, atomics, etc. etc. (again with the etc.) to work around this.
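
Here's that exact failure mode in a minimal C++11 sketch; delete the lock line and two threads racing on the shared value start losing updates:

```cpp
// Shared-data race and the mutex fix.
#include <iostream>
#include <mutex>
#include <thread>

long result = 0;            // shared result of a computation
std::mutex result_mtx;

void accumulate() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(result_mtx);  // remove this line to race
        ++result;
    }
}

int main() {
    std::thread t1(accumulate), t2(accumulate);
    t1.join();
    t2.join();
    std::cout << result << "\n";  // always 200000 with the lock; who knows without
}
```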

To reiterate, the main problem is that most consumer applications simply can't make use of 8 cores. Even using 4 can be a challenge for some applications. Then you have the opposite extreme, where render engines eat up 101% of your CPU because they can max out all your threads.

Going back to your original question, there are a lot of threading libraries. Most OSes have threading libraries built into their native APIs. I'd recommend using a higher-level library if you can. They generally help you think your problems out better. The number one rule I follow is just to try to keep threads from ever "seeing" each other. Otherwise, you are going to have to pick up a book on intermediate-to-advanced threading and figure out atomics, mutexes, cross-thread communication, etc.

AFDozerman

Thanks. It's good to get a word or two from a pro. All of the coding I've ever done was either JS or Boo in Unity, and the thought of having to write my own standalone code was a bit scary to me. Within the last two months, though, I have been concentrating on learning C++ and want to start from the ground up with threading and such in mind, since it's apparently "the future".

H1N1theI

My suggestion to that:

Don't learn C++ with threading in mind.

If you need threading, learn Haskell, which does pretty cool things with monads.

QuantumCD

I would have to agree with H1N1theI here, AFDozerman. Haskell has built-in support for concurrency and parallelism. If you are going to learn C++, you should focus on other things (such as pointers, etc.). Threading in C++ isn't really a language feature so much as it is a library/OS API. Also, threading really isn't "the future". As I said, not everything can or needs to be threaded. In a lot of cases, low-level threading with locks, etc. is not necessary. Most of the time, you'll just want to have a completely separate thread run some heavy computations or something without ever touching shared data sources. Haskell is great if you are up to learning functional programming. Coming from JS and Boo (Python-like), functional programming might be a bit odd.

Kind of a tangent here, but are you focused on threading because Unity is not thread-safe, and you need threading for something in a game? Just a note about that here: I have done computation/file IO in separate threads using Mono in Unity. I haven't had problems with Unity data types throwing errors if you operate on them from another thread (Vector3s, Quaternions, etc.), but make sure you copy them before moving them to a new thread! If you don't explicitly copy them (C# assigns reference types by reference, I believe), i.e. create a new object with the new keyword and copy the values, it will try to access the actual object/value on the main thread from your thread. You also can't access GameObjects or use methods that access data in the main thread. For instance, FindObjectsOfType() and similar methods cannot be called from a thread, and that makes sense if their API isn't thread-safe. Unity's methods that you call are mostly implemented as internal method calls, and that's probably a factor in their API not being thread-safe. Their C++ backend or whatever they use probably doesn't play nicely with .NET threads.

Also unrelated, but Unity itself is multi-threaded, as they do split up rendering and scripting. So even if your scripting thread is under heavy load, I'm pretty sure the game will continue to render. If you check the stats window in Unity, you can see how many milliseconds the main thread (scripts) takes to execute. About 16.7ms per frame is 60FPS. Anything lower is better, and higher than that will result in framerates below 60FPS. And this is Maximum PC; we don't like to see framerates below 60 ;)

AFDozerman

I took a look at Haskell and I like what I saw, although I think it may take a while to wrap my head around. I think I'm gonna hold off on learning multithreading until I develop a better understanding of C++, and then start on Haskell.

As far as Unity development goes, this is completely unrelated. None of what I do is professional; I am just "trying to find my place", experimenting with different skill sets to try my hand and see what I really like doing.

QuantumCD

I'm not a professional Unity developer either, but I find game development in Unity a lot easier than working with OpenGL and C/C++ :)

AFDozerman

I can't even begin to imagine.

vrmlbasic

For those of us who don't have an encyclopedic knowledge of Intel's chip nomenclature: could you, in future benchmark articles, kindly include a tag next to each entry on the benchmark charts that indicates the architecture of that chip? For this article I had to keep cross-referencing to find out which chip uses which architecture.

I enjoyed the article (and I do read the text below the bar charts, lol). I like that RE6 has become an official benchmark, as it is a game that I (I guess admitting this isn't too shameful, right?) have in my Steam library, allowing me to match up my setup with the latest-'n-greatest in one more test. Thanks.

I am somewhat saddened that there are CPU benchmarks out there that don't truly test a CPU to its fullest. Shogun dev oversight :(

hornfire3

Gordon,

In your benchmark section you pitted the 4960X against the 3930K and a couple of other CPUs. Where is the benchmark test between the 4960X, the 3970X, and the 3960X? Just curious...

hornfire3

Whoops, never mind. I forgot to read the last page. Ignore this comment, then.

Baer

Excellent write-up.
I had already mostly decided on the 4930K. I do work with Sony Vegas and I do a lot of database work. I also run Surround across three monitors at 5760x1200, and I have found that I really like Surround for work as well as gaming.
As for the mobo, I will OC, but not to any extreme; however, I have chosen to go to a Rampage IV (I am still using a Rampage II with an i7 920, so I will see a big increase in capability). I am also a sound freak, and I can tell the difference, so I will be using a discrete sound card as well as a pair of GTX 780s for Surround.
This is great info, and I read nothing that says my decision is not the right one for me.
Again, excellent write-up, Gordon. Keep 'em coming.

noeldillabough

I wonder what's the best motherboard for this chip? I'll be using a discrete RAID controller and sound card, so features can be minimal; I just want the new core-by-core overclocking, lots of RAM, and of course legendary stability!

(Not asking for much, am I?)