Hmm... seems like your reply posts just came through now...
I was looking at the OCZ Vertex SSD 120GB and it was only the large number of complaints that got me thinking of the Crucial M4. (fear is not a good reason for doing anything, imho.)
OCZ SSDs are a great value when you get them at a decent price, and they're always on sale. The Vertex 3s are nice for the price, very speedy, and they use the more expensive synchronous NAND. Then there are the Agility 3s, which use the cheaper asynchronous NAND but are still a good value. They have their differences in performance, but honestly both will work. And yes, take user reviews on newegg or other sites with a grain of salt, because you don't know what firmware version they were running or what other variables factored into them bricking their SSDs. Everyone I know who runs SSDs, from the very first Vertex to the Vertex 3 Max IOPS versions, has never had any issues with them; in fact, I don't know anyone personally who has actually bricked an SSD.
But the SSD market is huge and growing, and prices keep dropping, so it's very exciting to see where things will go, since Intel basically said they will focus on reducing the cost per GB while maintaining speed and reliability, rather than just focusing on speed.
What about using the PCIe SSDs? Don't those make more sense?
They are extremely fast and not limited to SATA's 6Gb/sec ceiling (btw, the onboard ICH10r on all current Intel boards limits every SATA drive to 6Gb/sec; that's what the chipset allocates to the controller chip), since they can run at x4, x8, and x16 lane widths, but you pay a premium for that performance. OCZ makes their RevoDrive-branded PCIe SSDs; sometimes they get down close to the price points of high-end 2.5" SSDs, but they are still costly. These are really meant more for enterprise/business solutions like servers and databases; they're actually not meant to be used as storage devices so much as to augment a server's system memory once it hits its onboard controller limit.
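To put rough numbers on the bandwidth gap, here's a back-of-the-envelope sketch. The encoding overheads (8b/10b for SATA III and PCIe 2.0) are real spec figures; everything else here is just illustrative arithmetic, not benchmarks of any particular drive:

```python
# Rough per-direction bandwidth ceilings: why PCIe SSDs can outrun SATA ones.
# SATA III and PCIe 2.0 both use 8b/10b encoding, i.e. 10 line bits
# are transmitted for every 8 bits (1 byte) of actual data.

def sata3_mb_s():
    # SATA III: 6 Gb/s line rate / 10 line bits per byte = 600 MB/s usable
    return 6e9 / 10 / 1e6

def pcie2_mb_s(lanes):
    # PCIe 2.0: 5 GT/s per lane / 10 = 500 MB/s per lane, times lane count
    return lanes * 5e9 / 10 / 1e6

print(sata3_mb_s())    # 600.0  -> the ceiling for ANY SATA SSD
print(pcie2_mb_s(4))   # 2000.0 -> an x4 PCIe card
print(pcie2_mb_s(16))  # 8000.0 -> a full x16 slot
```

So even a modest x4 card has over three times the raw ceiling of the SATA bus, which is where the PCIe SSD premium comes from.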
Does the lower "ns" number of the Ivy CPU's mean it will be less suited to Highend gaming?
I'm going to assume you mean nm, or nanometer, not ns (nanosecond)... The nm spec refers to the size of each transistor in the processor. Smaller means less power, less heat, faster switching (usually), and less travel time for the electrons (depending on the architecture too). But the downside with smaller transistors is that they run into the physical limits of physics, and even quantum physics, since at this point we're dealing at close to the atomic level.
The biggest trouble with this is power leakage through the gates (as in, power leaking through the gate when the transistor is in the off position, because the gate is too thin/not insulating enough) and electron tunneling (electrons will randomly jump from one point to another; we still can't fully control this, and it gets into heavy quantum physics and the relationships among other subatomic particles -- yes, there are more than 100 other little bits and pieces besides the electrons, neutrons, and protons you learned about in school, and sometimes they do what they want). So it takes research and money to figure this kind of stuff out, with people who are smart... like Einstein smart.
Once they figure out leakage and limit electron tunneling issues (or even bit corruption), chips based on smaller transistors are generally better and faster, and have more tolerance for heat, which in turn makes them better at overclocking.
Now with Ivy Bridge, not only did they shrink the process from 32nm to 22nm (which is no small achievement), they're also using a new technology called 3D tri-gate transistors. All you need to know is that it's supposed to be much more efficient, faster, require less power, and be better at stable overclocks.
I am surprised that you suggest the i5. While good for gaming atm, I figured that the i7 would keep my PC current longer and help with win7 and other apps down the road
I only suggest the mid-range i5 because it's just a quad core. The i7 variants are little more than i5s with hyper-threading for better efficiency in multi-threaded apps, which most games can't really take advantage of. Even now, they barely tap into modern dual or quad cores, like I said above. It's just that in the last 3 years, games went from being CPU limited back to being GPU limited. It could change back to being CPU limited in another 3 years if they design the game engines to take fuller advantage of the CPU. It just depends on what work they are trying to do and how they code it; very complicated stuff.
I find game designers love to make their platforms as system heavy as they can get away with (instead of just trying to create a good game), so I figure that i5 will become outdated before the i7.
Game designers are not the same as game programmers. Programmers are the ones who build the foundation of the graphics engine and the capabilities of the program. Even then, they have to work closely with engineers to make sure their code takes advantage of the CPU's/GPU's abilities through the extensions and APIs available within the programming language they are bound to.
Game designers are then told by the programmers what the system can and can't handle, and go from there, seeing what they can get away with while trying to make things as easy as possible on themselves (that's where programmable coding/shaders come into play). The real problem is scaling: spreading workloads across 4 or more cores is a daunting task, since you have to make the system "smart" enough to figure out scheduling and balance your processing needs.
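Here's a minimal sketch of the easy end of that scaling problem: statically splitting one workload evenly across cores. The function names (`simulate_chunk`, `run_parallel`) are made up for illustration; a real game engine needs dynamic scheduling (task stealing, dependency tracking between subsystems), which is exactly the hard part described above:

```python
# Naive static partitioning of a workload across N cores.
from multiprocessing import Pool

def simulate_chunk(entities):
    # stand-in for per-entity game work (physics, AI, etc.)
    return sum(e * e for e in entities)

def run_parallel(entities, cores=4):
    # carve the entity list into one equal slice per core
    size = len(entities) // cores
    chunks = [entities[i * size:(i + 1) * size] for i in range(cores)]
    chunks[-1].extend(entities[cores * size:])  # leftovers go to the last core
    with Pool(cores) as pool:
        # each worker process handles one chunk; results are combined at the end
        return sum(pool.map(simulate_chunk, chunks))

if __name__ == "__main__":
    work = list(range(10_000))
    # parallel result matches the single-core result
    assert run_parallel(work) == simulate_chunk(work)
```

Note what this sketch punts on: if one chunk happens to be much heavier than the others (say, all the physics objects clustered in one area), three cores sit idle waiting on the fourth, which is why real engines have to balance load dynamically.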
This was actually the biggest reason the PS3 was so damn hard to develop for: Sony's SDK was very hard to learn, and the programmers/game designers had to balance the CPU load basically by hand, leading to longer development times for games. Even now, they barely tap the power of the PS3's Cell-based CPU (which is actually an octo-core with 1 core disabled).
Will PCIe 3 not be used for other things?
It probably will be used on some other PCIe devices, but realistically, it's meant for graphics. The only other devices I know of that would really need that kind of bandwidth are RAID/SAS controller boards for servers that handle 60+ disk arrays, or specialized SSDs that cost $10k+. Even then, those types of applications would most likely use non-standard hardware.
I dont know why, but I have developed an emotional attachment to the ASUS Sabertooth P67 MoBo. One hopes the new Z77's will not start off all buggy.
It's just a brand name... There are better mobos on the market, and the Sabertooth is considered discontinued/at the end of its product life cycle by Asus.
If you can wait, I'd still recommend it, but if you really want to game now, you could still easily build an awesome gaming rig on a $2500 budget. You could look at building on the LGA 2011 platform and go dual-card 7950/7970; prices for SB-E won't change much, so it really doesn't matter if you build that now or in June ('cept for what the new GTX 680s will be like in terms of benchmarks, and even then, two 7900-series cards will still perform very well for the next few years to come).
i7 3930k build idea:
The biggest issue with using PCPartPicker for Canada is that newegg's outrageous shipping prices aren't included, nor is tax. But it's something you can look at for $2500ish and step down from if you need to. Probably go with the more manageable upcoming AMD 7850/70 at 250-350 each, or a GTX 660 at 320 each... provided we see some comparable benchmarks. Right now, the midrange GPUs I'd use for this kind of build are just out of stock and/or at a bad price/performance ratio, since everyone is waiting for the new stuff.
And honestly, if you saved some money on the GPUs and had cash left over, I'd upgrade the CPU cooler to something like the H80 or H100 all-in-one liquid coolers, though a cheap CM Hyper 212+ or EVO would work... 'cept that I can't seem to find a supplier in Canada that sells them anywhere near the price we get here, and screw paying close to $60 for the EVO... since you also need to buy the bracket ($5 + $10 shipping... yes, that makes sense...)