This test is somewhat analogous to the GPU comparison, as most people would assume that two small-capacity SSDs in RAID 0 would outperform a single 256GB SSD. The little SSDs have a performance penalty out of the gate, though: SSD performance usually improves as capacity increases, because higher-capacity drives carry more NAND dies and the controller can read and write to more of them in parallel, just as higher-density platters increase hard drive performance. This is not a universal truth, however; whether performance scales with an SSD's capacity depends on the drive's firmware, NAND flash, and other factors. But in general, the higher the capacity of a drive, the better its performance. The question, then, is this: Is the performance advantage of the single large drive enough to outpace two little drives in RAID 0?
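The capacity-to-performance link comes down to parallelism: the controller can only go as fast as the number of NAND dies it can keep busy at once, up to its own throughput ceiling. Here's a toy model of that idea; the die counts and speeds are illustrative assumptions, not specs for any real drive.

```python
# Toy model: SSD throughput scales with the number of NAND dies the
# controller can address in parallel, capped by the controller itself.
# All figures below are illustrative assumptions, not real drive specs.

def ssd_throughput(dies, per_die_mb_s=70, controller_cap_mb_s=550):
    """Aggregate sequential throughput in MB/s for a drive with `dies` dies."""
    return min(dies * per_die_mb_s, controller_cap_mb_s)

small = ssd_throughput(dies=4)  # hypothetical 128GB drive: 4 dies
large = ssd_throughput(dies=8)  # hypothetical 256GB drive: 8 dies

print(small)  # 280 -- die-limited
print(large)  # 550 -- controller-limited
```

The same model hints at why two small drives in RAID 0 can still win: each drive brings its own controller, so the pair isn't stuck behind a single controller's cap.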
Before we jump into the numbers, we have to say a few things about SSD RAID. First, with the advent of SSDs, RAID setups are not as common as they were in the HDD days, at least judging by what we're seeing from boutique system builders. The main reason is that it's really not necessary: a stand-alone SSD is already extremely fast, and adding more speed to an already-fast equation isn't a big priority for a lot of home users (not necessarily our audience, mind you). More importantly, the biggest issue with SSD RAID is that in most configurations (Intel 7 and 8 series chipsets excluded) the operating system is unable to pass the Trim command through the RAID controller, so the OS can't tell the drives how to keep themselves optimized. That can degrade the array's performance in the long run, undermining the point of the whole operation. Now, it's true that the drive's controller will perform its own routine garbage collection, but how that differs from Trim, and whether it manages the drive equally well, is unclear. Regardless, the lack of Trim support on RAID 0 is a scary thing for a lot of people, so it's one of the reasons SSD RAID often gets avoided. Personally, we've never seen it cause any problems, so we are fine with it. We even ran it in our Dream Machine 2013, and it rocked the Labizzle. So, even though people will say SSD RAID is bad because there's no Trim support, we've never been able to verify exactly what that "bad" means long-term.
It’s David and Goliath all over again, as two puny SSDs take on a bigger, badder drive.
The Test: We plugged in two Corsair Neutron SSDs, set the SATA controller to RAID, created our array with a 64K stripe size, and then ran all of our tests off an Intel 520 SSD boot drive. We used the same protocol for the single drive.
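To see what that 64K stripe size actually does, here's a minimal sketch of RAID 0 addressing: logical offsets are chopped into stripes that alternate across the member drives. This is the general striping scheme, not Intel's actual RST implementation.

```python
STRIPE_SIZE = 64 * 1024  # 64K stripe size, as used in our array
NUM_DRIVES = 2           # two Corsair Neutron 128GB drives

def locate(offset):
    """Map a logical byte offset to (drive index, byte offset on that drive)."""
    stripe = offset // STRIPE_SIZE          # which stripe the byte falls in
    drive = stripe % NUM_DRIVES             # stripes alternate across drives
    stripe_on_drive = stripe // NUM_DRIVES  # stripes before it on that drive
    return drive, stripe_on_drive * STRIPE_SIZE + offset % STRIPE_SIZE

print(locate(0))           # (0, 0)
print(locate(64 * 1024))   # (1, 0) -- second stripe lands on drive 1
print(locate(128 * 1024))  # (0, 65536)
```

Note the consequence: a large sequential transfer touches both drives at once, while a single 4K write lands entirely on one drive, which is why RAID 0 shines on sequential work but not on small random I/O.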
The Results: The results of this test show a pretty clear advantage for the RAIDed SSDs, as they were faster in seven out of nine tests. That's not surprising, however, as RAID 0 has always benchmarked well. That said, the single 256GB Corsair Neutron drive came damned close to the RAID in several tests, including CrystalDiskMark, ATTO at a queue depth of four, and AS SSD. It's not completely an open-and-shut case, though, because the RAID scored poorly in the PCMark Vantage "real-world" benchmark, with just one-third of the score of the single drive. That's cause for concern, but with these scripted tests it can be tough to tell exactly where things went wrong, since they just run and then spit out a score. Also, the big advantage of RAID is that it boosts sequential-read and -write speeds since you have two drives working in parallel (conversely, you typically won't see a big boost for the small random writes made by the OS). Yet the SSDs in RAID were actually slower than the single SSD in our Sony Vegas "real-world" 20GB file encode test, which is where they should have had a sizable advantage. For now, we'll say this much: The RAID numbers look good, but more "real-world" investigation is required before we can tell you one is better than the other.
| Benchmark | 1x Corsair Neutron 256GB | 2x Corsair Neutron 128GB RAID 0 |
| --- | --- | --- |
| Avg. Sustained Read (MB/s) | 512 | **593** |
| Avg. Sustained Write (MB/s) | | |
| **AS SSD - Compressed Data** | | |
| Avg. Sustained Read (MB/s) | 506 | **647** |
| Avg. Sustained Write (MB/s) | 318 | **368** |
| 64KB File Read (MB/s, 4QD) | 436 | **934** |
| 64KB File Write (MB/s, 4QD) | **516** | 501 |
| 4KB Random Write 32QD | | |
| PCMark Vantage x64 | | |
| Sony Vegas Pro 9 Write (sec) | **343** | 429 |
Best scores are bolded. All tests conducted on our hard-drive test bench, which consists of a Gigabyte Z77X-UP4 motherboard, Intel Core i5-3470 3.2GHz CPU, 8GB of RAM, Intel 520 Series SSD, and a Cooler Master 450W power supply.
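To put the table in perspective, here are the RAID pair's gains and losses computed from the scores above (rows with missing scores are omitted). Positive percentages favor the RAID 0 array; the Sony Vegas row is inverted because lower seconds are better.

```python
# (single 256GB score, RAID 0 score) pairs from the results table
results = {
    "Avg. Sustained Read (MB/s)":     (512, 593),
    "AS SSD Compressed Read (MB/s)":  (506, 647),
    "AS SSD Compressed Write (MB/s)": (318, 368),
    "64KB File Read (MB/s, 4QD)":     (436, 934),
    "64KB File Write (MB/s, 4QD)":    (516, 501),
}

for name, (single, raid) in results.items():
    gain = (raid - single) / single * 100  # positive favors RAID 0
    print(f"{name}: {gain:+.1f}%")

# Sony Vegas encode is measured in seconds, so lower is better;
# negative here means the RAID array was slower than the single drive:
vegas_gain = (343 - 429) / 429 * 100  # RAID took 429s vs. 343s
print(f"Sony Vegas Pro 9 Write: {vegas_gain:+.1f}%")
```

Seen this way, the spread is striking: the RAID pair more than doubles 64KB file reads, yet gives back roughly 20 percent on the real-world encode.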
There’s a tendency for testers to dismiss “synthetic” benchmarks as having no value whatsoever, but that attitude is misplaced. Synthetics got their bad name in the 1990s, when they were the only game in town for testing hardware. Hardware makers soon started to optimize for them, and on occasion, those actions would actually hurt performance in real games and applications.
The 1990s are long behind us, though, and benchmarks and the benchmarking community have matured to the point that synthetics can offer very useful metrics when measuring the performance of a single component or system. At the same time, real-world benchmarks aren't untouchable. If a developer receives funding or engineering support from a hardware maker to optimize a game or app, does that really make it neutral? Some argue it doesn't matter, because any "cheating" that improves performance ultimately benefits users. Except it only benefits those using a certain piece of hardware.
In the end, it’s probably more important to understand the nuances of each benchmark and how to apply them when testing hardware. SiSoft Sandra, for example, is a popular synthetic benchmark with a slew of tests for various components. We use it for memory bandwidth testing, for which it is invaluable—as long as the results are put in the right context. A doubling of main system memory bandwidth, for example, doesn’t mean you get a doubling of performance in games and apps. Of course, the same caveats apply to real-world benchmarks, too.
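That bandwidth caveat is easy to quantify with a simple Amdahl's-law-style model: if only a fraction of an application's runtime is actually memory-bandwidth-bound, doubling bandwidth speeds up only that fraction. The 30 percent figure below is an illustrative assumption, not a measured value.

```python
def overall_speedup(bound_fraction, bandwidth_ratio):
    """Amdahl's-law speedup when only `bound_fraction` of runtime
    scales with memory bandwidth (improved by `bandwidth_ratio`)."""
    return 1 / ((1 - bound_fraction) + bound_fraction / bandwidth_ratio)

# Doubling bandwidth (ratio 2.0) in an app that's 30% bandwidth-bound:
print(round(overall_speedup(0.30, 2.0), 3))  # 1.176 -- nowhere near 2x
```

So a doubled Sandra bandwidth score is real and meaningful, but it translates to a much smaller gain in any app that spends most of its time doing something other than moving memory.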
Even seasoned veterans are tripped up by benchmarking pitfalls, so beginners should be especially wary of making mistakes. Here are a few tips to help you on your own testing journey.
Put away your jump-to-conclusions mat. If you set condition A and see a massive boost, or no difference at all when you were expecting one, don't immediately attribute it to the hardware. Quite often, it's the tester introducing errors into the test conditions that causes the result. Double-check your settings, re-run your tests, and then look for results from others who have tested similar hardware to use as sanity-check numbers.
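One practical sanity check is run-to-run variance: repeat each benchmark several times and look at the spread before crediting, or blaming, the hardware. A minimal sketch, using made-up frame-rate scores for illustration:

```python
import statistics

def summarize(runs):
    """Mean, stdev, and coefficient of variation for repeated benchmark runs."""
    mean = statistics.mean(runs)
    stdev = statistics.stdev(runs)
    return mean, stdev, stdev / mean * 100  # CV as a percentage

# Hypothetical FPS scores from five runs of the same test:
runs = [98.2, 97.5, 99.1, 54.0, 98.8]  # that 54.0 is a red flag

mean, stdev, cv = summarize(runs)
print(f"mean={mean:.1f}  stdev={stdev:.1f}  CV={cv:.1f}%")
# A CV this high usually means a background task or a settings slip,
# not the hardware -- re-check the test conditions and re-run.
```

A tight spread (a CV of a few percent) means the numbers are trustworthy; a wild outlier like the one above means investigate before publishing anything.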
When trying to compare one platform with another (certainly not ideal)—say, a GPU in system A against a GPU in system B—be especially wary of the differences that can result simply from using two different PCs, and try to make them as similar as possible. From drivers to BIOS to CPU and heatsink—everything should match. You may even want to put the same GPU in both systems to make sure the results are consistent.
Use the right benchmark for the hardware. Running Cinebench 11.5—a CPU-centric test—to review memory, for example, would be odd. A better fit would be applications that are more memory-bandwidth sensitive, such as encoding, compression, synthetic RAM tests, or gaming.
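For memory specifically, even a crude copy-bandwidth microbenchmark is a better fit than a CPU render test. This sketch times large in-memory buffer copies; it's a rough, single-threaded estimate, nothing like Sandra's tuned multi-threaded tests, and absolute numbers will vary by machine.

```python
import time

def copy_bandwidth_mb_s(size_mb=256, passes=5):
    """Rough memory-copy bandwidth: time several large buffer copies
    and report the best pass in MB/s."""
    src = bytes(size_mb * 1024 * 1024)  # zero-filled source buffer
    best = float("inf")
    for _ in range(passes):
        start = time.perf_counter()
        dst = bytearray(src)            # bytearray() forces a full copy
        best = min(best, time.perf_counter() - start)
    return size_mb / best               # MB copied per second (best pass)

print(f"{copy_bandwidth_mb_s():.0f} MB/s")
```

Taking the best of several passes filters out interference from background tasks, the same principle behind re-running any benchmark before trusting it.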
Be honest. Sometimes, when you shell out for new hardware, you want it to be faster because no one wants to pay through the nose to see no difference. Make sure your own feelings toward the hardware aren’t coloring the results.