Cheating on the PC has been largely tamed by the move from the early days of purely synthetic benchmarks to an emphasis on “real-world” tests. The shift came after the benchmarking community began seeing driver optimizations that inflated benchmark scores while actually hurting gaming performance. The theory behind real-world testing is that if a vendor “optimizes” for a game, the end user still benefits: call it cheating or call it optimizing, the result is a better experience for the consumer. At least, that’s the theory. Reality doesn’t always cooperate.
More than a decade ago, ATI got in hot water for fudging performance numbers in Quake III Arena.
One of the most famous cases of “optimizations” involved Quake III Arena. Tech site HardOCP.com found that renaming the game’s executable from Quake3.exe to Quack3.exe caused the performance of the ATI Radeon 8500 to drop; renaming it back restored the higher scores. Further testing by others found that the “optimization” appeared to come at the cost of image quality.
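To see why a simple rename defeated the driver’s detection, consider how name-based application sniffing might work. The sketch below is a minimal, hypothetical illustration in C, not ATI’s actual driver code; the function name and structure are ours.

```c
#include <stdbool.h>
#include <stdio.h>
#include <strings.h>  /* strcasecmp (POSIX) */

/* Purely illustrative -- not ATI's actual driver code. A driver could
 * enable a special rendering path only when the running executable
 * matches a known name. Renaming Quake3.exe to Quack3.exe makes the
 * comparison fail, which is exactly how HardOCP exposed the behavior. */
static bool use_game_specific_path(const char *exe_name)
{
    return strcasecmp(exe_name, "quake3.exe") == 0;
}

int main(void)
{
    printf("%d\n", use_game_specific_path("Quake3.exe")); /* 1: "optimized" path */
    printf("%d\n", use_game_specific_path("Quack3.exe")); /* 0: generic path */
    return 0;
}
```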
ATI defended itself by saying that it was indeed an optimization, made to give gamers the best combination of performance and visual quality. But the fact that people still remember the incident more than 12 years later tells you how history has judged it.
Years ago, Intel was caught up in a benchmark brouhaha of its own when it was found that applications compiled with Intel’s compiler didn’t properly use Streaming SIMD Extensions 2 (SSE2) on AMD CPUs that had the feature. The only way to enable the support was to make the AMD CPU report itself to the application as an Intel CPU with SSE2. The end result: even if an AMD CPU supported SSE2, an application compiled with Intel’s compiler would run far slower, taking a fallback code path without SSE2. This was, in fact, one allegation in AMD’s antitrust suit against Intel, which the two companies eventually agreed to settle, with AMD receiving a $1.25 billion payment.
But showing just how gray “optimizations” can be, Intel’s defenders argue that its C++ compiler is specifically designed for Intel CPUs, and that it’s not Intel’s job to validate AMD CPUs with a tool built to extract the most performance from Intel silicon. Others argue that Intel’s footprint on the industry is so large, and the compiler’s behavior so contrary to Intel’s own guidance to check for CPU feature-set support explicitly rather than relying on the CPUID vendor string, that the only explanation is blatant cheating.
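The distinction matters in code. On x86, CPUID leaf 1 reports SSE2 support in bit 26 of EDX on any vendor’s CPU, while the vendor ID string from leaf 0 reads “GenuineIntel” only on Intel parts. Here is a minimal sketch of both checks, assuming GCC or Clang’s <cpuid.h> wrapper on an x86 machine:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <cpuid.h>  /* GCC/Clang x86 intrinsic wrapper */

/* Feature-based check: ask the CPU whether it supports SSE2.
 * CPUID leaf 1 reports SSE2 in bit 26 of EDX, regardless of vendor. */
static bool has_sse2(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return false;
    return (edx & (1u << 26)) != 0;
}

/* Vendor-based check: the vendor ID string from CPUID leaf 0 reads
 * "GenuineIntel" only on Intel parts. An AMD CPU with full SSE2
 * support fails this test and would fall back to slower code. */
static bool is_genuine_intel(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return false;
    memcpy(vendor + 0, &ebx, 4);  /* vendor string is stored in */
    memcpy(vendor + 4, &edx, 4);  /* EBX, EDX, ECX order        */
    memcpy(vendor + 8, &ecx, 4);
    return strcmp(vendor, "GenuineIntel") == 0;
}

int main(void)
{
    printf("SSE2 supported: %d\n", has_sse2());
    printf("GenuineIntel:   %d\n", is_genuine_intel());
    return 0;
}
```

A dispatcher that keys only on has_sse2() gives AMD chips the fast path; one that also requires is_genuine_intel() does not, which is the crux of the complaint.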
On the PC, though, these incidents are more the exception than the rule, thanks to the bad PR they usually generate and a generally skeptical press. Reliance on real applications, such as timing how long it takes to encode a video with Handbrake, has also kept benchmark controversies to a minimum lately.
That’s not the case with tablets and smartphones right now. Samsung’s name was recently dragged through the mud over allegations that the Galaxy S4 and Galaxy Note 3 were maxing out cores and clock speeds, but only during popular benchmarks. The practice apparently wasn’t confined to Samsung, either: Anandtech.com found that multiple vendors, including Asus, HTC, and LG, were targeting benchmarks.
As the first company fingered for optimizing solely for benchmarks, Samsung has denied that it’s intentionally trying to cheat, saying it only wants to deliver the highest performance under stressful workloads. After all, when you’re running a test that’s supposed to measure an SoC’s theoretical performance, don’t you want the SoC running at maximum clock speed with all of its cores active?
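The alleged mechanism is conceptually simple: compare the foreground app’s identity against a list of known benchmarks and, on a match, pin the CPU to its fastest settings. The sketch below is a hypothetical illustration in C, not any vendor’s actual firmware; the package names and sysfs path are examples for demonstration only.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical illustration of benchmark "boosting" -- not any
 * vendor's actual code. Package names are examples only. */
static const char *benchmark_apps[] = {
    "com.antutu.ABenchMark",   /* example benchmark package name */
    "com.quicinc.vellamo",     /* example benchmark package name */
};

static bool is_benchmark(const char *foreground_app)
{
    size_t n = sizeof(benchmark_apps) / sizeof(benchmark_apps[0]);
    for (size_t i = 0; i < n; i++) {
        if (strcmp(foreground_app, benchmark_apps[i]) == 0)
            return true;
    }
    return false;
}

static void set_governor(const char *governor)
{
    /* Typical Linux cpufreq sysfs path; exact paths vary by device. */
    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "w");
    if (f) {
        fputs(governor, f);
        fclose(f);
    }
}

int main(void)
{
    const char *app = "com.antutu.ABenchMark";  /* pretend foreground app */
    /* Pin cores to max clocks only when a known benchmark is detected. */
    set_governor(is_benchmark(app) ? "performance" : "interactive");
    return 0;
}
```

The gray area is the trigger: keying the boost off an app’s identity rather than its actual workload is precisely what critics call cheating.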
In the end, Intel argues that synthetic mobile benchmarks are still misleading. Intel’s Francois Piednoel puts it bluntly:
“It’s not because you have 25 potatoes that you have a good cell phone,” Piednoel said. “At the end of the day we are asking you to look at the new breed of benchmarks coming from benchmark vendors. Try to measure user experience, stop trying to measure potatoes.”