Let's set aside Crysis, heavy encoding, and the few other specialized tasks capable of making a high-end rig writhe in agony. For everything else, we're at a point where the software needs to catch up with the hardware, and that hasn't always been the case. Remember when your anti-virus program would kick in, preventing you from opening a Word document or performing other mundane tasks with any sense of urgency? Neither Intel's brute-force, gazillion-stage pipeline nor AMD's Rainman approach to efficiency was enough to get over the performance hump, and it took the advent of multiple-core processors to blow the doors open on multitasking.
Now that dual- and quad-core processors are mainstream parts, the roles have been reversed. Only a handful of programs have been developed to intelligently utilize additional cores, and even fewer take advantage of the additional computing power effectively. Toss benchmarking by the wayside and you probably won't be able to tell a dual-core E8200 (2.66GHz) system from one equipped with a quad-core Q9450 (2.66GHz). For that to change, developers must learn how to program not only for today's hardware, but tomorrow's too.
Lest there be any doubt what the future will bring, Intel is warning developers that they need to "start thinking about tens, hundreds, and thousands of cores now in their algorithmic development and deployment pipeline." In other words, multiple cores are the wave of the future, and the company has no plans of changing direction after Dunnington, its upcoming six-core processor.
Image Credit: TechReport
According to Intel, developers must start writing code capable of scaling performance beyond existing core counts, so that an application that runs well on a quad-core processor will run even better on 8, 10, or even 100 cores. Like the GHz wars of yesteryear, more cores should equate to better performance. Ray tracing, a term you'll undoubtedly become more familiar with as 3D gaming continues to evolve, already scales well to additional cores, but most applications can't make the same claim, and that poses a problem.
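To make the idea concrete, here's a minimal sketch (in Python, with hypothetical function names) of what core-count-agnostic code looks like: the work is split into many small chunks, and a process pool sized to whatever the machine offers picks them up. The same code runs on two cores or a hundred without modification, which is the scaling behavior Intel is asking for.

```python
import os
from concurrent.futures import ProcessPoolExecutor

def sum_of_squares(chunk):
    """CPU-bound work performed on one chunk of the input."""
    return sum(n * n for n in chunk)

def parallel_sum_of_squares(numbers, chunk_size=1000):
    """Split the input into chunks and farm them out to a process pool.

    The pool sizes itself to the cores actually present, so the code
    isn't written against any particular core count.
    """
    chunks = [numbers[i:i + chunk_size]
              for i in range(0, len(numbers), chunk_size)]
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return sum(pool.map(sum_of_squares, chunks))
```

The key design point is that parallelism falls out of the chunking, not the hardware: add cores and the pool simply drains the chunk list faster, which is exactly the "write once, scale later" property single-threaded code lacks.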
Despite the lack of future-proof software, this isn't a case of developers being lazy. Multiple-core processors are in their infancy, and single-threaded coding has existed for so long that both the knowledge base and the development tools are in need of an overhaul. Asking developers to unlearn, relearn, and create is unwelcome advice indeed, but it's also necessary. Otherwise, it won't matter how many cores Intel and AMD manage to squeeze onto a single die; we'll still be limited by the software.
Will the software ever catch up with the hardware? Be sure to post your thoughts below.