AMD's 4870 X2 videocards arrived with no shortage of pre-release hype, and they lived up to every bit of it by obliterating the competition in this year's Dream Machine (a single 4870 X2 churned out twice as many frames as Nvidia's GTX 280 in 3DMark Vantage). And they did it months before they were supposed to go public, which meant there were architectural tweaks yet to be made.
The wait is over: AMD has finally announced what it rightfully calls the world's fastest graphics card, the ATI Radeon 4870 X2. Built on a 55nm manufacturing process, the dual-GPU videocard packs the computational muscle to deliver 2.4 teraFLOPS, and ATI can still claim to be the only manufacturer to support DirectX 10.1 instructions. Rounding out the feature set, the 4870 X2 ships with 2GB of GDDR5 memory, 1600 stream processors, and a 750MHz core clockspeed (reference). MSRP has been set at $549, with stock available now.
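For the curious, the 2.4-teraFLOPS figure falls right out of those specs, assuming each of the 1600 stream processors retires one multiply-add (two floating-point operations) per clock. A quick back-of-the-envelope sketch:

```c
#include <stdio.h>

int main(void) {
    /* Back-of-the-envelope check of AMD's 2.4 teraFLOPS claim, assuming
     * each stream processor retires one multiply-add (2 floating-point
     * operations) per clock cycle. */
    const double stream_processors = 1600.0;  /* across both GPUs */
    const double core_clock_hz     = 750.0e6; /* 750MHz reference clock */
    const double flops_per_clock   = 2.0;     /* one MAD = 2 FLOPs */

    double teraflops = stream_processors * core_clock_hz * flops_per_clock / 1.0e12;
    printf("Theoretical peak: %.1f teraFLOPS\n", teraflops); /* prints 2.4 */
    return 0;
}
```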
AMD also made mention of its upcoming 4850 X2 videocard. As the name implies, this card will also be a dual-GPU solution (clocked at 625MHz), and like its bigger brother it will come with 1600 stream processors. Instead of GDDR5, the 4850 X2 will ship with 2GB of GDDR3. Look for availability this September at an estimated sub-$400 street price.
As Intel gears up to sample Larrabee later this year, the chip maker continues to build hype over the architecture's x86 roots. Intel is quick to point out that developers will be able to program in C or C++ just as they're used to doing on x86 processors, giving them an easy way to port applications from other platforms over to Larrabee.
Meanwhile, Nvidia also wants to build hype, but over its competing CUDA architecture. DailyTech has posted Nvidia's comments on the issue, which read:
CUDA is a C-language compiler that is based on the PathScale C compiler. This open source compiler was originally developed for the x86 architecture. The NVIDIA computing architecture was specifically designed to support the C language - like any other processor architecture. Competitive comments that the GPU is only partially programmable are incorrect - all the processors in the NVIDIA GPU are programmable in the C language.
NVIDIA's approach to parallel computing has already proven to scale from 8 to 240 GPU cores. Also, NVIDIA is just about to release a multi-core CPU version of the CUDA compiler. This allows the developer to write an application once and run across multiple platforms. Larrabee's development environment is proprietary to Intel and, at least disclosed in marketing materials to date, is different than a multi-core CPU software environment.
Andrew Humber from Nvidia also went on to clarify that CUDA is the brand name for the C compiler itself; the compiler and CUDA aren't two different things.
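For readers who haven't touched CUDA, here's roughly what Nvidia means by "programmable in the C language": a minimal, hypothetical vector-add of our own devising (not an example from Nvidia's materials). The per-thread function is ordinary C with a __global__ qualifier, and the runtime fans it out across the GPU's processors:

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

/* The per-thread body is plain C; the __global__ qualifier marks it
 * for execution on the GPU's stream processors. */
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* this thread's element */
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    float host_a[1024], host_b[1024], host_c[1024];
    for (int i = 0; i < n; i++) { host_a[i] = (float)i; host_b[i] = 2.0f * i; }

    /* Copy inputs to GPU memory, launch one thread per element, copy back. */
    float *dev_a, *dev_b, *dev_c;
    cudaMalloc((void **)&dev_a, bytes);
    cudaMalloc((void **)&dev_b, bytes);
    cudaMalloc((void **)&dev_c, bytes);
    cudaMemcpy(dev_a, host_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, host_b, bytes, cudaMemcpyHostToDevice);

    vector_add<<<(n + 255) / 256, 256>>>(dev_a, dev_b, dev_c, n);

    cudaMemcpy(host_c, dev_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[42] = %.0f\n", host_c[42]);  /* 42 + 84 = 126 */

    cudaFree(dev_a); cudaFree(dev_b); cudaFree(dev_c);
    return 0;
}
```

Note that the kernel never hard-codes a core count; the launch parameters divide up the work, which is how the same source can scale from 8 to 240 cores as the quote claims.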
Anyone else feel chilly when Nvidia and Intel are in the same room?
Nvidia has licensed Transmeta's power-conserving technology for a sum of $25 million. The deal covers Transmeta's flagship power management technologies, LongRun and LongRun2. Transmeta has quickly mastered its current business model of licensing IP to bigger companies, and its coffers are loaded with cash.
It shouldn't surprise anyone that Nvidia has licensed Transmeta's power management technology, as most chip manufacturers are concentrating on increasing power efficiency.
It doesn't matter if you seek solace in Creationism or subscribe to the theory of evolution: everyone should be equally stoked about what Nvidia's calling "Big Bang II." No, the graphics chip maker isn't gearing up to end the debate on man's existence, but even better, the company will improve man's quality of life with a new driver package that looks poised to earn its codename by bringing gamers at least one big, long overdue improvement.
Bang Part I
The biggest news associated with Nvidia's ForceWare Release 180 (R180) is the introduction of SLI multi-monitor support. Ever since Nvidia introduced SLI, the inability to run a second monitor while gaming has been a major complaint, and even more so as LCD displays have fallen in price. That finally changes with the new driver release: gamers will be able to frag opponents while simultaneously keeping an eye on their email inbox, incoming IMs, and everything else that would previously be blacked out on a second monitor.
Find out what else is bangin' with the new driver after the jump.
AMD's acquisition of graphics chip maker ATI continues to be a sore point whenever the company talks about its finances, most recently coming up when AMD said it would take a nearly billion-dollar charge in the second quarter. Given AMD's financial status, it's easy to criticize the company's decision to overpay for a company that has yet to pay off for impatient investors. That could change if AMD's Fusion ends up revolutionizing the PC landscape.
Up to this point, AMD hasn't gone into specifics regarding its upcoming CPU+GPU chip, but according to TG Daily, industry sources aren't being as tight-lipped. If the rumblings are to be believed, the first Fusion processor (code-named Shrike) will consist of a dual-core Phenom CPU and an ATI RV800 GPU core. Previous rumors had the first-run Fusion chips built around a dual-core Kuma CPU and an RV710 graphics chip, but those plans appear to have gone by the wayside as AMD has had more time to develop a low-end RV800-based core.
The sources also indicate that Fusion will likely be introduced as a half-node chip built around a 40nm manufacturing process, and will later move to 32nm, possibly by the beginning of 2010.
3D graphics technology has grown by leaps and bounds since 3dfx first laid its Voodoo on the computing world, and today's videocards boast everything from multiple GPUs in a single package to the promise of physics processing. And it's not just for gaming: fanatical Folders can crunch through more proteins by utilizing their GPU, or decode a high-definition movie on their new big-screen TV.
Leading the charge into this new era of 3D computing are Nvidia and ATI, two companies that have recently started going at each other's throats with aggressive price cuts and a deluge of new videocards while simultaneously chasing the performance crown. But for all their battles, both old and new, it's Intel, CPU maker extraordinaire, that continues to lead the market.
Find out how much catching up Nvidia and ATI have to do after the jump.
The podcast gang returns after an extended break to bring you the latest and greatest tech news. We also answer a load of your questions, providing advice on such subjects as dual PSUs, water cooling, and system upgrading.
We also follow up our astoundingly popular Win a Dream Date with Dave contest with its sequel, the Win a Dream Date with Norm competition. If you would like to enjoy a delicious chaperoned lunch with online editor Norm and take a tour of the Lab, tell us why you'd like to spend some time with Norm--songs, photo collages, and videos are all welcome. Send your entry to firstname.lastname@example.org by August 11!
Do you have a tech question? A comment? A tale of technological triumph? Just need to get something off your chest? Email us at email@example.com or call our 24-hour No BS Podcast hotline at 877.404.1337 x1337--operators are standing by.
Intel can lay claim not only to the title of current king of chip technology, but its upcoming Nehalem microarchitecture also looks poised to keep the silicon studs on top of the competition well into 2009. AMD hasn't threatened Intel's position since Conroe, and while the company remains confident with Dirk Meyer as its new head honcho, it's still playing catch-up to Intel's 45nm technology.
The situation gets a little more competitive when switching from CPUs to GPUs: according to Tom's Hardware, sources at both ATI and Nvidia say each company will have a 40nm GPU manufacturing process in the first half of 2009, possibly to be unveiled at next year's CeBIT.
Assuming either company meets its target, the accomplishment will unseat Intel as the technological leader in terms of the smallest chip structures, even if only for a short time. The road won't stop at Nehalem, though; Intel is already busy developing 32nm CPUs, which many expect to be shown off in prototype form at the company's spring developer forum in H1 2009. Volume shipments could come as early as Q3 next year.
Even so, if 40nm GPUs materialize as reported, it will mark the first time GPUs have overtaken CPUs in terms of production nodes. That won't necessarily make them better chips, but you can expect plenty of fanfare should Nvidia and/or ATI dethrone the silicon king.
An "abnormal failure rate" among Nvidia's 8M series GPUs hasn't stopped the graphics chip maker from moving forward; the Santa Clara company expanded its lineup last week with the GeForce 9800M and 9700M parts for notebooks. If you've been waiting for these parts to reach the market place, you can now whip out the credit card and place your order.
Best Buy already stocks Toshiba's sexy Qosmio X305-Q701, which comes equipped with Nvidia's 9700M GTS GPU sporting 512MB of GDDR3 memory. According to DigiTimes, more than 20 other notebook vendors, including other top-tier players, are expected to follow suit soon. Gaming notebooks have also been announced from the likes of Sager, CyberSystem, Infinity, Pioneer Computers Australia, and other regional PC companies.
If Toshiba's Qosmio isn't your style, look forward to a flurry of other 9M-based notebooks making their debut as Intel's Centrino 2 platform emerges in full force.
Last month Nvidia said it planned to tweak its 9800GTX videocard with a die shrink and faster clockspeeds resulting in the 9800GTX+, and today the release becomes official with immediate availability. Along with the 9800GTX+, Nvidia fleshes out its GeForce 9-series line with two other videocards, the 9800GT and 9500GT.
All three cards are available now, and each one brings support for Nvidia's PhysX and CUDA technologies, two areas currently exclusive to Nvidia.
"The addition of the new 9800GTX+, 9800GT, and the 9500GT GPUs brings a new level of visual computing capability to additional mainstream market segments," said Ujesh Desai, general manger of desktop GPUs at Nvidia. "Nvidia GPUs deliver the best bang for the buck in each price category, and with support for CUDA, PhysX, and 3D stereoscopic technology, consumers can now experience the unique, innovative, and immersive computing experience that only Nvidia can deliver."
Claiming victory in the bang-for-buck war would have been a tough sell just weeks ago, but such claims become easier to swallow with the 9500GT taking up residence in the sub-$70 pricing tier. Both the 9800GT and GTX+ can be had for under $200, with the latter going head to head against ATI's HD 4850 videocard. For you old-schoolers, it hasn't been this fun to shop for a GPU since the Ti 4200 days.