Nvidia continues to feel the pressure from a suddenly competitive ATI and will once again tweak one of its mainstream videocards. Back in June, Nvidia took its 9800GTX card based on the immensely popular G92 core and shrank the core from 65nm to 55nm, pushed the core, memory, and shader clockspeeds, and dubbed the resulting product the 9800GTX+. This time around it's the GTX 260 that will undergo a revision.
Citing an unnamed source, Expreview reports Nvidia will add another Texture Processing Cluster (TPC) to its GTX 260, bringing the total up from 8 to 9. By doing so, the revised card will sport 216 shader processors instead of the 192 found in the original GTX 260. As far as Expreview knows, core, shader, and memory clockspeeds will remain the same.
If the report holds true, look for the updated card to arrive in September.
Nvidia’s secret war with Intel has evolved into a full-scale arms race for the atomic bomb of graphics technology: ray tracing. Using its forum at SIGGRAPH, Nvidia demonstrated an interactive ray tracing simulation running on four of the company's next-generation Quadro GPUs, housed in a Quadro Plex 2100 D4 Visual Computing System with an estimated street price of around $11,000. Not exactly your standard gaming rig, but it gets the point across. Either way, it appears Nvidia is finally taking a cue from Intel and focusing at least some of its effort on developing hardware capable of making this technique a reality for everyday users.

The demonstration featured linear scaling of an anti-aliased Bugatti Veyron with over two million polygons. It ran at a resolution of 1920x1080 (1080p) and chugged along at an impressive 30 FPS. The demonstration also featured image-based lighting paint shaders, reflections and refractions, and ray-traced shadows. Industry insiders noted that the demo was an impressive undertaking, since it was one of the first interactive ray tracing demonstrations done using a GPU; Intel has demonstrated ray tracing using Quake 3, but that demo relied on CPU power.

Larrabee will be Intel’s counter in the consumer market, but it remains to be seen whether its CPU-style design will be as capable of pushing out polygons as Nvidia’s offerings. Gamers are no doubt hoping the new race to master ray tracing will accelerate its development, but I have a feeling we will be playing Duke Nukem Forever long before we see consumer-based ray tracing solutions from either company. Still, the important first steps are now well underway.
It's been a rough ride for Nvidia as of late: the company not only has had to contend with a suddenly competitive ATI, but also finds itself battling a bad batch of mobile GPUs (which might turn out to be a bigger problem than initially stated). The struggles have turned financial, with the graphics chip maker reporting a net loss of $120.9 million in the second quarter, or 22 cents a share. This is in stark contrast to one year ago, when the company posted a profit of $172.7 million, or 29 cents a share.
The quarter's results include a $196 million charge Nvidia took to cover warranty, repairs, and other costs associated with an "abnormal failure rate" among its mobile GPUs. Nvidia executives are hopeful for a somewhat better third quarter, saying they expect revenue to grow "slightly."
"We didn't lose any share, the market just got soft on us," said chief executive Jen-Hsun Huang. And while Huang admitted that the second quarter results are "disappointing," the company still saw its shares rise by 10 percent after announcing a $1 billion boost to its stock buyback program.
Overclock.net forum member nitteo claims to have built a Folding@Home farm with no fewer than 51 GPUs, and he has the pics to prove it. In them are a mixture of 8800GT and 8800GS videocards spread out across a variety of MSI and Gigabyte motherboards. Final numbers are still being tallied, but nitteo estimates he'll pull in over 250,000 points per day on his new setup, and things only look to get better with the CUDA-based folding client.
That's all well and good for Overclock.net (and the Folding community in general), but that also means Team Maximum PC has to keep it kicked up into high gear. Maximum PC currently holds the 4th spot in team rankings and could use your help. If you want to Fold for your favorite magazine, add team 11108 to your client's profile, and drop by the forum for tips on how to optimize your production.
No one has been more critical of Nvidia than rumor and news outlet The Inquirer, which recently declared that all of the chipmaker's G84 and G86 parts are bad. The extent of the problem is still to be determined, but here's what's known so far.
A batch of bad GPUs has found its way into the wild, causing an "abnormal failure rate" among certain laptop models.
To deal with the problem, Nvidia said it was setting aside a one-time charge of $150 to $200 million to cover warranty and repair costs associated with the faulty mobile parts.
Both HP and Dell have released lists of notebook models potentially affected by the faulty GPUs and are encouraging owners to update their BIOS as a preventive measure (the newer BIOS kicks on the cooling fan earlier than it normally would). HP has also extended its warranty for the affected models.
Nvidia has since moved on to its 9-M series GPUs, and in the process has presumably solved whatever problem affected the previous generation parts, right? Not so fast, says The Inq. According to the rumor site, the fundamental flaw in the manufacturing process still exists, and now G92 and G94 parts are reportedly failing. The Inq claims that no fewer than four partners are already seeing the new chips go bad at high rates, and believes that Nvidia "is simply stonewalling everyone" about the alleged problem.
If true, another batch of bad parts could be disastrous for the chip maker, who continues to lose graphics market share to Intel and has seen its stock price plummet in the wake of a disappointing 8-K filing.
Is the problem bigger than Nvidia's letting on, or will it be this latest rumor that ultimately turns out to be the dud?
AMD's 4870 X2 videocards arrived with no shortage of pre-release hype, and they lived up to every bit of it by obliterating the competition in this year's Dream Machine (a single 4870 X2 churned out twice as many frames as Nvidia's GTX 280 in 3DMark Vantage). And they did it months before they were supposed to go public, which means there were architectural tweaks yet to be made.
The wait is over, and at long last, AMD has finally announced what it rightfully calls the world's fastest graphics card, the ATI Radeon 4870 X2. Built on a 55nm manufacturing process, the dual-GPU videocard comes with the computational muscle to deliver 2.4 teraFLOPS, and ATI can still lay claim as the only manufacturer to support DirectX 10.1 instructions. Rounding out the feature set, the 4870 X2 ships with 2GB of GDDR5, 1600 stream processors, and a 750MHz core clockspeed (reference). MSRP has been set at $549, with stock available now.
AMD also made mention of its upcoming 4850 X2 videocard. As the name implies, this card will also be a dual-GPU solution (clocked at 625MHz), and like its bigger brother it will come with 1600 stream processors. Instead of GDDR5, the 4850 X2 will ship with 2GB of GDDR3. Look for availability this September with an estimated sub-$400 street price.
As Intel gears up to sample Larrabee later this year, the chip maker continues to build hype over the architecture's x86 roots. Intel is quick to point out that developers will be able to program in C or C++ just as they're used to doing on x86 processors, giving them an easy way to port applications from other platforms over to Larrabee.
Meanwhile, Nvidia also wants to build hype, but over its competing CUDA architecture. DailyTech has posted Nvidia's comments on the issue, which read:
CUDA is a C-language compiler that is based on the PathScale C compiler. This open source compiler was originally developed for the x86 architecture. The NVIDIA computing architecture was specifically designed to support the C language - like any other processor architecture. Competitive comments that the GPU is only partially programmable are incorrect - all the processors in the NVIDIA GPU are programmable in the C language.
NVIDIA's approach to parallel computing has already proven to scale from 8 to 240 GPU cores. Also, NVIDIA is just about to release a multi-core CPU version of the CUDA compiler. This allows the developer to write an application once and run across multiple platforms. Larrabee's development environment is proprietary to Intel and, at least disclosed in marketing materials to date, is different than a multi-core CPU software environment.
Andrew Humber from Nvidia also went on to clarify that CUDA is a brand name for the C compiler, rather than being two different things.
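Nvidia's claim that "all the processors in the NVIDIA GPU are programmable in the C language" boils down to kernels like the one below. This is a minimal, hypothetical sketch for illustration (not code from Nvidia's statement): an ordinary C-style function is marked `__global__` and the runtime spreads its execution across however many GPU cores are present, which is what lets the same code scale from 8 to 240 processors.

```cuda
#include <stdio.h>

// Each GPU thread runs this plain C function body on one array element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float hx[1024], hy[1024];
    for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy input data to GPU memory.
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch n threads in blocks of 256; the hardware scheduler maps the
    // blocks onto whatever number of cores the GPU actually has.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f\n", hy[0]);  // 2.0 * 1.0 + 2.0 = 4.0
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

The multi-core CPU version of the CUDA compiler Nvidia mentions would, in principle, take this same source and map the thread blocks onto CPU cores instead.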
Anyone else feel chilly when Nvidia and Intel are in the same room?
Nvidia has licensed Transmeta’s power-conserving technology for a sum of $25 million. The technologies Transmeta has licensed out to Nvidia include its flagship power management technologies, LongRun and LongRun2. Transmeta has quickly mastered its current business model of licensing IP to bigger companies, and its coffers are loaded with cash.
It shouldn’t surprise anyone that Nvidia has licensed Transmeta’s power management technology as most chip manufacturers are concentrating on increasing power efficiency.
It doesn't matter if you seek solace in Creationism or subscribe to the theory of evolution, everyone should be equally stoked about what Nvidia's calling "Big Bang II." No, the graphics chip maker isn't gearing up to end the debate on man's existence, but even better, the company will improve man's quality of life with a new driver package that looks poised to earn its codename by bringing gamers at least one big, long overdue improvement.
Bang Part I
The biggest news associated with Nvidia's ForceWare Release 180 (R180) is the introduction of SLI multi-monitor support. Ever since Nvidia introduced SLI, the inability to run a second monitor while gaming has been a major complaint, and even more so as LCD displays have fallen in price. That finally changes with the new driver release: gamers will be able to frag opponents while simultaneously keeping an eye on their email inbox, incoming IMs, and everything else that would previously be blacked out on a second monitor.
Find out what else is bangin' with the new driver after the jump.
AMD's acquisition of graphics chip maker ATI continues to be a sore point whenever the company talks about its finances, most recently coming up when AMD said it would take a near billion dollar charge in the second quarter. Given AMD's financial status, it's easy to criticize the company's decision to overpay for a company that has yet to benefit impatient investors. That could change if AMD's Fusion ends up revolutionizing the PC landscape.
Up to this point, AMD hasn't gone into specifics regarding its upcoming CPU+GPU chip, but according to TGDaily, industry sources aren't being as tight-lipped. If the rumblings are to be believed, the first Fusion processor (code-named Shrike) will consist of a dual-core Phenom CPU and an ATI RV800 GPU core. Previous rumors had the first run of Fusion chips built around a dual-core Kuma CPU and RV710 graphics chip, but those plans appear to have gone by the wayside as AMD has had more time to develop a low-end RV800-based core.
The sources also indicate that Fusion will likely be introduced as a half-node chip built around a 40nm manufacturing process, and will later move to 32nm, possibly by the beginning of 2010.